Nantes Université GitLab — commit e5417be6 by rlaz (parent a1a7e355): "Fixed bug for perfect results, updated README"
# Label Graph Evaluation Tools (LgEval)
*Copyright (c) 2012-2022, Richard Zanibbi, Harold Mouchère, and Ayush Kumar Shah*
IRCCyN/IVC Lab, University of Nantes, France

Document and Pattern Recognition Lab,
Rochester Institute of Technology, USA
## License
*These tools are provided 'as is' without any guarantee of suitability for non-research use. No commercial use is permitted. The tools are distributed under a Creative Commons license (please see the LICENSE file; the cc_license directory contains a file detailing the specifics of the license).*
## Changes Aug 2021 - Mar 2022
* Improved HTML error visualization (aks)
* README update / refresh, updates for MathJax-3 and support (rz)
* Added CROHME 2019 conversion scripts (rz)
* Made translation/MathML unknown symbol errors optional (rz)
* Standardized import statements, added PYTHONPATH to installation instructions (aks)
* New 'lgeval' conda environment for python package install + environment variables (rz)
[rlaz]: https://www.cs.rit.edu/~rlaz "Richard Zanibbi's Home Page"
[hm]: https://www.irccyn.ec-nantes.fr/~mouchere "Harold Mouchère's Home Page"
## Contents
* [Overview](#purpose)
* [Important Notes](#notes)
* [Installation](#install)
* [Getting Started](#quickstart)
    * [First Example: Results without Errors](#firsteg)
    * [Summary.txt - The Evaluation Results Summary](#summary)
    * [Another Example: Results with Errors](#egtwo)
    * [Automated Error Analysis](#confhist)
    * [Finishing Up and Next Steps](#next)
* [LgEval Tools](#tools)
    * [Evaluation and Visualization](#evaltools)
    * [File Manipulation](#files)
* [Label Graph Files](#lgdescription)
    * [I. Primitive Format](#primitive)
    * [II. Object Format](#object)
    * [III. Multiple Levels of Structure](#multiple)
    * [IV. Symbol Label Graphs](#symlg)
* [References](#refs)
## <a name="purpose"></a>Overview

The Label Graph Evaluation tools (LgEval) were originally developed for scoring handwritten math recognition systems for the [Competition on Recognition of Online Handwritten Mathematical Expressions][crohmeurl] (CROHME), which has been run annually between 2011 and 2014, and then again in 2016 and 2019.

For CROHME, the library was used to obtain stroke and symbol-level evaluation of handwritten math expressions. However, label graphs are a very general representation, and may be used to represent and evaluate structural similarity for other problems.

A *label graph* is simply a labeled directed graph. Both nodes and edges are labeled, representing the grouping of input primitives into objects (e.g., grouping strokes into symbols), object types (e.g., symbol names), and relationships between objects. The section [Label Graph Files](#lgdescription) describes the representation in detail. The current version of the library may be used to **represent and evaluate multiple levels of structure** (e.g., for matrices, which contain symbols, cells, rows, and columns).

Label graphs allow an *absolute* difference between two structure representations to be computed, even when the segmentation of *input primitives* (e.g., a list of handwritten strokes) into objects (e.g., symbols) disagrees, and even when primitives are missing in one or the other interpretation. This difference is obtained directly from disagreeing edge and node labels, along with associated Hamming distances (i.e., counting disagreeing node and edge labels). Input primitives are assumed to be a fixed set, but can represent any object (e.g., connected components, bounding boxes, pixels in an image, or a combination of these).
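This difference computation can be sketched in a few lines of Python. The sketch below is an illustration only, with made-up stroke labels; it is not LgEval's implementation (see `Lg` in `lg.py` for the real one):

```python
# Illustrative sketch (not LgEval's code): count disagreeing node and
# edge labels for two label graphs over the same set of primitives.
# '_' stands for 'no label', as in LgEval's conventions.

def graph_difference(nodes_a, edges_a, nodes_b, edges_b):
    primitives = set(nodes_a) | set(nodes_b)
    node_errors = sum(1 for p in primitives
                      if nodes_a.get(p, '_') != nodes_b.get(p, '_'))
    # Compare every directed edge between distinct primitives.
    pairs = [(p, q) for p in primitives for q in primitives if p != q]
    edge_errors = sum(1 for pq in pairs
                      if edges_a.get(pq, '_') != edges_b.get(pq, '_'))
    return node_errors, edge_errors

# Two interpretations of three strokes: the symbol label for 's1' and
# the relationship between 's2' and 's3' disagree.
truth = ({'s1': 'x', 's2': '2', 's3': '2'}, {('s2', 's3'): 'Right'})
output = ({'s1': 'y', 's2': '2', 's3': '2'}, {('s2', 's3'): 'Sup'})
print(graph_difference(*truth, *output))  # (1, 1)
```

Because every primitive appears in both graphs by construction, disagreements can be read off node by node and edge by edge, even when the two interpretations group the strokes differently.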
In addition to metrics, the library provides visualization of label graphs at the primitive and object levels using the **lg2dot** program. Additional information about label graphs and CROHME may be found in the [References](#refs) section.

[Graphviz]: http://www.Graphviz.org/ "Graphviz web site"
## <a name="notes"></a>Important Notes

1. To use LgEval with the CROHME competition data, you will also need to install CROHMELib, which is provided separately [here](https://gitlab.com/dprl/crohmelib).
2. We also ask that you cite the following paper describing label graphs and associated
metrics in any publications that you produce using LgEval:
R\. Zanibbi, H. Mouchere, and C. Viard-Gaudin (2013) [Evaluating Structural Pattern Recognition for Handwritten Math via Primitive Label Graphs][drr2013paper]. Proc. Document Recognition and Retrieval, Proc. SPIE vol. 8658, pp. 17-1 - 17-11, San Francisco, CA.
3. Information about file differences, metrics and key data structures used for evaluation are provided in the [README_MetricsData.txt][readme2] file.
[readme2]: README_MetricsData.txt "README: Metrics and Data"
1. bash
2. perl (**with LibXML**)
3. python 3.x
4. [Graphviz](https://graphviz.org) (for 'dot')
5. [Pandoc](https://pandoc.org)
6. Conda (e.g., from Anaconda) and pip for Python environments
**Conda LgEval Environment Install (preferred approach).** To install
the conda environment for lgeval, which will set up both Python packages and
    export PATH=$PATH:$CROHMELibDir/bin:$LgEvalDir/bin
    export PYTHONPATH=$PYTHONPATH:$(dirname "$LgEvalDir"):$(dirname "$CROHMELibDir"):
## <a name="quickstart"></a>Getting Started
The easiest way to evaluate recognition results with LgEval is to run the `evaluate` script with two directories: the first containing output .lg (label graph) files, and the second containing the corresponding ground truth .lg files (i.e., the target/'correct' interpretations).
### <a name="firsteg"></a>First Example: Results without Errors
Try running the following from the `lgeval/` directory. In this first example, we start the lgeval conda environment so that our tools are available from the command line, and then compare a set of ground truth files with themselves.
```
conda activate lgeval
evaluate tests/io_tests/small-GT tests/io_tests/small-GT
```
This produces a new directory `Results_small-GT` with the following contents:
```
Results_small-GT
├── 00_NoErrors
├── ConfusionMatrices.csv
├── ConfusionMatrices.html
├── Correct.csv
├── FileMetrics.csv
├── Metrics
│ ├── UN_101_em_0.csv
│ ├── UN_101_em_1.csv
│ ├── UN_101_em_10.csv
│ ├── UN_101_em_2.csv
│ ├── UN_101_em_3.csv
│ ├── UN_101_em_4.csv
│ ├── UN_101_em_5.csv
│ ├── UN_101_em_6.csv
│ ├── UN_101_em_7.csv
│ ├── UN_101_em_8.csv
│ └── UN_101_em_9.csv
├── Summary.txt
├── labelsGT.txt
└── labelsOutput.txt
```
Here is a quick tour of what this directory contains.
* The `00_NoErrors` file is empty, and simply indicates that no errors were produced.
* Confusion matrices are provided in two formats, one for reading (HTML), and one for computing. Try opening `ConfusionMatrices.html` in a web browser to see this.
* `Correct.csv` records which files were fully correct.
* `FileMetrics.csv` is simply a compilation of the individual evaluation metric files (one per input file) in the `Metrics` directory.
* The individual files in `Metrics/` provide individual file metric results, and are used in a quick-and-dirty way to allow `evaluate` to be run multiple times, for example to avoid having to re-evaluate all output files after making corrections for just a handful of them.
* The `labelsGT.txt` and `labelsOutput.txt` provide a list of *all* node and edge labels present in label graphs for ground truth and the output files, respectively.
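The compilation of per-file metrics into `FileMetrics.csv` can be approximated with a few lines of Python, e.g., to rebuild a combined table after re-running `evaluate` on a subset. This is a sketch under the assumption that each file in `Metrics/` is a plain CSV; it is not LgEval's own code:

```python
# Sketch: merge the per-file CSVs in a Metrics/ directory into one
# table, tagging each row with its source file. Assumes plain CSV
# files; this is an illustration, not LgEval's implementation.
import csv
import glob
import os

def compile_metrics(metrics_dir):
    rows = []
    for path in sorted(glob.glob(os.path.join(metrics_dir, '*.csv'))):
        with open(path, newline='') as f:
            for row in csv.reader(f):
                rows.append([os.path.basename(path)] + row)
    return rows
```

Calling `compile_metrics('Results_small-GT/Metrics')` would return one row per metric entry, prefixed with the file it came from.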
### <a name="summary"></a> Summary.txt - The Evaluation Results Summary
Finally, `Summary.txt` provides a summary of various recognition metrics. These include recognition rates, as well as recall, precision, and F1 scores for detection metrics. The metrics are grouped by:
* *detecting* objects and object relationships (`Objects`,`Relations`)
* *detecting and classifying* objects and their relationships (`Objects + Classes`, `Relations + Classes`)
* *graphs with correct structure* (`Structure`)
* *graphs with correct structure and object + relationship class labels* (`Structure + Classes`)
* For development and debugging (but **not as an evaluation metric**), we also provide the classification rate for correctly segmented objects and relationships (`Class/Det`)
The `Summary.txt` file is broken into four different sections, corresponding to three different levels of structure (primitive, object, and graph (i.e., input `File`)), and Hamming distances over directed, primitive-level label graphs.
* **Input Primitives and Edges Between Primitives:** classification rates and error types for input nodes (primitives) and edges between input primitives. Metrics are presented for both directed and undirected complete graphs.
* **Objects and Object Relationships:** metrics for segmentation/detection over primitives (input nodes), such as symbols comprised of one or more primitives, and metrics for edges over objects, such as spatial relationships between symbols (e.g., 'superscript').
* **File Metrics:** Recognition rates for entire files (e.g., the percentage of files with correct segmentation of objects, and correct segmentation of objects and their classes).
* **Label Error Histogram:** Shows a histogram of the number of files with k or fewer primitive-level directed graph errors in input node or edge classification (0 <= k <= 5), along with the cumulative number of files with at most k errors.
**See the section on [Label Graph Files](#lgdescription) for details on how primitives and objects are related.** Also, a detailed description of all metrics used in LgEval can be found in the file [README_MetricsData.txt](README_MetricsData.txt).
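For reference, the recall, precision, and F1 detection scores reported in `Summary.txt` follow the standard definitions, sketched below. The counts and function names here are illustrative, not LgEval's internals:

```python
# Standard detection metrics from true-positive, false-positive, and
# false-negative counts (illustrative; not LgEval's internal code).

def detection_scores(true_pos, false_pos, false_neg):
    recall = true_pos / (true_pos + false_neg)          # found / in ground truth
    precision = true_pos / (true_pos + false_pos)       # found / reported
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, precision, f1

# e.g., 8 symbols correctly detected, 2 spurious, 2 missed:
recall, precision, f1 = detection_scores(8, 2, 2)
print(recall, precision, f1)  # 0.8 0.8 0.8 (up to float rounding)
```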
### <a name="egtwo"></a>Another Example: Results with Errors
Now let's try running the tool with some output files containing many errors. Issue the following command:
```
evaluate tests/io_tests/small-out tests/io_tests/small-GT
```
This produces a new results directory called `Results_small-out` (named after the output file directory), with the following structure:
```
Results_small-out
├── ConfusionMatrices.csv
├── ConfusionMatrices.html
├── Correct.csv
├── FileMetrics.csv
├── Metrics
│ ├── UN_101_em_0.csv
│ ├── UN_101_em_0.diff
│ ├── UN_101_em_1.csv
│ ├── UN_101_em_1.diff
│ ├── UN_101_em_10.csv
│ ├── UN_101_em_10.diff
│ ├── UN_101_em_2.csv
│ ├── UN_101_em_2.diff
│ ├── UN_101_em_3.csv
│ ├── UN_101_em_3.diff
│ ├── UN_101_em_4.csv
│ ├── UN_101_em_4.diff
│ ├── UN_101_em_5.csv
│ ├── UN_101_em_5.diff
│ ├── UN_101_em_6.csv
│ ├── UN_101_em_6.diff
│ ├── UN_101_em_7.csv
│ ├── UN_101_em_7.diff
│ ├── UN_101_em_8.csv
│ ├── UN_101_em_8.diff
│ ├── UN_101_em_9.csv
│ └── UN_101_em_9.diff
├── Summary.txt
├── labelsGT.txt
└── labelsOutput.txt
```
The contents of the directories are the same as in the previous example, except that the `00_NoErrors` file is missing, and there are now `.diff` files in the `Metrics/` directory containing the specific differences between each output file and its corresponding ground truth file. The `.diff` files are CSV files containing error entries of the following types:

* `*N`: input primitive (node) classification errors
* `*E`: input primitive relationship (directed edge) classification errors
* `*S`: input primitive segmentation (grouping) errors

Note that '_' represents 'no label' or 'undefined' (e.g., for edges without a label).
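As a quick way to summarize a `.diff` file, the three markers can be tallied with a short script. This sketch is not part of LgEval; only the `*N`/`*E`/`*S` markers come from the description above, and the sample lines' remaining fields are fabricated for illustration:

```python
# Tally LgEval .diff error entries by their leading marker.
# Only the *N/*E/*S markers are from the README; the sample lines'
# remaining fields are fabricated for illustration.
from collections import Counter

def tally_diff_errors(lines):
    counts = Counter({'*N': 0, '*E': 0, '*S': 0})
    for line in lines:
        marker = line.strip()[:2]
        if marker in counts:
            counts[marker] += 1
    return counts

sample = [
    "*N, s1, x, y",            # node label error
    "*N, s4, 2, z",            # node label error
    "*E, s2, s3, Right, Sup",  # edge label error
    "*S, s5, s6",              # segmentation error
]
print(dict(tally_diff_errors(sample)))  # {'*N': 2, '*E': 1, '*S': 1}
```

For real per-class breakdowns, prefer the `cdiff`/`ldiff` tools described later, which handle the actual `.diff` format.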
If we look at the `Summary.txt` file, we'll notice that only 1 file had all of its handwritten strokes correctly segmented into symbols, and that no files had fully correct (unlabeled) graph structure or labeling of symbols and relationships.
Looking at the `ConfusionMatrices.html` file in a web browser, we can also look at counts for specific classification errors. The page provides some description of the errors presented.
### <a name="confhist"></a>confHist - Automated Error Analysis for Subgraphs
The confusion matrices are informative, but they do not tell us things such as what the most commonly misrecognized symbol was, the types of errors made in segmentation and classification, and which files these occurred in.
To get this more detailed error analysis in the form of graph-level *confusion histograms*, we can issue:
```
confHist tests/io_tests/small-out tests/io_tests/small-GT --graphSize 1
```
This produces the following output directory, with an HTML file and subdirectories as shown:
```
confHist_outputs
├── CH_small-out_vs_small-GT__size_1_min_1.html
├── CH_small-out_vs_small-GT__size_1_min_1_gt_htmls
├── css
├── dotpdfs
├── js
└── small-out_vs_small-GT
```
The first time through, simply open the HTML file in a web browser (**Note:** Google Chrome works best). This file visualizes a histogram of confusions between ground truth graphs (at left) and output graphs (shown in the error lists at right). Ground truth symbols are shown in the leftmost two columns - the symbol class in the leftmost column, and the **primitive-level graph structure** of the ground truth symbol in the second column. From the third column on are shown the **specific errors** seen for the primitives in the ground truth symbol.
The HTML file shows these symbol recognition errors in decreasing order of frequency, along with all specific instances of errors made for that symbol -- itself sorted by decreasing order of frequency from left to right.
**Examples of Errors.** If we look at row 4, we see errors made for a handwritten 'x' in ground truth (an 'object') drawn with two strokes. We can also see in row 14 errors for summation symbols (Sigma) drawn with three handwritten strokes. In that particular case, one error was a misclassification of the symbol as epsilon (`\in`), and in the second case the symbol was split in two, with one stroke treated as a `c`, to the right of which the two other strokes were correctly labeled `\sum` (indicated by yellow nodes).
**Saving and Viewing Files with Errors.** To obtain a list of files that contain specific errors, you can use the check boxes in the interface, and then press `Save Selected Files` at the top of the web page. This will create a text file `selectedFiles.txt` in the downloads folder for your browser.
In addition, we can browse the specific files that caused the errors. For example, consider the errors shown for the most common misrecognition of a symbol (`-`). If you click on the `4 errors` link above the red node labeled `1` (indicating that four files misclassified this symbol, comprised of one primitive (stroke), as a `1`), you are taken to another page showing the files where the errors occurred, and the graph representing the errors made in each file (indicated in red, and with alternate labels). The numbers at the bottom of each square node are the identifiers for the primitives (strokes) belonging to each symbol.
**Object Edges and Larger Ground Truth Subgraphs:** `confHist` can also be used to view the most common errors between pairs of objects with an associated edge. To see this, issue:
```
confHist tests/io_tests/small-out tests/io_tests/small-GT --graphSize 2
```
Then look at the newly generated HTML file `confHist_outputs/CH_small-out_vs_small-GT__size_2_min_1.html`, which can be used in the same way as described above for `confHist` results for individual symbols.
We can also generate errors for sub-graphs of larger `--graphSize` values (e.g., 3, 4, ...), but be warned that because of the complexity of subgraph matching, the time to compute the result grows quickly as the subgraph size increases, and can be quite long for large file sets. Previously, we found that graph sizes 1 and 2 (for symbols and symbol relationships) were most useful for the CROHME handwritten math recognition competitions, with size 3 sometimes also being useful (e.g., to check errors made when detecting fractions).
### <a name="next"></a>Finishing Up and Next Steps
At this point, if we are done using lgeval, issue:
```
conda deactivate
```
to close the conda lgeval environment.
**Next Steps.** There are other ways that the tools can be used (e.g., using manually created pairs of file lists with `evaluate`), and other tools in the library that we have not covered here. You can find more information about these tools and the underlying labeled graph representation and file formats (NE: primitive node-edge and OR: object-relationship) below.
As a reminder, more details about the specific metrics and differences computed by LgEval, along with key data structures are available in the file [README_MetricsData.txt](README_MetricsData.txt).
## <a name="tools"></a>LgEval Tools
The main tools for LgEval are provided in the *bin/* subdirectory. Call a script without arguments for usage instructions. A doxygen-generated summary of files and classes is available at [doc/html/files.html](doc/html/files.html).
### <a name="evaltools"></a>Evaluation, Error Analysis, and Graph/Error Visualization
**evaluate, evaluateMat**
*evaluate* is the main evaluation script for label graphs. It
automatically produces metrics, differences, a result summary,
and visualizations of recognition errors (requires Graphviz). The program
produces evaluation results given a directory of output files and
a corresponding directory of ground truth files, or a user-defined file list.
* **NOTE:** If a node is absent in one of two graphs being compared, it will be inserted as an 'ABSENT' node with unlabeled edges ('_') between the ABSENT node and all other nodes in the graph. See Lg.matchAbsent() in the file lg.py.
*evaluateMat* is used to evaluate output for expressions containing matrices
(used for the matrix recognition task in CROHME 2014).
**confHist**
Create structure confusion histograms as HTML pages, which show target structures (e.g. of
1-3 symbols) or stroke groups, along with corresponding error graphs and their
frequencies. To save space, the user can specify the minimum number of times
that an error must occur to be included in the output. This provides a detailed
summary of the specific segmentation and classification errors made by a
recognition algorithm. The structure confusion histograms at the object and
stroke levels are stored in a (large) .html file.
**lg2dot**
Create .dot and .pdf output for a label graph, or visualize the
difference between two label graphs. Different graph types may be
produced (requires Graphviz), and differences between
a pair of graphs may also be visualized. The following graph
types are supported:
* Primitive label graphs (e.g. as shown in Figs. 1-3 above)
* Bipartite primitive label graphs
* Bipartite segmentation graphs for primitives
* Directed Acyclic Graph (DAG) representation of objects and object relationships
* Rooted tree for hierarchical relationship structures (e.g. symbol layout in math expressions)
**Note:** The DAG and tree visualizations assume that a single level of structure is
being visualized.
**cdiff, ldiff and vdiff**
Used to compile labeling errors of given types (*cdiff*), or return a list of the files containing these errors (*ldiff*) and view them (*vdiff*) using 'less.' Regular expression matching over node and edge labels is supported ('egrep' format), and files with or without segmentation errors may be selected. These tools operate on the .diff files created by *evaluate.*
**src/metricDist.py**
Used to select a metric from a CSV file (.m) produced by the 'evallg.py'
program (used by **evaluate**). Useful for producing histograms.
### <a name="files"></a> File Generation, Conversion, and Filtering
**convert2symLG/**
Created for CROHME 2019. This directory contains scripts to convert LaTeX, MathML, and primitive-based LG files to the Symbol LG (symLG) format. Details are provided below. After using these converters, metrics may be computed between different representations using the LgEval tools.
**lg2OR and lg2NE**
Label graph format converters. Each takes a label graph as input, and outputs a label graph in OR (Object-Relation) or NE (primitive Node-Edge) format on standard output. OR files are equivalent to NE files, but more compact and easier to read.
**getlg, getinkml, getpdf**
From a file containing a list of .lg files (one per line), copy these files from one directory to another (*getlg*), or copy corresponding .inkml files or dot-generated pdf files from one directory to another (*getinkml*,*getpdf*).
**lg2lgtree**
Converts a directory of label graphs using lgfilter (i.e. producing
trees as output), writing the output files to another directory.
**lg2mml**
Create MathML output from a label graph (requires CROHMELib).
**src/lg2txt.py**
Convert a graph to a string encoding. Symbol and structure
mappings are be defined using rules in an accompanying .csv file. An example MathML mapping file is provided in *translate/mathMLMap.csv.* A (largely incomplete) LaTeX mapping (*translate/symbolMap.csv*) is also provided (used by `lg2mml`).
**lgfilter**
Removes non-tree edges from a hierarchically structured label graph (e.g. to obtain symbol
layout trees from a DAG with inherited relationships for math notation).
**relabelEdges and relabelOldCROHME**
Tools to replace edge labels in 'old' label graph files using '*' to indicate
merged primitives.
**src/mergeLg.py**
Reads two or more .lg files and merges them, printing the result on
standard output.
## <a name="lgdescription"></a>Label Graph File Format
For CROHME 2019 the organizers needed a way to compare systems that produce LaTeX output with no reference to primitives or the input image at all with other systems (e.g., from encoder-decoder models).
Mahshad Mahdavi's solution was to use label graphs where symbols in the output formula are the primitives, without making any reference to the InkML input file (i.e., ignoring the original handwritten stroke primitives).
This allows comparison between symbols and relationships (structure), but with the caveat that when a correctly recognized symbol is located at the wrong place in the target formula tree (Symbol Layout Tree, or SLT), this is recorded as an error by LgEval. Similarly, correct relationships occurring at the wrong location in a tree will also be identified as errors.
To perform the comparison, Mahshad created converters for LaTeX, MathML, and label graphs with primitives. After formulas have been converted to the *Symbol LG* format, they can be used with LgEval (e.g., `evaluate`, `lg2dot`, etc.). These converters are included in the new LgEval release (in the directory `convert2symLG`).
**Conversion to Symbol LG:** When converting to Symbol LG format (*symLG*) from LaTeX, MathML, or a standard label graph, symbols in the formula tree are traversed depth-first using a fixed ordering for relationships at each symbol. The resulting path positions visited during traversal are used as the identifiers for the symbol primitives in a standard label graph file.
**Note:** the converters for non-LG files require the [Pandoc](https://pandoc.org) library to be installed.
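The traversal idea can be sketched as follows. The relationship ordering and path-identifier scheme below are assumptions for illustration; they are not the converters' exact output:

```python
# Illustrative sketch of symLG-style identifiers: each symbol's id is
# derived from its path of relationship labels from the root, assigned
# depth-first with a fixed ordering of relationships. The naming scheme
# is an assumption for illustration, not the converters' exact output.
REL_ORDER = ['Right', 'Sup', 'Sub']  # fixed relationship ordering (assumed)

def assign_ids(symbol, path='O', out=None):
    """symbol = (label, {relationship: child_symbol})"""
    if out is None:
        out = {}
    label, children = symbol
    out[path] = label
    for rel in REL_ORDER:
        if rel in children:
            assign_ids(children[rel], path + '_' + rel, out)
    return out

# 'x^2 + 1' as a symbol layout tree rooted at 'x':
tree = ('x', {'Sup': ('2', {}),
              'Right': ('+', {'Right': ('1', {})})})
print(assign_ids(tree))
# {'O': 'x', 'O_Right': '+', 'O_Right_Right': '1', 'O_Sup': '2'}
```

Because identifiers depend only on tree positions, two formulas converted this way can be compared symbol-by-symbol without any reference to input strokes.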
## <a name="refs"></a>References
The commit also fixes the `evaluate` script so that `.diff` files are only created when a file actually contains differences:

```
# Match asterisk at beginning of line to select differences/errors
# WARNING: Double quotes are important to preserve newlines!
# Only create DIFF files for differences.
DIFF=`echo "$OUT" | grep "\*"`
if [ -n "$DIFF" ]
then
	echo "$DIFF" > $ResultsDir/Metrics/$FNAME.diff
fi
echo "$OUT" | grep -v "\*" > $ResultsDir/Metrics/$FNAME.csv

# If differences reported, record files with errors, generate visualizations
```