Wednesday, June 6, 2012

!Closing Remarks!

This is the closing blog post! The quarter has gone fast!

All the code is, and will continue to be, available here.

I'm going to write a bit about previous work (because I never wrote a post about that), list some papers I found helpful, go over each part of the pipeline, and conclude with what I learned and what I would do differently if I had a paradox-free time machine.



Previous Work:




The abstract for our project proposal was:


While handwriting provides an efficient means to write mathematical symbols quickly, it is a poor medium for rapid exchange and editing of documents. Meanwhile, advanced typesetting systems like LaTeX and MathML have provided an environment where mathematical symbols can be typeset with precision, but at the cost of typing time and a steep learning curve. In order to facilitate the exchange, preservation and ease of editing of mathematical documents, we propose a method of offline handwritten equation recognition. Our system takes a handwritten document, for example a student's calculus homework, then partitions, classifies and parses the document into LaTeX.



The following two links show similar, but on-line, versions of what we were trying to accomplish:

http://webdemo.visionobjects.com/equation.html
http://detexify.kirelabs.org/classify.html





In reading through papers, I found that this project was highly ambitious. Survey papers repeatedly mentioned that classification and parsing/verification were mostly tackled as separate research problems.

While there is the standard MNIST dataset for digits, there didn't seem to be an equally standard dataset for handwritten mathematical symbols. The closest thing to that is the Infty project. This dataset was used in some of the papers we read while researching the project; unfortunately, we never got the dataset to work.





Papers That Helped the Most:


Entire project:
This was the paper we read when beginning the project:
http://www.eecs.ucf.edu/courses/cap6938/fall2008/penui/readings/matsakis-MEng-99.pdf



Classification/Feature Descriptors:
http://hal.archives-ouvertes.fr/docs/00/54/85/12/PDF/hog_cvpr2005.pdf
http://www.umiacs.umd.edu/~joseph/support-vector-machines4.pdf
http://eprints.pascal-network.org/archive/00004062/01/ShalevSiSr07.pdf
http://eprints.pascal-network.org/archive/00003046/01/bosch07a.pdf

Verification/Parsing:
http://www.springerlink.com/content/6217472316636n13/fulltext.pdf?MUD=MP


Synthetic data (something we considered but abandoned):
http://www.stanford.edu/~acoates/papers/coatesetal_icdar_2011.pdf


Meta-links (These links had links to useful papers):
http://www.inftyproject.org/en/articles_ocr.html
http://www.handwritten.net/mv/papers/ 




My Work:


Segmentation:

There were two schemes attempted for segmentation.

Both methods were based on Stack Overflow posts, and both work by finding connected components.

This caused a major problem: the segmenter couldn't detect symbols made up of multiple disconnected components, most egregiously '='.

This was annoying, but in the interest of finishing on time we postponed fixing it, possibly by implementing a sliding-window way of detecting symbols.


Extraction:

Once the bounding boxes were found by the segmenter, we had to extract the symbols. This involved embedding the symbols in new matrices and outputting positional information about how each character relates to the previous ones. By calculating the centroid of each bounding box, we were able to output this information to help with parsing.
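
Here's a rough sketch of that step. Our real extraction code is MATLAB; this is an illustrative Python version, and the box format and padding amount are assumptions rather than our exact implementation.

import numpy as np

def extract_symbol(page, box, pad=4):
    # box is assumed to be (row_min, col_min, row_max, col_max) in page coordinates
    r0, c0, r1, c1 = box
    crop = page[r0:r1, c0:c1]
    # embed the symbol in a fresh matrix with a whitespace border
    out = np.zeros((crop.shape[0] + 2 * pad, crop.shape[1] + 2 * pad), dtype=page.dtype)
    out[pad:pad + crop.shape[0], pad:pad + crop.shape[1]] = crop
    # centroid of the bounding box, kept in page coordinates so the parser
    # can compare it against the centroids of neighboring symbols
    centroid = ((r0 + r1) / 2.0, (c0 + c1) / 2.0)
    return out, centroid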


Classification:


We spent the first few weeks toying with logistic regression. After we implemented cross-validation, we found the classifier performed abysmally.

Later, we switched to SVMs and used vl_feat. By modifying example code for object recognition, we were able to increase accuracy tremendously.

The details of this switch are described in this earlier post.

Parsing:

Parsing/Verification ended up being an interesting project in itself. After a week or two of hacky false starts, I was able to put together something with limited functionality using PLY, a Python implementation of lex/yacc.
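
For a flavor of what the PLY approach looks like, here is a minimal sketch in the same spirit. This is not our actual grammar; the token set, symbol codes and rules are made up for illustration (except that 0 -> \forall matches the example from an earlier post).

import ply.lex as lex
import ply.yacc as yacc

# The classifier's output is assumed to arrive as a string of single-character
# symbol codes; the real codes and tokens would come from our class list.
tokens = ('FORALL', 'IN', 'VAR')

t_FORALL = r'0'        # hypothetical code for the universal quantifier
t_IN = r'4'            # hypothetical code for set membership
t_VAR = r'[a-z]'
t_ignore = ' '

def t_error(t):
    t.lexer.skip(1)

def p_exp_quant(p):
    'exp : FORALL VAR exp'
    p[0] = ('QUANT', ' \\forall ', p[2], p[3])

def p_exp_binop(p):
    'exp : VAR IN VAR'
    p[0] = ('BINOP', ' \\in', ('VAR', p[1]), ('VAR', p[3]))

def p_error(p):
    print('syntax error at', p)

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse('0x x4y'))
# -> ('QUANT', ' \\forall ', 'x', ('BINOP', ' \\in', ('VAR', 'x'), ('VAR', 'y')))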



Future:


Despite this being the 'final post', there are a few things we're still working on.

Segmentation will probably not be modified this late.

Extraction isn't tested completely or integrated into the parser yet.

Classification has a few small adjustments to be tested.

Parsing can't handle all subscripts/superscripts yet; it fails in some edge cases, such as when multiple variables have both superscripts and subscripts one after the other.

What I've Learned/Would Do Differently!:

I took this class at the same time as CSE141/141L. 141L is another class with a major project, and in the first few weeks I had trouble balancing the two. Time management is very important, and I think if I had planned more of the quarter earlier I could have gotten further.

I regret not getting segmentation working for the '=' sign!!

I regret never using Mechanical Turk to generate a dataset.

I've learned a lot about SVMs and about working on a large, open-ended project.

Scope and focus have been recurring themes. As the quarter progressed, I had to adjust the scope of what I wanted to accomplish while attempting to focus on the parts that were most essential.


Conclusion:

Thanks to everyone in class for their helpful comments and advice.

This post hasn't had any pictures, so please enjoy this picture of Superman programming:


Pictured: Computer Vision





Monday, June 4, 2012

Parsing update

Accuracy with only 3 binary training types




All classes binary



Accuracy didn't increase.

Parsing

My parser now creates a syntax tree as a list of tuples. 

The form it outputs is like this:



('EXP', 
   [('QUANT', ' \\forall ', ' x', 

      ('BINOP', ' \\in', ('VAR', ' x'),

          ('BINOP', ' \\leftrightarrow', ('VAR', ' y'), 

              ('BINOP', ' \\subset', ('VAR', ' x'), ('VAR', ' y')
              )
           )
       )
    )]
)



This works for correctly classified examples from first-order logic.
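
Reading LaTeX off this tree (TODO item 2 below) should just be a recursive walk. Here's a minimal sketch, assuming the node shapes shown above (the final node types may end up different):

def to_latex(node):
    tag = node[0]
    if tag == 'EXP':
        # ('EXP', [children])
        return ''.join(to_latex(child) for child in node[1])
    if tag == 'QUANT':
        # ('QUANT', latex_symbol, bound_variable, body)
        return node[1] + node[2] + to_latex(node[3])
    if tag == 'BINOP':
        # ('BINOP', latex_symbol, left, right)
        return to_latex(node[2]) + node[1] + to_latex(node[3])
    if tag == 'VAR':
        return node[1]
    raise ValueError('unknown node type: %r' % (tag,))

# The tree above would come out roughly as:
#   \forall x x \in y \leftrightarrow x \subset y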


TODO:

Parser:
   1) Add rules with positional information for {super,sub}scripts
   2) Figure out how to walk the tree and print out LaTeX
   3) Add dummy rules/node-types for when the classifier makes mistakes
   4) Figure out how to add back-off rules to make more possible trees,
          and figure out how to select one as the most probable tree

Classifier:
    Add sigma and integral signs
    cross-validation!!!
    play with different settings.

Extraction:
    Output positional information



Wednesday, May 30, 2012

Parser Progress, Dataset update

We've completed adding digits to the dataset.

There is an inconsistency in our dataset: a few of the symbols were made binary. We're going to test whether this helps classification by making all the data binary and re-training the classifier.

If accuracy improves, we'll make binary thresholding part of pre-processing. If it doesn't help, we'll revert the data and re-add the now binary images as non-binary.
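
If binarization does end up in pre-processing, it should only be a couple of lines. Something like this OpenCV/Otsu version (purely illustrative; we may well do it in MATLAB instead):

import cv2

def binarize(gray):
    # gray is an 8-bit single-channel image; Otsu picks the threshold automatically
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary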

_______________________________________________________________________________

I've begun work on the parser. As I haven't taken compilers yet, I've found it hard to begin, but I've been exploring the problem space.

I've started to play with PLY, a python implementation of lex and yacc. Using this I've made a hacky thing that can substitute some of the characters.

The parser has three overlapping goals:

1) Substituting the classifier's coded output with the corresponding LaTeX, e.g.:
0 -> \forall

2) Using the CFG and positional data to make the subscripts and superscripts work:
56 with positional data "6 is to the upper right of 5" -> 5^{6}

3) When the parser encounters syntactic errors, it should "back off" to the soft classifier results (i.e., the top 3 classes the symbol could be) and pick a symbol that fits the CFG (see the sketch after this list)
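
Here is a brute-force sketch of goal 3. It assumes the classifier gives us a ranked list of candidate classes for each symbol and that the parser raises SyntaxError when input doesn't fit the CFG; both are assumptions, since the real interface isn't settled yet.

from itertools import product

def parse_with_backoff(candidates_per_symbol, parse, top_k=3):
    # candidates_per_symbol: one list of classes per symbol, most likely first
    trimmed = [cands[:top_k] for cands in candidates_per_symbol]
    for combo in product(*trimmed):
        try:
            return parse(list(combo))
        except SyntaxError:
            continue
    return None  # nothing in the top-k candidates fit the grammar

This explodes combinatorially, so a real version would need to order the combinations by classifier score and prune aggressively.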


I talked to Professor Jhala and he gave me some advice:

I have to do 3 steps:

1) Create a parse tree based on the code/position info from the classifier.

2) Evaluate different possible trees by "backing-off" some of the classifications.

3) Read off the tree to translate it to LaTeX


He also pointed me to the following links:

An OCaml parsing homework assignment for CSE130

Happy, a parser generator for Haskell

A blog post about using Happy to write a compiler that might be useful

Wednesday, May 23, 2012

Expanding to Digits, Beginning Parser Stuff

We are expanding our dataset to include digits, which will allow us to start fooling around with super/subscripts!


So far we are missing 9 and 7 from the dataset, but today after class I'm going to add the rest.




FO logic symbols and digits 0-9 except 9,7 (25 total classes)


Parsing

I've modified the basic python script I've been using for parsing to include stuff about subscripts. 

This code will most likely be completely changed within the week, but this is simple prototyping to get a feel for the problem space.

I'm searching out tools and papers to help. I'm planning to try to finish parsing as fast as possible so that I can get back to perfecting the classification/extraction parts of the pipeline.

Right now I'm assuming that the extraction file will output position information similar to what is described in this paper[0]. The following three images are all from [0]. I'm thinking that, in addition to direction information, distance will also have to be taken into account, so that implicit newlines can be added.



This will then be used to create a graph like so:



Then the graph representation will be used to determine whether a symbol is a superscript/subscript/other and whether the symbol should back off its classification to another possible class (e.g., negation shouldn't be a superscript, so if the second most probable class is a 2, the 2 should be output instead).
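
As a concrete sketch of the positional decision (the thresholds and label set here are guesses on my part, not measured values):

def relative_position(prev_box, cur_box):
    # boxes are (x_min, y_min, x_max, y_max), with y growing downward
    px, py = (prev_box[0] + prev_box[2]) / 2.0, (prev_box[1] + prev_box[3]) / 2.0
    cx, cy = (cur_box[0] + cur_box[2]) / 2.0, (cur_box[1] + cur_box[3]) / 2.0
    prev_height = prev_box[3] - prev_box[1]
    if cx < px:
        return 'newline'      # big jump back to the left (a distance check belongs here too)
    if cy < py - 0.5 * prev_height:
        return 'superscript'
    if cy > py + 0.5 * prev_height:
        return 'subscript'
    return 'inline'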



I used some of the holdout data to make symbols to represent 2^12. I hardcoded the positional information and edited the parser file to use this data.

Output of the play parser for superscripts




Some papers I'm going to be looking at:

[0] Verification of Mathematical Formulae Based on a Combination of Context-Free Grammar and Tree Grammar

http://www.mkm-ig.org/meetings/mkm06/Presentations/Sexton.pdf


Towards a Parser for Mathematical Formula Recognition


Tools I'm looking into are:










Monday, May 14, 2012

The Quest for the Proper Proper Subset Classification

Switch to SVM 

We’ve switched from logistic regression with HOG features to Support Vector Machines using a variant of SIFT descriptors. 

Our code is a slightly modified version of this vl_feat example code.

Here is a description of what the code does, as far as I currently understand it:

The code first trains a bag of visual words “vocabulary” based on PHOW descriptors (a variant of dense SIFT, implemented in vl_feat and described in this paper: Image Classification using Random Forests and Ferns). The "vocabulary" is created by running k-means on the PHOW descriptors.

The vocabulary comes from a subset of the training data, and afterwards all the training data is given a feature description based on PHOW descriptors and the vocabulary.

A feature map is then computed using the chi-squared kernel.  
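
To make the stages concrete, here is a rough Python/scikit-learn analogue of the same pipeline. This is not our code (which is MATLAB + vl_feat): the dense patch descriptor below is a crude stand-in for PHOW, and AdditiveChi2Sampler plays the role of the homogeneous chi-squared kernel map (vl_homkermap).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC

def dense_patch_descriptors(image, patch=8, step=4):
    # crude stand-in for PHOW: raw pixel patches sampled on a dense grid
    rows, cols = image.shape
    return np.array([image[r:r + patch, c:c + patch].ravel()
                     for r in range(0, rows - patch + 1, step)
                     for c in range(0, cols - patch + 1, step)], dtype=float)

def bag_of_words_histograms(images, n_words=300):
    descs = [dense_patch_descriptors(im) for im in images]
    vocab = KMeans(n_clusters=n_words).fit(np.vstack(descs))   # the "vocabulary"
    hists = np.array([np.bincount(vocab.predict(d), minlength=n_words)
                      for d in descs], dtype=float)
    return hists / hists.sum(axis=1, keepdims=True), vocab

def train_classifier(train_images, train_labels):
    hists, vocab = bag_of_words_histograms(train_images)
    feature_map = AdditiveChi2Sampler()          # chi-squared kernel feature map
    svm = LinearSVC().fit(feature_map.fit_transform(hists), train_labels)
    return svm, feature_map, vocab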


The classification is done through one-vs-all classification. I'm thinking about possibly altering it to train a bunch of binary classifiers and then use a decision tree or a voting scheme. Also, I need to make the output soft for the parser (see the sketch below).
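
Making the output soft could be as simple as ranking the one-vs-all scores and handing the top few classes to the parser. A sketch (names are illustrative, not the real interface):

import numpy as np

def top_k_candidates(scores, class_names, k=3):
    # scores: one-vs-all SVM scores for a single symbol, higher = more likely
    order = np.argsort(scores)[::-1][:k]
    return [(class_names[i], float(scores[i])) for i in order]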

This is a good tutorial for SVMs.





Experimental Results


Methodology: We have 300 samples for each of the 17 classes. I wrote a python script that randomly selects 20 examples of each class to be "holdout" data. I haven't modified the code yet to do cross-validation.
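
The split itself is simple; roughly like this (illustrative, not the exact script):

import random

def split_holdout(samples_by_class, n_holdout=20, seed=0):
    # samples_by_class: dict mapping class name -> list of sample images/paths
    rng = random.Random(seed)
    train, holdout = {}, {}
    for cls, samples in samples_by_class.items():
        shuffled = list(samples)
        rng.shuffle(shuffled)
        holdout[cls] = shuffled[:n_holdout]
        train[cls] = shuffled[n_holdout:]
    return train, holdout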

The 20 holdout samples per class were used to create various formulas in first-order logic (the definition of a function and the definition of a proper subset). When the segmentation and extraction get better, new novel data will be used instead of these holdout samples for testing the effectiveness of the system.

At first I used 50 random samples as training and the remaining 230 as test. I observed that most of the errors were due to classifying symbols as negation.

Observing the (scores, classes) output of the SVM one-vs-all classifier, I saw that for the "negation errors" the correct label was usually the 2nd most likely.


Here is an example of the scores for a "z" that was mistaken for a "negation":

    scores   class
   -0.2888   11 (neg)        
   -0.3795   17 (z)      
   -0.4663    9 (left paren)        
   -0.4823    7 (if then)        
   -0.5354   12 (or)            
   -0.5367   16 (y)        
   -0.5602   10 (ne)          
   -0.5620   13 (right paren)            
   -0.5640   14 (subset)        
   -0.5786    4 (elem)      
   -0.6220   15 (x)        
   -0.6455    2 (R)          
   -0.6618    5 (exist)      
   -0.6747    3 (and)      
   -0.6786    1 (F)      
   -0.7181    6 (for all)    
   -0.7594    8 (iff)      




I decided to create alternate samples that used a tilde instead of  the hook-like symbol for negation, to see how that would affect the results.

In the end I trained 4 models and tested each of them on 2 different formulas:

Models:
1) 50 train
2) 230 train
3) 50 train (tilde for negation)
4) 230 train (tilde for negation)

Formulas:
1) Function definition

Correct classification for function definition 



2) Proper Subset definition


Proper classification, definition of proper subset




PARAMS that may need to be tweaked:


There are a few parameters in the SVM code. I need to do more testing to see how tweaking these will affect classification results.

Number of words: This is the size of the "vocabulary" created by k-means. It was originally set to 300, but I increased it to 600. I'm not sure yet if this helped or hurt; more testing is needed.

K-means algorithm: The vl_feat implementation of k-means can use the standard Lloyd's algorithm or UCSD professor Elkan's accelerated k-means algorithm. Current results use Elkan's algorithm, but I'm not sure if this sped-up version gives the same results for our (relatively) small dataset.

Kernel: The feature map is computed using the vl_homkermap function. This function has a few parameters, and other kernel mappings could potentially be swapped in for it.



Confusion Matrices




"Hook" negation:
50 training examples, 230 test




230 training, 50 test




Tilde negation:


50 train, 230 test, tilde for negation



230 train, 50 test, tilde for negation




Parsing Results for Holdout Data Formulas






Function Formula (7 symbols from holdout):


50 training ( 0 errors!)



230 training ( 0 errors!)


50 training with tilde for negation (2 errors)



230 training with tilde for negation (1 error, left paren for x)




Proper Subset Formula (30 symbols from holdout):


50 training (7 errors)





230 train (8 errors)







50 training with tilde for negation (4 errors)
230 training with tilde for negation (4 errors)


Current Goals:






Classification seems "good enough"; next to do is:

1) Implementing the graph structure/Context Free Grammar for parsing
2) Improving segmentation and extraction 
3) Expanding the data to include numbers.


Sunday, May 6, 2012

Non-Maximal Suppression


One problem faced when locating characters in an image is extraneous bounding boxes. When the segmenter is done detecting characters and placing bounding boxes on the image, there will be some false positives, as well as overlapping bounding boxes.

One way to deal with this problem is Non-Maximal Suppression (NMS). The idea behind NMS is to cluster groups of bounding boxes and then apply a heuristic to determine the best box in each cluster. I used k-means to cluster the bounding boxes and, for each cluster, chose the bounding box closest to that cluster center.
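
In an illustrative Python version (our actual code is MATLAB, and k, the number of clusters, is assumed to be chosen elsewhere), the scheme looks like this:

import numpy as np
from sklearn.cluster import KMeans

def nms_by_kmeans(boxes, k):
    # boxes: (N, 4) array of (x_min, y_min, x_max, y_max)
    centers = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2.0,
                               (boxes[:, 1] + boxes[:, 3]) / 2.0])
    km = KMeans(n_clusters=k).fit(centers)
    kept = []
    for i, c in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == i)[0]
        # keep the box whose center is closest to this cluster's center
        dists = np.linalg.norm(centers[members] - c, axis=1)
        kept.append(boxes[members[np.argmin(dists)]])
    return np.array(kept)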

Before NMS, on an input containing 7 characters, there were 21 bounding boxes found. After NMS, there were only 13 bounding boxes left, much closer to the real number of characters in the input. Here is a plot of the bounding boxes after NMS has been applied. Although it is hard to see a visual difference between the plots before and after NMS, you can see that the remaining boxes still contain characters.

Image with NMS applied



Goals:
1)   Become familiar with VLFeat
2)   Run cross validation, and test some more data to see how NMS performs with different expressions
3)   Try SIFT vs HOG and compare accuracy results.
4)   Fix the extraction method of data from the bounding boxes. Currently, the centroid of the character in the bounding box is found, and then extracted by carving out the pixels around it in the original image. A more useful approach would be to pad the information contained in the bounding box with whitespace, which will help when extracting characters that are nearby each other, such as exponents, or the range of an integral.

Wednesday, April 25, 2012

!Useful Matlab Tips!

Matlab feature: varargin

Matlab documentation for the feature: http://www.mathworks.com/help/techdoc/ref/varargin.html

Description: This allows you to specify a variable number of arguments as input.

How this helped me:  

A major part of our development process was working on parts of the pipeline as scripts to get functionality to work. In this phase, we set default values to load for many things.

After we got basic functionality, this became a slight problem as we had to turn the scripts into functions to piece together all the code.

By accepting a variable number of arguments, we were able to keep the script functionality for quick testing while still having the ability to piece things together.

The most useful place this happened was in getDataMat.m
This function takes one of our dataset images, reads in each cell that contains a symbol, and outputs a matrix of the data and the corresponding class descriptions.

getDataMat turns each of these symbols into a matrix

By using varargin, I was able to make it have 3 modes of operation:

1) The original script functionality

2) Ability to specify a directory, from which all the *.jpg images will be grabbed.
EX: [x y] = getDataMat('directory', 'images/logic');

3) Ability to specify a variable number of file paths to read in:
EX: [x y] = getDataMat('images/logic/exist_1.jpg','images/logic/forAll_1.jpg');
[x y] = getDataMat('images/logic/forAll_1.jpg');

HOWTO use/Screenshots of relevant code: (I hope this doesn't contain bugs!!)

To use it, just include varargin as a parameter.
EX: function [data_x data_y] = getDataMat(varargin)

Then do:
    nVarargs = length(varargin);

and have an if statement that changes the functionality based on the number of arguments.

Note: varargin is a cell array, so you will have to index it like so: varargin{i}

This can cause some tricky type errors, so be careful!!

Example usage!





By using this feature, tedious coding to play with different subsets of the data was eliminated! I hope it helps you too!

Useful links to help use this feature:

http://blogs.mathworks.com/pick/2008/01/08/advanced-matlab-varargin-and-nargin-variable-inputs-to-a-function/
http://makarandtapaswi.wordpress.com/2009/07/15/varargin-matlab/