Friday, February 15, 2013

Computer Vision for Visual Effects


Useful Code

This page collects publicly-available resources and code that are useful for visual effects.
General Programs
Image Matting
Image Compositing and Editing
Features and Matching
Dense Correspondence and its Applications
Matchmoving
Motion Capture
3D Data Acquisition

Wednesday, January 16, 2013

Nomenclature in TeXnicCenter for a LaTeX Thesis/Dissertation


The nomencl package can be used to automatically generate a Nomenclature or List of Symbols, but it may require some changes to your regular LaTeX build process for full automation.

(The following steps have already been done on Windows systems in the CAE domain.)
  • Go to the Build / Define Output Profiles menu.
  • Click on "LaTeX => PDF" in the profiles list, and click the Copy button to create a new profile.
  • Name the new profile "LaTeX => PDF (Nomenclature)" and click OK.
  • Click on "LaTeX => PDF (Nomenclature)" in the profiles list, and in the Makeindex section on the right, ensure the following are set:
    • Do not use MakeIndex in this profile is unchecked (should be by default).
    • Path to MakeIndex executable points to the correct makeindex.exe (should be by default).
    • Command line arguments to pass to MakeIndex (including all quotes shown below):
          "%bm".nlo -s nomencl.ist -o "%bm".nls
      
      Note for people using TeXLive 2010: use %tm instead of %bm.
    • Click the OK button and the new profile is complete.
Now whenever you use the LaTeX => PDF (Nomenclature) profile, your document will automatically be scanned for \nomenclature commands, and the appropriate symbols and descriptions will be added to the nomenclature file.
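For reference, a minimal document skeleton using the nomencl package might look like the following (the entries and descriptions are placeholders of my own):

    \documentclass{report}
    \usepackage{nomencl}
    \makenomenclature   % writes the .nlo file that MakeIndex processes

    \begin{document}

    % Declare entries wherever the symbols are introduced.
    \nomenclature{$\sigma$}{Standard deviation}
    \nomenclature{$c$}{Speed of light in vacuum}
    \nomenclature{FPGA}{Field-programmable gate array}

    % Print the generated list, typically in the front matter.
    \printnomenclature

    \end{document}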

The default column widths in the nomenclature are wide enough for most uses, but if you have an unusually long abbreviation (for example, FPGA-SoPC), you'll need more room. Change the \printnomenclature command in thesis.tex to include an optional argument for the abbreviation column width. For example, \printnomenclature[1in].

The default sort order for the nomenclature is symbols (everything that's not a number or string), numbers, and strings (everything starting with a letter). Your committee or advisor may have other requirements for how to sort and group your list of symbols (for example: grouping lowercase letters, uppercase letters, lowercase Greek letters, uppercase Greek letters, abbreviations starting with a letter, and abbreviations starting with numbers).

You can use the optional argument to the \nomenclature command to adjust the sort order. Using \nomenclature[1]{$a$}{a variable in lowercase English} adds an additional prefix "1" to the sort order. All symbols with a prefix "1" will be grouped together, and then sorted by the default sort order. The default prefix for sort order is "a".
If you require complete control over the sort order, add the [noprefix] option to the \usepackage{nomencl} line and add the optional sort argument to every \nomenclature command, but this will be a lot of work for a long list of symbols. Most people get by fine with the default sort order, or by grouping and sorting with prefixes, as in the example below.
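One way to group entries with explicit sort prefixes (the prefix letters here are arbitrary labels of my own, not nomencl defaults) is:

    % A = lowercase Latin, B = uppercase Latin, C = lowercase Greek, D = abbreviations.
    % Entries sharing a prefix are grouped together, then sorted within the group.
    \nomenclature[A]{$a$}{A lowercase Latin variable}
    \nomenclature[B]{$F$}{An uppercase Latin variable}
    \nomenclature[C]{$\sigma$}{A lowercase Greek variable}
    \nomenclature[D]{FPGA}{Field-programmable gate array}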

Tuesday, September 11, 2012

Kernel Functions


Below is a list of some kernel functions available in the existing literature. As with previous articles, the LaTeX notation for each formula below is readily available from its alternate-text HTML tag. I cannot guarantee that all of them are perfectly correct, so use them at your own risk. Most of them link to the articles where they were originally used or proposed.

1. Linear Kernel

The Linear kernel is the simplest kernel function. It is given by the inner product <x,y> plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, i.e. KPCA with a linear kernel is the same as standard PCA.
k(x, y) = x^T y + c

2. Polynomial Kernel

The Polynomial kernel is a non-stationary kernel. Polynomial kernels are well suited for problems where all the training data is normalized.
k(x, y) = (\alpha x^T y + c)^d
Adjustable parameters are the slope alpha, the constant term c and the polynomial degree d.
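As a quick illustration, a minimal numpy sketch of the linear and polynomial kernels (function names and default parameter values are my own choices):

    import numpy as np

    def linear_kernel(x, y, c=0.0):
        # k(x, y) = x^T y + c
        return np.dot(x, y) + c

    def polynomial_kernel(x, y, alpha=1.0, c=1.0, d=2):
        # k(x, y) = (alpha * x^T y + c)^d
        return (alpha * np.dot(x, y) + c) ** d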

3. Gaussian Kernel

The Gaussian kernel is an example of a radial basis function (RBF) kernel.
k(x, y) = \exp\left(-\frac{ \lVert x-y \rVert ^2}{2\sigma^2}\right)
Alternatively, it could also be implemented using
k(x, y) = \exp\left(- \gamma \lVert x-y \rVert ^2 \right)
The adjustable parameter sigma plays a major role in the performance of the kernel, and should be carefully tuned to the problem at hand. If overestimated, the exponential will behave almost linearly and the higher-dimensional projection will start to lose its non-linear power. On the other hand, if underestimated, the function will lack regularization and the decision boundary will be highly sensitive to noise in the training data.
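A sketch of the full Gram-matrix computation for the Gaussian kernel (note that the two forms above are equivalent with gamma = 1/(2*sigma^2); the function name is my own):

    import numpy as np

    def gaussian_gram(X, Y, sigma=1.0):
        # Pairwise squared Euclidean distances between the rows of X and Y.
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Y**2, axis=1)[None, :]
                    - 2.0 * X @ Y.T)
        sq_dists = np.maximum(sq_dists, 0.0)  # guard against round-off negatives
        # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
        return np.exp(-sq_dists / (2.0 * sigma**2))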

4. Exponential Kernel

The exponential kernel is closely related to the Gaussian kernel, with only the square of the norm left out. It is also a radial basis function kernel.
k(x, y) = \exp\left(-\frac{ \lVert x-y \rVert }{2\sigma^2}\right)

5. Laplacian Kernel

The Laplace kernel is completely equivalent to the exponential kernel, except for being less sensitive to changes in the sigma parameter. Being equivalent, it is also a radial basis function kernel.
k(x, y) = \exp\left(- \frac{\lVert x-y \rVert }{\sigma}\right)
It is important to note that the observations made about the sigma parameter for the Gaussian kernel also apply to the Exponential and Laplacian kernels.

6. ANOVA Kernel

The ANOVA kernel is also a radial basis function kernel, just as the Gaussian and Laplacian kernels. It is said to perform well in multidimensional regression problems (Hofmann, 2008).
k(x, y) =  \sum_{k=1}^n  \exp (-\sigma (x^k - y^k)^2)^d

7. Hyperbolic Tangent (Sigmoid) Kernel

The Hyperbolic Tangent Kernel is also known as the Sigmoid Kernel and as the Multilayer Perceptron (MLP) kernel. The Sigmoid Kernel comes from the Neural Networks field, where the bipolar sigmoid function is often used as an activation function for artificial neurons.
k(x, y) = \tanh (\alpha x^T y + c)
It is interesting to note that an SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network. This kernel was quite popular for support vector machines due to its origin in neural network theory. Also, despite being only conditionally positive definite, it has been found to perform well in practice.
There are two adjustable parameters in the sigmoid kernel: the slope alpha and the intercept constant c. A common value for alpha is 1/N, where N is the data dimension. A more detailed study on sigmoid kernels can be found in the works by Hsuan-Tien Lin and Chih-Jen Lin.
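A minimal sketch using the common default alpha = 1/N mentioned above (parameter defaults are mine):

    import numpy as np

    def sigmoid_kernel(x, y, alpha=None, c=1.0):
        # k(x, y) = tanh(alpha * x^T y + c); alpha defaults to 1/N, N = dimension
        x, y = np.asarray(x, float), np.asarray(y, float)
        if alpha is None:
            alpha = 1.0 / x.shape[0]
        return np.tanh(alpha * np.dot(x, y) + c)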

8. Rational Quadratic Kernel

The Rational Quadratic kernel is less computationally intensive than the Gaussian kernel and can be used as an alternative when using the Gaussian becomes too expensive.
k(x, y) = 1 - \frac{\lVert x-y \rVert^2}{\lVert x-y \rVert^2 + c}

9. Multiquadric Kernel

The Multiquadric kernel can be used in the same situations as the Rational Quadratic kernel. As is the case with the Sigmoid kernel, it is an example of a non-positive definite kernel.
k(x, y) = \sqrt{\lVert x-y \rVert^2 + c^2}

10. Inverse Multiquadric Kernel

The Inverse Multiquadric kernel, like the Gaussian kernel, results in a kernel matrix with full rank (Micchelli, 1986) and thus forms an infinite-dimensional feature space.
k(x, y) = \frac{1}{\sqrt{\lVert x-y \rVert^2 + \theta^2}}

11. Circular Kernel

The circular kernel comes from a statistics perspective. It is an example of an isotropic stationary kernel and is positive definite in R2.
k(x, y) = \frac{2}{\pi} \arccos \left( - \frac{ \lVert x-y \rVert}{\sigma} \right) - \frac{2}{\pi} \frac{ \lVert x-y \rVert}{\sigma} \sqrt{1 - \left(\frac{ \lVert x-y \rVert}{\sigma} \right)^2}
\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}

12. Spherical Kernel

The spherical kernel is similar to the circular kernel, but is positive definite in R3.
k(x, y) = 1 - \frac{3}{2} \frac{\lVert x-y \rVert}{\sigma} + \frac{1}{2} \left( \frac{ \lVert x-y \rVert}{\sigma} \right)^3
\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}

13. Wave Kernel

The Wave kernel is also symmetric positive semi-definite (Huang, 2008).
k(x, y) = \frac{\theta}{\lVert x-y \rVert} \sin \frac{\lVert x-y \rVert }{\theta}

14. Power Kernel

The Power kernel is also known as the (unrectified) triangular kernel. It is an example of a scale-invariant kernel (Sahbi and Fleuret, 2004) and is also only conditionally positive definite.
k(x,y) = - \lVert x-y \rVert ^d

15. Log Kernel

The Log kernel seems to be particularly interesting for images, but is only conditionally positive definite.
k(x,y) = - \log (\lVert x-y \rVert ^d + 1)

16. Spline Kernel

The Spline kernel is given as a piece-wise cubic polynomial, as derived in the works by Gunn (1998).
k(x, y) = 1 + xy + xy~\min(x,y) - \frac{x+y}{2}~\min(x,y)^2+\frac{1}{3}\min(x,y)^3
However, what it actually means is:
k(x,y) = \prod_{i=1}^d \left( 1 + x_i y_i + x_i y_i \min(x_i, y_i) - \frac{x_i + y_i}{2} \min(x_i,y_i)^2 + \frac{\min(x_i,y_i)^3}{3} \right)
with x, y \in R^d.
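A direct numpy translation of the product form (a sketch; the function name is mine):

    import numpy as np

    def spline_kernel(x, y):
        # Product over dimensions of
        # 1 + x_i*y_i + x_i*y_i*m_i - (x_i + y_i)/2 * m_i^2 + m_i^3 / 3, with m_i = min(x_i, y_i)
        x, y = np.asarray(x, float), np.asarray(y, float)
        m = np.minimum(x, y)
        factors = 1 + x*y + x*y*m - (x + y)/2 * m**2 + m**3 / 3
        return float(np.prod(factors))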

17. B-Spline (Radial Basis Function) Kernel

The B-Spline kernel is defined on the interval [−1, 1]. It is given by the recursive formula:
k(x,y) = B_{2p+1}(x-y)
\mbox{where~} p \in N \mbox{~with~} B_{i+1} := B_i \otimes  B_0.
In the work by Bart Hamers it is given by:
k(x, y) = \prod_{p=1}^d B_{2n+1}(x_p - y_p)
Alternatively, B_n can be computed using the explicit expression (Fomel, 2000):
B_n(x) = \frac{1}{n!} \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^k \left(x + \frac{n+1}{2} - k\right)^n_+
where x_+ is defined as the truncated power function:
x^d_+ = \begin{cases} x^d, & \mbox{if }x > 0 \\  0, & \mbox{otherwise} \end{cases}
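A numpy/scipy sketch of the explicit expression above (function names and the default order are my own):

    import math
    import numpy as np
    from scipy.special import comb

    def b_spline(x, n):
        # Explicit truncated-power expression for B_n(x) (Fomel, 2000).
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for k in range(n + 2):  # k = 0, ..., n+1
            t = x + (n + 1) / 2.0 - k
            total += comb(n + 1, k) * (-1)**k * np.where(t > 0, t**n, 0.0)
        return total / math.factorial(n)

    def b_spline_kernel(x, y, p=1):
        # k(x, y) = product over dimensions of B_{2p+1}(x_d - y_d)
        diff = np.asarray(x, float) - np.asarray(y, float)
        return float(np.prod(b_spline(diff, 2 * p + 1)))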

18. Bessel Kernel

The Bessel kernel is well known in the theory of function spaces of fractional smoothness. It is given by:
k(x, y) = \frac{J_{\nu+1}( \sigma \lVert x-y \rVert)}{ \lVert x-y \rVert ^ {-n(\nu+1)} }
where J is the Bessel function of the first kind. However, in the kernlab documentation for R, the Bessel kernel is said to be:
k(x,x') = - \mathrm{Bessel}_{(\nu+1)}^n (\sigma \lVert x - x' \rVert^2)

19. Cauchy Kernel

The Cauchy kernel comes from the Cauchy distribution (Basak, 2008). It is a long-tailed kernel and can be used to give long-range influence and sensitivity over the high-dimensional space.
k(x, y) = \frac{1}{1 + \frac{\lVert x-y \rVert^2}{\sigma} }

20. Chi-Square Kernel

The Chi-Square kernel comes from the Chi-Square distribution.
k(x,y) = 1 - \sum_{i=1}^n \frac{(x_i-y_i)^2}{\frac{1}{2}(x_i+y_i)}

21. Histogram Intersection Kernel

The Histogram Intersection Kernel is also known as the Min Kernel and has been proven useful in image classification.
k(x,y) = \sum_{i=1}^n \min(x_i,y_i)
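For histogram vectors x and y this is essentially a one-liner (a sketch; the function name is mine):

    import numpy as np

    def histogram_intersection_kernel(x, y):
        # k(x, y) = sum_i min(x_i, y_i)
        return float(np.sum(np.minimum(x, y)))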

22. Generalized Histogram Intersection

The Generalized Histogram Intersection kernel is built based on the Histogram Intersection Kernel for image classification but applies in a much larger variety of contexts (Boughorbel, 2005). It is given by:
k(x,y) = \sum_{i=1}^m \min(|x_i|^\alpha,|y_i|^\beta)

23. Generalized T-Student Kernel

The Generalized T-Student kernel has been proven to be a Mercer kernel, thus having a positive semi-definite kernel matrix (Boughorbel, 2004). It is given by:
k(x,y) = \frac{1}{1 + \lVert x-y \rVert ^d}

24. Bayesian Kernel

The Bayesian kernel could be given as:
k(x,y) = \prod_{l=1}^N \kappa_l (x_l,y_l)
where
\kappa_l(a,b) = \sum_{c \in \{0;1\}} P(Y=c \mid X_l=a) ~ P(Y=c \mid X_l=b)
However, it really depends on the problem being modeled. For more information, please see the work by Alashwal, Deris and Othman, in which they used an SVM with Bayesian kernels for the prediction of protein-protein interactions.

25. Wavelet Kernel

The Wavelet kernel (Zhang et al, 2004) comes from Wavelet theory and is given as:
k(x,y) = \prod_{i=1}^N h(\frac{x_i-c_i}{a}) \:  h(\frac{y_i-c_i}{a})
Where a and c are the wavelet dilation and translation coefficients, respectively (the form presented above is a simplification, please see the original paper for details). A translation-invariant version of this kernel can be given as:
k(x,y) = \prod_{i=1}^N h(\frac{x_i-y_i}{a})
where in both forms h(x) denotes a mother wavelet function. In the paper by Li Zhang, Weida Zhou, and Licheng Jiao, the authors suggest a possible h(x):
h(x) = \cos(1.75x)\exp\left(-\frac{x^2}{2}\right)
which they also prove to be an admissible kernel function.
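A sketch of the translation-invariant form with the suggested mother wavelet (the dilation parameter a is left adjustable; function names are mine):

    import numpy as np

    def mother_wavelet(x):
        # h(x) = cos(1.75 x) * exp(-x^2 / 2), as suggested by Zhang et al. (2004)
        return np.cos(1.75 * x) * np.exp(-x**2 / 2.0)

    def wavelet_kernel(x, y, a=1.0):
        # Translation-invariant wavelet kernel: prod_i h((x_i - y_i) / a)
        x, y = np.asarray(x, float), np.asarray(y, float)
        return float(np.prod(mother_wavelet((x - y) / a)))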

Thursday, August 2, 2012

The Gurus of Image Classification



Francis Bach    http://www.di.ens.fr/~fbach/


Alyosha Efros    http://www.cs.cmu.edu/~efros/




Zaid Harchaoui    http://www.harchaoui.eu/zaid/en/


Martial Hebert    http://www.cs.cmu.edu/~hebert/

Christoph Lampert    http://pub.ist.ac.at/~chl/

Ivan Laptev    http://www.di.ens.fr/~laptev/

Aude Oliva     http://cvcl.mit.edu/aude.htm

Jean Ponce     http://www.di.ens.fr/~ponce/enindex.html

Deva Ramanan     http://www.ics.uci.edu/~dramanan/



Cordelia Schmid     http://lear.inrialpes.fr/people/schmid/



Josef Sivic     http://www.di.ens.fr/~josef/



Antonio Torralba     http://web.mit.edu/torralba/www/



Andrew Zisserman    http://www.robots.ox.ac.uk/~vgg/

Tutorial collections



  1. Supervised learning, SVMs, kernel methods, Francis Bach
  2. Instance-level recognition (part 1), Cordelia Schmid
  3. Instance-level recognition (part 2), Josef Sivic
  4. Large-scale visual search (part 1), Josef Sivic
  5. Large-scale visual search (part 2), Cordelia Schmid

  1. Bag-of-Features models for category-level classification, Cordelia Schmid
  2. Sparse coding and dictionary learning for image analysis, Jean Ponce
  3. Category-level localization, Andrew Zisserman

  1. Learning with structured inputs and outputs, Christoph Lampert
  2. Large-scale learning, Zaid Harchaoui
  3. More words and bigger pictures, David Forsyth

  1. Human actions (part 1), Ivan Laptev
  2. Human actions (part 2), David Forsyth
  3. Human pose estimation, Deva Ramanan
  4. Geometry for recognition and scene analysis, Martial Hebert

  1. Human vision, Aude Oliva
  2. Large scale recognition / Context / many categories, Antonio Torralba
  3. Big data, Alyosha Efros

Tuesday, July 24, 2012

Useful Image Processing Tutorial Web Sites

Here is a short list of useful links for image processing that I have found over the last few years. Although most use Photoshop, you can still learn from them and apply the same principles using other software.
The first couple of links aren't about image processing, but I feel it is necessary to learn much of the information contained within them to get a better understanding of digital photography.


This last one covers a Digital Development Processing (DDP) sharpening technique used for processing individual color channels before colorizing and combining. However, I have found the technique to be useful for much more.