In my continuing quest to upload all of my handwriting project code, I now have the utility tools up, including the required support modules:

hg: My homography module - pretty simple, though it did grow a bit beyond its original purpose. Includes the obvious code for constructing homographies and applying them (2D case only). Also includes some basic image querying stuff, which also needs access to the b-spline code. Plus an nD Gaussian blur, for no good reason.
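
The core operation is simple enough that a generic numpy version fits in a few lines - to be clear, this is an illustrative sketch, not the hg module's actual interface:

```python
import numpy as np

def apply_homography(H, points):
    """Apply a 3x3 homography H to an (n, 2) array of 2D points."""
    pts = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # to homogeneous
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]  # back to Cartesian coordinates

# e.g. a rotation by 30 degrees combined with a translation:
theta = np.radians(30.0)
H = np.array([[np.cos(theta), -np.sin(theta), 2.0],
              [np.sin(theta),  np.cos(theta), 1.0],
              [0.0,            0.0,           1.0]])
print(apply_homography(H, np.array([[0.0, 0.0], [1.0, 0.0]])))
```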

ply2: I usually stick to json and hdf5 files, but hit a problem with the handwriting project, as neither was a good fit: json does not really handle large amounts of data, whilst hdf5 is not human readable and has poor text support. Instead of creating an entirely new file format I decided to extend the ply format, as it could almost do what was required. I called it ply2, but have done so without the permission of the original developers - my apologies to them if they don't like this! The main changes are an additional type line in the header, to support its new role of containing stuff that isn't a mesh, plus support for typed meta lines in the header, as the comment system is crap. More importantly, I added support for elements/arrays with an arbitrary number of dimensions and strings (utf8), and cleaned it up. The module includes a specification that details all of this properly.
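
Purely to give a flavour of the changes (this is an invented illustration, not the actual grammar - the specification included with the module is the authoritative reference), a ply2 header for non-mesh data could look something like:

```
ply
format ascii 2.0
type corpus
meta string author "John Doe"
element document 3
property string text
end_header
```

The type line declares that the file contains a corpus rather than a mesh, the meta line is typed rather than a free-form comment, and the string property reflects the utf8 support.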

handwriting/corpus: Builds a corpus from a load of books downloaded from Project Gutenberg, for the purpose of generating a text sample for an author to write out, so the system can learn their handwriting. To use it you will need to download the documents, corpus_data.zip.
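
As a hint of what building such a corpus involves, here is a sketch (the filename is made up - any plain-text Project Gutenberg download will do) of gathering the character and bigram statistics needed to choose a sample that covers an author's writing well:

```python
from collections import Counter

# Hypothetical filename - any plain-text Project Gutenberg book works.
with open('book.txt', encoding='utf-8') as fin:
    text = fin.read()

# Character and bigram frequencies - the kind of statistics needed when
# selecting a sample that covers the glyphs and transitions an author uses.
chars = Counter(text)
bigrams = Counter(a + b for a, b in zip(text, text[1:]))

print(chars.most_common(10))
print(bigrams.most_common(10))
```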

handwriting/calibrate_printer: Does a closed loop colour calibration of a scanner-printer pair (I am taking closed loop here to mean the calibration is relative to the devices, not a specified standard). You print out a calibration target then scan it in. A GUI then allows you to learn a colour transform that, when applied to an image in the colour space of the scanner, adjusts it so that the printed result is as close as possible to the original. This works only if you use the same scanner to obtain your handwriting samples as to scan in the calibration target. Uses thin plate splines internally.
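
For anyone wanting the rough shape of that last step without digging into the code, here is a sketch using scipy's thin plate spline support rather than the module's own implementation - the patch data is stood in by random values:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in data: the colours of the target's patches as the scanner saw
# them after printing, and the colours they should map to. The real
# module has its own thin plate spline code - this is just scipy.
scanned = np.random.rand(64, 3)
desired = np.clip(scanned * 1.1 - 0.03, 0.0, 1.0)

tps = RBFInterpolator(scanned, desired, kernel='thin_plate_spline',
                      smoothing=1e-3)

# Apply the learned transform to an image in the scanner's colour space.
image = np.random.rand(32, 32, 3)
corrected = tps(image.reshape(-1, 3)).reshape(image.shape)
```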

Moving to Django
2016-05-01
My web server has been in desperate need of replacement for some time - its version of Ubuntu server was reaching end of life, and it kept running out of RAM because it was using the smallest of the old type of Amazon web services virtual machines. This meant that every time someone tried to brute force the 3Dami website's password (disturbingly common - the problem with using WordPress) or we had a major burst in traffic, the database would use up too much memory and be killed, to save the server as a whole. Rather irritating to say the least - the old setup actually had a cron job running every 10 minutes to restart the db server if it had gone down!

Have been waiting for the release of Ubuntu server 16.04, which is now installed on a 'one size bigger' server, with SSD storage instead of magnetic - this server should be a hell of a lot faster, and much more robust. Doesn't matter for this, my personal website, but for the 3Dami website (same server!) it will really help, particularly as we are about to start the advertising push for this year's event(s).

Unfortunately, as I discovered with the move, my personal website was not compatible with more recent versions of PHP, on account of running on a really old CMS. In fact, I have been using that CMS since I was an undergrad at university, which may give an indication of just how old and unsupported it is. As I am hardly going to downgrade PHP, it seems my bank holiday weekend is now an exercise in reimplementing my website using Django. Am aiming for feature parity and maintaining the urls from the previous website (as redirects to the new and improved urls). At the time of writing this post I have only got the bare minimum up - it's going to be at least another day before it's back to normal.
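
For reference, the redirects part is a one-liner per url with Django's RedirectView - a minimal sketch with made-up paths (using the modern path() function; the older url() function plays the same role in versions from around this time):

```python
# urls.py - keeping the old CMS's urls alive as permanent redirects.
from django.urls import path
from django.views.generic.base import RedirectView

urlpatterns = [
    path('index.php', RedirectView.as_view(url='/', permanent=True)),
    path('article.php', RedirectView.as_view(url='/articles/', permanent=True)),
]
```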

Sunday evening edit: Website now has feature parity with the old one, though it is looking really ugly and I have only copied over the most recent articles from the old system.

Monday morning edit: All of the old content is now up. Now I need to go make sure all the old (external) links work and make it pretty.

Monday lunchtime edit: All done!
Last weekend I pushed two more modules to my helit GitHub repository. Both are part of the 'My Text in Your Handwriting' project. Like the mean shift module, they are reasonably generic, and can be used to solve many tasks:


Fast Random Forest (frf):

This is the result of my frustration with using the scikit-learn random forest implementation. Whilst the actual random forest is perfectly acceptable, the file I/O is a joke (it's dependent on Python's pickle, which is not appropriate for loading/saving large numpy arrays). It had reached the point where I was spending more time loading the random forest from disk than actually using it, and this was in a GUI, so I could not ignore it. Hence I wrote my third (!) random forest implementation, as the other two did not have the necessary features.

To be clear, the use of 'fast' in the name is a reference to the file I/O more than the actual training/testing time. Whilst training/testing is fast, the code is also very generic, supporting many different scenarios, so there are certainly faster implementations out there. It's actually about the same speed as scikit-learn for training, though considerably more flexible. It's designed so that, with the exception of an index built after loading, the file layout and in-memory layout are identical - loading is therefore seriously fast: two reads for the header, then two reads per tree. Two reads because the first gets the size of the object, then the second gets the remaining data. Rebuilding the index takes almost no time as well. The file I/O is also exposed such that sending models over the network/between processes is trivial, so you can distribute both training and testing if you want.
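
To illustrate the two-read pattern (a generic sketch - the size prefix and layout here are assumptions for illustration, not frf's actual file format):

```python
import struct

def read_block(fin):
    # First read: a size prefix (assumed here to be 8 bytes, little-endian).
    (size,) = struct.unpack('<Q', fin.read(8))
    # Second read: the remaining bytes of the object, all at once.
    return fin.read(size - 8)

# Hypothetical file: a header block followed by one block per tree,
# with the tree count assumed to sit at the start of the header.
with open('forest.bin', 'rb') as fin:
    header = read_block(fin)
    tree_count = struct.unpack_from('<I', header, 0)[0]
    trees = [read_block(fin) for _ in range(tree_count)]
```

Because each block's bytes match the in-memory layout, 'loading' a tree is nothing more than these two reads plus the index rebuild.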

The feature set is currently fairly standard, but it supports multiple output features as well as the usual multiple input features, and any feature can be discrete or continuous. For the output that is the difference between a classification tree and a regression tree, though you can also have mixed mode with multiple outputs. The code is very modular and all in C - being my third go at this it's disturbingly neat ;-)

To figure out how to use it I would focus on the many tests provided, even if some of them are quite silly. The minion type classifier is my favourite, but none of them can be accused of sanity! It's about 8000 additions according to git - I feel sorry for my fingers.


Gaussian Belief Propagation:

I originally used GBP for my paper 'Integrating Stereo with Shape-from-Shading derived Orientation Information', many moons ago, but reimplemented it with a Python interface for the 'My Text in Your Handwriting' paper. Unlike the previous version it allows for an arbitrary graph - you can use a chain to get Kalman smoothing, or a 2D grid to uncurl a normal map (which is an included demo), for instance. It can also be used to solve sparse linear equations - a demo is included. About 4000 additions according to git - this brings the total for the handwriting project up to 20K, and I have yet to publish anything handwriting-specific!

Whilst coding it I added support for TRW-S, in addition to the usual BP-with-momentum, as an experiment to see what kind of difference it made. GBP is expected to converge to the global solution (if it exists), so you don't expect improved results, but it certainly converges faster. I also tried using it to solve linear equations - it can solve matrices that don't converge with normal BP, which surprised me. This result has only been confirmed experimentally however - I haven't figured out what the mathematical basis for it may be.
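
For the curious, the linear equation case is the easiest to write down - a textbook sketch of scalar GBP for solving Ax = b (after Shental et al.), which is not the module's interface but is the same algorithm at heart:

```python
import numpy as np

def gabp(A, b, iters=100):
    # Scalar Gaussian belief propagation for A x = b, with A symmetric.
    # The graph's edges are the non-zero off-diagonal entries of A;
    # convergence is guaranteed for e.g. diagonally dominant matrices.
    n = len(b)
    N = [[j for j in range(n) if j != i and A[i, j] != 0.0] for i in range(n)]
    P = np.zeros((n, n))  # precision messages; P[i, j] is the message i -> j
    h = np.zeros((n, n))  # information (precision times mean) messages
    for _ in range(iters):
        for i in range(n):
            for j in N[i]:
                # Everything node i knows, excluding what j previously sent.
                Pij = A[i, i] + sum(P[k, i] for k in N[i] if k != j)
                hij = b[i] + sum(h[k, i] for k in N[i] if k != j)
                P[i, j] = -A[i, j] ** 2 / Pij
                h[i, j] = -A[i, j] * hij / Pij
    # Combine all incoming messages to get each variable's marginal mean.
    Pm = np.array([A[i, i] + sum(P[k, i] for k in N[i]) for i in range(n)])
    hm = np.array([b[i] + sum(h[k, i] for k in N[i]) for i in range(n)])
    return hm / Pm

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp(A, b), np.linalg.solve(A, b))  # the two should agree
```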

Yesterday I pushed a rather large update to my GitHub code repository, for the mean shift module. Over 8K lines were changed when I applied the patch! (Development was in a different repository, so a patch was the easiest way to move it over.)

Technically, I had already shared all of the mean shift module's code relevant to my paper "My Text in Your Handwriting" some time ago, but the module had continued to be developed for another project, which then moved on and stopped using it. This is simply a good opportunity to share code that would otherwise remain unused.

Originally, the mean shift module just did mean shift, albeit for arbitrary data (I needed the traditional 2D segmentation scenario, but also 3D mode finding in the RGB colour cube). But then I needed kernel density estimates, and of course mean shift is just a technique for finding the modes of a kernel density estimate, so it made sense to support that, as the exact same interface works. It also gained the subspace constrained variant of mean shift (Gaussian kernel only) as an experiment that went nowhere - it was too computationally intensive to be justified. Plus I had gone to town with a proper set of spatial indexing data structures and a fully featured interface. This is the state the code was in before yesterday's update.
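
Stripped of kernels, indexing and interface, the basic iteration is tiny - a generic sketch (not the module's API) of finding a mode of a Gaussian kernel density estimate:

```python
import numpy as np

def mean_shift(data, start, bandwidth=1.0, iters=64):
    # Climb from `start` to a mode of the Gaussian KDE over `data` (n x d).
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = ((data - x) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)  # Gaussian kernel weights
        nx = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.allclose(nx, x):
            break
        x = nx
    return x

# 3D mode finding in the RGB colour cube, as mentioned above: start each
# pixel at its own colour and it converges to its cluster's mode.
rgb = np.random.rand(1000, 3)
print(mean_shift(rgb, rgb[0], bandwidth=0.2))
```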

The improvements are fairly extensive. Firstly, a lot of bugs have been squished, and there are now loads of test files - 34 in fact. Some generate pretty pictures :-) Secondly, it has much better support for directional distributions and composite kernels - you can, for instance, now have a feature vector where the first three entries are position and the next three rotation (angle axis representation) and build a proper distribution over the location and orientation of an object, by setting the first three to be Gaussian and the last three to be a mirrored Fisher distribution over the angle axis converted to a quaternion (which happens internally, so you can continue to use angle axis externally). This conversion system has to be configured, but is very flexible, within the constraints of the built in converters.
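
The angle axis to quaternion conversion itself is standard; a sketch of what happens internally, in its textbook form:

```python
import numpy as np

def angle_axis_to_quat(v):
    # Angle axis: direction is the rotation axis, length is the angle in
    # radians. Returns a unit quaternion as (w, x, y, z).
    angle = np.linalg.norm(v)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
    axis = v / angle
    return np.concatenate([[np.cos(0.5 * angle)],
                           np.sin(0.5 * angle) * axis])

print(angle_axis_to_quat(np.array([0.0, 0.0, np.pi / 2])))
```

The mirroring matters because q and -q represent the same rotation, so the distribution over quaternions has to treat antipodal points identically - hence a mirrored Fisher distribution rather than a plain one.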

The biggest improvement however is support for multiplying the distributions represented by MeanShift objects, using the technique found in the paper "Nonparametric Belief Propagation" by Sudderth et al. This works for all kernels, though it's only really optimised for Gaussian and Fisher (including composites of them and the mirrored Fisher); for everything else it drops back to Metropolis-Hastings for drawing, and Monte Carlo integration for estimating the probabilities fed to the Gibbs sampler - slow! This makes it trivial to implement the nonparametric belief propagation technique, as you can use MeanShift objects as messages/beliefs.
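
To see why a Gibbs sampler is needed at all, consider the exact product: two Gaussian KDEs multiply into a mixture with one component per pair of kernels, so k messages of n components each give n^k components. A sketch of the exact two-distribution case (generic numpy, not the module's API):

```python
import numpy as np

def kde_product_sample(xs, ys, h1, h2, count, rng):
    # Product of two KDEs with isotropic Gaussian kernels of bandwidths
    # h1 and h2: a mixture with len(xs) * len(ys) Gaussian components.
    d = xs.shape[1]
    # Component weights: how strongly each pair of kernels agrees.
    d2 = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / (h1 ** 2 + h2 ** 2)).ravel()
    w /= w.sum()
    # Each pair multiplies into a Gaussian with this precision and mean.
    prec = 1.0 / h1 ** 2 + 1.0 / h2 ** 2
    means = (xs[:, None, :] / h1 ** 2 + ys[None, :, :] / h2 ** 2) / prec
    means = means.reshape(-1, d)
    idx = rng.choice(len(w), size=count, p=w)
    return means[idx] + rng.normal(scale=prec ** -0.5, size=(count, d))

rng = np.random.default_rng(0)
a = rng.standard_normal((50, 2))
b = rng.standard_normal((60, 2)) + 0.5
print(kde_product_sample(a, b, 0.3, 0.3, 5, rng))
```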

This is a little unusual for me. Normally, I only upload code when it's attached to a paper, so anyone who uses the code will (I hope!) cite the associated paper. Sure, this module is used, heavily, by "My Text in Your Handwriting", but the entirety of this update is not. My only other module like this is my Dirichlet process Gaussian mixture model (which is also my most used module - people occasionally cite my background subtraction algorithm when using it, but there is no shared code, just a shared idea. Just my luck that my most popular code only rarely leads to citations!). But this code has been sitting around for a while, and it would be a waste not to publish it. So here it is - if anyone wants to collaborate on a research paper that uses it, then please email me!
