Can't claim much credit for this - the first two authors (Walter Del Pozzo and Christopher Berry) did most of the work, and I'm no physicist! From a machine learning point of view this paper is pretty straightforward, however: readings from the LIGO detectors are run through a model that uses the known physics/noise to generate posterior samples of where binary neutron stars are. They then used my variational Dirichlet Process Gaussian Mixture Model code to fit a probability distribution to those samples, from which they can extract a credible region for the location of the stars. This could then be used, for instance, to drive a telescope search for a matching electromagnetic (optical) counterpart.
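
To give a flavour of the fitting step, here is a minimal sketch in Python. It uses scikit-learn's variational BayesianGaussianMixture as a stand-in for my own DPGMM module, and the sample array, component cap and 90% credible level are all made-up assumptions for illustration:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Stand-in for posterior samples of source location (e.g. RA, dec, distance).
    samples = np.random.randn(5000, 3)

    # Truncated Dirichlet process approximation: allow plenty of components
    # and let the variational inference prune the ones it does not need.
    dpgmm = BayesianGaussianMixture(
        n_components=32,
        weight_concentration_prior_type='dirichlet_process',
        max_iter=500,
    ).fit(samples)

    # A 90% credible region is a density superlevel set containing 90% of the
    # probability mass; estimate the density threshold from the samples.
    log_density = dpgmm.score_samples(samples)
    threshold = np.quantile(log_density, 1.0 - 0.9)
    inside = log_density >= threshold  # samples that fall in the 90% region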

Ignoring the fact that giant machines that cost hundreds of millions are involved, which is always cool (meanwhile I have to beg for a computer with a GPU in it...), what I really like about this paper is the rigorous testing. Simulated data is used to verify the model, with the Kolmogorov–Smirnov test used to demonstrate that the fitted distribution genuinely matches the distribution of the data (on which note, my DPGMM code is rock solid - this is the evidence!). ML papers rarely go this far. Maybe this can be attributed to ML being a young field, but ML systems are now making decisions that change lives - we need both theory and this level of rigour if we are to avoid catastrophe. Getting there is going to be a challenge though, as there is genuine hostility towards it in some circles. And ignorance. Yes, doing things the proper way is more work, but it is also the future.
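
If you want to run a similar sanity check yourself, one approach (my sketch, not necessarily the paper's exact procedure) is to draw samples from the fitted mixture and compare each one-dimensional marginal against held-out data with a two-sample Kolmogorov–Smirnov test:

    import numpy as np
    from scipy.stats import ks_2samp

    # dpgmm is the mixture fitted in the earlier sketch; held_out stands in
    # for posterior samples that were not used for fitting, which avoids
    # biasing the test.
    held_out = np.random.randn(5000, 3)
    model_draws, _ = dpgmm.sample(5000)

    for dim in range(held_out.shape[1]):
        stat, p_value = ks_2samp(model_draws[:, dim], held_out[:, dim])
        print(f'dimension {dim}: KS statistic {stat:.4f}, p-value {p_value:.3f}')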

Anyway, that's enough of me hollering from my soap box - here is the arXiv link to it:
Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

I will update this once it has been accepted somewhere - hopefully the venue it has been submitted to!
Texture Stationarization: Turning Photos into Tileable Textures
The title of this paper follows a classic structure - a technical main title followed by a descriptive subtitle, so everyone understands exactly what it does. The best part is that it 'just works', even on photos you would expect to break it. A texture artist can avoid the tedious and time-consuming activity of preparing tileable textures and focus on actually texturing.

To explain the technical part of the title, the 'obvious' solution to making tileable textures is to run a texture synthesis algorithm (in this case PatchMatch) with the constraints wrapping around the edges, so the output tiles. Sometimes this works, but more often than not it fails. This is because humans are really good at spotting the odd one out. This is where 'stationarization' comes in - a stationary detail is one repeated throughout a texture. A non-stationary detail is the odd one out that humans notice. So this algorithm adjusts the texture synthesis to avoid including non-stationary details. And that just works!
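
To illustrate the wrap-around part (my own toy sketch, not the paper's code): if patch coordinates are taken modulo the output size, a patch that hangs off the right edge continues on the left, so whatever the synthesis produces tiles by construction:

    import numpy as np

    def wrapped_patch(image, y, x, size):
        """Extract a size-by-size patch with toroidal (wrap-around) indexing."""
        h, w = image.shape[:2]
        rows = np.arange(y, y + size) % h  # row indices wrap top to bottom
        cols = np.arange(x, x + size) % w  # column indices wrap left to right
        return image[np.ix_(rows, cols)]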

During production we didn't create a video, as Joep had made this great online supplementary results navigator instead, so we decided to make up for it with a particularly crazy spotlight video:

(Extended edition. The original had to be 30 seconds long, so this adds back the material we had to drop, plus includes a title slide.)

Stuart came up with the initial storyboard, and Joep was our 'actor', whilst I did all the VFX and 3D graphics. I probably went too far, but they were my weekends and I like a good dose of absurdity :-)

You may find everything on the project page, but here is the paper:

"Texture Stationarization: Turning Photos into Tileable Textures." by Joep Moritz, Stuart James, Tom S. F. Haines, Tobias Ritschel & Tim Weyrich. Computer Graphics Forum (Proc. Eurographics), 36(2), 2017 (if 75 megabytes is too much there is also a low resolution version)

You can also download the supplementary material, though I would strongly recommend browsing it online instead.
Five Years of 3Dami
As tradition dictates, here is my annual 3Dami post, about three months late. However, the lateness is advantageous, as there are now three things worth mentioning:

1. This year's event was smaller than last year's - unfortunately we could not get the funding. We still ran three teams at UCL, and whilst the budget was tight the event ran the same way it usually does. Further improvements were made to the process of running the event - we may not have mastered funding, but I think it's safe to say that after five years we have got pretty good at making films with college students. This was also the first year we went all out on Cycles - after last year's experiment using the CS cluster we had the confidence to dial everything up to 11, with one team rendering frames that took as long as four hours each. The final films are available on the 3Dami website.

2. I, alongside Peter and Monique, gave another talk on education at the Blender Conference 2016, this time on teaching younger students, or pre-3Dami students as I like to think of them ;-) Here it is:

3. I finally got around to moving the custom render farm and asset manager we use at 3Dami from a zip file on the 3Dami website into a proper repository on GitHub: Render Farm Asset Manager. It's a little unusual, but hopefully others may start using it, and even contribute new code back!
In my continuing quest to upload all of my handwriting project code I have now got the utility tools up, including the required support modules:

hg: My homography module - pretty simple, though it did grow a bit beyond its original purpose. Includes the obvious code for constructing homographies and applying them (2D case only). Also includes some basic image querying stuff, as that also needs access to the b-spline code. Plus an nD Gaussian blur, for no good reason.
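
For anyone unfamiliar with the above, here is a generic sketch of the two core operations (my illustration, not the module's actual API): estimating a 2D homography from point correspondences via the direct linear transform, and applying it to points:

    import numpy as np

    def fit_homography(src, dst):
        """Estimate H such that dst ~ H @ src, from n >= 4 correspondences."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # H is the null vector of the constraint matrix: take the right
        # singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)

    def apply_homography(h_mat, pts):
        """Map an array of 2D points through a homography, dividing out scale."""
        pts = np.hstack([pts, np.ones((len(pts), 1))])
        out = pts @ h_mat.T
        return out[:, :2] / out[:, 2:3]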

ply2: I usually stick to json and hdf5 files, but hit a problem with the handwriting project, as neither was a good fit: json does not really do large amounts of data, whilst hdf5 is not human readable and has poor text support. Instead of creating an entirely new file format I decided to extend the ply format, as it could almost do what was required. I called it ply2, but have done so without the permission of the original developers - my apologies to them if they don't like this! The main changes are an additional type line in the header, to support its new role of containing stuff that isn't a mesh, plus support for typed meta lines in the header, as the comment system is crap. More importantly, I added support for elements/arrays with an arbitrary number of dimensions and for (utf8) strings, and cleaned things up. The module includes a specification that details all of this properly.
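
To make that concrete, here is a purely hypothetical sketch of what a ply2 header might look like with those changes - the syntax below is my own illustration (the names and annotations are made up), and the bundled specification is the authoritative reference:

    ply
    format ascii 2.0
    type image                            (new: a type line, as it need not be a mesh)
    meta string:utf8 author An Author     (new: a typed meta line instead of a comment)
    element pixel 48 64                   (elements may now have several dimensions)
    property uint8 red
    property uint8 green
    property uint8 blue
    end_header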

handwriting/corpus: Builds a corpus from a load of books downloaded from Project Gutenberg, for the purpose of generating a text sample for an author to write out, so the system can learn their handwriting. To use it you will need to download the documents yourself.

handwriting/calibrate_printer: Does a closed-loop colour calibration of a scanner-printer pair (I am taking closed loop here to mean the calibration is relative to the devices, not to a specified standard). You print out a calibration target then scan it in. A GUI then allows you to learn a colour transform that, when applied to an image in the colour space of the scanner, will make the printed result match the scanned original as closely as possible. This works only if you use the same scanner to obtain your handwriting samples as to scan in the calibration target. Uses thin plate splines internally.
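
As a rough sketch of the core idea (not the tool's actual code), one could fit such a transform with SciPy's RBF interpolator using a thin plate spline kernel; the patch colour arrays below are stand-ins:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Stand-ins: the RGB values sent to the printer for each target patch,
    # and the colours the scanner measured after printing and rescanning.
    printed_rgb = np.random.rand(128, 3)
    scanned_rgb = np.random.rand(128, 3)

    # Learn scanner space -> printer space: given a colour as the scanner
    # sees it, predict the value to print so the print looks like it.
    transform = RBFInterpolator(scanned_rgb, printed_rgb,
                                kernel='thin_plate_spline', smoothing=1e-3)

    # Apply to a scanned handwriting image, flattened to (n, 3).
    image = np.random.rand(480 * 640, 3)
    to_print = transform(image).clip(0.0, 1.0)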
