The mind boggles when you try to imagine combining deep neural network embedded vision systems with this 3D model extraction technology.

My chum Jay Dowling just pointed me at this jaw-dropping video that demonstrates an interactive technique for extracting 3D models from 2D photographs.

OK, I have to say that this one completely blew me away. And the video was created three years ago, in 2013, so I can only imagine how far this technology has progressed since that time.

As far as I'm concerned, this is awesomely, mind-bogglingly clever. In the case of a simple object, like a glass bottle, for example, the user employs the mouse to sweep three strokes that inform the system as to the X, Y, and Z axes associated with that object. Larger, more complex objects can be quickly and easily composed from a number of smaller, simpler elements.
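Just to give a flavor of the underlying idea, here's a minimal sketch of my own devising (not the researchers' actual algorithm, I hasten to add): the two cross-section strokes imply a radius, and the third stroke traces the object's axis, so in the simplest case you can sweep a circular profile along that axis to generate the vertices of a 3D mesh. The function name and parameters below are purely hypothetical, for illustration only.

```python
import math

def sweep_cylinder(axis_heights, radius, ring_segments=16):
    """Sweep a circular cross-section along a vertical axis,
    returning a list of (x, y, z) mesh vertices.

    axis_heights:  z-values sampled along the sweep axis (a stand-in
                   for the user's third, axis-defining stroke)
    radius:        implied by the two cross-section strokes
    ring_segments: number of points around each cross-section ring
    """
    vertices = []
    for z in axis_heights:
        # Place a ring of points perpendicular to the axis at this height
        for k in range(ring_segments):
            theta = 2.0 * math.pi * k / ring_segments
            vertices.append((radius * math.cos(theta),
                             radius * math.sin(theta),
                             z))
    return vertices

# A bottle-like shape would vary the radius per slice; here it's fixed.
verts = sweep_cylinder([0.0, 0.5, 1.0], radius=0.3, ring_segments=8)
print(len(verts))  # 3 slices x 8 ring points = 24 vertices
```

The clever part of the real system, of course, is that it fits these swept primitives to the edges it detects in the photograph, which is a much harder problem than this little sketch suggests.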

You just have to watch the video to see how elegant this all is. As Jay said in his email to me: "This is all the more impressive because they include failures so you know it's not a cooked demo."

The thing I keep coming back to in my head is that the technology in this video is from three years ago. A lot of Pooh Sticks have passed under the bridge since then. For example, deep neural networks and deep learning have leapt onto the mainstream centre stage for things like machine vision and machine-generated sounds.

This all ties into machine learning, artificial intelligence, and suchlike. Can you imagine combining deep neural network embedded vision systems—possibly binocular versions—with this 3D model extraction technology? I can't even begin to think where this might all lead. What say you?