The advent of computational photography

November 6th, 2015

Ever since I started working in cinematic Virtual Reality I have fantasized about the time when cameras will evolve from optics-based mechanical contraptions to sensor-based computational machines. Instead of projecting light into a flat image using lenses, computational photography collects data from the environment and uses it to reconstruct the scene after the fact. I find this subject matter fascinating. In fact, I almost attended Frédo Durand’s Computational Photography class at MIT, but I got too busy fooling around with symbolic programming and pattern recognition instead. I was not surprised to find out that Frédo is an advisor for the upcoming Light L16 digital camera. It looks insane and I definitely want one.

Before we had the Light L16 we had Lytro, a company famous for their shoot-first, focus-later, funny-looking consumer cameras. To my knowledge it was the first time a data-driven photography device had ever hit the consumer market. I didn’t get one, and I didn’t get their next-generation DSLR-style model either, but I always believed the Lytro guys were up to something interesting. It made total sense to me when they announced a few months ago that they had begun development of a light field camera for Virtual Reality, and I even thought they might actually be the ones to pull that off.

Later I learned Wevr had been selected as a development partner to try the first working prototypes of Lytro’s VR capture system, called Immerge, and I might get to play with it before the end of this year. It will be a great relief after a couple of years dealing with custom rigs made of GoPro cameras, and the limitations and difficulties inherent in having to stitch a bunch of deformed images at the very beginning of the postproduction pipeline. And since capturing light fields delivers data instead of pictures, you can move inside the scene almost as if you were actually there, instead of being limited to just looking around it.

Lytro CEO Jason Rosenthal sums it up in a recent press release: “To get true live-action presence in VR, existing systems were never going to get you there. To really do this, you need to re-think it from the ground up.” I couldn’t agree more.

Lytro Immerge from Lytro on Vimeo.

MIT Media Lab 30th Anniversary

November 6th, 2015

Just came back from the MIT Media Lab 30th Anniversary celebration. It was a great excuse to spend a few days in Cambridge with the family, reconnect with old friends, and catch up with a place where Science Fiction is everyday life.



October 16th, 2015

Their motto is “build and share virtual reality for everyone”: the service relies on WebGL and three.js to deliver a browser-based tool that lets users create simple virtual reality pieces that can be explored using Google Cardboard. I tried it and it took me no time to make a simple scene. As far as I know, the barrier of entry for VR can’t get lower than this, both in terms of creating and sharing. It is pretty fun to use and delivers interesting results in five minutes. I still don’t know if I’ll ever actually use it, but I seriously recommend you check it out. As VR becomes more relevant to the regular digital user, more of these tools will appear, offering simple ways to make and share VR, and I believe a lot of interesting stuff will emerge from this space.

PS: here is my profile page in case I ever do something worth publishing with this toolkit. For now I can already say it’s a useful prototyping tool for VR experience development, and I have successfully used it a couple of times.



Cartoon Distortion

October 10th, 2015

Dribnet, aka Tom White, suggested that we team up to submit an application to Printed Matter’s LA Art Book Fair for next year.

“But this means we will need to make some books for it”, I said. “Exactly”, he replied.

Next thing you know we are talking about hunting down a Risograph printer and figuring out how to go nuts with it.

The LA Art Book Fair submission required a print-oriented portfolio website, so I finally put one together and hosted it at cartoondistortion dot com. Please take a look. It’s nice to see most of my graphical morsels tightly organized and readily available like that.


Tom also had to make his own portfolio page. Hopefully we will get our little table at the book fair next year 😀

Congratulations Wevr

September 23rd, 2015

Wevr just won a couple of Proto Awards, no big deal. The really cool thing is that the actual trophy is a very nice bronze tesseract. I totally loved it and wouldn’t mind smashing something with it.


Casey Reas Linear Perspective

September 6th, 2015

Casey Reas opened a show at the Charlie James Gallery in Chinatown LA last night. It is interesting to see how his generative work has recently shifted from the purely algorithmic —using rules and numbers as a base to create form from scratch— to a deconstructive commentary on media that utilizes content units —like digital photographs and video streams— as a source of [not quite] raw data that generates his quasi-abstract forms over an extended period of time. One of his pieces, the one I photographed for this article, retrieves the main photograph from the front page of the New York Times every day and uses it, as is, as a topological stripe that stretches across the digital frame over and over again, weaving a familiar yet unrecognizable tapestry across the big television screen that Casey chose as his canvas. Well done.




June 14th, 2015

Everything is possible in Virtual Reality. Making you tiny, for example. Wouldn’t that be fun? Perhaps you want to live inside a box of Lego blocks…