Archive for the 'media' Category

Run the Jewels Crown is out

Thursday, March 10th, 2016

A few months ago I produced a VR music video with my friends at Wevr for the hip-hop duo Run the Jewels. I’m proud to announce that The New York Times just dropped it in their NYT VR mobile app.

Here is an insightful article written by @djabatt on the importance of matching hip-hop with virtual reality.

360° Premiere: Run The Jewels's "Crown"

Run The Jewels keeps pushing boundaries with their 360° video for "Crown," premiering on The New York Times's virtual-reality app. Check out the full video from Killer Mike GTO and EL-P: http://nyti.ms/1QOIbed

Posted by The New York Times on Thursday, March 10, 2016


Synchrony 2016

Saturday, January 9th, 2016

I am attending a demo party called Synchrony NYC, hosted at Babycastles, a venue near Union Square in Manhattan, and organized by my old friend @nickmofo, who invited me to give a talk about virtual reality.

Synchrony promotional image by Raquel Meyers.

Anamorphic iPhone lens

Sunday, December 27th, 2015

@djabatt just gave me a 1.33x anamorphic lens adapter from Moondog Labs for my iPhone. It’s great. Now I finally believe you can make movies with an iPhone, or at least use it to reliably preview real movie stuff. I love it.
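For context, the arithmetic behind that 1.33x figure: the adapter optically squeezes a third more horizontal field of view onto the sensor, so desqueezing the iPhone’s native 16:9 frame in post yields

$$1.33 \times \frac{16}{9} \approx 2.37 : 1,$$

which is close to the classic 2.39:1 CinemaScope ratio, with no resolution lost to cropping.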


A-Frame, a markup language for browser-based VR

Wednesday, December 16th, 2015


I have been fooling around with Three.js and virtual reality boilerplates for desktop and mobile browsers using Oculus and Cardboard for a while, but this takes things to a whole new level.

A-Frame is described by its creators as

“an open source framework for easily creating WebVR experiences with HTML. It is designed and maintained by MozVR (Mozilla’s virtual reality research team). A-Frame wraps WebGL in HTML custom elements, enabling web developers to create 3D VR scenes that leverage WebGL’s power, without having to learn its complex low-level API. Because WebGL is ubiquitous in modern browsers on desktop and mobile, A-Frame experiences work across desktop, iPhone (Android support coming soon), and Oculus Rift headsets.”

It is not the first time we have seen something like this (remember VRML and, more recently, GLAM), but it is the first time I sense a strong design and content oriented vision behind a toolset of this kind. It has clearly been built with the full spectrum of creative people who currently fuel the web and the mobile space in mind, and I hope this will help it stick around. To see what I mean, just launch http://aframe.io/ from the browser on your iPhone if you have one (sorry, Androids), browse through the examples, and hit that cardboard icon.
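To give a flavor of what that looks like, here is a minimal hello-world scene adapted from the examples in the A-Frame documentation; a sketch, assuming the 0.1.0 release that is current as I write this (primitive names and the release URL may change as the project is moving fast):

<!DOCTYPE html>
<html>
  <head>
    <!-- The A-Frame library; check aframe.io for the latest release -->
    <script src="https://aframe.io/releases/0.1.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the canvas, the camera, and the VR entry button -->
    <a-scene>
      <!-- Each primitive is a custom HTML element wrapping a WebGL object -->
      <a-sphere position="0 1.25 -4" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>

Save that as an .html file, open it on your phone, hit the cardboard icon, and you are standing inside the scene; no WebGL calls, no build step.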


Finally, I just stole a drawing from an article by @ngokevin where he explains what’s so special about A-Frame and the entity-component-system design pattern at its core; the gist translates into markup as shown below.

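In short: an entity starts out as an empty object, and each component, written as an HTML attribute, mixes in one slice of appearance or behavior. A minimal sketch using A-Frame’s standard geometry, material, and position components, equivalent to the sphere primitive above:

<!-- An entity is a blank slate; each attribute below is a component -->
<a-entity geometry="primitive: sphere; radius: 1.25"
          material="color: #EF2D5E; opacity: 0.9"
          position="0 1.25 -4"></a-entity>

Primitives like <a-sphere> are just convenience wrappers around an entity with these components preset, and new components can be registered in JavaScript and attached to any entity, which is where the extensibility comes from.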

The advent of computational photography

Friday, November 6th, 2015

Ever since I started working in cinematic Virtual Reality I have fantasized about the time when cameras would evolve from optics-based mechanical contraptions into sensor-based computational machines. Instead of projecting light into a flat image using lenses, computational photography collects data from the environment and uses it to reconstruct the scene after the fact. I find this subject fascinating. In fact, I almost attended Frédo Durand’s Computational Photography class at MIT, but I got too busy fooling around with symbolic programming and pattern recognition instead. I was not surprised to find out that Frédo is an advisor for the upcoming Light L16 digital camera. It looks insane and I definitely want one.

Before the Light L16 we had Lytro, a company famous for its funny-looking, consumer-level shoot-first, focus-later cameras. To my knowledge that was the first time a data-driven photography device had ever hit the consumer market. I didn’t get one, and I didn’t get their next-generation DSLR model either, but I always believed the Lytro guys were up to something interesting. It made total sense to me when they announced a few months ago that they had begun development of a light field camera for Virtual Reality, and I even thought they might actually be the ones to pull it off.
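For the curious, here is the principle behind shoot-first, focus-later, paraphrasing Ren Ng’s light field photography work (so treat the notation as a sketch rather than gospel). The camera records a 4D light field $L_F(x, y, u, v)$, where $(u, v)$ is a position on the lens aperture and $(x, y)$ a position on the sensor plane at distance $F$. Refocusing on a virtual plane at depth $\alpha F$ is then just a post-capture integral over the aperture:

$$E_{\alpha F}(x, y) \propto \iint L_F\left(u + \frac{x - u}{\alpha},\; v + \frac{y - v}{\alpha},\; u,\; v\right) du\, dv$$

Because the full 4D data survives capture, the same measurements also yield depth and parallax, which is exactly what moving around inside a captured scene requires.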

Later I learned that Wevr had been selected as a development partner to try the first working prototypes of Lytro’s VR capture system, called Immerge, and I might get to play with it before the end of this year. It will be a great relief after a couple of years of dealing with custom rigs made of GoPro cameras and the limitations and difficulties inherent in having to stitch a bunch of deformed images at the very beginning of the postproduction pipeline. And since capturing light fields delivers data instead of pictures, you can move inside the scene almost as if you were actually there, instead of being limited to just looking around it.

Lytro CEO Jason Rosenthal sums it up in a recent press release: “To get true live-action presence in VR, existing systems were never going to get you there. To really do this, you need to re-think it from the ground up.” I couldn’t agree more.

Lytro Immerge from Lytro on Vimeo.

Casey Reas Linear Perspective

Sunday, September 6th, 2015

Casey Reas just opened a show at the Charlie James Gallery in Chinatown, Los Angeles, last night. It is interesting to see how his generative work has recently shifted from the purely algorithmic (using rules and numbers as a base to create form from scratch) to a deconstructive commentary on media that uses content units (digital photographs and video streams) as a source of [not quite] raw data that generates his quasi-abstract forms over an extended period of time. One of his pieces, the one I photographed for this article, retrieves the main photograph from the front page of The New York Times every day and uses it, as is, as a topological stripe that stretches across the digital frame over and over again, weaving a familiar yet unrecognizable tapestry across the big television screen that Casey chose as his canvas. Well done.


New Context Conference – From Cinema to Virtual Reality

Thursday, July 9th, 2015

The New Context Conference, an annual event hosted by Digital Garage and Joi Ito (co-founder of Digital Garage and current director of the MIT Media Lab), took place this year at Toranomon Hills in Tokyo and focused on The Future of Digital Currency and Virtual Reality.

I was invited by Digital Garage to represent Wevr and talk about our virtual reality cinematic work. I also participated in a panel moderated by Joi with Daito Manabe and Rei Inamoto. I was proud to be part of such an impressive line-up.
