Archive for the 'opengl' Category

Openframeworks + Kinect still working

Thursday, July 28th, 2016

Last night I was looking at depthkit and 8i while researching options for video-based 3D capture, and I felt inspired to rescue my old kinect from the bottom of a drawer. I got a fresh copy of openframeworks, and ten minutes later I had a running build of the openframeworks kinect example on my computer. It was like time-traveling to 2011. I remember capturing a point cloud of Amy pregnant back before Maya was born. A couple of months later the kinect found oblivion in the bottom of a drawer, and I stopped using openframeworks until now.


Geometry is back

Monday, July 25th, 2011

This weekend, I have been swimming inside a projection of the 120-cell courtesy of Jenn3D. The tetrahedrons stand for vertices. Jenn3D looks great. I downloaded the source code but I couldn’t understand most of it. At least I got it to compile.

MyStudio

Thursday, July 10th, 2008

For my thesis I modified e15 and created a studio web application to log and share my creative process while writing ogfx scripts. To save time, I embedded the studio application within PictureXS. I separated the studio from PictureXS by adding a studio controller and extending the picture model with some functionality, like the ability to publish code and snapshots from e15 at the same time. People visiting the studio website could send messages to the custom e15 I was running, and I could respond to them without leaving the programming environment in e15. It is not very hard to make an application capture the pixels of one of its views and post them to a web service, so the interesting part is independent of the platforms used; what really matters is observing how the creative process changes when it is performed in a digitally mediated public space.
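
As a rough illustration of that capture-and-post step (not the actual e15/MyStudio code), here is a minimal Python sketch that reads the pixels of the current OpenGL view with PyOpenGL and posts them as a PNG to a hypothetical web service; the URL and function name are made up.

    import io
    import urllib.request

    from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE
    from PIL import Image, ImageOps

    def capture_and_post(width, height, url="http://example.com/pictures"):
        # Read the raw pixels of the current GL framebuffer.
        raw = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
        image = Image.frombytes("RGB", (width, height), bytes(raw))
        image = ImageOps.flip(image)  # GL rows start at the bottom-left

        # Encode as PNG in memory and post the bytes to the web service.
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        request = urllib.request.Request(
            url,
            data=buffer.getvalue(),
            headers={"Content-Type": "image/png"},
        )
        return urllib.request.urlopen(request)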

Places like the MIT Media Lab tend to push towards figuring out new ways to make technology mediate between humans and their needs. There are many cases where this mediation might lead to an improvement of human life, but in many others the result is simply alienating. Writing instructions that make pictures, instead of making pictures with my own hands, is an interesting separation. Sharing the way these instructions change as I search for a different picture might illuminate some aspects of computational art, but it could also be just another way to produce data where patterns could be found, just as it seems everybody everywhere is doing these days. We live, after all, in a statistical world.

The studio application is called MyStudio. The following image shows the first 110 pictures I published there:

Vampirella in e15

Wednesday, April 16th, 2008

Kate has recently been working on an implementation to support complex 3D mesh manipulation in E15, and she asked me if I could recover the experiments with OpenGL lights I was developing a while ago, to get a simple working lighting model for her to play with.

The following image shows the effect of a spotlight hitting one of Kate's meshes, on which I placed a scanned image of original artwork from the classic Vampirella comic published by Warren in the 1970s, which I found somewhere on the web.

It might take a while to properly implement full control of the OpenGL light resources from the E15 python interpreter, but it will be a very nice thing to have, especially considering the potential of combining lighting information with Kyle's new in-progress implementation of GLSL support for E15. It makes me wanna grow hair on the MIT website.
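
For reference, a fixed-function spotlight of the kind shown in the image takes only a handful of calls to set up. This is a generic sketch written with PyOpenGL as a stand-in for whatever bindings E15 ends up exposing; the light index and the numbers are arbitrary.

    from OpenGL.GL import (
        glEnable, glLightfv, glLightf,
        GL_LIGHTING, GL_LIGHT0, GL_POSITION, GL_DIFFUSE,
        GL_SPOT_DIRECTION, GL_SPOT_CUTOFF, GL_SPOT_EXPONENT,
    )

    def setup_spotlight():
        glEnable(GL_LIGHTING)
        glEnable(GL_LIGHT0)

        # Positional light (w = 1.0) placed above and in front of the mesh.
        glLightfv(GL_LIGHT0, GL_POSITION, [0.0, 2.0, 2.0, 1.0])
        glLightfv(GL_LIGHT0, GL_DIFFUSE, [1.0, 1.0, 1.0, 1.0])

        # Narrow the light into a spot aimed back toward the origin.
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, [0.0, -1.0, -1.0])
        glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 30.0)   # cone half-angle, degrees
        glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 8.0)  # falloff toward the edge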

Drawing in e15:oGFx

Sunday, February 3rd, 2008

We have all complained at some point about how limited the mouse is. But is it? The two-dimensional, single-point mapping of mouse interactions can seem like a poor way to interact with the multidimensional, information-rich virtual space displayed on our computer screens today. It is true, having to click through thousands of web links to see the online pictures of all of my friends is definitely more painful than seamlessly navigating through a sea of pictures, dragging flocks of them as if I were using an invisible fishing net, and arranging them back together in spatial ways that could tell me not only about them as particular pictures or sequences of pictures, but as non-linear narrative snapshots of time, experience and memory.

However, when thinking about experience and interaction, overwhelming your subjects with too much input can become a problem, and it is especially hard to design experiences that are interaction-rich and at the same time make sense. The world and our perception form a finely balanced system where everything seems coherent and accessible to our senses, at least most of the time, but when it comes to manipulating tools with a specific goal in mind, narrowing interaction down to the minimum gives us the advantages of focus and control. When drawing, for example, every artist in history has used the single-point touch of the charcoal, pencil, pen or brush over a flat surface, performing a slightly different gesture than the one imposed by the mouse, but one just as limited. When creating a drawing with the pencil or the mouse, the differences come from the reactions of the material (paper or similar for the pencil, the computer for the mouse), and not from the devices. A mouse can be given the shape of a pencil, and when used over a pressure-sensitive display it responds to the drawing gesture just as a pencil would.

For this reason, and because the human drawing gesture is a perfect source of random input, we have introduced mouse input into oGFx. There are several different ways to draw in oGFx: the drawing gesture can be mapped from screen coordinates to 3D coordinates in the OpenGL context or to 2D coordinates in the Quartz2D context. We started by making the raw screen coordinates available to the python interpreter, so the decision of what to do with them is left to the programmer of the drawing experience.
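
In a drawing script, that mapping can be as small as a couple of helpers like the following sketch; the coordinate conventions are the usual ones (window origin at the top-left, Quartz2D and OpenGL origins elsewhere) and the function names are only illustrative.

    def to_canvas(x, y, view_w, view_h, canvas_w, canvas_h):
        """Window coordinates (origin top-left, y down) to 2D canvas
        coordinates (origin bottom-left, y up), as Quartz2D expects."""
        return (x * canvas_w / view_w, (view_h - y) * canvas_h / view_h)

    def to_ndc(x, y, view_w, view_h):
        """Window coordinates to OpenGL normalized device coordinates
        in [-1, 1], a first step before un-projecting into the 3D scene."""
        return (2.0 * x / view_w - 1.0, 1.0 - 2.0 * y / view_h)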

I wrote a few scripts that map the screen coordinates to Quartz2D coordinates, adding some behavior to the strokes, a simple implementation of the Bresenham line algorithm, and a low-resolution canvas. I have been working with simple drawing tools for a while, and I found oGFx to be a refreshing platform to experiment with, especially for the following four reasons: I can change the definitions in a script without having to stop the program (or even stop drawing), I can draw and navigate around a drawing in 3D at the same time, I can apply and remove CoreImage filters on the fly, and I can project the action of drawing over history. All of these have been important features of oGFx from the beginning, but they were not combined with hand drawing until recently.
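
The Bresenham part of those scripts might look roughly like this sketch, where the low-resolution canvas is just a grid of color values; the helper names are made up for illustration.

    def make_canvas(width, height, background=0):
        # A low-resolution canvas as a grid of color values, indexed [y][x].
        return [[background] * width for _ in range(height)]

    def bresenham_line(canvas, x0, y0, x1, y1, color=1):
        """Plot a line from (x0, y0) to (x1, y1) into the canvas grid."""
        dx = abs(x1 - x0)
        dy = -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            canvas[y0][x0] = color
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy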





oGFx full-featured glLights

Wednesday, October 24th, 2007

We have always thought that oGFx should feature lights, because they open the way to a lot of resources to play with when using textures and shaders. Bump maps, highlights and all kinds of other effects depend on the ability to calculate how light bounces over a particular vertex or fragment, and light in motion is enough to enhance the experience of a digital environment, just because a change in lighting can be understood as a transition between spaces, the passage of time, or both. Control over lighting models is an important feature to consider when thinking about the creation of a language for interactive graphics.

I recently finished the first iteration of glLight support for oGFx, and I have already had a lot of fun playing with it in some of the scripts we have made in the past. Following advice from J. Popovic, who teaches the MIT undergraduate computer graphics class, I wrote some code to turn graphical representations of some of the objects in the oGFx context on and off, making it easier to debug otherwise hard-to-track things like light directions, light positions and surface normals.
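
The debug drawing itself is simple: a marker at the light's position and a short segment along its direction, drawn with lighting turned off so the markers stay visible. Here is a generic immediate-mode sketch in PyOpenGL, not the actual oGFx code.

    from OpenGL.GL import (
        glDisable, glEnable, glBegin, glEnd, glVertex3f, glColor3f, glPointSize,
        GL_LIGHTING, GL_POINTS, GL_LINES,
    )

    def draw_light_debug(position, direction, scale=1.0):
        glDisable(GL_LIGHTING)  # the markers should not be shaded themselves

        # A point at the light's position.
        glColor3f(1.0, 1.0, 0.0)
        glPointSize(6.0)
        glBegin(GL_POINTS)
        glVertex3f(*position)
        glEnd()

        # A short segment from the position along the light's direction.
        end = [position[i] + direction[i] * scale for i in range(3)]
        glBegin(GL_LINES)
        glVertex3f(*position)
        glVertex3f(*end)
        glEnd()

        glEnable(GL_LIGHTING)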

Here is a very rough (and I mean it) video where you can see the influence of the light in motion over a scene where I was manipulating an interactive animation I made using quadratic Bezier curves from Apple's Quartz2D, and here is another one, with the same degree of roughness and a different use of the same curves.


e15 and PictureXS

Wednesday, October 24th, 2007

I wrote a few simple methods in PictureXS to let E15 request individual images from it, and used them to build the qbert staircase again, using the average color of each picture to wrap the rest of each cube around it. When the first set of geometry was loaded, I was surprised to see all the censored pictures I had forgotten to block from access with the simple methods I wrote, and then decided it would be more interesting to show a special censorship label for each censored picture instead of hiding them from view. After loading PictureXS again, I found out that there are not that many censored pictures, but certainly more than I thought. Naughty people…
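
For what it's worth, the average-color step is the kind of thing a few lines of Python can do; this sketch uses Pillow and is only an illustration, not the actual PictureXS or E15 code.

    from PIL import Image

    def average_color(path):
        """Return the average (r, g, b) of an image as floats in 0..255."""
        image = Image.open(path).convert("RGB")
        pixels = list(image.getdata())
        n = len(pixels)
        r = sum(p[0] for p in pixels) / n
        g = sum(p[1] for p in pixels) / n
        b = sum(p[2] for p in pixels) / n
        return (r, g, b)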
