Archive for the 'ogfx' Category

Welcome to E15:oGFx, and Happy 1234567890!

Friday, February 13th, 2009

Just in time to celebrate unix epoch time 1234567890, Buza and I have finished a new website to host E15:oGFx.

The site features our first public binary in the download section, E15:oGFx ALPHA 001, a small collection of examples and tutorials to get you started, and a gallery of advanced scripts in the featured section.

We also spent the last few minutes before unix epoch time 1234567890 posting a few improvised E15:oGFx scripts inspired by the occasion to E15:WEB.

This is a demonstration video we put together a couple of months ago:

oGFx book prototype.

Wednesday, December 10th, 2008

I have been working on this with Kyle.

E15 and oGFx on Vimeo

Monday, August 11th, 2008

We just created channels on Vimeo for oGFx and E15:

It helps a lot to understand what these things are about when you look at them in motion.

MyStudio

Thursday, July 10th, 2008

For my thesis I modified e15 and created a studio web application to log and share my creative process while writing oGFx scripts. To save time, I embedded the studio application within PictureXS, separating it from PictureXS by adding a studio controller and extending the picture model with some functionality, such as the ability to publish code and snapshots from e15 together at the same time. People visiting the studio website could send messages to the custom e15 I was running, and I could respond to them without leaving the programming environment in e15.

It is not very hard to make an application capture the pixels in one of its views and post them to a web service, so the interesting part is independent of the platforms used; what really matters is to observe how the creative process changes when it is performed in a digitally mediated public space.
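
The posting half of that pipeline can be sketched in a few lines. The snippet below is only an illustration in Python: the endpoint URL and field names are assumptions rather than MyStudio’s actual interface, and the snapshot is read from a file instead of being captured from a live view.

    import base64
    import json
    import urllib.request

    def publish(snapshot_path, script_path, url="http://example.com/mystudio/pictures"):
        """Post a snapshot image and the script that produced it to a studio-style web service.

        The URL and JSON field names here are hypothetical, not MyStudio's real API.
        """
        with open(snapshot_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("ascii")
        with open(script_path, "r") as f:
            code = f.read()
        payload = json.dumps({"image": image_b64, "code": code}).encode("utf-8")
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # publish("snapshot.png", "current_script.py")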

Places like the MIT Media Lab tend to push toward figuring out new ways for technology to mediate between humans and their needs. In many cases this mediation might improve human life, but in many others the result is simply alienating. Writing instructions that make pictures, instead of making pictures with my own hands, is an interesting separation. Sharing the way these instructions change as I search for a different picture might illuminate some aspects of computational art, but it could also be just another way to produce data where patterns can be found, just as it seems everybody everywhere is doing these days. We live, after all, in a statistical world.

The studio application is called MyStudio. The following image shows the first 110 pictures I published there:

Drawing in e15:oGFx

Sunday, February 3rd, 2008

We have all complained at some point about how limited the mouse is. But is it? The two-dimensional, single-point mapping of mouse interactions can seem like a poor way to interact with the multidimensional, information-rich virtual space displayed on our computer screens today. It is true that having to click through thousands of web links to see the online pictures of all of my friends is definitely more painful than seamlessly navigating through a sea of pictures, dragging flocks of them as if I were using an invisible fishing net, and arranging them back together in spatial ways that could tell me not only about them as particular pictures or sequences of pictures, but as non-linear narrative snapshots of time, experience and memory.

However, when thinking about experience and interaction, overwhelming your subjects with too much input can become a problem, and it is especially hard to design experiences that are interaction-rich and at the same time make sense. The world and our perception form a finely balanced system in which everything seems coherent and accessible to our senses, at least most of the time, but when it comes to manipulating tools with a specific goal in mind, narrowing interaction down to the minimum gives us the advantages of focus and control. When drawing, for example, every artist in history has used the single-point touch of charcoal, pencil, pen or brush over a flat surface, performing a gesture slightly different from, but just as limited as, the one imposed by the mouse. When creating a drawing with the pencil or the mouse, the differences come from the reactions of the material (paper or similar for the pencil, the computer for the mouse), not from the devices. Give a mouse the shape of a pencil and use it over a pressure-sensitive display, and it responds to the drawing gesture just as a pencil would.

For this reason, and because the human drawing gesture is a perfect source of random input, we have introduced mouse input into oGFx. There are several different ways to draw in oGFx: the drawing gesture can be mapped from screen coordinates to 3D coordinates in the OpenGL context or to 2D coordinates in the Quartz2D context. We started by making the raw screen coordinates available to the Python interpreter, so that the decision of what to do with them is left to the programmer of the drawing experience.
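
As a purely hypothetical sketch of what a drawing script can do with those raw coordinates, the Python fragment below maps screen pixels both to a 0..1 Quartz2D-style space and to -1..1 OpenGL-style coordinates; the mouse() hook and the view size are assumptions, not the actual oGFx API.

    # Assumed view size in pixels; oGFx exposes the real values differently.
    VIEW_W, VIEW_H = 800.0, 600.0

    def to_quartz(x, y):
        """Map raw screen pixels to 0..1 coordinates with a bottom-left origin."""
        return x / VIEW_W, 1.0 - (y / VIEW_H)

    def to_gl(x, y):
        """Map raw screen pixels to -1..1 coordinates for the OpenGL context."""
        return (2.0 * x / VIEW_W) - 1.0, 1.0 - (2.0 * y / VIEW_H)

    def mouse(x, y, dragging):
        # Hypothetical per-event hook: the script decides what to do with
        # the raw coordinates, here simply printing both mappings.
        if dragging:
            print(to_quartz(x, y), to_gl(x, y))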

I wrote a few scripts that map the screen coordinates to Quartz2D coordinates, adding some behavior to the strokes, a simple implementation of the Bresenham line algorithm, and a low-resolution canvas. I have been working with simple drawing tools for a while, and I found oGFx to be a refreshing platform to experiment with, especially for four reasons: I can change the definitions in a script without having to stop the program (or even stop drawing), I can draw and navigate around a drawing in 3D at the same time, I can apply and remove CoreImage filters on the fly, and I can project the action of drawing over history. All of these have been important features of oGFx from the beginning, but they were not combined with hand drawing until recently.
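
For reference, here is a minimal, self-contained Bresenham sketch in Python, similar in spirit to what those scripts do on the low-resolution canvas; it is an illustration rather than the actual oGFx script code.

    def bresenham(x0, y0, x1, y1):
        """Return the integer grid cells on the line from (x0, y0) to (x1, y1)."""
        points = []
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            points.append((x0, y0))
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return points

    # Example: bresenham(0, 0, 6, 3) walks seven grid cells from (0, 0) to (6, 3).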





e15 and oGFx in the Leopard dock

Wednesday, November 28th, 2007

Today I made new icons for e15 and oGFx.

e15_icons.jpg

oGFx full featured glLights

Wednesday, October 24th, 2007

We have always thought that oGFx should feature lights, because they open the way to a lot of resources to play with when using textures and shaders. Bump maps, highlights and all kinds of other effects depend on the ability to calculate how light bounces off a particular vertex or fragment, and light in motion alone is enough to enhance the experience of a digital environment, because a change in lighting can be read as a transition between spaces, the passage of time, or both. Control over lighting models is an important feature to consider when thinking about the creation of a language for interactive graphics.

I recently finished the first iteration of glLight support for oGFx, and have already had a lot of fun playing with it in some of the scripts we have made in the past. Following advice from J. Popovic, who teaches the MIT undergraduate Computer Graphics class, I wrote some code to turn graphical representations of some of the objects in the oGFx context on and off, making it easier to debug otherwise hard-to-track things like light directions, light positions and surface normals.
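
Below is a hedged Python sketch of that idea using PyOpenGL’s fixed-function API: it configures one glLight and, when a debug flag is on, draws an unlit point at the light position plus a short line hinting at its direction. It assumes the code runs inside a live OpenGL context, as an oGFx scene would provide, and the names and values are illustrative only.

    from OpenGL.GL import (
        GL_LIGHTING, GL_LIGHT0, GL_POSITION, GL_DIFFUSE, GL_POINTS, GL_LINES,
        glEnable, glDisable, glLightfv, glPointSize, glBegin, glEnd, glVertex3f,
    )

    LIGHT_POS = (1.0, 2.0, 3.0, 1.0)   # positional light (w = 1); illustrative values
    LIGHT_DIR = (-0.3, -0.6, -0.7)     # only used for the debug line
    DEBUG_LIGHTS = True                # toggled from the script while it runs

    def setup_light():
        glEnable(GL_LIGHTING)
        glEnable(GL_LIGHT0)
        glLightfv(GL_LIGHT0, GL_POSITION, LIGHT_POS)
        glLightfv(GL_LIGHT0, GL_DIFFUSE, (1.0, 1.0, 1.0, 1.0))

    def draw_light_debug():
        if not DEBUG_LIGHTS:
            return
        glDisable(GL_LIGHTING)         # draw the marker unlit so it stays visible
        glPointSize(6.0)
        glBegin(GL_POINTS)
        glVertex3f(*LIGHT_POS[:3])
        glEnd()
        glBegin(GL_LINES)              # short segment showing the light direction
        glVertex3f(*LIGHT_POS[:3])
        glVertex3f(LIGHT_POS[0] + LIGHT_DIR[0],
                   LIGHT_POS[1] + LIGHT_DIR[1],
                   LIGHT_POS[2] + LIGHT_DIR[2])
        glEnd()
        glEnable(GL_LIGHTING)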

Here is a very rough (and I mean it) video where you can see the influence of light in motion on a scene in which I was manipulating an interactive animation made with quadratic Bezier curves from Apple’s Quartz2D, and here is another one, with the same degree of roughness and a different use of the same curves.

lightning.jpg

bomb_0.jpg

bomb_1.jpg

bomb_2.jpg

bomb_3.jpg