Archive for the 'e15' Category

Welcome to E15:oGFx, and Happy 1234567890!

Friday, February 13th, 2009

Just in time to celebrate Unix time 1234567890, Buza and I have finished a new website to host E15:oGFx.

The site features our first public binary in the download section, E15:oGFx ALPHA 001, a small collection of examples and tutorials to get you started, and a gallery of advanced scripts in the featured section.

We also spent the last few minutes before Unix time 1234567890 posting a few improvised E15:oGFx scripts, inspired by the occasion, to E15:WEB.
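
For the record, the moment itself is easy to pin down: Unix time counts seconds since 1970-01-01 00:00 UTC, so a couple of lines of Python recover the calendar date of timestamp 1234567890.

    from datetime import datetime, timezone

    # 1234567890 seconds after the Unix epoch, expressed in UTC
    print(datetime.fromtimestamp(1234567890, tz=timezone.utc))
    # -> 2009-02-13 23:31:30+00:00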

This is a demonstration video we put together a couple of months ago:

oGFx book prototype.

Wednesday, December 10th, 2008

I have been working on this with Kyle.

E15 and oGFx on Vimeo

Monday, August 11th, 2008

We just created channels on Vimeo for oGFx and E15:

Seeing these things in motion helps a lot in understanding what they are about.

MyStudio

Thursday, July 10th, 2008

For my thesis I modified e15 and created a studio web application to log and share my creative process while writing oGFx scripts. To save time, I embedded the studio application within PictureXS, separating the two by adding a studio controller and extending the picture model with some new functionality, like the ability to publish code and snapshots from e15 together at the same time. People visiting the studio website could send messages to the custom e15 I was running, and I could respond to them without leaving the programming environment. It is not very hard to make an application capture the pixels of one of its views and post them to a web service, so the interesting part is independent of the platforms used: what really matters is observing how the creative process changes when it is performed in a digitally mediated public space.
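
The mechanics really are simple. As a rough illustration (not the MyStudio code itself), posting a snapshot together with the script that produced it can look something like the following; the endpoint URL and the JSON shape are placeholders made up for the sketch.

    import base64
    import json
    import urllib.request

    def publish(snapshot_png, script_source,
                url="http://example.com/mystudio/pictures"):
        """POST one picture and the code that generated it as a single record."""
        payload = json.dumps({
            # the snapshot is assumed to arrive already encoded as PNG bytes
            "image": base64.b64encode(snapshot_png).decode("ascii"),
            "code": script_source,
        }).encode("utf-8")
        request = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(request)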

Places like the MIT Media Lab tend to push towards figuring out new ways to make technology mediate between humans and their needs. There are many cases where this mediation might lead to an improvement of human life, but in many others the result is simply alienating. Writing instructions that make pictures, instead of making pictures with my own hands, is an interesting separation. Sharing the way these instructions change as I search for a different picture might illuminate some aspects of computational art, but it could also be just another way to produce data in which patterns could be found, just as everybody everywhere seems to be doing these days. We live, after all, in a statistical world.

The studio application is called MyStudio. The following image shows the first 110 pictures I published there:

Vampirella in e15

Wednesday, April 16th, 2008

Kate has recently been working on an implementation to support complex 3D mesh manipulation in E15, and she asked me if I could revive the OpenGL lighting experiments I was developing a while ago, to give her a simple working lighting model to play with.

The following image shows the effect of a spotlight hitting one of Kate's meshes, onto which I placed a scanned image of original artwork from the classic 1970s Vampirella comic published by Warren, which I found somewhere on the web.

It might take a while to properly implement full control of the OpenGL light resources from the E15 Python interpreter, but it will be a very nice thing to have, especially considering the potential of combining lighting information with Kyle's new in-progress implementation of GLSL support for E15. It makes me wanna grow hair on the MIT website.
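
For a sense of what that control amounts to, this is roughly what a fixed-function spotlight setup looks like through PyOpenGL; it is only an illustration of the underlying GL calls, not the E15 bindings, and it assumes a current OpenGL context.

    from OpenGL.GL import (glEnable, glLightf, glLightfv, GL_LIGHTING, GL_LIGHT0,
                           GL_POSITION, GL_DIFFUSE, GL_SPOT_DIRECTION,
                           GL_SPOT_CUTOFF, GL_SPOT_EXPONENT)

    def setup_spotlight(position, direction, cutoff_deg=30.0, exponent=8.0):
        """Configure GL_LIGHT0 as a spotlight aimed along 'direction'."""
        glEnable(GL_LIGHTING)
        glEnable(GL_LIGHT0)
        glLightfv(GL_LIGHT0, GL_POSITION, (*position, 1.0))  # w=1.0: positional light
        glLightfv(GL_LIGHT0, GL_DIFFUSE, (1.0, 1.0, 1.0, 1.0))
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, direction)
        glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, cutoff_deg)      # half-angle of the cone
        glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, exponent)      # falloff within the cone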

Drawing in e15:oGFx

Sunday, February 3rd, 2008

We have all complained at some point about how limited the mouse is. But is it? The two-dimensional, single-point mapping of mouse interactions can seem like a poor way to interact with the multidimensional, information-rich virtual space displayed on our computer screens today. It is true that clicking through thousands of web links to see the online pictures of all of my friends is definitely more painful than seamlessly navigating through a sea of pictures, dragging flocks of them as if I were using an invisible fishing net, and arranging them back together in spatial ways that could tell me about them not only as particular pictures or sequences of pictures, but as non-linear narrative snapshots of time, experience and memory.

However, when thinking about experience and interaction, overwhelming your subjects with too much input can become a problem, and it is especially hard to design experiences that are interaction-rich and at the same time make sense. The world and our perception form a finely balanced system in which everything seems coherent and accessible to our senses, at least most of the time, but when it comes to manipulating tools with a specific goal in mind, narrowing interaction down to the minimum gives us the advantages of focus and control. When drawing, for example, every artist in history has used the single-point touch of charcoal, pencil, pen or brush over a flat surface, performing a gesture slightly different from, but just as limited as, the one imposed by the mouse. When creating a drawing with the pencil or the mouse, the differences come from the reactions of the material (paper or something similar for the pencil, the computer for the mouse), not from the devices themselves. Give a mouse the shape of a pencil and use it over a pressure-sensitive display, and it responds to the drawing gesture just as a pencil would.

For this reason, and because the human drawing gesture is a perfect source of random input, we have introduced mouse input into oGFx. There are several different ways to draw in oGFx: the drawing gesture can be mapped from screen coordinates to 3D coordinates in the OpenGL context or to 2D coordinates in the Quartz2D context. We started by making the raw screen coordinates available to the Python interpreter, so the decision of what to do with them is left to the programmer of the drawing experience.
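
As a small sketch of what a drawing script might do with those raw coordinates (the view size and the y-axis flip here are assumptions that depend on the target context's origin, not part of the oGFx API):

    def screen_to_context(x, y, view_w, view_h, ctx_w, ctx_h, flip_y=True):
        """Map a raw mouse position in view pixels to 2D context coordinates."""
        u, v = x / float(view_w), y / float(view_h)  # normalize to [0, 1]
        if flip_y:
            v = 1.0 - v  # flip if the context's origin is at the bottom
        return u * ctx_w, v * ctx_h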

I wrote a few scripts that map the screen coordinates to Quartz2D coordinates, adding some behavior to the strokes, a simple implementation of the Bresenham line algorithm, and a low-resolution canvas. I have been working with simple drawing tools for a while, and I found oGFx to be a refreshing platform to experiment with, especially for four reasons: I can change the definitions in a script without having to stop the program (or even stop drawing), I can draw and navigate around a drawing in 3D at the same time, I can apply and remove CoreImage filters on the fly, and I can project the action of drawing over history. All of these have been important features of oGFx from the beginning, but they were not combined with hand drawing until recently.
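
For the curious, a minimal version of that kind of low-resolution canvas with Bresenham's line algorithm looks something like this; the canvas size and stroke value are arbitrary choices for the sketch, not the values from my scripts.

    class LowResCanvas:
        def __init__(self, width=64, height=48):
            self.width, self.height = width, height
            self.cells = [[0] * width for _ in range(height)]

        def plot(self, x, y, value=1):
            if 0 <= x < self.width and 0 <= y < self.height:
                self.cells[y][x] = value

        def line(self, x0, y0, x1, y1):
            """Bresenham's line algorithm between two canvas cells."""
            dx, dy = abs(x1 - x0), -abs(y1 - y0)
            sx = 1 if x0 < x1 else -1
            sy = 1 if y0 < y1 else -1
            err = dx + dy
            while True:
                self.plot(x0, y0)
                if x0 == x1 and y0 == y1:
                    break
                e2 = 2 * err
                if e2 >= dy:
                    err += dy
                    x0 += sx
                if e2 <= dx:
                    err += dx
                    y0 += sy

    canvas = LowResCanvas()
    canvas.line(2, 3, 40, 20)  # rasterize one stroke segment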

e15 and oGFx in the Leopard dock

Wednesday, November 28th, 2007

Today I made new icons for e15 and oGFx.

e15_icons.jpg