Archive for February, 2008

Video2Web

Friday, February 29th, 2008

Last week, as a practice exercise for my thesis project, I made a mini Cocoa application that captures frames from a live video stream and posts them to the web. For the thesis I need to deal with ways of capturing pixels from OpenGL views and other kinds of NSViews, and to figure out how to broadcast or publish them over the network.

Video2Web is a good and very simple example that deals with the NSDocument, QTCaptureView and NSURLConnection Cocoa classes. The StillMotion example from Apple’s confusing QTKit Capture Programming Guide was very useful for capturing images from the video stream, although I haven’t found a way to integrate the code I needed into a Cocoa app that was not document based. Hence Video2Web is document based, even though there is no real need for it to be, other than that I couldn’t make it work otherwise.

You can download Video2Web here and play with it if you want (no warranty). Video2Web will post screenshots from your video to PictureXS every time you press the p key, tagging them with the date and time-zone data from your computer, and it will save them to your desktop every time you press the s key. It is buggy and unstable; there is a difference between making things work and making them work right. But it makes me happy that it works, and it doesn’t hurt anybody that it crashes all the time =). I might make the source code available once I figure out how to write it better.
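To give an idea of the flow, here is a rough Python sketch of what the two key bindings do. The real app is written in Objective-C against the Cocoa classes mentioned above, and the PictureXS endpoint and tag header below are made up for illustration:

```python
import datetime
import os
import time
import urllib.request

UPLOAD_URL = "http://picturexs.example/upload"  # hypothetical endpoint

def capture_tags():
    """Build the date and time-zone tags attached to each capture."""
    now = datetime.datetime.now()
    zone = time.strftime("%Z")  # local time-zone abbreviation
    return [now.strftime("%Y-%m-%d"), now.strftime("%H:%M"), zone]

def save_to_desktop(jpeg_bytes):
    """The 's' key: write the current frame to the desktop."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = os.path.expanduser("~/Desktop/capture-%s.jpg" % stamp)
    with open(path, "wb") as f:
        f.write(jpeg_bytes)

def post_to_picturexs(jpeg_bytes):
    """The 'p' key: POST the current frame with its tags."""
    request = urllib.request.Request(
        UPLOAD_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg",
                 "X-Tags": ",".join(capture_tags())},  # made-up tag header
    )
    urllib.request.urlopen(request)
```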

These are some captures from the first sessions, when Video2Web still tagged every capture with live-video. I later decided to add some date and time information to separate the captures by time and time zone.

PictureXS tracing

Friday, February 8th, 2008

I have just added a canvas for tracing over pictures in PictureXS. When you are looking at a particular picture, for example this one, just click on trace on the right side of the page header to display the canvas, then trace or annotate, and submit if you want to save your doodle. I still have to add more functionality to the tracing mechanism, but for now it’s already fun to play with. Some extras will be easy to do, like hiding the image to see the doodle alone, or browsing only through the images that have been traced over; others will be harder, like adding a color palette, undo, or ways to save image files from the drawings.

A couple of things I found myself doing were putting mustaches and beards or devil horns on people’s faces, or making them say funny things with comic-book balloons. I think I might implement an extra canvas layer for censorship, so you could cover things up in a permanent way. That is a little harder because I would want to merge the drawing with the actual pixels of the image (there is a sketch of that merge step below). Maybe one day.

This is where you click to trace:

This is a happy cat:
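The permanent censorship layer would amount to alpha-compositing the doodle over the original pixels on the server. A minimal Python sketch with Pillow, assuming the doodle layer is saved as a transparent PNG the same size as the photo (the file names are placeholders):

```python
from PIL import Image

def burn_in(photo_path, doodle_path, out_path):
    """Permanently merge a transparent doodle layer into the photo's pixels."""
    photo = Image.open(photo_path).convert("RGBA")
    doodle = Image.open(doodle_path).convert("RGBA")
    if doodle.size != photo.size:
        doodle = doodle.resize(photo.size)  # align the layers if needed
    merged = Image.alpha_composite(photo, doodle)
    merged.convert("RGB").save(out_path, "JPEG")

# Placeholder usage: cover part of a picture for good.
burn_in("cat.jpg", "doodle.png", "cat_censored.jpg")
```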

Drawing in e15:oGFx

Sunday, February 3rd, 2008

We have all complained at some point about how limited the mouse is. But is it? The two-dimensional, single-point mapping of mouse interactions can seem like a poor way to interact with the multidimensional, information-rich virtual space displayed on our computer screens today. It is true: having to click through thousands of web links to see the online pictures of all of my friends is definitely more painful than seamlessly navigating through a sea of pictures, dragging flocks of them as if I were using an invisible fishing net, and arranging them back together in spatial ways that could tell me not only about them as particular pictures or sequences of pictures, but as nonlinear narrative snapshots of time, experience and memory.

However, when thinking about experience and interaction, overwhelming your subjects with too much input can become a problem, and it is especially hard to design experiences that are interaction-rich and at the same time make sense. The world and our perception form a finely balanced system in which everything seems coherent and accessible to our senses, at least most of the time, but when it comes to manipulating tools with a specific goal in mind, narrowing interaction down to the minimum gives us the advantages of focus and control. When drawing, for example, every artist in history has used the single-point touch of charcoal, pencil, pen or brush over a flat surface, performing a gesture slightly different from, but just as limited as, the one imposed by the mouse. When creating a drawing with the pencil or the mouse, the differences come from the reactions of the material (paper or similar for the pencil, the computer for the mouse), not from the devices themselves. Give a mouse the shape of a pencil and use it over a pressure-sensitive display, and it responds to the drawing gesture just as a pencil would.

For this reason, and because the human drawing gesture is a perfect source of random input, we have introduced mouse input into oGFx. There are several ways to draw in oGFx: the drawing gesture can be mapped from screen coordinates to 3D coordinates in the OpenGL context or to 2D coordinates in the Quartz2D context. We started by making the raw screen coordinates available to the Python interpreter, so the decision of what to do with them is left to the programmer of the drawing experience.
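The simplest of those mappings just rescales the raw screen coordinates into the target context. A little Python sketch of the 2D case, where the window and canvas sizes and the flipped-y convention are assumptions for illustration:

```python
WINDOW_W, WINDOW_H = 800, 600   # assumed window size in screen pixels
CANVAS_W, CANVAS_H = 400, 300   # assumed Quartz2D canvas size

def screen_to_canvas(x, y):
    """Rescale raw window coordinates into canvas coordinates, flipping y
    (screen origin at the top left, drawing origin at the bottom left)."""
    cx = x * CANVAS_W / WINDOW_W
    cy = (WINDOW_H - y) * CANVAS_H / WINDOW_H
    return cx, cy
```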

I wrote a few scripts that map the screen coordinates to Quartz2D coordinates, adding some behavior to the strokes, a simple implementation of the Bresenham line algorithm, and a low-resolution canvas. I have been working with simple drawing tools for a while, and I found oGFx a refreshing platform to experiment with, especially for four reasons: I can change the definitions in a script without having to stop the program (or even stop drawing); I can draw and navigate around a drawing in 3D at the same time; I can apply and remove CoreImage filters on the fly; and I can project the action of drawing over history. All of these are features of oGFx that we have been using from the beginning, but they were not combined with hand drawing until recently.
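For reference, the integer-only stepping of the Bresenham line algorithm goes roughly like this (a generic Python sketch, not the oGFx script itself). It is what turns two successive mouse samples into the discrete cells of a low-resolution canvas:

```python
def bresenham(x0, y0, x1, y1):
    """Yield the integer grid cells along the line from (x0, y0) to
    (x1, y1), covering all octants with integer arithmetic only."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy

# Fill the canvas cells between two successive mouse positions.
for cell in bresenham(2, 1, 12, 7):
    print(cell)
```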