Archive for the 'draw' Category

Undef Print

Friday, July 1st, 2011

This afternoon I accidentally found myself submitting tiny snippets of JavaScript code to UndefPrint, and watching my submissions transform into prints almost instantly on a live video stream. The video showed a window to the street on the right side, and moving arms holding beer bottles on the left. In the center of the frame, a printer was drawing every submission on an interminable roll of paper. It was 8:30 PM in Berlin when I started looking. It was getting dark, and I stuck around until their clock hit midnight. I think it was 3:00 PM here in California. Ubiquity—being present in several places at the same time—feels priceless. It even inspired me to write something in this journal for the first time in months ^_^

This exercise in telematics and participation is just one out of many—Amodal Suspension by Rafael Lozano-Hemmer and Absolut Quartet by Jeff Lieberman & Dan Paluska immediately come to mind—but it stands out in a particular way that is relevant to some of the work we were doing back in the PLW a few years ago. UndefPrint is only open to participants who can write code. The general public is excluded. At least a bit of knowledge of JavaScript and computer science is required to get anything out of UndefPrint. The idea that code is a mode of expression in a way similar to simple speech, doodling, or any other gesture that can be performed in public is not new, but it is an important one, because it puts code next to activities that come naturally to most humans—like speech or hitting a keyboard to produce sounds—even though coding doesn't come naturally to anybody. Perhaps in the future we will be able to speak code—and math—the way we can sort out objects in a crowded room. One can only hope.

Here is the code that draws the pattern in the image above, the fifth in my series of submissions:

for (var i = 0; i <= pWidth(); i++) {
  for (var j = 0; j <= pHeight(); j++) {
    // draw something at (i, j)
  }
}
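For anyone without the live printer at hand, here is a self-contained sketch of the same nested-loop idea. The stubs for pWidth() and pHeight() are made up (the real values came from UndefPrint's printer), and the checkerboard is just an illustration, not the actual pattern I submitted:

```javascript
// Stand-ins for UndefPrint's helpers; real values came from the printer.
function pWidth()  { return 8; }
function pHeight() { return 4; }

// Walk every (i, j) cell and mark a simple checkerboard,
// collecting the marks as text instead of sending them to a printer.
function pattern() {
  var rows = [];
  for (var j = 0; j <= pHeight(); j++) {
    var row = '';
    for (var i = 0; i <= pWidth(); i++) {
      row += (i + j) % 2 === 0 ? '#' : '.';
    }
    rows.push(row);
  }
  return rows.join('\n');
}
```

Swapping the expression inside the inner loop is enough to get stripes, gradients, or the triangular patterns below.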

And some fooling around with triangular patterns:

Did I ever mention how much I like simple nested for-loops?

OpenStudio Archives

Saturday, September 12th, 2009

Yesterday my friend eomsco inaugurated his Flickr account with a bunch of OpenStudio drawings that he saved when OpenStudio was still a functional web application. His drawings are some of the most brilliant cartoons I ever saw in OpenStudio, and it filled me with joy to see them around again. I have my own little collection of OpenStudio drawings on Flickr, and I am positive that many others must have similar interesting backups forgotten in some corner of their file systems. For this reason alone it made sense to create an OpenStudio Flickr group. Buza, roadrash and burnto have already added some content to the group, and Buza has just uploaded the first 200 in a collection of around 900 user profile pages that he crawled and rendered in early 2008. If you were ever an OpenStudio user, can you find yourself there? Please join the group and share your collections of OpenStudio art if you have them.

Featured illustration: Who’s there by eomsco.

Input Coffee

Wednesday, April 2nd, 2008

This morning I stumbled upon a diagram somebody made on top of a picture I took during the early days of my stay in the PLW. I can't tell whether the diagram is a statement or a question about how coffee is transformed into energy, which is transformed into information, which stimulates the need for more coffee. Either way, I find this diagram fascinating. If you made it, please let me know who you are; I'd love to buy you drinks.

Doodle Movie Number Zero

Thursday, March 6th, 2008

I made a movie with nine months of drawing data from TinyDoodle and put it here. It takes some time to load because it's 27 MB. It is also about three minutes long. I would have compressed it more, but the drawings were starting to fade. I also thought about cutting a shorter version, but I found it appealing to look at everything.

The following are some frames I selected from the movie.

PictureXS tracing

Friday, February 8th, 2008

I have just added a canvas to trace over pictures in PictureXS. When you are looking at a particular picture, for example this one, you just have to click on trace on the right side of the head of the page to display the canvas, then trace or annotate, and submit if you want to save your doodle. I still have to add more functionality to the tracing mechanism, but I think for now it's already fun to play with. Some extras will be easy to do, like hiding the image to see the doodle alone, or browsing only through the images that have been traced over, and other things will be harder, like adding a color palette, undos, or ways to save image files from the drawings. A couple of the things I found myself doing were putting mustaches, beards, or devil horns on people's faces, or making them say funny things with comic book balloons. I think I might implement an extra canvas layer for censorship, so you could cover things up in a permanent way. But that is a little harder because I might want to merge the drawing with the actual pixels of the image. Maybe one day.

This is where you click to trace:

This is a happy cat:

Drawing in e15:oGFx

Sunday, February 3rd, 2008

We all have complained at some point about how limited the mouse is. But is it? The two-dimensional, single-point mapping of mouse interactions can seem like a poor way to interact with the multidimensional, information-rich virtual space displayed on our computer screens today. It is true: having to click through thousands of web links to see the online pictures of all of my friends is definitely more painful than seamlessly navigating through a sea of pictures, dragging flocks of them as if I were using an invisible fishing net, and arranging them back together in spatial ways that could tell me not only about them as particular pictures or sequences of pictures, but as nonlinear narrative snapshots of time, experience and memory.

However, when thinking about experience and interaction, overwhelming your subjects with too much input can become a problem, and it is especially hard to design experiences that are interaction-rich and at the same time make sense. The world and our perception form a finely balanced system where everything seems coherent and accessible to our senses, at least most of the time, but when it comes to manipulating tools with a specific goal in mind, narrowing interaction down to the minimum gives us the advantages of focus and control. When drawing, for example, every artist in history has used the single-point touch of the charcoal, pencil, pen or brush over a flat surface, performing a gesture slightly different from, but just as limited as, the one imposed by the mouse. When creating a drawing with the pencil or the mouse, the differences come from the reactions of the material (paper or similar for the pencil, the computer for the mouse), and not from the devices. Give a mouse the shape of a pencil and use it over a pressure-sensitive display, and it responds to the drawing gesture just as a pencil would.

For this reason, and because the human drawing gesture is a perfect source of random input, we have introduced mouse input into oGFx. There are several different ways to draw in oGFx. The drawing gesture can be mapped from screen coordinates to 3D coordinates in the OpenGL context, or to 2D coordinates in the Quartz2D context. We started by making the raw screen coordinates available to the Python interpreter, so the decision of what to do with them could be left to the programmer of the drawing experience.
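To make the mapping concrete, here is a rough JavaScript sketch of the kind of conversion such a script performs. The function and parameter names are made up (in oGFx the real logic lives in Python scripts), but the axis flip is real: screen coordinates usually put the origin at the top-left with y growing downward, while Quartz2D puts the origin at the bottom-left with y growing upward.

```javascript
// Map raw screen coordinates (origin top-left, y down) to a
// Quartz2D-style context (origin bottom-left, y up).
// All names here are hypothetical, for illustration only.
function screenToQuartz(x, y, screenW, screenH, ctxW, ctxH) {
  return {
    x: (x / screenW) * ctxW,          // scale horizontally
    y: ctxH - (y / screenH) * ctxH    // scale and flip the vertical axis
  };
}
```

A point at the top-left of the screen lands at the top-left of the context too, but with the vertical coordinate expressed the Quartz2D way, as the full context height.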

I wrote a few scripts that map the screen coordinates to Quartz2D coordinates, adding some behavior to the strokes, a simple implementation of the Bresenham line algorithm, and a low-resolution canvas. I have been working with simple drawing tools for a while, and I found oGFx to be a refreshing platform to experiment with, especially for the following four reasons: I can change the definitions in a script without having to stop the program (or even stop drawing), I can draw and navigate around a drawing in 3D at the same time, I can apply and remove CoreImage filters on the fly, and I can project the action of drawing over history. Even though these are some of the important features of oGFx that we have been using from the beginning, they were not combined with hand drawing until recently.
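The Bresenham part stands on its own, independently of oGFx. A minimal JavaScript version of the classic integer algorithm, returning the grid cells a stroke crosses (the stroke behaviors and the low-resolution canvas are left out):

```javascript
// Classic Bresenham line rasterization: step one grid cell at a time
// from (x0, y0) to (x1, y1), using only integer arithmetic, and
// collect every cell the line passes through.
function bresenham(x0, y0, x1, y1) {
  var points = [];
  var dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
  var sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
  var err = dx - dy;  // running error between the ideal line and the grid
  while (true) {
    points.push([x0, y0]);
    if (x0 === x1 && y0 === y1) break;
    var e2 = 2 * err;
    if (e2 > -dy) { err -= dy; x0 += sx; }  // step horizontally
    if (e2 < dx)  { err += dx; y0 += sy; }  // step vertically
  }
  return points;
}
```

Feeding it consecutive mouse positions fills the gaps between sampled points, which is what makes low-resolution strokes look continuous instead of dotted.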

The world as a canvas

Wednesday, November 14th, 2007

My friend Andres from CMS sent me a map to help him move a desk to his new place and introduced me to an interesting mapping service called quikmaps that lets you doodle on top of a Google Map.

It reminds me of the work of my friend Andrea Di Castro, my computer hero Ken Perlin, and my screenprint mentor Jan Hendrix. Andrea made a GPS-based system to record the drawings he has been making with airplanes over the surface of the world, like a trefoil shape the size of Dublin and stuff like that. Ken Perlin has an applet in his collection of online curiosities that lets you zoom as much as you want within a digital drawing and render entire landscapes inside the space between the edges of a line. Hendrix likes to look at a leaf as if it were the size of a continent, and makes a map of it accordingly.

Our world feels smaller than the world of our ancestors partially because we can imagine where every single corner on Earth is just by finding its position on the globe. How does it feel when we are able to fly over three-dimensional representations of it, and embed all kinds of content in these representations with any level of precision? I just wonder why there is no social component to the quikmaps application; I can easily see people sharing landmarks, trails, and routes.

Just to get a feel for how this application works, I made two map drawings today. It was incredibly easy to manage my maps and embed them in this page, although drawing became very slow after a few strokes; I'm sure the developers of quikmaps didn't think somebody would abuse their object model as I did. (Just as a sidenote, Safari 3 doesn't seem to like quikmaps; it dramatically displaced my drawings by thousands of miles.) (As another sidenote, it seems Safari 3 likes quikmaps now.)

This is an attack on Mexico City:

And this is a progression of shrinking giants that points to where I used to live in Mexico City, from the size of America to the size of one block.