The nine millionth dorkbot-nyc meeting took place on Wednesday, October 6th, 7pm at The Tank in Times Square!
It featured the lovely and talented:
Mary Flanagan: [ineffable]
The project [ineffable] is a collaboration between Mary Flanagan and Andrew Gerngross that reads aggregated emails between two correspondents and maps their use of language through the words they use in everyday correspondence. The project explores the question: do we have a particular “voice” in our daily writing to friends and colleagues? The collaborators work to bring to the foreground the primary form of exchange among technology users -- email -- by closely examining the use of language in email systems. Email is used for work and play, intimate exchange and legal agreements. How are different kinds of language, and thus sounds, used in correspondence with different people? How do we "sound" to those reading our emails, and how do the emails of others sound to us?
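As a rough illustration of the kind of word-frequency mapping the blurb describes, here is a minimal sketch; the sample emails and the function name are invented for illustration, not part of the actual project:

```python
from collections import Counter
import re

def word_profile(emails):
    """Aggregate word frequencies across a list of email bodies --
    a crude proxy for a correspondent's written 'voice'."""
    counts = Counter()
    for body in emails:
        # Lowercase and split on non-letter characters.
        counts.update(re.findall(r"[a-z']+", body.lower()))
    return counts

# Hypothetical sample correspondence (not from the project):
emails = [
    "Hi Andrew, see you at the meeting tonight!",
    "Hi Mary, the meeting went well -- thanks!",
]
profile = word_profile(emails)
print(profile.most_common(3))
```

Comparing such profiles across different correspondents is one way to ask whether our "voice" shifts depending on who we are writing to.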
Fang-Yu Lin: From the Great Beyond: Internet as an Entity? Channeling the Net Through a Robotic Typewriter
Here comes a robotic typewriter that is your conduit to the virtual: a medium that channels the invisible, intangible entity called the Internet. From the Great Beyond is an interactive installation that lets the user converse with the Internet through the typewriter's keyboard; the typewriter searches the Internet and prints out perhaps whimsical, maybe intelligent, or simply irrelevant responses to the user's input. By "talking" to the Internet, one may discover the characteristics of this being.
James Clar: 3D DISPLAY CUBE
The 3D Display Cube was hand-built from one thousand individually controllable LEDs soldered into a 10x10x10 freestanding cube matrix. Each LED acts as one pixel in the spatial array and can be refreshed at over 60 frames per second, creating a low-resolution 3D television. Like any other display unit, the Cube has an input (a serial port) through which data can be sent to manipulate the 3D image live. The display can then be hooked up to a camera so that live video input is displayed spatially, or to a sound-module chip that takes live audio input to manipulate 3D structures, creating dynamically changing spatial light sculptures.
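To give a sense of the data such a serially driven cube consumes, here is a minimal sketch of a 10x10x10 frame buffer; the voxel layout and one-byte-per-LED framing are assumptions for illustration, not James Clar's actual firmware or protocol:

```python
W = 10  # the cube is 10 x 10 x 10 = 1000 LEDs

def empty_frame():
    # frame[z][y][x] -> LED state: 0 (off) or 1 (on)
    return [[[0] * W for _ in range(W)] for _ in range(W)]

def pack_frame(frame):
    """Flatten the voxel array into 1000 bytes, one per LED,
    in z-major order -- the shape of payload a serial link
    like the Cube's input could stream each refresh."""
    return bytes(frame[z][y][x]
                 for z in range(W) for y in range(W) for x in range(W))

# Example: light a diagonal column through the cube.
frame = empty_frame()
for i in range(W):
    frame[i][i][i] = 1

data = pack_frame(frame)
print(len(data), sum(data))  # 1000 bytes total, 10 voxels lit
```

At 60 frames per second this hypothetical format works out to about 60 KB/s, which hints at why a dedicated serial input (rather than, say, manual control) is what makes live video and audio-driven sculptures possible.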
Some images from the meeting are here.