Polish and Present

I “play tested” the laser and OSC control with a few students using phone accelerometers. There were quite a few iterations, and I ran into some accessibility issues at the last minute:

Test One:
Three phones controlling a single pitch. I summed the float values from each accelerometer axis (X, Y, Z) instead of letting each axis control a different pitch. This worked pretty well, although the scaling was a little off. Having three students control a single pitch made them interact more with the laser’s shapes, as they homed in on combinations of frequencies that made more interesting shapes. I generally liked this approach, but the issue was that free, easy-to-set-up OSC applications on iOS are practically impossible to come by.

This is a big problem for a class demonstration. I can’t spend a lot of time setting up the OSC routing for multiple applications, particularly the iOS variants, which are convoluted.

That leads me to solution #2:

Have only two phones control two pitches each via (X, Y) accelerometer data. By ratio, at least two people in class should have Android phones that can install “oscHook”, which is free and very easy to set up. This gives two users four pitches total to manipulate. Four pitches seem to make the most complex shapes while still allowing the simpler shapes to occur (from unison notes).

With “oscHook” I can also guarantee that the OSC packets picked up by Max through the [udpreceive] object use the same routing: they’re all sent as “/accelerometer/raw/x” and so forth.
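Since the routing is fixed, it’s also easy to sanity-check outside of Max. Here’s a minimal sketch of a listener that mirrors the [udpreceive] setup, assuming the python-osc library; port 9000 is just a placeholder for whatever [udpreceive] is set to:

```python
# Minimal OSC listener mirroring the [udpreceive] routing, to confirm each
# phone really sends "/accelerometer/raw/x" (and /y, /z) the same way.
# Assumes the python-osc library; port 9000 is a placeholder.
from pythonosc import dispatcher, osc_server

def on_accel(address, value):
    print(address, value)  # e.g. /accelerometer/raw/x 0.12

disp = dispatcher.Dispatcher()
for axis in ("x", "y", "z"):
    disp.map(f"/accelerometer/raw/{axis}", on_accel)

# Anything arriving under a different address (another app's routing)
# gets flagged so it can be remapped by hand.
disp.set_default_handler(lambda addr, *args: print("unexpected:", addr, args))

osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), disp).serve_forever()
```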

The third (failed) idea:

Add another phone and user to control the volume of the other two users’ frequencies. This sounded cool because amplitude controls the size of the projected shape. In practice, it was underwhelming at even half volume — the third user essentially just made the projection less interesting by manipulating the volume. I trashed this idea.

Polish (or lack thereof):

I know there’s a more aesthetically pleasing setup than the balloon/mirror cupped over the Bluetooth speaker I’m using. I wanted to create a housing that hides the speaker and LED laser, but because the laser I bought is operated by a button instead of a switch, I have to be able to physically tape over the button to keep the laser on. A housing would prevent me from doing this, so I have to leave the speaker and laser exposed. This is a huge bummer; if I iterated further on the design, I would buy another laser with a switch, but they go for about $35 on Amazon and I’d have to wait for shipping.

Surprisingly, the laser has great definition even from across the room in the art lab. It was easy to set up to project onto the green wall at around chest height. I think this will have to do for the demonstration/presentation.

Summary:

After play testing, I feel confident in how the laser operates through OSC control and accelerometers, and I’ve accounted for differences in operating systems and apps by (hopefully) selecting only Android users to participate. The laser is set up in the art lab at a reasonable distance and height for viewing.

I plan to record video of it in action for my final documentation.

Everything Working (and a few known unknowns)

After the last class discussion, I really thought that giving multiple people control of a band of frequencies would be a fun, interactive way to present the “laser Chladni plate” idea.

I’ve gone ahead and premapped three phones via OSC to frequency ranges, roughly 300 Hz per person. With three “performers,” each band has its own effect on the shape of the laser. Low/mid/high bands each do something particular, so working together (or against each other) will produce different shapes and results.
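The band assignment itself is simple. As a rough sketch (the 200 Hz base and the 0–1 control value are my assumptions, not the patch’s exact numbers), each performer’s scaled accelerometer value lands inside their own 300 Hz slice:

```python
# Sketch of the per-performer mapping: three phones, ~300 Hz per person,
# stacked into low/mid/high bands. The 200 Hz base is an assumption
# carried over from earlier tests, not the exact value in the patch.

BASE_HZ = 200.0
BAND_HZ = 300.0

def band(performer: int) -> tuple[float, float]:
    """Performer 0/1/2 -> the (low, high) edges of their band."""
    lo = BASE_HZ + performer * BAND_HZ
    return lo, lo + BAND_HZ

def pitch(performer: int, control: float) -> float:
    """Map a 0..1 control value (scaled accelerometer) into the band."""
    lo, hi = band(performer)
    return lo + max(0.0, min(1.0, control)) * (hi - lo)

for p in range(3):
    print(p, band(p))  # 0: (200, 500), 1: (500, 800), 2: (800, 1100)
```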

I’d like to test this concept in class before I actually record the idea with a camera as a performance. There might be a few hiccups, but I’ve taken precautions so it can be set up as quickly as possible. Each phone must download an OSC-enabled music app that sends accelerometer information. I’ve found a few free ones for both Apple and Android platforms. From there, the user has to plug in my network IP address and the corresponding port number.

After that, since the apps route the OSC packets differently, I’ll have to identify on each user’s phone what address the accelerometer data is sent as. Usually it’s “/accelerometer”, but it can vary based on how the app was coded.

From there, each user will mess around with controlling the shape. I’ll try “conducting” by bringing people in and out, using the volume mixer, or encouraging more or less phone movement.

I think the results are going to be pretty entertaining, but I’m expecting I might need to give each person more or fewer frequencies depending on what shapes come out, which is very easy to change.

After the results from experimenting with three phones, the last step is to find the sleekest presentation of the system at work. I will likely use a GoPro and find a nice board/wall to project the laser onto. If the performers are in the shot, I’ll need to do a little choreographing around the laser to improve the overall presentation.

 

Recalibration

I am still very much interested in the laser oscilloscope I showed in class last week; however, I’m still figuring out the best way to showcase the shapes the laser makes from frequencies.

Over the weekend, I spent some time experimenting with user control of the sound. The simple solution was to send OSC (Open Sound Control) data from my phone to Max/MSP. Each axis of the phone’s accelerometer could control the pitch of one voice, which would allow a chord to be expressed.

The first test just scaled the data to a band of frequencies from 200 Hz to 1500 Hz, an area the laser seemed to respond to well. It combined all three axes into a single voice.
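That first mapping amounts to a linear scale of the three summed axes into the band. A minimal Python sketch of the idea, where the accelerometer range is an assumption (about ±10 m/s² per axis), not something I measured:

```python
# Sketch of the first test: sum the three accelerometer floats and
# scale the total linearly into the 200-1500 Hz band the laser likes.
# The +-10 m/s^2 per-axis range is an assumption about the phone data.

FREQ_LO, FREQ_HI = 200.0, 1500.0   # band the laser responds to well
SUM_LO, SUM_HI = -30.0, 30.0       # three axes at roughly +-10 each

def pitch_from_accel(x: float, y: float, z: float) -> float:
    """Linearly map the axis sum to a frequency, clamped to the band."""
    t = (x + y + z - SUM_LO) / (SUM_HI - SUM_LO)
    t = max(0.0, min(1.0, t))
    return FREQ_LO + t * (FREQ_HI - FREQ_LO)

print(pitch_from_accel(0.0, 0.0, 9.8))  # phone flat on a table
```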

The second attempt mapped frequencies to each axis by storing pitch information inside a [coll] object. This ensured that pitches did not float freely, but instead snapped to a set that resembles a piano playing all white keys. A [line] object allowed gliding between these points as they activated, because the “detune,” or wave offset, between frequencies caused more motion in the laser. It was a bit underwhelming.
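In spirit, the [coll] lookup snaps a continuous value to the nearest white-key pitch, and the [line] ramp glides toward it so the voices detune against each other in transit. A rough Python equivalent, where the three-octave range starting at C3 is my assumption:

```python
# Rough equivalent of the [coll] + [line] idea: snap a continuous
# frequency to the nearest white key, then glide toward it in small
# steps so the in-between detune keeps the laser moving.
# The three-octave range starting at C3 (MIDI 48) is an assumption.
import bisect

WHITE = [0, 2, 4, 5, 7, 9, 11]  # C D E F G A B, in semitones above C

def midi_to_hz(m: int) -> float:
    return 440.0 * 2 ** ((m - 69) / 12)

# White-key frequencies for three octaves, like rows in a [coll].
TABLE = sorted(midi_to_hz(48 + 12 * octave + step)
               for octave in range(3) for step in WHITE)

def snap(freq: float) -> float:
    """Nearest white-key frequency in the table."""
    i = bisect.bisect_left(TABLE, freq)
    return min(TABLE[max(0, i - 1):i + 1], key=lambda f: abs(f - freq))

def glide(current: float, target: float, amount: float = 0.1) -> float:
    """One tick of a [line]-style ramp from the current pitch to the target."""
    return current + (target - current) * amount

print(snap(431.0))  # -> 440.0 (A4)
```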

Lastly, I ended up scaling all chromatic notes across the axes for more fine-tuned control. It was still a bit underwhelming. Other data from the phone, like the light sensor or gyroscope, could be interesting, or a combination of the phone data with other controllers, like standard MIDI controllers, could make a “performance” more captivating. But ultimately, I don’t think this is compelling.

So for my recalibration, I will focus on composing a piece of music around the shapes the laser produces and filming it. Some questions to ask: Does the music have to sound good? Can it be “non-functional,” and thus not harmonious, but based purely on the shapes the sound produces? What kinds of signal processing effects could be used? Could I combine a precomposed piece that focuses on those frequencies, but “perform” it as a DJ of sorts, using the phone accelerometer to manipulate effects and other parameters?

What recording of the sound and visuals will best archive this project? Is there a better medium to project the laser onto? Can I combine the recorded video and sound from the laser and further process it with Jitter, or is that cheating?

Looking forward to experimenting more.

Maquette

So, my original idea was way too complicated. I’ve settled on a technique I think has a lot of potential and still uses sound. The idea is not my own, so I’m looking for ways to augment it further, both visually and in the sound behind it.

Introducing: The Laser Oscilloscope
https://www.youtube.com/watch?v=utcmSCTGVj0&feature=youtu.be

Using sound to vibrate a membrane (in this case, a balloon), you can create many different shapes depending on the frequencies you input. Amplitude seems to change the size of the shape as well. The first attempt used a bowl and stretch wrap, but that was very unreliable, and the bowl itself vibrated too much, causing interference.

The second attempt involved stretching a balloon over an old vitamin container and gluing a piece of the reflective center of a DVD to the middle. The pill bottle fits perfectly over a single speaker cone inside my Bluetooth speaker. The resulting images were much clearer.

The shopping list so far has included:

  1. LED laser, $18
  2. Bluetooth speaker, $25
  3. Gorilla Glue, $4
  4. Balloon Pack, $7
  5. Broken kettlebell workout DVD, FREE!
  6. Duct Tape, $4

Since the initial concept works really well (persistence of vision is cool), I’m now wondering how to make the resulting image larger or more interesting. I know you can technically make a hologram-like illusion by projecting four copies of an image into a prism, but that might require too many lasers and a multichannel speaker setup. I suppose reflecting the laser at the right angle might achieve a similar effect.

Also, there’s the question of what kind of music or sound I can design to create different shapes. The initial patch was done in Max/MSP by creating three [cycle~] objects, each emitting a sine wave at a floating-point frequency. When the signals are mixed, the detuning, or offset, of the waveforms produces more complex shapes, while “pure” intervals like fifths and octaves produce simpler shapes, like a figure 8. I’m wondering what effect EQ filtering, feedback, and other signal processing will have on the shapes the laser takes.
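To make the ratio idea concrete, here’s a small sketch of the kind of signal the patch mixes, with numpy standing in for the [cycle~] objects. Pure ratios like 2:1 (octave) or 3:2 (fifth) stay phase-locked and trace a stable figure, while a little detune makes the figure slowly rotate and morph; the frequencies here are placeholders, not the patch’s actual values:

```python
# Sketch of the three-sine mix driving the membrane, with numpy standing
# in for three [cycle~] objects. Frequencies are placeholders.
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of samples

def mix(freqs):
    """Sum equal-amplitude sines and normalize so the output stays in range."""
    out = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return out / len(freqs)

# 220/330/440 Hz form exact 3:2 and 2:1 ratios: the phases stay locked
# and the projected figure holds still.
stable = mix([220.0, 330.0, 440.0])

# A couple of Hz of detune makes the phases drift, so the shape keeps
# rotating and morphing -- the "motion" the detuning relies on.
drifting = mix([220.0, 330.7, 441.3])
```

The detuned version beats at roughly a hertz, which is the kind of slow drift the offset produces in the projected shape.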

As I experiment with the signal processing side, there may be an algorithmic way to produce different shapes within Max. The frequency shifting could also be controlled by another form of input, making it more interactive. For example, pitches could be controlled by sensors: a webcam, a light sensor, a proximity sensor, maybe even a Wiimote or USB controller. It might be more fun to have some abstract but physical control over the sound as it changes the laser’s shape.

Lastly, there might be other mediums to project and reflect the laser onto: mirrors, layered translucent fabrics, liquid; I’m not sure. I’d have to experiment more with that, but I do believe the final version should feel a lot bigger than this test version.

Concept Proposal

When I was working on sound design for a game here last semester, I ended up using induction microphones to capture the electromagnetic fields from electronic devices.

In theory, a color-changing bulb will emit different wavelengths as it produces different colors. Dimming and flickering lights should also produce different sonic results.

Using Max/MSP, I should be able to control lighting fixtures from a laptop. The induction mics will pick up the changes in lightbulb behavior and send them back into a USB audio interface. From there, the electromagnetic responses can be further processed inside Max, and the resulting audio sent to a speaker system.

I’m curious how the two systems will play off each other. What visuals will “sound good”? Will the desired sound affect the way the lights behave? The visuals would then be informed by what kind of sonic information I want to create — rhythms, pulses, different pitches.

I’m a bit worried about both the cost of the rig and making sure Max works properly with the equipment. There are already custom externals for controlling DMX from Max, and I’ve chosen hardware that seems to be supported. Still, open-source software always tends to have some weird bugs or problems I can’t anticipate. I think the value of learning how to control DMX for future projects makes it worth it — it has a lot of potential for live visuals and installations.

Here’s a little mockup/flowchart of the rig I’m thinking of:

Dan Flavin, minimalist light art.

I chose Dan Flavin because minimalists have always appealed to me growing up, particularly musicians like John Cage, Steve Reich, and Terry Riley. Flavin’s simplicity was a legitimate statement in the ’70s and ’80s when he came to public attention.

Flavin was from New York, and in the early ’50s he was studying for the priesthood. Many art critics and historians would later attribute some qualities of his work to this early religious interest; however, Flavin usually denied attaching any particular meaning to his work.

He later enlisted in the US Air Force, where he trained as an air weather meteorological technician. Through the service, he was also able to study art in Korea through the University of Maryland. When he returned to the US in 1956, he attended multiple schools, eventually going to Columbia University for drawing and painting.

He was hired at the Guggenheim in 1959, originally as a mailroom clerk, then a guard, and later became an elevator operator at MoMA.

His first significant works were a series called “Diagonals”. He rejected even the concept of a “work”, instead calling them “proposals” or “experiments”. Each diagonal has no further title, just the date it was installed (and sometimes a dedication or subtitle).

He rejected the labels of being called a minimalist (as most minimalists do, in my experience).

The diagonal appears to be “without mass” and of “indeterminate volume.” Critics tend to say the works are ephemeral and temporary, since the fluorescent bulbs eventually burn out.

Flavin’s use of preconstructed bulbs, instead of creating his own materials, falls in line with artists like Duchamp, who installed readymade objects like a bicycle wheel or a urinal. Additionally, it allowed him to focus on other considerations, like the space surrounding the light, which becomes part of the work itself.

The Diagonal of May 25, 1963 is described as “the diagonal of personal ecstasy,” its “forty-five degrees above horizontal” position being one of “dynamic equilibrium.”

Light vs. Paint
Part of Flavin’s experimentation lies in how light behaves in almost the opposite way to paints and pigments. Blending pigments eventually results in black paint, while blending spectrums of light eventually produces white light.

The ambient light from the four colors is a white light.

The primary colors differ as well. Pigment primaries are red, yellow, and blue; light primaries are red, green, and blue.

Green light is so intense that it can read as white. Red light cannot be produced by the phosphors alone, so the tube is tinted, and the red lighting is muted.
These corridor pieces block a hallway, forcing visitors to find another path through the exhibit. You can glimpse the other side of the wall through the open space in the corner.

Space & Architecture
Flavin’s work isn’t just the lighting he installs; it’s the space surrounding it. Putting a light in a corner or on a ceiling has the deliberate purpose of making the audience experience the physical space itself, and how the light occupies it.

His late work became larger-scale installations, sometimes taking over entire buildings.
This installation in an arcade is one of his last works. The scale of it is pretty insane.

Key Ideas
“It is what it is and it ain’t nothing else”
Flavin denied any particular meaning to his work; however, many tie his background studying for the priesthood to readings of his work as religious conversion and spiritual epiphany.

Preconstructed materials
Avoided constructing his own materials, opting for commercially available fluorescent lighting. These lights are “perishable”: they have a lifecycle and are thus ephemeral.

Connection to Op Art: Emphasis on lighting and its effects
Translated this 1950s concept to sculpture.

Environment is part of installation
Flavin’s lighting tends to emphasize the space it occupies. The light diffusing from the bulbs into the space is part of the work.

“The ephemeral quality of the light itself is arguably completely contradictory to the otherwise industrial character of standard Minimalist materials like steel, aluminum, concrete, plastic, glass, and stone. Thus, Flavin’s legacy is less about his work as a significant Minimalist artist than it is in his ability to look beyond the movement.”

Matt Pietrucha Intro and Portfolio

I’m Matt Pietrucha, a new-ish grad student at WPI in the IMGD program. My background is a B.A. in Music Education from Montclair State University in NJ.

I think my earliest memory of experimenting with light is being a little kid playing with a Lite-Brite. If you’re not semi-old like me and have never seen one of these things, here you go:

In high school I was mostly into experimenting with recording music and making electronic music. I got into Ableton Live and started making beats and glitch music as probably a lot of people do. This would become one of my greatest passions and lead to all sorts of other related fields like sound design and interactive programming in programs like Max/MSP.

One of the things a friend and I would do in high school and early college was go out at night and do light painting for fun. One of my early album covers, for an LP of beats I made junior year, is light art we created in the woods by having each person write an individual letter or two.

Lately I spend most of my time using Max/MSP, but I’m starting to miss the idea of performance. Electronic performance is usually a hard sell — nobody wants to watch a DJ just pressing “play”, and even if your set-up is “live”, nobody really knows what you’re doing anyway. Exploring ways to make performances feel more immersive or interactive might bring me back into the fold. I also like considering how visuals can impact the context of music and performance at a compositional level — it’s rare that I get to compose for visuals unless I’m asked to make trailers or advertising sound design, and that’s usually utilitarian in purpose.

Here’s me tinkering around with a program called MLRV on an open-source controller called a Monome. It’s in the juke/footwork style, which is still popular in Chicago but sort of a niche thing (wow, I sound pretentious!).

My most recent audio-visual work I’m proud of is this track and video:

The visuals are generated in real time by a Jitter patch that takes MIDI data from my music, plus low/mid/high EQ bands, and maps them to a [jit.noise] object. The noise is usually Gaussian, but it shifts around a lot; the EQ and MIDI trigger different scalings, rotations, and noise types. It’s kind of minimalist, like a glorified iTunes visualizer, but I think it’s pretty cool.

I’ve also been trying to build up a portfolio of sound design and composition for animation, film, and games. This is one of my favorite things to do because it doesn’t carry the creative pressure of writing music for yourself; it just serves some other purpose. This piece is for a SCAD animation final project.

Anyway, I’m excited to work with hardware and electronics during this course. It’s definitely outside my comfort zone, but it has always been on the periphery of music composition and performance. I’m hoping to combine my interest in sound design and audio with lighting.

If for some reason you’d like to hear/see more of my work, you can check it out at http://mattpi.net

Also feel free to reach out if you need sound/music on a project!