For my final project, I created a light- and sound-based metaphor to represent how increased activity and sensory inputs can lead to a heightened sense of chaos and distraction, while quiet and stillness can result in peace and calmness. The physical portion of my project draws on a representation of a beach illuminated by bioluminescent phytoplankton. This is a phenomenon that can only be seen on the darkest of nights, when sensory input is at a minimum. However, the only way to trigger it is to create movement in the sand and/or water. This adds to the noticeability of the plankton and the excitement of the experience, but draws away from the surrounding peace of an empty beach at night. My project is meant to represent this dilemma through user input and triggered sounds/lights.
The piece itself consists of seven foil strips, each of which triggers a piano note and an exterior LED. It also includes four pools of water, each of which triggers the sound of a water droplet and a submerged LED. Combined, these sounds allow the piece to be played as a kind of musical instrument in addition to serving the metaphorical representation.
Linked here is a video which shows the overall design, the construction process, and a brief demo of the piece itself:
For my final project I chose to portray the metaphor “Blind as a Bat”. Building off of this metaphor, I wanted to explore different ways to experience the world without the sense of sight. To begin, I determined that the input sensor for this project would be a line sensor that I had from a previous project. This sensor was intended to be used for line following, but the way it operates allows it to also serve as a short-distance proximity sensor. I thought it would be interesting to have to feel around and get very close to an object to sense it, as if you could not see the object. This sensor also gave 8 separate outputs, one for every IR sensor in the array, so I determined two main ways of using this information. First, I computed the normalized average value to serve as a measure of how close the sensor was to an object. Then I also calculated the centroid of the measurements to determine which side of the array the object was on. Using these values, I then used an RGB LED, piezo speaker, and a vibration motor to output information to the user. After experimenting, I found that the output methods did not work well together initially. To fix this I added a pulse to the device: all three output methods would pulse simultaneously so that they tied into each other, and the pulse would quicken as the sensor detected an object getting closer. The speaker and vibration motor would pulse increasingly faster as the object got closer, while the RGB LED pulsed the blue LED continuously and flashed the red or green LED more as the centroid shifted from right to left, so that as an object moved from one side of the array to the other, the light would become more red or more green depending on the side. Below is a video of the device in operation.
Below the video are more pictures showing the development of the device after the concept post with the addition of the RGB LED, speaker, and vibration motor (http://www.joshuarosenstock.com/teaching/IMGD3200_B16/2016/12/08/zachary-armsby-final-project-concept/).
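The normalized average and centroid described above can be sketched in Python. This is a minimal illustration of the math, not the actual device code (which runs on the Arduino); the 10-bit ADC range is an assumption.

```python
def proximity_and_centroid(readings):
    """Compute closeness and left/right position from an 8-sensor IR array.

    readings: raw values from the 8 IR sensors (higher = closer).
    Returns (average, centroid): average is the normalized mean
    (0.0 = nothing near, 1.0 = very close); centroid is a weighted
    index from 0 to 7 locating the object along the array.
    """
    max_reading = 1023.0  # assumes the Arduino's 10-bit ADC range
    normalized = [r / max_reading for r in readings]
    average = sum(normalized) / len(normalized)
    total = sum(normalized)
    if total == 0:
        # Nothing detected: report the center of the array.
        centroid = (len(normalized) - 1) / 2
    else:
        centroid = sum(i * v for i, v in enumerate(normalized)) / total
    return average, centroid

# An object close to the right-hand end of the array:
avg, cen = proximity_and_centroid([0, 0, 0, 0, 0, 100, 800, 900])
```

On the device, `average` would set the pulse rate of the speaker and vibration motor, and `centroid` would steer the LED between red and green.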
For my final project I created what I like to call the relationship simulator. My goal was to capture the idea that in a relationship both members play a vital role. A relationship cannot function without the other member, and both party members must cooperate to reach their mutual goal. If one piece breaks, the whole system breaks.
The Arduino circuit and the LED strip are both incredibly simple! All that is needed is a data pin capable of Pulse Width Modulation and the ground connected to the strip. The LED strip is also powered by an external power supply. The power supply should be capable of supplying 10A, but I got away with my 2.1A phone charger, as I was only powering some of the lights at a time and not at full brightness. Below is an image of the circuit and the LED strip structure.
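The 10A figure comes from a worst-case current budget: a common rule of thumb is that each addressable RGB pixel draws about 60 mA at full white. A quick back-of-the-envelope check (the pixel counts here are illustrative, not measured from my strip):

```python
def strip_current_amps(num_pixels, ma_per_pixel=60, brightness=1.0):
    """Worst-case supply current for an addressable RGB strip.

    ma_per_pixel: ~60 mA is the usual full-white figure for
    WS2812-style pixels; brightness scales it for dimmed output.
    """
    return num_pixels * ma_per_pixel * brightness / 1000.0

full = strip_current_amps(150)                    # a 150-pixel strip, full white
dimmed = strip_current_amps(50, brightness=0.5)   # fewer lit pixels, half bright
```

Under these assumptions a fully lit 150-pixel strip wants 9A, which is why a 10A supply is recommended, while lighting only a third of the pixels at half brightness stays well within a 2.1A charger.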
The challenging (and I would like to stress challenging) part of this project was getting the Arduino and Maxuino patch to communicate to each other and send data to the LED strip (Communication is an important but difficult part of a relationship). This was accomplished by painstakingly editing the Firmata code and incorporating the Adafruit LED control.
Here is a short video of my parents ‘interacting’ with my creation.
The combination of Max, Maxuino, and the LED strip led to a difficult setup process and plenty of bugs to go around, but the difficulty of setting up my relationship simulator, in a way, furthers the metaphorical value. Relationships are DIFFICULT. Relationships are full of problems, and random things can come up that may strain the relationship or ruin it entirely.
The idea behind my final project is that of “showing your true colors”. I decided to create an abstract piece that takes a person’s unique facial gestures as input and outputs different color changes based on their facial language. The colors are projected directly onto a colorless mannequin head inside a white box with plexiglass covering the front.
I chose the light the subject “produces” to be directed at the mannequin as a comment on how our facial language not only affects how we look or are perceived, but also how it makes others feel and how our attitude reflects on them. It is a clear way of seeing how we affect others’ lives. As well, the light that bounces off the mannequin into the white box creates very faint color “moods” around the head, much like auras. Some of the light projected is faintly bounced back to the subject because of the plexiglass in the box, but they are unaware of it, much like how we are unaware most of the time of how we impact others.
The idea of having the mannequin in front of the subject is to show their “impact” directly to the other person.
The setup consists of face-tracking software created in Max that outputs numerical values indicating how much each part of the face moves. Those numerical values control the colors corresponding to each part of the face.
The concept for my final project is to use a person’s face as an instrument. While the camera is tracking their every move, this input would serve as a way to trigger some sort of instrument.
I want to use the face tracker patch to create an instrument based on the movements of your face. Since not everyone makes the same movements, it would create different melodies depending on the face.
Research: the connection between movement and music. I need to research how to tie the movements of a person’s face to music.
Tech info: use the face tracker patch and app to get numerical values as input and have those control the tempo; the pitch would change depending on the location of the face.
Here is my video documentation for my sensory metaphor project:
As I mentioned in class, I decided to make a project around the metaphor “good music is a drug”, meaning that listening to “good” music will make a person feel good, while listening to “bad” music will not. I really wanted to be able to use and play my own instruments for my final project, so in this Max patch, you can pick a key to be in, and the patch will determine what notes you are playing on an instrument (in my case, a guitar). Based on whether or not the note is in the key you picked, it will determine your “mood”.
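The in-key test at the heart of the patch can be sketched in Python. This is a minimal stand-in for the Max logic, assuming major keys and ignoring octave:

```python
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale
NOTE_INDEX = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def in_key(note, key):
    """True if `note` belongs to the major scale of `key` (octave ignored)."""
    offset = (NOTE_INDEX[note] - NOTE_INDEX[key]) % 12
    return offset in MAJOR_STEPS

# Playing in G major: F# fits the key, F natural does not.
good = in_key("F#", "G")
bad = in_key("F", "G")
```

Each detected note would nudge the “mood” up when `in_key` is true and down when it is false.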
Sorry for the quality of the recording. The microphone I use isn’t the greatest. Hopefully you enjoyed the live presentation in class, though!
For this assignment, I ultimately decided to operate on the phrase “I’m not feeling like myself today.” So, to do that, I made Emotion Simulator 2017. This application takes in a single button press and calculates some unnecessary junk in order to inform you as to how you’re feeling at that point in time.
Since this application uses a sophisticated AI that’s trying very hard to learn Human Emotion, it is requested that when you do receive your emotion (which should be accurate about 90% of the time), you inform the application as to WHY you feel this way. This could be as simple as saying “Because I haven’t slept in a week” in response to being told you feel like a zombie, or as complicated as a multi-page story in response to being told you feel like a shoe.
I made a very simple button out of paper and tin foil and combined it with the MaKey MaKey to use it as input to the Max patch. The patch itself will then randomly select an image and update a couple of text fields with what that image is and ask for a response. It will then store the response in a text file to be accessed later for things like meme-making. The whole application exercises the user’s ability to describe how they’re feeling in a completely metaphorical sense, as they’ll never actually feel like a lobster or a narwhal, but there may be a point further down the line where, after using this application, they find those terms to be accurate representations of their current state.
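The press-then-log flow can be sketched in Python. This is a hypothetical stand-in for the Max patch; the emotion labels and the in-memory list standing in for the text file are illustrative assumptions.

```python
import random

EMOTIONS = ["zombie", "lobster", "narwhal", "shoe"]  # hypothetical image labels

def press_button(rng=random):
    """Simulate one button press: randomly pick an emotion label.

    In the real patch this corresponds to selecting an image and
    updating the text fields that prompt the user for a reason.
    """
    return rng.choice(EMOTIONS)

log = []  # stands in for the text file the patch appends responses to
feeling = press_button(random.Random(0))  # seeded only for repeatability
log.append((feeling, "Because I haven't slept in a week"))
```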
I think the part that I struggled the most with was the file I/O. Apparently that’s really finicky in Max, and in general it just doesn’t like to do what you ask it to.
Below is a trailer video that I created for demonstrative purposes.
For this project I decided to create a music copying/learning machine that would accept input in the form of sound and transform that into music played onto water glasses. This was completed using an Arduino setup, a max patch, and a microphone.
The metaphor: The metaphor that I was trying to relate with this piece is that music without originality is a student without a teacher. The deviations from the expected sound in music allow new ideas to form and new songs to develop. Every changed note that one makes in trying to replicate a piece allows the musician to express their own interpretations of sound. In this case the imperfect replication of the notes by the machine creates an interpretation of the sounds and allows the robot to create its own artful music and gives it its own life.
The physical setup for this piece was relatively straightforward. It consisted of an Arduino connected to 6 servos. These 6 servos had pencils attached to their actuating arms and were securely glued to the baseplate. In front of each servo was a single water glass filled to a specific level to produce a note. The water level for each glass was determined by recording the glass’s frequency and dividing the frequency range evenly between the tones in order to get the maximum distance between each note. The pencils were specifically chosen as a subtle way to represent the teaching aspect of this piece, as the robot learns from the user.
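Spacing the tones evenly across the available range can be sketched as a quick calculation. The frequency range a glass can produce depends on the glass; the 400-900 Hz range here is purely an assumption for illustration:

```python
def evenly_spaced_tones(low_hz, high_hz, count):
    """Target frequencies for `count` glasses, spaced evenly across a range.

    Evenly dividing the range maximizes the gap between adjacent notes,
    making each glass easier to tell apart both by ear and by the patch.
    """
    step = (high_hz - low_hz) / (count - 1)
    return [low_hz + i * step for i in range(count)]

targets = evenly_spaced_tones(400, 900, 6)  # hypothetical range, 6 glasses
```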
Software: The software setup for this piece involved both Arduino programming and Max 7. The Arduino was programmed to accept serial input and map it to servo outputs: whenever a number was received over the serial connection, it would trigger the corresponding servo. This allowed the Max patch to easily communicate with the Arduino. The Max patch worked by converting all audio heard into a frequency and mapping each of those frequencies to a certain note. When a note was heard, it would record the note as well as a timestamp for that note in a matrix. Then, after a certain amount of delay, playback would begin by sending the first note in the matrix out the serial output and delaying by the next timestamp until the next note. Special considerations also had to be taken to stop the robot from recording its own playing.
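The record-then-replay timing can be sketched in Python. This is a simplified model of the matrix logic described above, not the actual patch; note numbers and timestamps are illustrative.

```python
def record(events):
    """events: list of (timestamp_seconds, note_number) as heard by the patch."""
    return sorted(events)

def playback_schedule(recording):
    """Turn absolute timestamps into (delay_before_note, note) pairs.

    This mirrors how the patch waits out each gap before sending the
    next note number over serial to trigger the corresponding servo.
    """
    schedule = []
    previous = 0.0
    for timestamp, note in recording:
        schedule.append((timestamp - previous, note))
        previous = timestamp
    return schedule

rec = record([(0.0, 2), (0.5, 3), (1.25, 2)])
plan = playback_schedule(rec)  # delays: 0.0, then 0.5, then 0.75
```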
Interaction: The main form of interaction with this piece is physically hitting the cups themselves with a pencil of your own to create the sound; however, any noise that matched the frequency of the cups would also work. This allowed some interesting interactions, such as whistling or talking to produce sounds. I feel that this added interaction made the machine seem more lifelike, almost like it was listening and trying to communicate.
Below is a video describing the forms of interaction and the overall setup. Also displayed are various people interacting with the machine.
For my final project, I was able to make an augmented guitar that helps to play the song Closer without any other instruments. It consists of three inputs: Makey Makey controlled buttons on the guitar itself, computer-vision through a webcam, and sound input through a microphone.
Music: The first thing I did was create the electronic track for the song and cut it into pieces using Ableton Live. I later ended up redoing a couple of the cuts in Max itself so that they would line up better.
Physical Piece: For the physical buttons of the project, I used coiled wire. It’s very easy to manipulate and works well with the Makey Makey. Using the wire buttons also allows their positioning to be changed slightly based on the preference of the player. I had originally used foil as the ground so that the notes would play when the buttons were pressed into it, but it looked messy and I found myself accidentally keeping my fingers on all three of the top buttons at once, which caused them all to trigger simultaneously. I then changed to my final design of using more wire as the ground. For the buttons on top, I then just had to place my thumb on the ground and touch each button when I wanted them to play, giving it more of a touch pad feel, which was much more comfortable and natural to use.
(For some reason I can’t upload the picture of the final product, but it can be seen in the video)
Metaphor: My metaphor for this project was how hobbies are a way for us to slow down time for a while and enjoy life without thinking about all of the responsibilities we may have hanging over our heads. The video that plays onscreen shows how things start to pile up as we gain more and more responsibilities growing older, which can be scary and overwhelming. By playing the song, you can slow the video almost to a stop depending on how much you play (or at times even make it go backwards slightly if you’re loud enough).
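The loudness-to-playback mapping can be sketched like this. The threshold, scaling, and reverse cap are illustrative guesses, not the values in the actual patch:

```python
def playback_rate(loudness, threshold=0.6, full_speed=1.0):
    """Map input loudness (0..1) to a video playback rate.

    Silence plays the video at full speed; playing slows it toward a
    stop, and loudness past `threshold` pushes the rate slightly
    negative so the video runs backwards.
    """
    rate = full_speed * (1.0 - loudness / threshold)
    return max(rate, -0.25)  # cap how fast it can run in reverse

silent = playback_rate(0.0)   # time passes normally
playing = playback_rate(0.6)  # time stands still
loud = playback_rate(1.0)     # time rewinds a little
```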
Video: Here is a video explaining the different pieces of the project (since I wasn’t really able to do it during class) and a demo. It takes a lot of practice to use, but overall the system works really well and can be used for any song that’s loaded into it.