I had three concept proposals to share with you, but while putting this one together I made up my mind and chose it:
aniRGB: Animation Compressed
My concept proposal is inspired by the RGB project that I presented last time. It uses three different colors to compress three distinct images into one that can be decompressed when viewed under different lights.
This presentation is going to be hard to watch because I have no visualization of it – since, well, my project is to do the visualization… so bear with me here.
My proposal is that if I compress more images with continuous colors and change the light gradually during presentation, I can achieve animation. The original RGB project was obviously drawn by hand, but for this project I wish to computationally generate or “compile” animated images from pre-existing data sources, like a top-down view of player movement in 3D games.
This means that the generated image, under normal lighting conditions, will show the entirety of the player traces during that round, essentially like a long-exposure image. When viewed with changing light, the player becomes animated, and you can view different parts of the session by controlling the light color.
I have not seen anything that allows playback of digitally generated animation sequences without involving computer parts, so this is cool. Just imagine a future where our AI overlords control all computers and we are forced to remove computers from society – good news – you can still use an RGB light to watch video game sessions, sports, and esports matches like football (soccer)!
There are three components to this: finding a data source for the animation, generating the RGB image, and presenting the image under changing light.
Components
- Animation Source
- RGB Image Generation
- Presentation
Animation Source: Research Datasets
I’m going to visualize datasets from some of my research projects, such as the NVIDIA-sponsored multiplayer First Person Shooter game where we put two players in a map to shoot each other and study the effects of network latency and our compensation techniques, or the WPI-UML collaboration on a VR wheelchair training simulator where participants learn to safely drive powered wheelchairs and avoid collisions in complex environments.
Both of them record player movement data in the map, so an animation of each player session can be derived from the dataset. A symbol will be drawn to represent the player, and the environment will be static.
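To make the idea concrete, here is a minimal sketch of turning a movement trace into per-sample image layers. The `(x, y)` format, the canvas size, and the one-pixel “player symbol” are all placeholders I made up – the real datasets log richer state, and the symbol would be an actual sprite:

```python
import numpy as np

def frames_from_trace(trace, size=64):
    """Turn a list of (x, y) positions (normalized map coords in [0, 1))
    into one RGB layer per sample, with a single-pixel player symbol.

    Placeholder assumptions: normalized coordinates, a 64x64 canvas,
    and a white one-pixel symbol (its color would later come from
    the sample's timestamp).
    """
    frames = []
    for x, y in trace:
        layer = np.zeros((size, size, 3))
        row, col = int(y * size), int(x * size)
        layer[row, col] = 1.0  # white placeholder symbol
        frames.append(layer)
    return frames
```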
I’m thinking about sending the composed animation image to the research study participants as a souvenir.
aniRGB Image Generation
Now this can be technically hard – I can already think of multiple challenges. For example, if I decide that colors should be additive when a pixel gets used more than once, then anything that isn’t moving much will easily push that pixel past the 255 RGB limit and turn it into pure red, green, blue, or a combination of those. If I decide that colors should not be additive, I imagine I would take the higher of the R, G, and B values across all the occasions the pixel was used – but whether that still provides the same visual effect is unknown. Things might look much blurrier, because we’re gradually changing the color while gradually animating, meaning nearby regions would have similar colors. This will get worse and worse as temporal resolution increases, i.e. as I increase the “frames per second” of the images or the number of snapshots in a composed animation image.
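The two compositing options above can be sketched in a few lines of NumPy. This is just me experimenting, not a settled design – the function name and the fixed canvas size are assumptions:

```python
import numpy as np

H, W = 64, 64  # hypothetical canvas size

def composite(frames, mode="max"):
    """Combine per-frame RGB layers (float arrays in [0, 1]) into one image.

    mode="add": colors accumulate, so a pixel that is revisited often
    blows out toward pure red/green/blue once clipped.
    mode="max": keep the per-channel maximum instead, which avoids
    blow-out but may wash neighboring hues together.
    """
    canvas = np.zeros((H, W, 3))
    for frame in frames:
        if mode == "add":
            canvas += frame
        else:
            canvas = np.maximum(canvas, frame)
    return np.clip(canvas, 0.0, 1.0)
```

The clipping step is exactly where the additive mode loses information: once a channel saturates, every later frame that touches that pixel is invisible.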
Presentation
Presentation can either be easy or easier. I can simulate the changing light digitally just like I did in the last presentation – or I can print the images, purchase colored lights, and create a dark box for viewing.
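For the digital route, a crude sketch of the light simulation, assuming a simple reflectance model (each pixel reflects its own color scaled per-channel by the light’s color – an approximation, not measured physics):

```python
import numpy as np

def view_under_light(image, light_rgb):
    """Simulate viewing a printed image under a colored light.

    Assumed model: per-channel multiplication, so a pure-red light
    (1, 0, 0) hides everything drawn only in green or blue.
    `image` is an HxWx3 float array in [0, 1].
    """
    return image * np.asarray(light_rgb)
```

Sweeping `light_rgb` through the color cycle over time would then play the animation back on screen, the same way I simulated it in the last presentation.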
I think the concept is cool. One question I have is whether you would focus on a set of colors for the animation, or try to map the whole image with a broad color map (as in, the animation plays as you turn up one of the RGB settings continuously).
Hi John, I think instead of RGB, I’ll use HSL and only change the hue so that the brightness and saturation stay consistent. I will have a setting for the total time it takes the light to cycle through all colors, and I will map each image’s color to the time it happened at – for example, if I have an animation sequence of 8.83s and set the full cycle time to 10s, the last image gets the color 88.3% of the way through the full range.
https://www.w3schools.com/colors/colors_hsl.asp
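That mapping is small enough to write down directly. A sketch using Python’s standard `colorsys` module (function names are mine, and note that `colorsys` orders its arguments hue, lightness, saturation):

```python
import colorsys

def hue_for_time(t, cycle=10.0):
    """Map a timestamp t (seconds) to an HSL hue in degrees.

    A full light cycle of `cycle` seconds sweeps hue 0-360, so an
    event at t = 8.83 s with a 10 s cycle lands 88.3% of the way
    through the range. Brightness and saturation stay fixed.
    """
    return (t / cycle) * 360.0

def rgb_for_time(t, cycle=10.0, s=1.0, l=0.5):
    """Fixed saturation/lightness, hue from time; returns RGB in [0, 1]."""
    return colorsys.hls_to_rgb(hue_for_time(t, cycle) / 360.0, l, s)
```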
Hi H L, I like how you’ve taken inspiration from your light artist research and are now incorporating your talents with it! For presentation, I think presenting digitally will work really well because you can present it as big or small as you would like, and you will have full control over the color of the lights, which I think will be helpful.
Hi H L, this is a really cool idea! I think it would be a bit difficult to find lights of the perfect shade/color to get this to work physically rather than digitally, but I do think having that physical aspect would be really cool! If you have enough time, it would be really cool to explore a physical version of your project so users can get that first-hand feel!!