This week, I worked on making the character's head turn and return to rest. The eyes and blendshapes work: as the head moves, the eyes lead while the head smoothly follows. On top of that, I got the Kinect linked up so the character follows a person's face. It can also look toward the direction of a sound. I am not sure if I will use that feature, but the method is in place just in case.
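The eyes-lead, head-follows behavior can be sketched as two exponential smoothers chasing the same target at different rates. This is a minimal Python illustration of the idea, not the project's actual Unity/C# code; the rates and angles are made-up values.

```python
import math

def exp_smooth(current, target, rate, dt):
    """Exponentially approach target; a higher rate means a faster response."""
    return target + (current - target) * math.exp(-rate * dt)

# The eyes use a high rate (they snap quickly), the head a low rate (it lags),
# so the eyes appear to lead while the head smoothly follows.
eye_yaw, head_yaw = 0.0, 0.0
target_yaw = 30.0  # degrees toward the tracked face (illustrative value)
dt = 1.0 / 60.0    # one frame at 60 fps

for _ in range(60):  # simulate one second
    eye_yaw = exp_smooth(eye_yaw, target_yaw, rate=12.0, dt=dt)
    head_yaw = exp_smooth(head_yaw, target_yaw, rate=3.0, dt=dt)

# After one second the eyes have essentially reached the target,
# while the head is still catching up a couple of degrees behind.
```

The same two-rate pattern also covers the return to rest: when tracking stops, the target simply becomes the rest pose and both joints ease back at their own speeds.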
Procedural Modeling – WIP
Progress was good, but my original idea of using a curve to design the car only worked until I reached the front grill and tailgate. After some thought, I reworked the chassis and started from a box with some extrusions. Currently, I have a curve that can be used to create traffic, and the tool randomizes rims and grills so far. The next step is to set up a randomize button and connect the transforms together. I am not sure how to configure it just yet, but I have the foundation set and ready to go.
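One way to structure the randomize button is to drive every part choice from a single seed, so each car along the traffic curve is reproducible. This is a hedged Python sketch of that idea; the part names and the `randomize_car` helper are purely illustrative, since the real tool would switch geometry variants by parameter instead.

```python
import random

# Hypothetical part libraries; in the actual tool these would be
# geometry variants selected by an integer parameter.
RIMS = ["five_spoke", "mesh", "turbine"]
GRILLS = ["horizontal_slat", "honeycomb", "billet"]

def randomize_car(seed):
    """One seed drives every choice, so a variant can be rebuilt exactly."""
    rng = random.Random(seed)
    return {
        "rim": rng.choice(RIMS),
        "grill": rng.choice(GRILLS),
        "ride_height": round(rng.uniform(-0.05, 0.05), 3),  # meters of offset
    }

# The same seed always yields the same car, so traffic along the curve
# can be populated by seeding each copy with its index.
traffic = [randomize_car(seed=i) for i in range(5)]
```

A "randomize" button then only needs to pick a new seed; everything downstream, including the connected transforms, follows from it deterministically.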
Creating a personal experience where a vital part of the project is to encourage a bond, I had to ensure the audience could interact with my character without knowing it. I researched robotic artists and engineers such as Edward Ihnatowicz, Rafael Lozano-Hemmer, and Kenneth Rinaldo. These creatives used multiple input devices to create pieces of art that mimicked life. Ihnatowicz's SAM, an animated flower-like sculpture, used four directional microphones to determine which direction the flower should bend depending on the sound in its vicinity. (Zivanovic) Lozano-Hemmer's Standards and Double Standards is a piece of 10 to 100 floating belts that uses a tracking system driven by a surveillance camera to follow individuals. (Lozano-Hemmer) The system determines which belts should move to avoid those individuals in real time. Rinaldo's Autopoiesis contains multiple vine-like sculptures that sense viewers using infrared sensors placed within the sculptures. (Rinaldo) The data is fed to the processor, allowing it to react accordingly to the interaction. Each of these interactive art pieces uses some sort of input device that gives the artist the freedom needed to interact with the audience. I knew I would need the same type of input, and three requirements were important. First, the input device had to be able to be hidden while still collecting the important data. Second, I needed visual and audio inputs that could be used to further the experience. Last, it would need to run with the Unity engine on Microsoft Windows. These limitations led me to one device: the Kinect for Xbox One.
The Kinect for Xbox One is a device developed by Microsoft, originally intended for its 2013 gaming console, the Xbox One. The device had a rough time in the gaming community and was discontinued in 2017. (Weinberger) Although the Kinect is deprecated, it has a surprising amount of value for interactive experiences. Using a camera sensor and an array of infrared beams, it can capture multiple kinds of visual data, such as color, greyscale, infrared, and depth, all of which can be used in real time. Combining all of this visual data, the Kinect can track faces and their expressions, track up to six individuals, detect gestures, and create biologically correct skeletons. Not only does the Kinect have visual sensors, it also contains a four-microphone array. This directional array can be used to record and track sounds in real time. (MEGDICHE) The Kinect is a truly amazing device, and this sensor acts as my project's eyes and ears.
The hardware plays a vital part in making the interaction seem genuine. The microphones are used to detect viewers' responses and determine which direction a sound came from. The visual sensor detects the viewer's presence, tracking their body and facial expressions to determine what the character should say and where the character should look. The combination of all this data is just a part of what makes the portrait seem alive. These fine details are delicately tuned to influence the audience to participate in the illusion, where they can interact with and anthropomorphize the portrait.
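The principle behind determining a sound's direction from a microphone array is the time-difference-of-arrival between microphones: sound reaches the nearer mic slightly earlier, and that delay maps to an angle. Below is a minimal Python sketch of the geometry for a single two-mic pair; it is an illustration of the underlying math, not the Kinect SDK, which reports the beam angle directly, and the delay and spacing values are made up.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def direction_from_delay(delay_s, mic_spacing_m):
    """Estimate the angle of arrival (degrees from broadside) for a
    two-microphone pair, given the inter-mic time delay of the sound."""
    # Path-length difference = speed of sound * delay; the ratio to the
    # mic spacing is the sine of the arrival angle. Clamp for safety.
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A sound arriving 0.1 ms earlier at one mic of a 10 cm pair:
angle = direction_from_delay(0.0001, 0.10)
```

In the installation, an angle like this would simply become the look target for the character's head-turn system, so a voice off to the side pulls the gaze toward the speaker.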