This week, I worked on part of the thesis project. While testing and recording a demo, I discovered some bugs with the attention grabber: if the character goes from active to idle, the sentence would not return to 0 (the intro audio). I also need to add a breathing animation for the character, as he is too static while interacting, so that is planned for this coming week. I might be adventurous and try to do a few animations and import them, but with only 2 weeks left, I might shy away so I don't break anything and instead only add keywords to help the interaction.
Me this weekend with Houdini:
Over the past few days, I have been hard at work finishing up my car generator. The crashes with Houdini and Unreal were frustrating, to say the least. Luckily, I swapped over to Unity and fixed some code, which saved time since it was not crashing as much. Despite the craziness, I believe the experience was worth it. I finished the project and got to learn Houdini in the process. Here are some screenshots and video:
I have been working on my procedural course final. I decided to create a procedurally generated car. The deeper I dug into the nodes and the more complex everything got, the more crashes I hit. I ended up reworking some of the networks, which seemed to help, but I still got occasional crashes. Along with those issues, I noticed the rand() expression behaves oddly: it always gave me the same number, even when I generated another number from that result. It seemed to pull one value and never regenerate a new one. The workaround I found was to rewrite the code in Python. Python's random module generated new numbers every time and worked for my needs.
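A minimal Python sketch of that workaround (the function and parameter names here are my own, not the project's): a single seeded `random.Random` instance advances its internal state on every call, so each draw is a fresh value while the whole sequence stays reproducible for a given seed, unlike evaluating a `rand(seed)` expression repeatedly with the same seed.

```python
import random

def seeded_values(seed, count):
    """Return `count` draws from one seeded generator.

    A random.Random instance advances its state on every call,
    so each draw is a new number; the same seed reproduces the
    same sequence, which keeps the car generator deterministic.
    """
    rng = random.Random(seed)
    return [rng.random() for _ in range(count)]

# Same seed -> identical sequence; successive draws still differ.
a = seeded_values(42, 3)
b = seeded_values(42, 3)
```

This keeps the per-car variation deterministic (handy for a "randomize" button that can replay a seed) while avoiding the stuck-value behavior.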
In studio, we are progressing nicely. Although the end of the semester is coming, we are finishing up a few things before we have to stop and record for presentations. There is still a lot that needs to be done: art, adding content, and experimenting more with the Kinect. I do believe the hard part, creating all the methods for dictation and keywords, is done. I am looking forward to break so I can finally get back to modeling all the parts I need for the project.
This week, I worked on making the character's head turn and return to rest. The eyes and blendshapes work: as the head moves, the eyes lead while the head smoothly follows. On top of that, I got the Kinect linked up so it follows a person's face. It can also look toward the direction of a sound. I am not sure if I will use it, but the method is created just in case.
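The lead-and-follow behavior can be sketched with simple exponential smoothing. This is a hypothetical simplification, not the project's Unity code: the eyes chase the target angle at a high rate while the head uses a lower rate, so the eyes arrive first and the head trails smoothly behind.

```python
def smooth_follow(current, target, rate):
    """Move `current` a fraction `rate` of the way toward `target` each step."""
    return current + (target - current) * rate

# Eyes use a fast rate, the head a slow one, so the eyes lead.
eye_angle, head_angle, target = 0.0, 0.0, 30.0
for _ in range(10):
    eye_angle = smooth_follow(eye_angle, target, 0.5)   # near target quickly
    head_angle = smooth_follow(head_angle, target, 0.1)  # lags behind the eyes
```

Running the update per frame with the Kinect's face position as `target` would give the same effect; the rates here are illustrative tuning values.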
Procedural Modeling – WIP
Progress was good, but my original idea of using a curve to design the car only worked until I reached the front grill and tailgate. After some thought, I reworked the chassis and started from a box with some extrusions. Currently, I have a curve that can be used to create traffic, and the rims and grills randomize, so far. The next step is to set up a randomize button and connect the transforms together. I am not sure how to configure it just yet, but I have the foundation set and ready to go.
Creating a personal experience where a vital part of the project is to encourage a bond, I had to ensure the audience could interact with my character without knowing it. I researched robotic artists and engineers such as Edward Ihnatowicz, Rafael Lozano-Hemmer, and Kenneth Rinaldo. These creatives used multiple input devices to create pieces of art that mimicked life. Ihnatowicz's SAM, an animated flower-like sculpture, used four directional microphones to determine which direction the flower should bend toward depending on the sound in its vicinity. (Zivanovic) Lozano-Hemmer's Standards and Double Standards is a piece with 10 to 100 floating belts that uses a tracking system, driven by a surveillance camera, to follow individuals. (Lozano-Hemmer) The system determines in real time which belts should move to avoid those individuals. Rinaldo's Autopoiesis contains multiple vine-like sculptures that sense viewers using infrared sensors placed within the sculpture. (Rinaldo) That data is fed to the processor, allowing it to react accordingly to the interaction. Each of these interactive art pieces uses some sort of input device that gives the artist the freedom needed to interact with the audience. I knew I would need the same type of input. Three requirements were important. First, the input device had to be able to be hidden while still collecting important data. Second, I needed visual and audio inputs that could be used to further the experience. Last, it would need to run with the Unity engine on Microsoft Windows. These limitations led me to one device: the Kinect for Xbox One.
The Kinect for Xbox One is a device developed by Microsoft, originally intended for their 2013 gaming console, the Xbox One. The device had a rough time in the gaming community and was discontinued in 2017. (Weinberger) Although the Kinect is deprecated, it holds a surprising amount of value for interactive experiences. Using a camera sensor and an array of infrared beams, the sensor can capture multiple kinds of visual data, such as color, greyscale, infrared, and depth, all usable in real time. Combining this visual data, the Kinect can track faces and their expressions, track up to 6 individuals, detect gestures, and create biologically correct skeletons. Not only does the Kinect have visual sensors, it also contains a four-microphone array. This directional array can be used to record and track sounds in real time. (MEGDICHE) The Kinect is a truly amazing device. This sensor serves as my project's eyes and ears.
The hardware plays a vital part in making the interaction seem genuine. The microphones detect viewers' responses and determine what direction the sound came from. The visual sensor detects the viewer's presence, tracking their body and facial expressions to determine what the character should say and where the character should look. The combination of all this data is just part of what makes the portrait seem alive. These fine details are delicately tuned to influence the audience to participate in the illusion, where they can interact with and anthropomorphize the portrait.
Lipsync and eye controller added. All blendshapes are done. I wish to add wrinkle maps, but currently that has been put on the back burner. Head turns and eye blinks work, and the lipsync shapes are about 50% done.
Working on a procedural car generator, although I will be reworking the concept to incorporate multiple curves and use a sweep or rails.
Theme parks are places where families go to let loose and be entertained. They are full of creative art pieces, from the overall themes to the attractions themselves. Many artists, designers, and engineers have spent years working on designs to immerse audiences. A great example is Universal Orlando's Islands of Adventure. There is an area dedicated to the Harry Potter universe. As soon as you walk through the entrance, you are immediately immersed. Everything from the ground, the buildings, the shops, the sounds, and the rides is designed to mimic the books and movies. Places like this can ignite and spark creativity. For me, it was the place that inspired my project.
Anyone who has visited an amusement park or resort will agree that waiting in line is part of the experience. Knowing this, the creatives who build the attractions design sets that try to distract you from the wait time. One environment that stood out to me was in the Harry Potter and the Forbidden Journey ride at Universal Orlando. As you make your way through Hogwarts Castle, you come to a room with a tall ceiling. Within this room are paintings arranged on the walls above the visitors. If one stood and observed, they would notice these portraits are not ordinary paintings but are animated. They move and talk among themselves, fooling visitors into believing they are alive. Seeing this, I was amazed, not only by the presentation and immersion but by how people, including myself, were reacting. Visitors were standing around watching, interested in the character, but those returning already knew what to expect. They had lived the experience. The immersion fell apart because of the repetition and limited interactions. This problem led me down a tunnel of questions. What would take this concept to another level? What if the paintings saw a person? Or responded back to the visitors? What if they drove a story so each person had a unique experience every time they visited? How would the audience react? Would they feel for the character? These questions brought my project to life.
Designing my project, I knew I wanted a painting that could interact with the audience: a character that would be able to seek out and determine the presence of a person and try to communicate with them. My curiosity led me to see if I could develop an interaction that would allow individuals to emotionally connect with an artificial being. This was a challenge considering the scope of the programming requirements and the limitations of the hardware, but I knew it was possible. As I progressed with the project, I started to notice something interesting: each element of the character became more like me. Visually, the character is an old man, but his personality, intentions, expressions, and message are a mirror of myself. This discovery encouraged me. It gave me the freedom to communicate a message that could directly affect a person emotionally, as well as give each individual a unique experience with the character. Although I was startled by how much of me is within the piece, I have accepted the result. It brought my own mentality and personality to my attention, in turn allowing me to design an experience where the audience can form their own thoughts about the character without knowing they actually met a part of myself.
Notes from Houdini tutorials that I will be referencing for the final project.
- Fix weird depth of field issue. I am guessing it's just that the focus plane we are seeing needs less compression.
- Figure out a fix for the constant confidence level. The problem is that once someone speaks, it stays at the same level. I am guessing I might need to add a variable or an API update.
“We keep moving forward, opening new doors, and doing new things, because we’re curious and curiosity keeps leading us down new paths.” – Walt Disney
The world as we know it is saturated with smart technology. From the moment we wake up to the minute we fall asleep, we interact and coexist with some sort of A.I. every day. As our relationship with technology deepens, our bond with these synthetic beings strengthens. Ask another person about a digital assistant like Apple's Siri or Amazon's Alexa. What would they say about this entity? How would they categorize it? Instinctually, they would give it human-like characteristics and traits, almost humanizing the A.I. they are interacting with. The term for this phenomenon is anthropomorphism. To anthropomorphize is to give "human characteristics to animals, inanimate objects or natural phenomena." (Nauert) This act of humanizing is a way for humans to comprehend and predict another's intentions and behavior. With any piece of technology we interact with, we instinctually humanize it, inadvertently leaving a digital fingerprint of ourselves with it. Whether it is an image or text, we project ourselves onto the object. In my project, I crafted a character that represents what I believe is a piece of my personality. Even as I tried to stay unbiased while creating many different variations, the character within the piece still has properties that speak to me. Every animation, color, model, and sound is part of my psyche imprinted within the work. The combination of these elements is what I would call my avatar.
This piece of art is my voice, presented to the audience. As an artist, I like to stand back and observe viewers as they interact with my work. Having the portrait act as my avatar gives me that freedom. It allows me to speak through a different voice, as if I had put on a mask, cloaking my actual identity and distancing myself while still letting me portray my message. I believe anthropomorphizing the portrait painting will influence the viewer and intensify the connection with the character. Even though they know he is a digital being, they will comprehend him as being real. With this belief, the viewer should unconsciously, emotionally, and curiously explore what he has to say, or rather what I might have to say.
This week I reworked some A.I. responder code for Project Frame. Some of the updates:
- Mood percentage saves every 10 minutes and loads on startup
- Responses are logged and saved
- Loaded dialogue and created a simpler method for animatorPlayer()
- Created an enum with setStoryBranch() and getStoryBranch(), aka story tracking
- Changed keywordChecker() to parsePharsing(): it now sorts through the full string searching for keywords, but it could still be buggy.
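The full-string keyword search can be sketched like this. The real code is Unity-side; `parse_phrasing`, its tokenizing, and the sample keywords below are illustrative stand-ins, not the project's actual implementation:

```python
def parse_phrasing(utterance, keywords):
    """Search a whole dictation string for any known keyword.

    Lower-cases and tokenizes the full utterance instead of checking
    a single word, mirroring the swap from a per-word checker to
    full-string parsing. Returns matched keywords in the order heard.
    """
    tokens = utterance.lower().replace(",", " ").replace(".", " ").split()
    return [tok for tok in tokens if tok in keywords]

# Example: two of the three known keywords appear in the utterance.
hits = parse_phrasing("Hello there, old man. Tell me a story.",
                      {"hello", "story", "goodbye"})
```

One thing this naive tokenizer does not handle is multi-word keywords or punctuation beyond commas and periods, which may be where the "could still be buggy" cases come from.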
Finished the procedural project. My goal was to create a procedural rope bridge that can be extended and modified in UE4.
Here are some screenshots and video:
Thesis Proposal Form
What is your area of focus for your thesis research?
My area of focus for my thesis research is the uncanny valley and the experience of human-robot/A.I. interactions.
Please provide 3-5 artists whose work inspires you or that you plan to research as part of your thesis work.
Golan Levin, Edward Ihnatowicz, Kenneth Rinaldo, Rafael Lozano-Hemmer
What type of work is your thesis project? (Game, installation, ARG, etc.) AND what technology needs does it require? Will you be using your own technology or Becker's resources?
Installation. It will require a PC using Unity, Monitor, speakers, and Kinect V2.
Describe your project idea and how it ties into your thesis research in 1-2 paragraphs.
“Project Frame” is a dynamic interactive experience featuring a hanging oil painting that responds to visual and audio inputs. It will detect the presence of viewers, follow faces and body movements, and listen for spoken responses. All of this input advances a fixed “script” that the A.I. continuously references. The final presentation will consist of an environment built around the frame for maximum immersion.
This project ties into my thesis because it involves human-A.I. interaction. I want to research the effect of the uncanny valley, positive or negative. Is there a practical use for it? Also, can different physical spaces promote this effect? Researching this more deeply will help me heighten the overall experience of the project.
Describe your planned presentation method for your thesis project, (keeping in mind the gallery showcase at the end of the year).
I envision a space that mimics a Victorian library (with scope in mind). The main piece, the painting, is hung on the center wall. I plan to adapt the lighting in the space to hide the Kinect and any other sensors used in the piece. The painting itself will be a monitor with a frame built around the screen. The rest of the electronics, such as the computer, will be located behind the walls.
Please list 4 major project milestones for your interactive work for this semester. (You may also want to list your milestones for next semester, but clarify especially where you want your project to be at the end of this 3-month semester.)
- Programming: A.I. branch layout designed. *Me: character rigged and blendshapes done
- MIDTERM – Rough draft of the basic script finished. A.I. tree interacting with inputs. Alpha demo where users can say hello and have it say hello back. Some character movements: eyes, etc. *Me: character textured, connect/import blendshapes with lipsync. | Help on the script
- Character's face animated; lipSync working with correct responses. Background done.
- FINAL – Groundwork done for an interaction demo: detects users, and voice input affects the A.I. interaction using the script. *Basic interactions (audio and visual) are smooth!
- Art: Character is imported, textured, rigged, and blendshapes are applied. Prog: Must have the branch structure designed, which allows input data to affect the output visually (e.g., eyes or head follow a detected user).
- Art: Fill out the background of the scene and get lighting close to final. Prog: Have vocal input affect output.
- Art: Have basic animations imported (arm movements on interaction, etc.). Prog: Have lipSync working and animations linked with the correct responses.
- Art: Post effects, overlays, and any other polish work. Prog: Add more interactions with the data. Expand the tree and make sure it does not break during runtime.
What are your project needs as far as studio teams (ex: 2 artists, 3 programmers)?
- 2 artists for background modeling, texturing and animations.
- Writer for interaction mapping script
- 2-3 programmers for hardware work and basic A.I. work.
- 1 audio engineer for voice and other ambient sounds.
* These numbers are estimates (+/-), as I am not sure what help I may need.
- how does fashion play a role in your work — if you have a character(s) then those design decisions are important OR if you have no humans in your work, discuss the fashion of your favorite game with people in it.
Fashion is important within my work on characters. Depending on how the styles are designed and used, wardrobes can influence the audience. We can use fashion in many ways: to describe who the character is, when the character is from, and why they look the way they do. Most of the time, I use fashion to establish the setting the model resides within. For example, I might pick a certain suit that places the character in a particular time period. This could evolve into something else as I add elements of clothes that I personally like. Letting my own personal preferences influence my characters allows me to experiment with the endless possibilities that can create a unique character.
- how could you make fashion interactive and what statement would that make to wear it?
Today, there are numerous pieces of clothing that are interactive: programmable LED t-shirts, air-filtering scarves, heated sports clothing, posture-correcting shirts, app-enabled LED jackets, reactive feather jackets, etc. The list goes on and on. I believe that is the most important part of fashion: allowing the individual to express themselves through clothing. Most interactive clothing seems to be either corny or purely practical. Something that could be interesting would be a coat that can be modularized. Each piece (arms, pockets, collar) could be swapped out with whatever design the individual prefers. Attaching the parts together would be an issue, though. Velcro would be the easiest, but durability would be the main concern.
Another unique possibility for making fashion interactive would be "mood ring" clothing. The clothing would react to external and internal temperature and change color. I imagine the clothing being filled with a liquid to make this happen; the liquid would be what reacts to the temperatures. Both designs would give the individual a personalized piece of clothing, one based on their own preferences and one affected by the surrounding elements.
- Find the piece pictured below, in the Eunice and Julian Cohen Galleria (Gallery 163). Note the name of the piece and then describe your impression of the piece — what are questions it draws forth from you upon viewing?
“Flicker” by Ian Sommerville (1959), 2004. Glass chandelier, flat-screen monitor, Morse code unit, and computer. Unfortunately, this piece was not working as intended: the blinking was not functioning, and the Morse code unit was not attached. With that said, my initial impression of the piece was bland. Without the blinking, the piece is only a glass chandelier. At first, I did not even notice it hanging until someone else pointed it out. If it were lit, even without the blinking, I believe it would have caught my eye. If it were working as intended, I would have been curious about the blinks. I might have tried to decode the Morse to figure out what the message was. This would have been a fun piece to experience. The name of the piece is confusing, yet straightforward: this chandelier is about Ian Sommerville's dreamachine "Flicker," as it blinks the text of the piece in Morse code. I think the title is a weak part of the piece, but I wonder if Evans called it that because he imagined these lights talking only about Flicker and Sommerville.
Other questions that came to mind are:
What is it trying to say? How can I figure it out without cheating? Could I use this interaction in one of my pieces? Would it be a better experience if it were at eye level? Or is the experience better because it is higher up and massive? Would lowering the ambient light and putting it in a darker room create a better, more immersive environment to experience this piece? Would anyone notice the Morse code blinks if the chandelier were placed among other items that resemble its style?
- Discuss how you can imagine taking one of the pieces in the exhibit and if you were asked to contribute a work with a similar look/feel/message now, how could you make it into something interactive? Keep in mind the stated message of the Bauhaus as well as the individual feel/reading you get from the piece you choose.
In the Bauhaus exhibit, I was fascinated by the abstract shapes. I wanted to pull them apart and visualize the prints in three dimensions, to walk around the work in space and see it with depth. The way I would do this would be to create an AR experience. I would create bigger versions of the postcards with their abstract shapes and balanced forms. The viewers would open an app on a tablet that lets them aim at the prints, and the prints would come alive. Depending on the image, the shapes could rotate, scale, dotted lines might move, etc. The viewer could walk around and see the in-betweens of the print in 3D, giving them the freedom to experience the forms that create the Bauhaus style.
This week I have been working on numerous projects. The first is my human interaction final. As of now, I am struggling to get a heart rate from skin conductivity. I believe I will have to redesign the project to simulate the heart rate when the box is touched and held, although I do have some time to develop it before I need to redesign it.
The other project I worked on this week was the 3D digital art final. I finally have a character that "fits" my requirements for the project. Here is a screenshot:
I am going to do a polish pass on the face and detail it with wrinkles and defects, and make the face asymmetrical. The character's face also has multiple layers that allow me to remove the beard and make the face fuller or not. The only thing I am still deciding on is the clothes. I haven't figured out the style yet, but I am feeling a Victorian or Gothic suit.