Research & Project Updates – 2020 #5

This week, I met with my mentor and discussed the status of my thesis project. After the meeting, I reworked some of the story. Currently, story 5 is “done” for now. I am going to ask my studio writer to help with this. Hopefully he can follow the layout the way I have it. If I get the go-ahead, I will start reworking the code so everything plays out correctly.

I also worked on the character for the concept class. Here it is so far:

I am dreading the polish pass of the sculpt. I will need to do some hard-surface modeling and detailing. After that, it should be pretty easy to retopo and UV.

Research & Project Updates – 2020 #4

This week I worked on my character in ZBrush – he is coming together well. I need to clean up the muscle grouping and finalize the back armor, but I think I am at a good point for now. I also worked on and 3D printed a part for a vintage Fuji camera, revised my thesis paper based on suggested edits, and worked on my presentation outline.

Research & Project Updates – 2020 #3

This week I worked on fixing some locking bugs. I got lipsync, audio, and animation working with the storyID. I have a method developed and now have to implement it in the project; I expect it to take a couple of hours. The major issue right now is locking the system so it does not loop but instead waits until it is finished animating.
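Since the project itself lives in Unity, here is only a minimal Python sketch of the locking idea: a story can be triggered only when nothing is playing, and the lock is released solely by the animation-finished callback. All names (StoryPlayer, on_animation_finished, and so on) are hypothetical, not the actual project code.

```python
class StoryPlayer:
    """Toy model of the lock: one story plays at a time, no looping."""

    def __init__(self):
        self.is_playing = False

    def trigger(self, story_id):
        if self.is_playing:
            # Locked: ignore re-triggers instead of restarting/looping.
            print(f"story {story_id} ignored (still animating)")
            return
        self.is_playing = True
        print(f"story {story_id}: start lipsync, audio, and animation")

    def on_animation_finished(self, story_id):
        # Only the animation-finished callback releases the lock.
        print(f"story {story_id} finished")
        self.is_playing = False

player = StoryPlayer()
player.trigger(5)                 # starts playing
player.trigger(5)                 # ignored while the first run is active
player.on_animation_finished(5)   # lock released; new triggers allowed again
```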

 

Character Creation:

I worked on the base mesh. I am pretty happy with the overall character. I need to rework the torso and armor shapes, but it is going in the right direction.

 

Research & Project Updates – 2020 #2

Concept art for character creation course:

 

Studio To Do List:

  • Design – find some free FX sounds
  • Art – 3 canvas photos
  • Programming – add content for keywords and expressions
  • Programming – fix the random response switcher
  • Programming – add mood responses
  • Writing – polish current work and add in mumbling
  • Matt – animator state creation and audio voices

Set design sketch (ROUGH) for the installation: 

 

This week I will be focusing on getting some type of demo voice done for the character. I am not quite sure what he will sound like, but I will be voicing him. Hopefully I can get a few lines done so I can start work on the state machine. I am hoping it will go smoothly considering the time and effort we put in last semester. It will be exciting to finally see the character talk…

Work done over winter break – Blog 2020 #1

Back to classes for the last semester of my MFA. Lots to do in a short amount of time. I am excited to get to this part of the journey, and I am looking forward to the installation of my thesis. But as of right now, I am focused on finishing the last parts of the project.

Over the break I went ahead and modeled, re-lit, and adjusted the scene. Here are some screenshots of that work:

I am happy with the overall scene and now have to look at what is left. Off the top of my head, I know I need audio for the old man and to set up the animator states. I will dive deeper into it later this week and create a to-do list to cover all that needs to be done.

Personal Project

Over the break, I wanted to flex my modeling abilities a bit, and I figured: why not make another weapon? This time I decided on a Winchester 1866. As of now, the high-res model is about 85% done in less than 24 hours of work. (The time was spread across days, a little here and there – originally I wanted a weekend showdown…) Not much is left on the model; most of it is polish and some micro details. After I finish that, I will make a low-poly version. Here are the screenshots. Also, all of it was done in Blender 2.81 (that was a requirement I set for myself, though I do miss parts of Maya…)

Thesis – Project Blog 13-14

Procedural Modeling

Me this weekend with Houdini:

Over the past few days, I have been hard at work finishing up my car generator. With the crashes in Houdini and Unreal, it was frustrating to say the least. Luckily, I swapped over to Unity and fixed some code, which saved time since it was not crashing as much. Despite the craziness, I believe the experience was worth it. I finished the project and got to learn Houdini in the process. Here are some screenshots and video:

Thesis I – Project Blog 12

Oh, Houdini…

I have been working on my procedural course final. I decided to create a procedurally generated car. The deeper I dug into the nodes and the more complex everything got, the more crashes I hit. I ended up reworking some of the networks, which seemed to help, but I still got some crashes. Along with those issues, I noticed the rand() method is weird: it always gave me the same number, even when I tried to generate another number from that rand. It seemed to be pulling one number rather than generating a new one each time. (In hindsight this makes sense – Houdini's rand() is a deterministic function of its seed rather than a stateful generator, so the same seed always returns the same value.) The workaround I found was to rewrite the code in Python. Python's random module generated new numbers every time and worked for my needs.
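As a rough illustration of the difference (plain Python here, outside of Houdini; the per-part seeding helper is my own hypothetical example, not the project code):

```python
import random

# Python's random module keeps internal state, so every call
# advances the generator and returns a fresh value.
print(random.random())  # some value
print(random.random())  # a different value

# A hash-style rand(seed), like Houdini's, is pure: same seed, same result.
# For repeatable-but-varied values per part, seed a dedicated generator
# per element instead (hypothetical helper):
def part_value(part_id, build_seed=0):
    rng = random.Random(f"{build_seed}:{part_id}")  # independent stream per part
    return rng.random()

print(part_value(0), part_value(1))  # varies across parts
print(part_value(0))                 # repeatable for the same part
```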


In studio, we are progressing nicely. Although the end of the semester is coming, we are finishing up a few things before we have to stop and record for presentations. There is still a lot that needs to be done – art, adding content, and messing more with the Kinect. I do believe the hard part, creating all the methods for the dictations and keywords, is done. I am looking forward to break and finally getting back into modeling all the parts I need for the project.

Thesis I – Project Blog 11

Thesis Project

This week, I worked on making the character's head turn and return to rest. Eyes and blendshapes work. As the head moves, the eyes lead while the head smoothly follows. On top of that, I got the Kinect linked up so the character follows a person's face. It can also look toward the direction of a sound. I am not sure if I will use that, but the method is there just in case.
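Conceptually, the eye-lead behavior looks something like the following. This is a minimal Python sketch of the math only, not the actual Unity implementation; the angles, speed, and function names are made up for illustration.

```python
import math

def gaze_step(head_angle, target_angle, dt, head_speed=3.0):
    """Eyes snap to the target; the head eases in behind them."""
    eye_angle = target_angle  # eyes lead: they reach the target immediately
    # Exponential smoothing: the head closes a fraction of the gap each frame.
    blend = 1.0 - math.exp(-head_speed * dt)
    head_angle += (target_angle - head_angle) * blend
    return head_angle, eye_angle

head = 0.0
for frame in range(5):
    head, eyes = gaze_step(head, target_angle=30.0, dt=1 / 30)
    print(f"frame {frame}: head={head:5.2f}, eyes={eyes:5.2f}")
```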


Procedural Modeling – WIP

Progress was good, but my original idea of using a curve to design the car only worked until I got to the front grille and tailgate. After some thought, I reworked the chassis and started from a box with some extrusions. Currently, I have a curve that can be used to create traffic, and the generator randomizes rims and grilles so far. The next step is to set up a randomize button and connect the transforms together (see the sketch below). I am not sure how to configure it just yet, but I have the foundation set and ready to go.
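The randomize button mostly boils down to picking a variant per slot from a single seed. Here is a hypothetical Python sketch of that selection logic; the part names are invented, and in Houdini this would live in a Python node or drive switch inputs rather than print dictionaries.

```python
import random

# Hypothetical variant lists; in the real network these would be switch inputs.
PART_VARIANTS = {
    "rims":   ["rim_classic", "rim_sport", "rim_offroad"],
    "grille": ["grille_mesh", "grille_bars"],
}

def randomize_car(seed):
    """One seed drives every slot, so a given car is fully reproducible."""
    rng = random.Random(seed)
    return {slot: rng.choice(options) for slot, options in PART_VARIANTS.items()}

print(randomize_car(42))  # same seed -> same car
print(randomize_car(7))   # new seed -> new combination
```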


Technical Background

Creating a personal experience where a vital part of the project is to encourage a bond, I had to ensure the audience could interact with my character without knowing it. I researched robotic artists and engineers such as Edward Ihnatowicz, Rafael Lozano-Hemmer, and Kenneth Rinaldo. These creatives used multiple input devices to create pieces of art that mimicked life. Ihnatowicz's SAM, an animated flower-like sculpture, used four directional microphones to determine which direction the flower should bend toward, depending on the sound in its vicinity. (Zivanovic) Lozano-Hemmer's Standards and Double Standards is a piece with 10 to 100 suspended belts that use a tracking system, driven by a surveillance camera, to follow individuals. (Lozano-Hemmer) The system determines in real time which belts should move in response to those individuals. Rinaldo's Autopoiesis contains multiple vine-like sculptures that sense viewers using infrared sensors placed within the sculptures. (Rinaldo) The data is fed to the processor, allowing each sculpture to react to the interaction. Every one of these interactive art pieces uses some sort of input device that gives the artist the freedom they need to interact with the audience. I knew I would need the same type of input. Three requirements were important. First, the input device had to be able to be hidden while still collecting the important data. Second, I needed visual and audio inputs that could be used to further the experience. Last, it had to run with the Unity engine on Microsoft Windows. These constraints led me to one device: the Kinect for Xbox One.

The Kinect for Xbox One is a device developed by Microsoft, originally intended for its 2013 gaming console, the Xbox One. The device had a rough time in the gaming community and was ultimately discontinued in 2017. (Weinberger) Although the Kinect is deprecated, it holds a surprising amount of value for interactive experiences. Using a camera sensor and an array of infrared beams, it can capture multiple kinds of visual data, such as color, greyscale, infrared, and depth, all usable in real time. Combining this visual data, the Kinect can track faces and their expressions, follow up to six individuals, detect gestures, and create biologically correct skeletons. The Kinect does not only have visual sensors; it also contains a four-microphone array. This directional array can be used to record and track sounds in real time. (MEGDICHE) The Kinect is a truly amazing device, and this sensor acts as my project's eyes and ears.

The hardware plays a vital part in making the interaction seem genuine. The microphones are used to detect viewers' responses and determine what direction a sound came from. The visual sensor detects the viewer's presence, tracking their body and facial expressions to determine what the character should say and where the character should look. The combination of all this data is just part of what makes the portrait seem alive. These fine details are delicately tuned to influence the audience to participate in the illusion, where they can interact with and anthropomorphize the portrait.
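In pseudocode terms, combining the two senses reduces to a simple priority: a tracked viewer wins, otherwise fall back to the loudest sound direction. The Python sketch below is hypothetical; the real logic runs in Unity against the Kinect data, and the function and parameter names are my own.

```python
def choose_gaze_direction(tracked_body_angle, audio_beam_angle):
    """Prefer a tracked viewer; otherwise look toward the sound."""
    if tracked_body_angle is not None:   # a body is in view
        return tracked_body_angle
    if audio_beam_angle is not None:     # the microphone array heard something
        return audio_beam_angle
    return 0.0                           # rest pose: look straight ahead

print(choose_gaze_direction(tracked_body_angle=12.0, audio_beam_angle=-30.0))  # 12.0
print(choose_gaze_direction(tracked_body_angle=None, audio_beam_angle=-30.0))  # -30.0
```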

Thesis I – Project Blog 10

Lipsync and eye controller added. All blendshapes are done. I want to add wrinkle maps, but that has currently been put on the back burner. The head turns, the eyes blink, and the lipsync shapes are about 50% done.


Working on a procedural car generator. I will be reworking the concept, though, to incorporate multiple curves and use sweeps or rails.


Background

Theme parks are places where families go to let loose and be entertained. They are full of creative art pieces, from the overall themes to the attractions themselves. Many artists, designers, and engineers have spent years working on designs to immerse audiences. A great example is Universal Orlando's Islands of Adventure, which has an area dedicated to the Harry Potter universe. As soon as you walk through the entrance, you are immediately immersed. Everything from the ground, the buildings, and the shops to the sounds and the rides is designed to mimic the books and movies. Places like this can ignite and spark creativity. For me, it was the place that inspired my project.

Anyone who has visited an amusement park or resort will agree that waiting in line is part of the experience. Knowing this, the creatives who build the attractions design sets that try to distract you from the wait. One environment that stood out to me was in the Harry Potter and the Forbidden Journey ride at Universal Orlando. As you make your way through Hogwarts Castle, you come to a room with a tall ceiling. Within this room are paintings arranged on the walls above the visitors. If one stood and observed, they would notice these portraits are not ordinary paintings but are animated: they move and talk among themselves, fooling visitors into believing they are alive. Seeing this, I was amazed, not only by the presentation and immersion but by how people, including myself, were reacting. Visitors were standing around, watching and interested in the character, but when they returned, they already knew what to expect. They had already lived the experience. The immersion fell apart because of the repetition and the limited interaction. This problem led me down a tunnel of questions. What would take this concept to another level? What if the paintings saw a person? Or if they responded back to the visitors? What if they drove a story so each person had a unique experience every time they visited? How would the audience react? Would they feel for the character? These questions brought my project to life.

Designing my project, I knew I wanted a painting that could interact with the audience. I wanted a character that would be able to seek out and determine the presence of a person and try to communicate with them. My curiosity led me to see if I could develop an interaction that would allow individuals to emotionally connect with an artificial being. This was a challenge considering the scope of the programming requirements and the limitations of the hardware, but I knew it was possible. As I progressed with the project, I started to see something interesting happen. Each element of the character became more like me. Visually, the character is an old man, but his personality, intentions, expressions, and message are a mirror of myself. This discovery encouraged me. It gave me the freedom to communicate a message that could directly affect a person emotionally, while giving each individual a unique experience with the character. Although I was startled by how much of me is within the piece, I have accepted the result. It brought my own mentality and personality to my attention, in turn allowing me to design an experience where the audience can form their own thoughts about the character without knowing they have actually met a part of me.

Thesis I – Project Blog 9

Notes from some Houdini tutorials that I will be referencing for the final project.


Studio:

  • Fix the weird depth-of-field issue. I am guessing it's just that the focus plane we are seeing needs less compression.
  • Figure out a fix for the constant confidence level. The problem is that when someone speaks, it stays at the same level. I am guessing I might need to add a variable or an API update (rough sketch of the idea below).
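My guess at the confidence fix, as a hypothetical Python sketch: overwrite the stored level on every recognized phrase instead of letting the first reading persist. The names here are invented; the real code would sit in the Unity speech handling.

```python
class SpeechConfidence:
    """Keep only the latest utterance's confidence, never a stale one."""

    def __init__(self):
        self.level = None

    def on_phrase_recognized(self, text, confidence):
        # Overwrite on every phrase so the level can't get stuck.
        self.level = confidence
        print(f"heard {text!r} at confidence {self.level:.2f}")

listener = SpeechConfidence()
listener.on_phrase_recognized("hello", 0.91)
listener.on_phrase_recognized("mumble", 0.42)  # level updates instead of sticking
```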

“We keep moving forward, opening new doors, and doing new things, because we’re curious and curiosity keeps leading us down new paths.” – Walt Disney

Intro

The world as we know it is saturated with smart technology. From the moment we wake up to the minute we fall asleep, we interact and coexist with some sort of A.I. every day. As our relationship with technology deepens, our bond with these synthetic beings strengthens. Ask another person about a digital assistant like Apple's Siri or Amazon's Alexa. What would they say about this entity? How would they categorize it? Instinctually, they would give it human-like characteristics and traits, almost humanizing the A.I. they are interacting with. The term for this phenomenon is anthropomorphism. To anthropomorphize is to give “human characteristics to animals, inanimate objects or natural phenomena.” (Nauert) This act of humanizing is a way for humans to comprehend and predict another's intentions and behavior. With any piece of technology we interact with, we instinctually humanize it, inadvertently leaving a digital fingerprint of ourselves on it. Whether through an image or text, we project ourselves onto the object. In my project, I crafted a character that represents what I believe is a piece of my personality. Even as I tried to stay unbiased by creating many different variations, the character within the piece still has properties that speak to me. Every animation, color, model, and sound is a part of my psyche imprinted within the work. The combination of these elements is what I would call my avatar.

This piece of art is my voice that I am presenting to the audience. As an artist, I like to stand back and observe viewers as they interact with my work. Having the portrait act as my avatar gives me that freedom. It allows me to speak through a different voice as if I put a mask on. Cloaking my actual identity and distancing myself yet allowing me to portray my message to them. I believe anthropomorphizing the portrait painting will influence the viewer intensifying the connection with the character. Even though they know he is a digital being, they will comprehend him as being real. With this belief, the viewer should unconsciously emotionally and curiously explore what he has to say, or rather what I might have to say.