Thesis – Project Blog 13-14

Procedural Modeling

Me this weekend with Houdini:

Over the past few days, I have been hard at work finishing up my car generator. With the crashes in Houdini and Unreal, it was frustrating to say the least. Luckily, I swapped over to Unity and fixed some code, which saved time since it was not crashing as much. Despite the craziness, I believe the experience was worth it. I finished the project and got to learn Houdini in the process. Here are some screenshots and video:

Thesis I – Project Blog 12

oh Houdini…..

I have been working on my procedural course final. I decided to go with creating a procedurally generated car. As I dug deeper into the nodes, the more complex everything got and the more frequent the crashes became. I ended up reworking some of the networks, which seemed to help, but I still got some crashes. Along with those issues, I noticed the rand() method behaves oddly. It always gave me the same number even though I tried to generate another number from it; since rand() is driven by a seed, the same seed always returns the same value instead of producing a new number each call. The workaround I found was to rewrite the code using Python. Python's random module generated new numbers every time and worked for my needs.
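For reference, here is a minimal sketch of the kind of Python workaround I mean. It assumes a Python shell or Python node inside Houdini, and the node path and parameter names are hypothetical; the point is only that Python's random module keeps internal state and returns a fresh value on every call.

```python
# Minimal sketch of the Python workaround (hypothetical node path and
# parameter names). Python's random module is stateful, so each call
# returns a new value instead of re-hashing the same seed.
import random

import hou  # only available inside Houdini's Python environment


def randomize_car_parms(node_path="/obj/car_generator"):
    node = hou.node(node_path)
    # Each call produces fresh values for the generator's parameters.
    node.parm("body_length").set(random.uniform(3.5, 5.5))
    node.parm("wheel_radius").set(random.uniform(0.25, 0.45))
    node.parm("roof_height").set(random.uniform(1.2, 1.8))


randomize_car_parms()
```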


In studio, we are progressing nicely. Although the end of the semester is coming, we are finishing up a few things before we have to stop and record for presentations. There is still a lot that needs to be done – art, adding content, and messing more with the Kinect. I do believe the hard part of creating all the methods for the dictations and keywords is done. I am looking forward to break and finally getting back into modeling all the parts I need for the project.

Thesis I – Project Blog 10

Lipsync and the eye controller have been added. All blendshapes are done. I want to add wrinkle maps, but that has been put on the back burner for now. Head turns, eye blinks, and the lipsync shapes are about 50% done.


I am working on a procedural car generator, although I will be reworking the concept to incorporate multiple curves and use the Sweep or Rails nodes.


Background

Theme parks are places where families go to let loose and be entertained. They are full of creative art pieces, from the overall themes to the attractions themselves. Many artists, designers, and engineers have spent years working on designs to immerse audiences. A great example is at Universal Orlando's Islands of Adventure, where there is an area dedicated to the Harry Potter universe. As soon as you walk through the entrance, you are immediately immersed. Everything from the ground, the buildings, the shops, the sounds, and the rides is designed to mimic the books and movies. Places like this can ignite and spark creativity. For me, it was the place that inspired my project.

Anyone who has visited an amusement park or resort will agree that waiting in line is part of the experience. Knowing this, the creatives who build the attractions design sets that try to distract you from the wait time. One environment that stood out to me was in the Harry Potter and the Forbidden Journey ride at Universal Orlando. As you make your way through Hogwarts Castle, you come to a room with a tall ceiling. Within this room are paintings arranged on the walls above the visitors. If one stood and observed, they would notice these portraits are not ordinary paintings but are animated. They move and talk among themselves, fooling visitors into believing they are alive. Seeing this, I was amazed, not only by the presentation and immersion but by how people, including myself, were reacting. Visitors stood around watching, interested in the character, but on return visits they already knew what to expect. They had already lived the experience. The immersion fell apart because of the repetition and limited interaction. This problem led me down a tunnel of questions. What would take this concept to another level? What if the paintings saw a person? Or if they responded back to the visitors? What if they drove a story so that each person had a unique experience every time they visited? How would the audience react? Would they feel for the character? These questions brought my project to life.

Designing my project, I knew I desired a painting that could interact with the audience. I wanted a character that would be able to seek out and determine the presence of a person and try to communicate with that person. My curiosity led me to see if I could develop an interaction that could allow individuals to emotionally connect with an artificial being. This was a challenge considering the scope of the programming requirements and the limitations of the hardware, but I knew it was possible. As I progressed with the project, I started to see something interesting happen. Each element of the character became more like me. Visually, the character is an old man, but his personality, intentions, expressions, and message are a mirror of myself. This discovery encouraged me. It gave me the freedom to communicate a message that could directly affect a person emotionally, while giving each individual a unique experience with the character. Although I was startled by how much of me is within the piece, I have accepted the result. It brought my own mentality and personality to my attention. In turn, it allowed me to design an experience where the audience can form their own thoughts about the character without knowing they have actually met a part of myself.

Thesis I – Project Blog 9

Notes from a Houdini tutorial that I will be referencing for the final project.

https://www.sidefx.com/tutorials/sci-fi-panel-generator/


Studio:

  • Fix the weird depth of field issue. I am guessing it's just that the focus plane we are seeing needs less compression.
  • Figure out a fix for the constant confidence level. The problem is that if someone speaks, it stays at the same level. I am guessing I might need to add a variable or an API update.

“We keep moving forward, opening new doors, and doing new things, because we’re curious and curiosity keeps leading us down new paths.” – Walt Disney

Intro

The world as we know it is saturated with smart technology. From the moment we wake up to the minute we fall asleep, we interact and coexist with some sort of A.I. every day. As our relationship with technology deepens, our bond strengthens with these synthetic beings. Ask another person about a digital assistant like Apple’s Siri or Amazon’s Alexa. What would they say about this entity? How would they categorize it? Instinctually, they would give it human-like characteristics and traits, almost humanizing the A.I. that they are interacting with. The term for this phenomenon is anthropomorphism. To anthropomorphize is to give “human characteristics to animals, inanimate objects or natural phenomena.” (Nauert) This action, to humanize, is a way for humans to comprehend and predict another’s intentions and behavior. With any piece of technology we interact with, we instinctually humanize it, inadvertently leaving a digital fingerprint of ourselves with it. Whether it’s an image or text, we project ourselves onto the object. In my project, I crafted a character that represents what I believe is a piece of my personality. Even as I tried to stay unbiased in creating many different variations, the character within the piece still has properties that speak to me. Every animation, color, model, and sound is part of my psyche imprinted within the work. The combination of these elements is what I would call my avatar.

This piece of art is my voice that I am presenting to the audience. As an artist, I like to stand back and observe viewers as they interact with my work. Having the portrait act as my avatar gives me that freedom. It allows me to speak through a different voice, as if I had put on a mask, cloaking my actual identity and distancing myself while still allowing me to portray my message to them. I believe anthropomorphizing the portrait painting will influence the viewer, intensifying the connection with the character. Even though they know he is a digital being, they will comprehend him as being real. With this belief, the viewer should unconsciously, emotionally, and curiously explore what he has to say, or rather what I might have to say.

Thesis I – Project Blog 8

This week I reworked some of the A.I. responder code for the project frame. Some of the updates:

  • Mood percentage saves every 10 minutes and loads on startup
  • Responses are logged and saved
  • Loaded dialogue and created a simpler method for animatorPlayer()
  • Created an enum with setStoryBranch() and getStoryBranch(), a.k.a. story tracking
  • Changed keywordChecker() to parsePharsing() – it now sorts through the full string searching for keywords, but it could still be buggy (see the sketch below)
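Since the real responder code lives in the Unity project and isn't shown here, below is only a minimal Python sketch of the idea behind the parsePharsing() change: scan the whole recognized string for known keywords instead of matching a single token. Every name and keyword in it is hypothetical.

```python
# Minimal sketch of full-string keyword matching. The actual implementation
# is in the Unity project; all names and keywords here are made up.
import re

KEYWORDS = {
    "hello": "greeting",
    "painting": "about_self",
    "goodbye": "farewell",
}


def parse_phrasing(recognized_text):
    """Return the story tag for every keyword found anywhere in the string."""
    words = re.findall(r"[a-z']+", recognized_text.lower())
    return [KEYWORDS[word] for word in words if word in KEYWORDS]


# A full sentence still triggers its keywords, unlike an exact match
# against the whole utterance.
print(parse_phrasing("Hello there, are you a real painting?"))
# -> ['greeting', 'about_self']
```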

Finished the procedural project. My goal was to create a procedural rope bridge that can be extended and modified in UE4.

Here are some screenshots and video:

Thesis I – Project Blog 7

In the procedural modeling class, we ran into issues with assigning materials in UE4. I created a quick image that should help:

I also noticed that auto seams can be buggy. I used it a bunch of times because I could not find a way to cut UV seams procedurally without using the SOP. Sometimes UE4 would crash because of the complexity of the mesh, and I had to tweak the settings a bit to avoid this issue.


In studio, I worked on finalizing the code for the A.I. responder. Nothing too complex, but it is exciting to see the character interacting with us during testing.


Defense 4: Technopomorphism/Technomorphism

Now that we know what it means to anthropomorphize and why humans tend to do it, there is another term worth noting – technopomorphism. Originally coined as mechanomorphism by Linnda R. Caporael (Lum), technopomorphism/technomorphism is the tendency to project technological characteristics onto humans. (Hurley) The term has rarely been fully researched, but because it relates to anthropomorphism, it has been studied indirectly. (Lum) Earlier in this paper I talked about the use of anthropomorphic terms; technopomorphic terms are likewise used to describe and communicate human traits that we are uncertain of. A great example from Denis Hurley is the description of the “thought process like cogs in a machine or someone’s capacity for work may be described with bandwidth.” (Hurley) Unknowingly, the term has been used in scientific studies to explain our bodily functions, but it can be used in other communities as well. In 3D animation, humanoid skeletons are reduced to nodes that are used to control and animate characters. When designing A.I., as in my project, we must transcode human social interactions and expressions and create algorithms that mimic and respond to that input. These are just a handful of examples that are technopomorphic. I believe we anthropomorphize and technopomorphize for the same reason: to help us predict and comprehend the unknown. The only difference is which direction the projection goes.


Works Cited

Hurley, Denis. “Technical & Human Problems With Anthropomorphism & Technopomorphism.” 25 March 2017. <https://medium.com/emergent-future/technical-human-problems-with-anthropomorphism-technopomorphism-13c50e5e3f36>.

Lum, Heather Christina. “Are We Becoming Superhuman Cyborgs?” 2011. <http://www.personal.psu.edu/hcl11/blogs/lum_597blog/Lum_Heather_C_201108_PhD.pdf>.

 

Thesis I – Project Blog 6

Below are the screenshots of my procedural bridge for the procedural art course. This bridge has numerous parameters which can be edited in Unreal. The curve that determines the bridge's length can be moved as well, extended or shrunk. I did notice Houdini seems to be buggy at times and crashes. UVing is going to prove difficult, but I shall see. Hopefully it all comes out well.
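As a rough illustration of how the curve drives the bridge, here is a minimal Python sketch of the kind of logic that could sit in a Python SOP: measure the input curve's length and derive a plank count from it. The parameter and attribute names are hypothetical, not the actual setup in my HDA.

```python
# Hypothetical Python SOP snippet: measure the incoming curve's length and
# store a plank count for the rest of the network to copy planks onto.
import hou

node = hou.pwd()          # the Python SOP itself
geo = node.geometry()     # its editable input geometry

plank_spacing = node.parm("plank_spacing").eval()  # hypothetical spare parm

# Approximate the curve length by summing distances between consecutive points.
points = geo.points()
curve_length = 0.0
for a, b in zip(points, points[1:]):
    curve_length += (b.position() - a.position()).length()

num_planks = max(1, int(curve_length / plank_spacing))

# Expose the result as a detail attribute for downstream nodes.
geo.addAttrib(hou.attribType.Global, "num_planks", 0)
geo.setGlobalAttribValue("num_planks", num_planks)
```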


Defense 3: Uncanny Valley

As artists, designers, and creators, we often explore the boundaries of our art, traversing through different styles in search of what calls to us. Nevertheless, an artist will sooner or later stumble upon the style of realism. Realism in art can be defined as “the theory or practice of fidelity in art and literature to nature or to real life and to accurate representation without idealization”. (Merriam-Webster) The desire to create accurate representations of real life has no doubt changed the way we think about and interact with digital art over the last couple of decades. Visuals in movies have been inching closer to mimicking life. Robotic customer support has progressed in mimicking human voice and expressions. These advancements in technology are remarkable, but there is a problem with achieving visual realism. Humans have a high awareness of and skill in recognizing the differences between the living and the non-living. (Angela Tinwell)

As humans are social beings, we are driven by social cues. With these cues, we are aware of and can make predictions about interactions we might come across. If these cues are disrupted, mismatched, or inconsistent, we will spot them. Visually speaking, the more we increase realism, the more information we receive, and with that increase of information there is a greater chance that an error will be spotted, creating a sense of eeriness or disgust. (Pollick) This phenomenon is called the uncanny valley.

In 1970, the Japanese professor and roboticist Dr. Masahiro Mori discovered (Hsu) (Pollick) that as an object, such as a robot or a digital character, becomes more humanlike or anthropomorphic, its attractiveness increases until a point at which there is a drastic negative effect. (Pollick) Examples of objects that lie within the uncanny valley, as Dr. Mori noted, are corpses, prosthetic limbs, and zombies. (Angela Tinwell) When viewers experience the phenomenon, they feel an eerie sensation, unease, and/or disgust. (Rouse) To avoid such symptoms, Dr. Mori suggested designers work toward the first peak of the uncanny valley and not seek out the second peak. (Angela Tinwell) Despite his suggestions, artists have been striving to achieve the second peak. Films such as Tin Toy (Hsu), Final Fantasy (Pollick), and The Polar Express (Jakub A. Zlotowski) failed because audience reactions were negative due to the characters falling into the valley. Researchers have been studying what causes this phenomenon. No one is exactly sure what triggers this effect, but multiple hypotheses have been proposed that might explain why we experience it.

One concept is a survival instinct that helps us avoid pathogens. (Shensheng Wang) Some researchers have speculated that humans evolved to predict and react to minor changes in the appearance of others. This feeling of disgust may exist to help us avoid people who carry diseases and so protect us from them. (Hsu) This avoidance could be considered a survival tactic, deeming the inconsistencies in the anthropomorphic character repulsive. Another concept that could explain what triggers the uncanny valley is our perceptual processing ability. (Shensheng Wang)

As noted in this paper, we instinctually recognize facial features. We are highly sensitive to this information because of its familiarity. Researchers suggest that with this heightened awareness we are attracted to certain physical features, shapes, and signs of health. If the actor is inconsistent with what we know, we instantly become unattracted to it. (Shensheng Wang) A voice that is mismatched to a face or appearance can trigger this effect, as we expect certain features to relate to one another. Movement can drastically increase the effect. As I explained earlier, biological movement is important to humans. Born with a preference for viewing these motions, we are naturally familiar with them. If visual appearance and movement mismatch, the eeriness increases because we are unable to predict the outcome correctly. (Shensheng Wang) This disruption of information causes humans to fail at categorizing the other actor. (Pollick) I noted before that if we cannot categorize another person or actor, we become uncertain and fall back on stereotypes to process and understand them, most likely relying on features we are familiar with. This concept is interesting as it relates to Theory of Mind and our social cognition.

So could the uncanny valley occur because we predict and try to comprehend everything we observe or interact with? Is it because we are social beings seeking out connections with others? I believe it is all of the above. We can assume that failing to reach total realism in an anthropomorphic character can cause problems with our ability to predict and comprehend. This inability and these failed expectations will cause us to panic and feel nervous, but not all characters will fall into the uncanny valley. Research suggests that the more an individual interacts with anthropomorphic characters, even eerie ones, the more familiar they gradually become. (Angela Tinwell) This repeated exposure could circumvent and reduce the effect of the uncanny valley. As we interact more often with anthropomorphic characters, maybe our perspective will change and the valley will shrink, desensitizing us to the inconsistencies between what is living and nonliving.

Thesis I – Project Blog 4

Below are some videos of the final particle FX for the procedural art course. I decided to work on some effects referencing Battlefield 5. The main requirement was to create sprite sheets from Houdini and then build the effect in Unreal Engine 4 using Cascade. There were some difficulties with crashing and render times, but even with those issues, I believe they came out well.



 

I finished skin weighting the character, and he is now imported into the scene. Prior to this, I created a ZBrush file for creating blendshapes, which is what I plan to start working on this week.


Defense 1: Theory of Mind

To anthropomorphize something is to humanize that entity. We give it human characteristics, be they emotions, intentions, or thoughts. During that transformation, our brains begin to process and comprehend those distinctly human traits. This cognitive process is called Theory of Mind, first reported in an article by psychologist David Premack. (Leslie) It later became the term psychologists use to describe the cognitive process that gives an individual the ability to comprehend another’s emotional and mental states. This cerebral function starts to develop around the ages of 3-5 (Cherry), although other research has shown it could develop earlier or be delayed depending on the individual. This ability to theorize about or predict another’s state of mind – thoughts, emotions, and behaviors – is the most important survival function for human beings. (Drubach) Being social individuals, we communicate in such a way as to comprehend others’ intentions, thoughts, and feelings. (Nauert) As we do this, areas of our brains such as the temporoparietal junction activate. Anthropomorphizing activates the same area, and the more a person anthropomorphizes, the larger the areas of the brain devoted to Theory of Mind processing. (Atherton) Predicting and theorizing, our brains never rest. Humans continuously try to make sense of everything around them, especially motion.

Instinctually, we humanize non-human actors to predict their behavior, but we also anthropomorphize motion. Research has shown that processing and recognizing biological motion contributes to the awareness of animacy. (Atherton) Recognizing motion is an instant flow of information which allows humans to predict and identify an actor’s behavior. Humans expect to observe smooth, human-like movements as opposed to erratic motions. Researchers also found that early in development, infants prefer biological movement over artificial movement, and by the age of 2, they prefer human motion. (Atherton) An example of biological movement would be objects moving in a coherent manner with respect to one another. (Airenti) Using movement alone, we can start to understand an actor’s intentions. We interpret two such objects as interacting with one another, as if both objects could understand each other. Instantly, we begin to anthropomorphize the objects by assigning a unique role to each one. These anthropomorphized motions allow for easier recognition and prediction; they let humans theorize about what the objects might do next. As important as biological movement is to Theory of Mind, there is another process we anthropomorphize to help us interpret others. This clever process is facial recognition.

Face processing is the part of the cognitive process by which humans can determine what a person is thinking by observing a facial expression. As infants, we naturally develop skills to recognize faces and mimic facial expressions. Whether a face is familiar or not, we can immediately tell one face apart from another. As humans develop, this constant stimulus trains a person on specific facial shapes and emotions (Atherton), which enables us to see key differences in faces. An example of anthropomorphizing a face is when we humanize our pets. In this instance, a person humanizes the face of a pet, using anthropomorphic terms to explain its facial expressions, such as saying the pet is smiling or happy. We assign pets behaviors and emotions. Using such vocabulary helps us instantly recognize the other entity’s state without ever needing to see it in person. This grants us the ability to simulate and mimic the experience using our imagination.

We often use our creativity to dream up anything our hearts desire, and anthropomorphism is a unique part of this. Inadvertently, we trick our brains into believing these non-human entities are other persons. This creates a sense of familiarity and predictability that we know and need. This feeling of ease allows us to use everyday strategies to determine what the other’s motivations are and enables us to predict future behaviors. Humanizing non-human actors can go a long way, as anthropomorphism and Theory of Mind trigger the same part of our brains. Humans need predictable communication and environments. We strive to make sense of the purpose of others’ goals. If we are unable to comprehend a situation, we will start to anthropomorphize. As soon as this happens, the humanized situation becomes easier to accept, comprehend, and predict.

 

Works Cited

Airenti, Gabriella. "The Development of Anthropomorphism in Interaction: Intersubjectivity, Imagination, and Theory of Mind." (2018). <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6231421/>.

Atherton, Gray and Cross, Liam. "Seeing More Than Human: Autism and Anthropomorphic Theory of Mind." (2018). <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5932358/>.

Cherry, Kendra. How the Theory of Mind Helps Us Understand Others. 26 July 2019. <https://www.verywellmind.com/theory-of-mind-4176826>.

Drubach, Daniel A. "The Purpose and Neurobiology of Theory of Mind Functions." Blanton-Peale Institute, 18 December 2007. Online.

Leslie, A.M. "Theory of Mind." International Encyclopedia of the Social & Behavioral Sciences. Elsevier Ltd, 2001. <https://www.sciencedirect.com/topics/neuroscience/theory-of-mind>.

Nauert, Dr. Rick. PsychCentral. 15 June 2019.

Thompson, Brittany N. Psychology Today. 03 July 2017. Website. <https://www.psychologytoday.com/us/blog/socioemotional-success/201707/theory-mind-understanding-others-in-social-world>.


 

Blog Entry – Research and Project Updates – 2019 #11

This week, I have been working on more shapes of the head for my 3D digital art course and my thesis. This character will be fully sculpted, so I am trying to find a shape I like. As of now, I am leaning towards the second head (longer skull and droopy chin). I do like the fourth one as well, but I believe it is too close to Picard from Star Trek. Nevertheless, I will pick a shape and go along with it to finish the project.