Blog Entry – Research and Project Updates 5

After months of working on Permadeath the opera, I was given the opportunity to work backstage during the performance. Prior to this, I had never worked in a theater. I signed on as the manager of the real-time facial animation system, Faceware Live. Luckily, the system setup, which I originally helped create, worked flawlessly. The system setup was:

  • 2 computers – 1 for each actor
    • one computer captures its actor’s data and passes it over to the other computer
  • 2 cameras
  • Area lights
  • Unreal Engine 4
  • Faceware Live server

I had one day of training, on the last day of rehearsals. Nothing was too different from what I remembered; the director had made only a few tweaks, and those updates solved most of the issues the setup originally had. So what did I have to do? I watched over the actors’ faces in the software. Sometimes the face tracking would get lost and break: one actor’s nose would lose tracking, the other actor’s lips would lose tracking. I had to make sure everything was set before the CGI was called to be played. The actors wiggled or moved around to fix the issues. Besides those minor hiccups, the CGI ran smoothly. We did see one issue where the character Apollo’s mouth was flapping around oddly. The next day, we suggested a different way to act that scene out, and it was fixed. From what I was told, no one saw the issue; only the two of us who worked on the facial animations saw the problem. I say that is a win. The experience was great. I got to meet and work with many talented individuals, something I would not otherwise have had the chance to do.

 

Project update

I got in touch with the developer who makes the custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). He kindly sent me the project assets to study and work with. So far, I have had little time to dissect the inner workings, but I am excited to get the chance to. At first, I couldn’t get the project examples to work, but it was simply because I had imported the original Kinect plugins. I created a new project, imported the wrapper, and it worked. Now what I have to do is create a guideline or plan out sprints. Without having goals laid out, I feel it is hard to progress. The Kinect has so many things it can do! Once I get the ball rolling, I believe the project will start to move forward.

 

 

Blog Entry – Research and Project Updates 4

As I progress with my research into using the Kinect in Unity, I have come to realize how difficult it is to work with deprecated devices. I struggle to find documentation on classes and setups.

I have found a few YouTube videos and some tutorials. The main issue is trying to understand what each class is doing and what it is connected to. On the positive side, I did get the Kinect to register and run in Unity.

The Kinect fired up and worked without a problem. One tutorial used the BodySourceManager to detect and track a joint and make a 3D GameObject move; a rough sketch of that idea is below. I am struggling with the face section. Once again I am restricted by limited documentation, but I did get a face to be detected.
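Here is a minimal sketch of that joint-tracking approach, assuming the KinectView sample’s BodySourceManager (with its GetData() method returning a Body[]) is in the scene. The class and field names below are mine, not the tutorial’s, so adjust them to whatever your project actually uses.

```csharp
using UnityEngine;
using Windows.Kinect; // Microsoft's Kinect v2 Unity plugin

// Rough sketch: make a GameObject follow one tracked joint.
// Assumes the KinectView sample's BodySourceManager (with a GetData()
// method returning Body[]) is in the scene; adjust names to your setup.
public class JointFollower : MonoBehaviour
{
    public BodySourceManager bodySourceManager; // drag the manager in via the Inspector
    public JointType jointToTrack = JointType.HandRight;
    public float scale = 5f; // camera space is in meters; scale up for the scene

    void Update()
    {
        if (bodySourceManager == null) return;

        Body[] bodies = bodySourceManager.GetData();
        if (bodies == null) return;

        foreach (Body body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            // CameraSpacePoint gives X/Y/Z in meters relative to the sensor.
            CameraSpacePoint p = body.Joints[jointToTrack].Position;
            transform.position = new Vector3(p.X * scale, p.Y * scale, p.Z * scale);
            break; // only follow the first tracked body
        }
    }
}
```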

For the face side, this link helped: https://social.msdn.microsoft.com/Forums/en-US/20257887-4c2e-42a9-be77-926d91fbdae3/face-expressions?forum=kinectv2sdk

I haven’t figured out how to pull a basic expression result out yet. I am going to have to dig into the plugin classes to figure things out. I did find documentation (https://ratemt.com/k2gpapi/annotated.html), but I believe it is for examples that use a custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). I am going to email the developer about it. If it is OK, I will research the examples and try to understand the basics before I start diving into the code.
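In the meantime, here is the direction I am poking at for reading an expression result, pieced together from the forum thread above and the Kinect Face API. I have not verified this end to end, and the exact setup call (FaceFrameSource.Create versus a constructor) may differ between the Unity plugin and the WPF SDK, so treat it as a sketch rather than working code.

```csharp
using UnityEngine;
using Windows.Kinect;
using Microsoft.Kinect.Face; // requires the Kinect.Face unitypackage

// Sketch only: read basic expression classifiers (happy, engaged, mouth open)
// for one tracked body. Not verified end to end; API details may differ
// between the Unity plugin and the WPF SDK.
public class FaceExpressionReader : MonoBehaviour
{
    private KinectSensor sensor;
    private FaceFrameSource faceSource;
    private FaceFrameReader faceReader;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        if (!sensor.IsOpen) sensor.Open();

        // Request only the classifier features we need.
        FaceFrameFeatures features = FaceFrameFeatures.FaceEngagement
                                   | FaceFrameFeatures.Happy
                                   | FaceFrameFeatures.MouthOpen;

        // Unity plugin exposes a Create factory; the WPF SDK uses a constructor instead.
        faceSource = FaceFrameSource.Create(sensor, 0, features);
        faceReader = faceSource.OpenReader();
    }

    // Call this with the TrackingId of a tracked Body so the face source
    // knows which body's face to follow.
    public void SetTrackingId(ulong trackingId)
    {
        faceSource.TrackingId = trackingId;
    }

    void Update()
    {
        if (faceReader == null) return;

        using (FaceFrame frame = faceReader.AcquireLatestFrame())
        {
            if (frame == null || frame.FaceFrameResult == null) return;

            var properties = frame.FaceFrameResult.FaceProperties;
            DetectionResult happy = properties[FaceProperty.Happy];
            DetectionResult mouthOpen = properties[FaceProperty.MouthOpen];
            Debug.Log("Happy: " + happy + "  MouthOpen: " + mouthOpen);
        }
    }

    void OnDestroy()
    {
        if (faceReader != null) faceReader.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```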

Unfortunately, I will have limited time this week for research, as I will be working at the Permadeath opera. As the Technical Assistant Director of CGI, I am overseeing the Faceware software during the performance. If anything goes wrong, I will be there to react and fix the issue. Prior to this, I was doing some technical work: I have been fixing skin weights for some of the characters. One character, Aphrodite, has feathers on her arms as a shirt.

Aphrodite’s skin was poking through when she moved around. I fixed this partly by adjusting some skin weights. Unfortunately, I could only do so much since the feathers are hundreds of planes, so I also tweaked her skin texture. I baked the planes onto her skin texture and then blurred the feathers to get an average color. Now, if some “skin” mesh pokes through, it is invisible to the viewer. Another character whose skin weights I adjusted was Apollo.

Apollo had some weird skin weights in his face. He was the first Faceware-rigged character, so it was to be expected that he would look rough. At first it seemed like an easy fix of pushing and pulling the weights, but ultimately I had to adjust the transforms and rotations of the joints in Unreal Engine. It took a little more time than I thought, but the results show it was the right decision.

The last character I had to tweak was Adonis. The technical director decided the cape needed to be reworked. Due to time limitations, the cape simulations were scrapped for a shorter cape that would be bound to the skin. I modified the model to spec and worked on the binding. The bind was a little tricky, as the cape supports on his shoulders float: if the character’s shoulder joints move, the cape reacts and moves with them.

Next week, I will be spending more time researching and developing with the Kinect in Unity. For the time being, I will be deciding what should be prioritized and what my starting point should be.

Blog Entry – Research and Project Updates 3

This week of research has been held up and delayed because of a delivery issue. I purchased a Kinect v2 with all the accessories required to run it on a PC. Unfortunately, it was shipped to the other side of the country. As I write this, I should be receiving the items tomorrow, 9/19/18.

Nevertheless, I decided to create a Unity project and started importing the required plugins. I also installed the Kinect SDK for Windows 10. I got it all imported and installed without a hitch.

I did see an issue with a script called Joints, as Unity has a script with the same name. It shouldn’t be a problem to update the scripts and the calls. Unity also wanted to update the plugin scripts that use deprecated calls, so I let Unity do its thing. Hopefully, I don’t regret doing that. I opened an example scene and ran it. Of course it ran, and it didn’t throw an error! Whether that is good or bad news, who knows until I plug the device in.

FYI – for anyone who has the Kinect running on a computer or laptop that switches GPUs: by default, when Windows 10 reboots or starts, the Kinect processes will always switch on the dedicated GPU. Without the NVIDIA notification icon, I would have been left wondering why my fans were constantly running and my battery life was trash. The easy fix is to end the tasks in Task Manager. I haven’t found a way to stop them from starting by default. I will have to look at it some more, but I fear that if I mess with it, I will cause more harm than good at this point. So I am going to hold off until I can test the Kinect.

I am excited to play around with the Kinect, not only because of the project but also to explore other uses of the device. I am curious whether I can use it to create animations as well as scan objects. If so, I want to use some of those features to help build the art around my project and speed up production.


Barber Shop Video

I remember something like this video at Disney World’s Hollywood Studios. It was called “Sounds Dangerous! Starring Drew Carey”. It closed in 2012 and was replaced by numerous different shows, but that audio show used the same idea of binaural audio. It used Dolby as the audio driver. Essentially, you watch a video while wearing a headset. The lights turn off and the audio show begins. There is a part where Drew Carey goes to the barber and gets a haircut. The video felt and sounded the same as I remember the show did.

At the part of the video where they put a bag over the microphones, I felt like there was a bag over my head. I started to panic and opened my eyes. I am surprised how well the illusion works. The buzz cutters sound lifelike as well. I am curious how many games and movies use this audio technique; I feel like it would create better immersion in their worlds. Would it work with a surround sound system, or only with headphones?

 

Meow Wolf

Meow Wolf is an interesting place. At first, it looked underground and run down, but now they have expanded so much! It is quite impressive. I like the idea of them expanding to other cities and having local artists work on pieces of art to display in their “museum”. I use “museum” loosely.

It looks more like a take on a modern “museum”, where the rooms are the pieces, with pieces within each room’s world. One piece that was shown was the black and white room. I would have liked to see more of that piece; it reminded me of a comic book or something Tim Burton would make. Another thing that caught my eye was the “doors” to the other rooms. One was a fireplace you walk through. Another was a hole in the wall you crawl through. I am assuming there are regular doors, but experiencing these “gateways” helps create the illusion that you are inside the art piece. Someday, I would like to visit one of these places, not only for the experience but to be inspired by these “rooms”.

Blog Entry – Research and Project Updates 2

Setup layout (possible):

My idea for implementing the Kinect in my project (a rough code sketch follows the list):

  • Audience / viewer approaches or walks by
  • Kinect detects them and pulls data into Unity
  • Unity scripts will separate the video and audio data
    • Video data will be face, color, depth, skeleton, and body ID
    • Audio data will be listened to for keywords and direction
  • Data will be pushed to a preset list of options
  • Options will be prioritized and sorted, then picked from
  • The chosen option data will be passed to the animation system to play that animation/audio
  • Once the animation is completed, return to the beginning, or cache the next segment and wait until the current animation is done before playing the new sequence
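To make that flow concrete for myself, here is a very rough skeleton of how the Unity side could be structured. Every type and method name in it is a placeholder I made up for illustration (there is no real Observation class or GatherKinectData() call in the plugin); the actual Kinect and audio reads would slot into the stubs.

```csharp
using UnityEngine;

// Very rough skeleton of the flow above. Every type and method name here is
// a placeholder invented for illustration, not a real Kinect or Unity API.
public class PieceController : MonoBehaviour
{
    // Placeholder container for one frame of separated sensor data.
    class Observation
    {
        public bool viewerPresent;
        public string expression;    // from face data
        public float distanceMeters; // from skeleton data
        public string keyword;       // from audio data
    }

    bool segmentPlaying;

    void Update()
    {
        if (segmentPlaying) return; // wait for the current segment to finish

        Observation obs = GatherKinectData();   // Kinect detects and pulls data in
        if (obs == null || !obs.viewerPresent) return;

        string option = ChooseOption(obs);      // prioritized preset options
        PlaySegment(option);                    // hand off to animation/audio
    }

    Observation GatherKinectData()
    {
        // Stub: real code would read face, color, depth, skeleton and audio here.
        return null;
    }

    string ChooseOption(Observation obs)
    {
        // Stub: sort and prioritize the preset options against the observation.
        return "Idle";
    }

    void PlaySegment(string option)
    {
        segmentPlaying = true;
        Debug.Log("Playing: " + option);
        // Real code would flip segmentPlaying back to false (or cache the next
        // segment) once the animation reports that it has finished.
        segmentPlaying = false; // placeholder so the sketch keeps running
    }
}
```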

“Kinect is the eyes and ears of the piece.”

These features of the Kinect are important because it is the eyes and ears of the piece. Without the Kinect, the piece would be static and non-interactive. It wouldn’t have the life and emotion that I am striving for. For example, facial recognition will help the piece because I plan to use it to receive important data points.

One way facial recognition data would be useful is in determining whether the audience or viewer is focused on the piece. Another is determining what facial expression they are making; I could use that data to select the best animation sequence (see the sketch below). Another way I could use facial recognition would be to determine when to start listening for audio input. I could also use it to help estimate gender if I needed to.
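As a sketch of how that prioritization might look, here is a hypothetical helper that maps expression results to an animation choice. It assumes the FaceProperty/DetectionResult values have already been read into a dictionary (as in the face sketch earlier), and the animation names are placeholders I invented.

```csharp
using System.Collections.Generic;
using Microsoft.Kinect.Face; // FaceProperty, DetectionResult

// Hypothetical helper: map already-read expression results to an animation
// choice, highest-priority rule first. The animation names are placeholders,
// and the dictionary is assumed to contain the properties that were requested.
public static class AnimationSelector
{
    public static string Choose(IDictionary<FaceProperty, DetectionResult> face)
    {
        // Not engaged or looking away: try to win the viewer's attention first.
        if (face[FaceProperty.Engaged] == DetectionResult.No ||
            face[FaceProperty.LookingAway] == DetectionResult.Yes)
            return "GetAttention";

        // Engaged and happy: respond in kind.
        if (face[FaceProperty.Happy] == DetectionResult.Yes)
            return "Greet_Happy";

        // Mouth moving could mean the viewer is speaking: start listening for audio.
        if (face[FaceProperty.MouthMoved] == DetectionResult.Yes)
            return "Listen";

        return "Idle_Curious";
    }
}
```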

Color data is another useful input for the piece because I could try to determine the color of the viewer’s clothing, their eye color, and so on. I could even capture the image, save it, and show it on the display if I wanted to add another feature (a possible add-on); a rough sketch of grabbing the color stream into a texture is below.
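This is a minimal sketch of copying the Kinect color stream into a Texture2D, modeled on what the plugin’s ColorSourceManager sample does. I have not wired it into the piece yet, so treat it as a starting point rather than tested code.

```csharp
using UnityEngine;
using Windows.Kinect;

// Sketch: copy the Kinect color stream into a Texture2D so it can be saved
// or shown on the display. Mirrors the plugin's ColorSourceManager sample.
public class ColorCapture : MonoBehaviour
{
    private KinectSensor sensor;
    private ColorFrameReader reader;
    private Texture2D texture;
    private byte[] data;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        if (sensor == null) return;

        reader = sensor.ColorFrameSource.OpenReader();
        FrameDescription desc = sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Rgba);
        texture = new Texture2D(desc.Width, desc.Height, TextureFormat.RGBA32, false);
        data = new byte[desc.BytesPerPixel * desc.LengthInPixels];

        if (!sensor.IsOpen) sensor.Open();
    }

    void Update()
    {
        if (reader == null) return;

        using (ColorFrame frame = reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.CopyConvertedFrameDataToArray(data, ColorImageFormat.Rgba);
            texture.LoadRawTextureData(data);
            texture.Apply();
            // The texture can now be assigned to a material, or written out
            // with texture.EncodeToPNG() for the "save and display" add-on.
        }
    }

    void OnDestroy()
    {
        if (reader != null) reader.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```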

Another data capture that would be interesting is skeleton tracking. It could be used to confirm whether the user is interacting with the piece or not. It could also be used to determine how far the audience is from the piece; I could try to call the audience over if it is determined they are too far away. If that works, it will help the facial recognition feature by drawing the viewer in closer, so that I receive better and more accurate results to process. I may have to combine it with ID tracking, but from my understanding I could also determine how many people are standing in front of the piece; I think the total can only be 6. A rough sketch of the distance and head-count checks follows.
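Here is a minimal sketch of those checks, again assuming the sample’s BodySourceManager is feeding Body data into the scene; the call-over threshold is just a guess I would tune later.

```csharp
using UnityEngine;
using Windows.Kinect;

// Sketch: use skeleton data to count viewers and estimate how far the
// nearest one is from the piece. Assumes the KinectView sample's
// BodySourceManager is in the scene (adjust to your own setup).
public class AudienceTracker : MonoBehaviour
{
    public BodySourceManager bodySourceManager;
    public float callOverDistance = 2.5f; // meters; a guess to tune by experiment

    void Update()
    {
        if (bodySourceManager == null) return;
        Body[] bodies = bodySourceManager.GetData();
        if (bodies == null) return;

        int trackedCount = 0;
        float nearest = float.MaxValue;

        // The sensor reports up to six bodies; count the tracked ones and
        // use the SpineBase joint's Z value as a rough distance in meters.
        foreach (Body body in bodies)
        {
            if (body == null || !body.IsTracked) continue;
            trackedCount++;
            float z = body.Joints[JointType.SpineBase].Position.Z;
            if (z < nearest) nearest = z;
        }

        if (trackedCount > 0 && nearest > callOverDistance)
        {
            // Placeholder for the "call the viewer over" behavior.
            Debug.Log("Viewer is " + nearest.ToString("F1") + " m away; call them closer.");
        }
    }
}
```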

With all these features, I believe the piece will come alive. These fine details, when combined, should “wow” the audience. Either they will be excited or creeped out by the piece; either way, they will have an experience! The next step in the journey is testing the Kinect and getting the plugin to work within Unity, as the plugin is quite old. Nevertheless, I am excited to work with the Kinect.

Blog Entry – Research and Project Updates 1

Research Notes:

https://en.baydachnyy.com/2014/11/20/kinect-2-and-unity-3d-how-to/

  • Kinect allows you to track bodies, leans, colors, and so on
  • But if you want to use functionality related to the face (emotions, face HD tracking, etc.), you will need the second package – Kinect.Face.2.0.1410.19000.unitypackage.
  • In order to start working with the Kinect 2 SDK you need a Kinect 2 sensor
  • If you have a Kinect for Xbox One, you will be able to buy a special Kinect adapter for Windows, which will allow you to connect the existing sensor to a PC. The adapter costs around 50 dollars, which is much cheaper than a new Kinect sensor. Because I already have an Xbox One, I decided to buy only the adapter.
  • Provides basic information about tracked people (up to 6) like skeleton information, leans, hand states, etc.
  • AudioSource – allows tracking of a sound source from a specific direction

https://peted.azurewebsites.net/kinect-4-windows-v2-unity-3d/

After installing the Kinect v2 SDK from here http://www.microsoft.com/en-us/download/details.aspx?id=44561 you can also download the supporting Unity 3D plugins here http://go.microsoft.com/fwlink/?LinkID=513177.

 

Kinect SDK v2.0

https://www.microsoft.com/en-us/download/details.aspx?id=44561

https://developer.microsoft.com/en-us/windows/kinect

 

https://www.youtube.com/watch?v=GPjS0SBtHwY

  • 25 skeleton joints for up to 6 people > biologically correct skeletal joints plus more points and rotations
  • depth range: 50 cm to 8 m
  • 4-microphone array
  • data sources
    • color
    • infrared
    • depth
    • body index
    • body
    • audio
  • audio can detect direction
    • steerable “cone” of focus for audio
    • data is audio samples captured over a specific interval of time
  • color stream drops from 30 fps to 15 fps based on lighting
  • hand pointer gestures
  • face
    • detection (outputs a bounding box around the face, can be visualized in color or IR)
    • alignment (identifies 5 facial landmarks on the face, can be visualized in color or IR)
    • orientation (returns a quaternion of the head joint)
    • expressions (provides classifiers for happy, left/right eye open, engagement, mouth open and mouth moving)

Kinect for Windows SDK 2.0

https://www.microsoft.com/en-us/download/details.aspx?id=44561

Kinect plugin for Unity

https://go.microsoft.com/fwlink/p/?LinkId=513177

3D Scan for Kinect

https://www.microsoft.com/en-us/p/3d-scan/9nblggh68pmc?activetab=pivot%3aoverviewtab

Tutorials:

http://www.voratima.com/getting-started-with-kinect-v2-unity-3d-and-c/

https://andreasassetti.wordpress.com/2015/11/02/develop-a-game-using-unity3d-with-microsoft-kinect-v2/

Microsoft Programming series for Kinect

https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/01
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/02
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/03
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/04
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/05
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/06
https://channel9.msdn.com/series/Programming-Kinect-for-Windows-v2/07

 


Blog Assignment 8/28/18 – Residency Summary Blog Entry

Last week was the first residency of my journey through the MFA program. Throughout the week, I was exposed to numerous pieces of art and even created some of my own. From touch experiences to visual/audio experiences, it was an enlightening experience. The residency helped me understand that interactivity is not simply about interacting but about having a memorable experience. A few pieces relate to the area I want to focus on this semester: Double Taker by Golan Levin, and Standards and Double Standards by Lozano-Hemmer.

Double Taker by Golan Levin is part of an experience I want to include in my project. The piece is a black cylinder with an eye that follows visitors as they walk towards the entrance of the building. The “character” moves around, bending almost as if it had emotion and life. I want to explore this concept but expand upon it: I want to create a piece that has that emotion and life but add A.I. that would react to groups or individuals. I envision an interaction where people notice that the piece is following them. I want the piece to respond to what it visually sees. As an example, if the A.I. sees a person wearing a sweater, it would make a comment about that piece of clothing. I want the visitor to feel the piece has life and to enjoy that interaction. Another piece we covered in that area is Standards and Double Standards by Lozano-Hemmer.

The piece Standards and Double Standards by Lozano-Hemmer has “living” floating belts. These belts hang at waist height; they rotate and move out of the way of onlookers. The belts react to the people, so the experience feels like the belts are worn by invisible people. On the back wall, the piece shows how the magic happens: a monitor displays a graph of motion showing where a person is standing and how the belts react. Like that piece, I would like to explore a way to receive real-time data to create a fast and smooth reaction.

I wish to explore importing these ideas into a game engine like Unity. I believe Unity will give me the flexibility to combine the emotion and the reactions I am envisioning. I wish to research and develop a way to include an A.I. that uses visual input, such as a camera, to create a unique experience for the audience. The visual input could dictate what comes next in the chain of events. These events would be pre-determined scenes, for now, that would play in response to the data. If there is any type of delay, it could throw off the feeling of being “alive”.

To summarize, what I wish to focus on this semester is beginning the research to implement visual input within the game engine Unity. I believe this will be the biggest and most difficult hurdle in completing my idea for a final project. If I can increase the scope, I would also like to include audio input. In the end, I would like to stand back, watch people interact with the piece, and see them enjoy their experience.


Blog Assignment 8/23/18 – Non Interactive to Interactive

What if Access (2003) by Maria Sester were a fully digital piece? The mixed interactive experience bridges the gap between online and in-person experiences with a spotlight that follows a single spectator. This spectator becomes the center of the piece, often looked on by passers-by. I envision a mixed reality experience, almost game-like.

There would be a phone application with the software needed to “play” or experience this. Those who have the app would have to find the clues in the room. The clues would either be in real life or mixed. The player would point at a clue and a description would overlay it; the overlay would display information visible only in the app. As they follow along, in what could play out like an escape room or a futuristic noir mystery, the real-life room would start to evolve. The room lighting would start changing the mood. A voice would play over the phone or a directional speaker, targeting the user who is getting closer to solving the mystery. The speaker, which I will call the “AI”, could try to distract each player from completing the story, either by flashing lights or talking nonsense to the player. This interaction is like the speaker and spotlight from the original piece.

If the player solves the mystery, the room’s mood would change and “reset”. The spectators viewing this experience can ride along; they would see the smart room reacting and the player’s reactions. They might not have a clue what’s going on, only that something is happening. There would be some limits to the mixed reality: if there are too many players, it could ruin the experience. If that is the case, I can see this working more as an escape-the-room environment than a gallery.

The next piece I envision as an interactive experience is Refinery29’s “29Rooms”. One piece that would be interesting is the black and white statues. As visitors enter the area, parts of the display would react: the heads or the whole bodies would turn towards the visitors. Hidden cameras would record the viewer’s face as they walk into the area and then display it on the TVs. If the person talks, or the piece detects that their mouth is open, a distorted sound would come from the TV nearest to them. If the faces are recorded to a database, they could be saved and projected onto the other, non-TV heads. Another possibility would be to display the recorded faces on the circular mirrors in the background. I believe this could evolve the current piece, giving new and returning visitors a different experience every time they see it.

8/23/18 “Rabbit’s Hole”

I have been researching how to create and add cloth in Unreal Engine 4. This post is about the journey of finding a solution to my main issue: collisions don’t work! Before I start, I would like to say that I have experience with Unity’s cloth system and very little with Unreal’s. So I started like anyone else would, by researching…on YouTube. I stumbled across a lot of tutorials, but here are a few that helped me: https://www.youtube.com/watch?v=uTOELBNBt04&t=1003s

https://www.youtube.com/watch?v=OO8v-yzeuBo&t=223s

https://www.youtube.com/watch?v=j34Q9J4Sbmk

The first link is rather old. It explains how to set up the clothing in Autodesk Maya using NVIDIA APEX; newer versions of UE4 have this built in. The workflow is simple. Here is a breakdown: Import model and skeleton > Select “cloth” mesh > Right click and create new cloth data > Right click and apply the data > Paint the weights of the masks.

With that knowledge, I went into the editor and created my mesh…it didn’t turn out so well. The cloth went right through my character. I popped open another tutorial: https://www.youtube.com/watch?v=zK580JmywZQ

I thought I must be missing something. I tried again and ran into the same collision issues as before. It acted like I had not assigned the correct physics asset, but I had. I decided to go basic and create a flag like in this video: https://www.youtube.com/watch?v=kjOq8OB_3AQ&t=10s I followed it and it worked! So something else was off. I started to Google “ue4 cloth collision not working” and “ue4 cloth system broke”, desperately trying to figure out what could be wrong. I came across some UE4 documentation for help: https://docs.unrealengine.com/en-us/Engine/Physics/Cloth/Overview It didn’t help, as I had already gotten a plane to become clothing. So here comes the rabbit hole: https://answers.unrealengine.com/questions/456817/constraints-are-broken.html

https://answers.unrealengine.com/questions/667930/clothing-tool-collision-not-working.html

https://answers.unrealengine.com/questions/737608/cloth-collisions-still-not-working-in-418.html

https://answers.unrealengine.com/questions/688363/clothing-tool-ignores-physic-asset.html

https://answers.unrealengine.com/questions/661274/cloth-goes-through-collision-like-its-nothing.html

https://answers.unrealengine.com/questions/121075/cloth-physics-not-colliding-with-worldcollision-ac.html

https://answers.unrealengine.com/questions/737608/cloth-collisions-still-not-working-in-418.html?sort=oldest

 

One of the best answers I found was here: https://answers.unrealengine.com/questions/688363/clothing-tool-ignores-physic-asset.html

It looked like it had something to do with the scale of the character. I dug a little deeper into the project scene and found that there is a way in the editor to display the cloth physics colliders. When displayed, the colliders were all exploded and pushed out. I began re-importing and trying to see if my mesh was broken. Over and over again…

I discovered the issue. Remember the bit about scale? Well, we had to scale the characters down because their sizes were too large. I had to create a new rig that I scaled down in Maya and reimport it. It worked! I had tried to work around this by using NVIDIA APEX (https://developer.nvidia.com/clothing), but it didn’t work. The only real solution that would keep the scale was to create an nCloth simulation in Maya and a custom rig to bake that data to bones. Bad news: I will have to retarget all the characters’ animations regardless of which solution I use.

The moral of the story is: NEVER scale your skeletal mesh in the editor! Or at least, never scale it if you are planning to use the cloth system in UE4.