Blog Entry – Research and Project Updates 5

After months of working on Permadeath, the opera, I was given the opportunity to work backstage during the performance. Prior to this, I had never worked in a theater. I signed on as manager of the real-time facial animation system, Faceware Live. Luckily, the setup, which I originally helped create, worked flawlessly. The system setup was:

  • 2 computers – 1 for each actor
    • one computer captures its actor's data and passes it over to the other computer
  • 2 cameras
  • Area lights
  • Unreal Engine 4
  • Faceware Live server

I had one day of training, on the last day of rehearsals. Nothing was too different from what I remembered. There were only a few tweaks that the director had made, and those updates solved most of the issues the system originally had. So what did I have to do? I watched over the actors' faces in the software. Sometimes the face tracking would get lost and break: one actor's nose would lose tracking, and the other actor's lips would lose tracking. I had to make sure everything was set before the CGI was called to play. The actors would wiggle or move around to fix the issues. Besides those minor hiccups, the CGI ran smoothly. We did see one issue where the character Apollo's mouth was flapping around strangely. The next day, we suggested a different way to act that scene out, and the problem was fixed. From what I was told, no one in the audience saw the issue; only the two of us who worked on the facial animations noticed it. I call that a win. The experience was great. I got to meet and work with many talented individuals, something I would not otherwise have had the chance to do.


Project update

I got in touch with the developer who makes the custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). He kindly sent me the project assets to study and work with. So far, I have had little time to dissect the inner workings, but I am excited to get the chance to. At first, I couldn't get the project examples to work, but that was simply because I had imported the original Kinect plugins into the same project. I created a new project, imported only the wrapper, and it worked. Now what I have to do is lay out a plan or break the work into sprints; without goals laid out, I find it hard to make progress. The Kinect can do so many things! Once I get the ball rolling, I believe the project will start to move forward.


Blog Entry – Research and Project Updates 2

Setup Layout (possible):

My idea for implementing the Kinect in my project (a rough code sketch follows the list):

  • Audience member / viewer approaches or walks by
  • Kinect detects them and pulls data into Unity
  • Unity scripts will separate the video and audio data
    • Video data will include face, color, depth, skeleton, and tracking ID
    • Audio data will be used to listen for keywords and sound direction
  • The data will be matched against a preset list of options
  • Options will be prioritized and sorted, then one is picked
  • The chosen option will be passed along to play the matching animation/audio
  • Once the animation completes, return to the beginning, or cache the next segment so it plays as soon as the current animation finishes
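To make this concrete for myself, here is a minimal sketch of that loop as a Unity script. It is not an implementation of the wrapper; KinectSnapshot, AnimationOption, and the stubbed TryCapture call are hypothetical placeholders for whatever the wrapper actually exposes. Only the flow (detect, gather, prioritize and sort, play, loop) follows the list above.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical snapshot of one capture; stands in for the wrapper's real data.
public struct KinectSnapshot
{
    public bool faceTracked;   // video: face
    public Vector3 position;   // video: skeleton/depth
    public ulong userId;       // video: tracking ID
    public string keyword;     // audio: keyword heard, if any
}

// One entry in the preset list of options.
[System.Serializable]
public class AnimationOption
{
    public string stateName;      // Animator state to play
    public string triggerKeyword; // keyword that favors this option (optional)
    public int priority;          // used to prioritize and sort
}

public class InteractionPipeline : MonoBehaviour
{
    enum Phase { Waiting, Playing }

    [SerializeField] Animator animator;
    [SerializeField] List<AnimationOption> options = new List<AnimationOption>();

    Phase phase = Phase.Waiting;

    void Update()
    {
        if (phase == Phase.Waiting && TryCapture(out KinectSnapshot snap))
        {
            // Prioritize and sort the preset options, then pick one.
            AnimationOption pick = options
                .Where(o => string.IsNullOrEmpty(o.triggerKeyword)
                            || o.triggerKeyword == snap.keyword)
                .OrderByDescending(o => o.priority)
                .FirstOrDefault();

            if (pick != null)
            {
                animator.Play(pick.stateName);
                phase = Phase.Playing;
            }
        }
        else if (phase == Phase.Playing)
        {
            // Once the animation completes, return to the beginning.
            if (animator.GetCurrentAnimatorStateInfo(0).normalizedTime >= 1f)
                phase = Phase.Waiting;
        }
    }

    // Stub: the real version would read face/color/depth/skeleton/audio
    // data from the Kinect wrapper instead of always returning false.
    bool TryCapture(out KinectSnapshot snap)
    {
        snap = default(KinectSnapshot);
        return false;
    }
}
```

The idea is that each preset option carries a priority and an optional keyword, so the "prioritized and sorted, then picked" step becomes a simple filter and sort.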

“Kinect is the eyes and ears of the piece.”

These features of the Kinect are important because it is the eyes and ears of the piece. Without the Kinect, the piece would be static and non-interactive; it would not have the life and emotion that I am striving for. For example, facial recognition will help the piece because I plan to use it to gather important data points.

One way facial recognition data would be useful is in determining whether the audience or viewer is focused on the piece. Another is determining what facial expression they are making; I could use that data to select the best animation sequence. I could also use facial recognition to decide when to start listening for audio input, and even to help estimate gender if I needed to.
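As a sketch of what those decisions might look like in code: the Kinect v2 face frame exposes coarse yes/no properties such as whether the viewer appears engaged, happy, or looking away. The FaceState struct and both helper functions below are hypothetical stand-ins for however the wrapper surfaces that data.

```csharp
// Hypothetical stand-in for the face properties the wrapper would report.
public struct FaceState
{
    public bool engaged;     // viewer appears focused on the piece
    public bool happy;       // rough expression cue
    public bool lookingAway;
}

public static class FaceDecisions
{
    // Only start listening for audio keywords once someone is paying attention.
    public static bool ShouldListen(FaceState face)
    {
        return face.engaged && !face.lookingAway;
    }

    // Map the expression onto an animation choice (state names are placeholders).
    public static string PickSequence(FaceState face)
    {
        return face.happy ? "GreetWarmly" : "DrawAttention";
    }
}
```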

Color data is another useful input for the piece because I could try to determine the color of the viewer's clothing, their eye color, and so on. I could even capture an image, save it, and show it on the display if I wanted to add another feature (a possible add-on). Another data capture that would be interesting is skeleton tracking.
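For example, one plausible way to estimate clothing color, assuming the wrapper can hand me the color frame as a readable Texture2D plus the torso joint's pixel coordinate (both assumptions on my part), would be to average the pixels around that joint:

```csharp
using UnityEngine;

// Hypothetical sketch: estimate clothing color by averaging the color-frame
// pixels around the torso joint. The readable texture and the torso's pixel
// coordinate are assumed to come from the wrapper; nothing here is its API.
public static class ClothingColor
{
    public static Color Estimate(Texture2D colorFrame, Vector2 torsoPixel, int radius = 8)
    {
        Color sum = Color.clear;
        int count = 0;

        for (int y = -radius; y <= radius; y++)
        {
            for (int x = -radius; x <= radius; x++)
            {
                int px = Mathf.Clamp((int)torsoPixel.x + x, 0, colorFrame.width - 1);
                int py = Mathf.Clamp((int)torsoPixel.y + y, 0, colorFrame.height - 1);
                sum += colorFrame.GetPixel(px, py);
                count++;
            }
        }
        return sum / count; // average color of the sampled patch
    }
}
```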

Skeleton tracking could be used to confirm whether the user is interacting with the piece or not. It may also be used to determine how far the audience is from the piece. I could try to call the audience over if it is determined they are too far away. If that works, it will help the facial recognition feature by moving the viewer closer, so I can receive better and more accurate results to process. I may have to combine it with ID tracking, but from my understanding I could also determine how many people are standing in front of the piece; the Kinect v2 can track up to six bodies at once.
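A distance check like that could be very small. The sketch below assumes the wrapper reports a tracked user's position in meters relative to the sensor, with z pointing away from the camera; the threshold value and the way the position arrives are both my assumptions, not the wrapper's API.

```csharp
using UnityEngine;

// Hypothetical proximity check: call the viewer over when they stand too far
// away for good face tracking. How the user position arrives is assumed.
public class ProximityGreeter : MonoBehaviour
{
    [SerializeField] float comfortableDistance = 1.5f; // assumed threshold, meters
    [SerializeField] AudioSource callOverClip;         // "come closer" prompt

    // Feed this each frame with a tracked user's position from the wrapper.
    public void OnUserPosition(Vector3 userPosition)
    {
        float distance = userPosition.z; // Kinect's z-axis points away from the sensor
        if (distance > comfortableDistance && !callOverClip.isPlaying)
        {
            callOverClip.Play(); // try to draw the viewer in for better face data
        }
    }
}
```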

With all these features, I believe the piece will come alive. These fine details, when combined, should "wow" the audience. Either they will be excited or they will be creeped out by the piece; either way, they will have an experience! The next step in the journey is testing the Kinect and getting the plugin to work within Unity, since the plugin is quite old. Nevertheless, I am excited to work with the Kinect.