Research & Project Updates – 2020 #13

This week I was under the weather for a few days but got a bunch done regardless. For the character creation class, here is the final link: http://www.mdhopkins.net/virus-ara-prototype/

On my thesis project, I finished my to-do list. I adjusted the character's arm for the normal pose and the sad pose. I had an issue with the user voice search; it turned out to be the index changing in the list, so I swapped the list out for a dictionary and reworked the code to get it working. Another issue was that when it had to pick another story to tell, it would throw an error; I needed to reset the sentence ID to 0. The last major fix was the head rotation in LateUpdate. I had updated the animations so there is blending between them all, but the head would still snap when locking onto the user. I found help on the Unity forums: I had to create another rotation variable in Start() and then use it again in LateUpdate, and all the original code kept working as well. As of now, I need to do some long-run testing (i.e., overnight testing) of the idle state to see if it breaks. Hopefully it all works as intended. After that, I am not sure what is left. I could record more voices with different emotions for the different moods, but I am not sure how the writing would hold up. To be honest, I am gatekeeping this project because it is on the home stretch now…
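In case anyone hits the same head-snap problem, here is a minimal sketch of the fix (class and variable names are placeholders, not the actual project code): keep a separate rotation variable, set it up in Start(), and blend toward the look target in LateUpdate() instead of assigning the rotation directly.

```csharp
using UnityEngine;

// Minimal sketch of the head-lock fix described above (names are illustrative,
// not the project's code): keep our own rotation variable, initialize it in
// Start(), and blend toward the look target every LateUpdate() instead of
// assigning the target rotation outright, which is what caused the snap.
public class HeadLookSmoothing : MonoBehaviour
{
    public Transform head;          // head bone, overridden after the Animator runs
    public Transform lookTarget;    // e.g. the tracked user's position
    public float turnSpeed = 5f;

    private Quaternion currentRotation;   // the extra rotation variable set up in Start()

    void Start()
    {
        // Seed the cached rotation so the first LateUpdate has something to blend from.
        currentRotation = head.rotation;
    }

    void LateUpdate()
    {
        if (lookTarget == null) return;

        Quaternion targetRotation =
            Quaternion.LookRotation(lookTarget.position - head.position);

        // Blend from the cached rotation rather than the Animator's pose,
        // then write it back; this removes the snap when locking onto the user.
        currentRotation = Quaternion.Slerp(currentRotation, targetRotation,
                                           Time.deltaTime * turnSpeed);
        head.rotation = currentRotation;
    }
}
```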

Research & Project Updates – 2020 #9

This week I finished up the character retopo and UVing. I made sure I brought it into ZBrush and reprojected. I am looking forward to texturing the creature.


For my thesis project, the major struggle to finish the code is now done. The only real test left is to hook up the Kinect and see if there are any bugs. I am guessing the only thing that could break would be the switch that checks whether a person is present or not. The rest should work pretty well. I’m actually excited to move on and start doing the audio and animations. My mind is burnt out from this rewrite… Here are all the functions required to make the magic happen.
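Since the presence switch is the part I am least confident about, here is a rough sketch of the logic I plan to test (placeholder names; the actual call into the Kinect wrapper still needs to be verified on hardware):

```csharp
using UnityEngine;

// Rough sketch of the presence switch I still need to test with the Kinect attached.
// The wrapper's KinectManager is what I expect to query for user presence, but that
// call is an assumption until it is tested; here it is stubbed out as a placeholder.
public class PresenceSwitch : MonoBehaviour
{
    private enum State { Idle, Engaged }
    private State state = State.Idle;

    void Update()
    {
        bool personPresent = CheckPersonPresent();

        switch (state)
        {
            case State.Idle:
                if (personPresent)
                {
                    state = State.Engaged;
                    // start the greeting / story selection here
                }
                break;

            case State.Engaged:
                if (!personPresent)
                {
                    state = State.Idle;
                    // reset the story and return to the idle animations here
                }
                break;
        }
    }

    bool CheckPersonPresent()
    {
        // Placeholder: in the project this would ask the Kinect wrapper whether a
        // user is currently detected, instead of always returning false.
        return false;
    }
}
```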

Blog Entry – Research and Project Updates 7

This week’s research deals with speech recognition. I got the Kinect speech functionality to work in Unity, displaying my command and the percentage of confidence. I believe this is not going to work for my project, as I need more functionality than commands alone. The way it recognizes phrases, it does not seem to understand a phrase when it is used inside a sentence. After some debate, I was recommended to try out a few different services that are tailored to my needs. – https://blogs.unity3d.com/2016/08/02/speech-recognition-and-vr/

 

IBM Watson:

https://www.ibm.com/watson/

https://github.com/watson-developer-cloud/unity-sdk

Google Cloud Speech:

https://cloud.google.com/speech-to-text/

https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625

 

Unity built-in – Windows 10 – UnityEngine.Windows.Speech

https://docs.unity3d.com/ScriptReference/Windows.Speech.DictationRecognizer.html

 

Plugin for Unity that contains them all:

https://bitbucket.org/Unity-Technologies/speech-to-text

 

The only difference on paper between Google and IBM is the amount of free monthly usage: Google gives 60 minutes and IBM gives 100 minutes. After the free monthly allotment, the prices are comparable. I do not think I will pass the limit, but if I do, I will make sure there are measures in place. Side note – I believe the Windows speech API might do exactly what the Kinect was doing. I am going to test that one first as it is already built into Unity.
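For reference, a minimal DictationRecognizer test based on the Unity documentation linked above would look roughly like this (a sketch, not project code, and Windows 10 only):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal test of the built-in Windows 10 dictation API.
public class DictationTest : MonoBehaviour
{
    private DictationRecognizer recognizer;

    void Start()
    {
        recognizer = new DictationRecognizer();

        // Final recognized phrase plus a confidence level.
        recognizer.DictationResult += (text, confidence) =>
            Debug.Log($"Heard: {text} ({confidence})");

        // Partial results while the user is still speaking.
        recognizer.DictationHypothesis += text =>
            Debug.Log($"Hypothesis: {text}");

        recognizer.DictationError += (error, hresult) =>
            Debug.LogError($"Dictation error: {error}");

        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```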

Next week’s sprint:

Try out one or all of the speech-to-text services within Unity and decide which service is the right one to use.


Side Project – Turn HopShock into an infinite auto-jump art piece

I am working on turning my game, HopShock, into an automated art piece. As of right now, I have removed the GUI and other interactivity. The only thing left to work on is automating the jump and having the character randomly get hit by the spinning circle. I am planning to work with my cousin, who was the lead programmer on the project, to help create a successful solution. I am expecting that by the end of the week the edits will be done and fully working on an iPad. If not, then by this weekend.
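The direction I have in mind for the auto-jump is roughly the sketch below (every name here is a placeholder; the real hookup into HopShock's existing jump logic is what my cousin and I will work out):

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of the auto-jump idea for the HopShock art piece.
// All names are placeholders; the actual hookup depends on the existing game code.
public class AutoJumper : MonoBehaviour
{
    public float minDelay = 0.8f;       // shortest wait between jumps
    public float maxDelay = 1.6f;       // longest wait between jumps
    [Range(0f, 1f)]
    public float missChance = 0.1f;     // chance to "mistime" and get hit

    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(minDelay, maxDelay));

            if (Random.value < missChance)
            {
                // Skip the jump on purpose so the spinning circle hits the character.
                continue;
            }

            Jump();
        }
    }

    void Jump()
    {
        // Placeholder: call into HopShock's existing jump logic here.
        Debug.Log("Auto jump");
    }
}
```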


Blog Entry – Research and Project Updates 6

This week’s sprint required me to detect if a person is present in the scene, and to detect/track a face and the face features the Kinect sensor registers. I completed it using the Kinect v2 Examples with MS-SDK and Nuitrack SDK plugin for Unity. There are two main scripts necessary to use. One is KinectManager.

The KinectManager script is required for the Kinect to initialize and run. It also contains numerous classes that are important for body tracking and could be useful later in the project. The other script is called FaceTrackingManager. This script is like the KinectManager but only manages the face and head of the user. With these scripts, it is easy to debug any issues. For instance, I have it set up to create a small window with a color map video. That video, right now, displays a rectangle around my face, which lets me see whether the face tracking is working. With those two scripts working, I edited a demo script that originally only detected a smile. I added in all the face properties Microsoft allows, so I should be able to call the variables once I start creating my database. These properties should give a direction to which script/animation needs to be played at a given time.
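For my own reference, the way I expect to expose those face properties to later scripts looks roughly like this (the property names match what Microsoft's face tracking reports; how each value gets filled in from the plugin is still a placeholder):

```csharp
using UnityEngine;

// Sketch of how the Kinect face properties could be exposed to the rest of the project.
// The property list matches what Microsoft's face tracking reports (Happy, Engaged,
// WearingGlasses, LeftEyeClosed, RightEyeClosed, MouthOpen, MouthMoved, LookingAway);
// how each value is read from the plugin is still a placeholder.
public class FacePropertyReader : MonoBehaviour
{
    public enum Result { Unknown, No, Maybe, Yes }

    // One field per property so later scripts (and the database step) can query them.
    public Result happy;
    public Result engaged;
    public Result wearingGlasses;
    public Result leftEyeClosed;
    public Result rightEyeClosed;
    public Result mouthOpen;
    public Result mouthMoved;
    public Result lookingAway;

    void Update()
    {
        // Placeholder: in the project these would be filled in each frame from the
        // face tracking demo script I edited, which reads the Kinect face data.
    }
}
```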

Next week’s sprint:

Get speech recognition to work with some basic phrases. Then create a script to call all the input data and display them.

 

Also, as I mentioned above, the KinectManager has some useful classes so I wrote them down. Later, I discovered a website that has it all indexed:

https://ratemt.com/k2gpapi/annotated.html

 

Blog Entry – Research and Project Updates 5

After months of working on Permadeath the opera, I was given the opportunity to work backstage during the performance. Prior to this, I had never worked in a theater. I signed on as manager of the real-time facial animation system, Faceware Live. Luckily, the system setup, which I originally helped create, worked flawlessly. The system setup was:

  • 2 computers – 1 for each actor
    • Each computer handles one actor, with one passing its data over to the other computer
  • 2 cameras
  • Area lights
  • Unreal Engine 4
  • Faceware Live server

I had one day of training on the last day of rehearsals. Nothing was too different from what I remembered. There were only a few tweaks that the director had updated, and those updates solved most of the issues the system originally had. So what did I have to do? I watched over the actors’ faces in the software. Sometimes the face tracking would get lost and break: one actor’s nose would lose tracking, the other actor’s lips would lose tracking. I had to make sure everything was set before the CGI was called to be played. The actors wiggled or moved around to fix the issues. Besides those minor hiccups, the CGI ran smoothly. We did see one issue where the character Apollo’s mouth was weirdly flapping around. The next day, we suggested a different way to act that scene out and it was fixed. From what I was told, no one saw the issue; only the two of us who worked on the facial animations saw the problem. I say that is a win. The experience was great. I got to meet and work with many talented individuals, something I would not otherwise have had the chance to do.

 

Project update

I got in touch with the developer who makes the custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). He kindly sent me the project assets to study and work with. So far, I have had little time to dissect the inner workings, but I am excited to get the chance to. At first, I couldn’t get the project examples to work, but it was simply because I had imported the original Kinect plugins. I created a new project and imported the wrapper, and it worked. Now what I have to do is create a guideline and sprints. Without having goals laid out, I feel it is hard to progress. The Kinect has so many things it can do! Once I get the ball rolling, I believe the project will start to move forward.

 

 

Blog Entry – Research and Project Updates 2

Setup Layout: (possible)

My idea for implementing the Kinect in my project:

  • Audience / viewer approaches or walks by
  • Kinect detects and pulls data into Unity
  • Unity scripts will separate video and audio data
    • Video data will be face, color, depth, skeleton and # id
    • Audio data will listen for keywords and direction
  • Data will be pushed to a preset list of options
  • Options will be prioritized and sorted, then picked from (see the sketch after this list)
  • The combined option data will be passed to an animation, playing that animation/audio
  • Once the animation completes, return to the beginning, or have the new segment cached and waiting so the new animation sequence plays as soon as the current one is done
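A rough sketch of the “prioritized and sorted, then picked” step could look like this (placeholder names and scoring; the real priorities would come from the face, audio, and skeleton data above):

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Rough sketch of the "prioritize and sort the preset options, then pick one" step
// from the flow above. Names and scoring are placeholders for now.
public class ResponsePicker : MonoBehaviour
{
    [System.Serializable]
    public class ResponseOption
    {
        public string animationTrigger;   // animation/audio segment to play
        public float priority;            // filled in from the face/audio/skeleton data
    }

    public List<ResponseOption> options = new List<ResponseOption>();
    public Animator animator;

    public void PlayBestOption()
    {
        if (options.Count == 0) return;

        // Sort by priority and pick the highest-ranked option.
        ResponseOption best = options.OrderByDescending(o => o.priority).First();

        // Hand the picked option off to the animation/audio side.
        animator.SetTrigger(best.animationTrigger);
    }
}
```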

“Kinect is the eyes and ears of the piece.”

These features of the Kinect are important because it is the eyes and ears of the piece. Without the Kinect, the piece will be static and non-interactive. It won’t have the life and emotion that I am striving for. For example, facial recognition will help the piece because I plan to use it to receive important data points.

One way facial recognition data would be useful is in determining whether the audience or viewer is focused on the piece. Another would be determining what facial expression they are making; I could use that data to point to and select the best animation sequence. Another way I could use facial recognition would be to determine when to start listening for audio input. I could also use it to help determine gender if I needed to.

Color data is another useful input for the piece because I could try to determine the color of the viewer’s clothing, their eye color, and so on. I could even capture an image, save it, and show it on the display if I wanted to add another feature (a possible add-on). Another data capture that would be interesting is skeleton tracking.

Skeleton tracking could be used as confirmation of whether the user is interacting with the piece or not. It may also be used to determine how far the audience is from the piece. I could try to call the audience over if it is determined they are too far from the piece. If that works, it will help the facial recognition feature by moving the viewer closer in, so I can receive better and more accurate results to process. I may have to combine it with ID tracking but, from my understanding, I could also determine how many people are standing in front of the piece. I think the total can only be six.
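As a quick sketch of the “call the viewer over” idea (the distance would come from the Kinect skeleton data, which the wrapper reports in meters; here it is just passed in so the check reads on its own):

```csharp
using UnityEngine;

// Sketch of the "call the viewer over if they are too far away" idea.
// The user position would come from the Kinect skeleton tracking; it is passed in
// here as a parameter so the logic can be read without the sensor attached.
public class ProximityCheck : MonoBehaviour
{
    public float comfortableDistance = 1.5f;   // meters; tune on site

    public bool ShouldCallOver(Vector3 userPosition)
    {
        // The Kinect reports positions relative to the sensor, so the z axis is
        // roughly "how far in front of the piece the viewer is standing".
        return userPosition.z > comfortableDistance;
    }
}
```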

With all these features, I believe the piece will come alive. These fine details, when combined, should “wow” the audience. Either they will be excited or creeped out by the piece; either way, they will have an experience! The next step in the journey is testing the Kinect and getting the plugin to work within Unity, as the plugin is quite old. Nevertheless, I am excited to work with the Kinect.