Blog Entry – Research and Project Updates 7

This week’s research focused on speech recognition. I got the Kinect speech functionality working in Unity, displaying the recognized command and its confidence percentage. I believe this is not going to work for my project, though, as I need more functionality than commands alone: the recognizer does not seem to understand a phrase when it is used inside a longer sentence. After some discussion, I was advised to try a few different services tailored to my needs. – https://blogs.unity3d.com/2016/08/02/speech-recognition-and-vr/


IBM Watson:

https://www.ibm.com/watson/

https://github.com/watson-developer-cloud/unity-sdk

Google Cloud Speech:

https://cloud.google.com/speech-to-text/

https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625


Unity built-in – Windows 10 – UnityEngine.Windows.Speech:

https://docs.unity3d.com/ScriptReference/Windows.Speech.DictationRecognizer.html


A Unity plugin that wraps all of them:

https://bitbucket.org/Unity-Technologies/speech-to-text


On paper, the only difference between Google and IBM is the amount of free monthly usage: Google offers 60 minutes and IBM offers 100 minutes. After the free tier, the prices are comparable. I do not think I will exceed the limit, but if I do, I will make sure there are safeguards in place. Side note – I believe the Windows speech API might do exactly what the Kinect was doing. I am going to test that one first, as it is already built into Unity.
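
Since I will be testing the Windows route first, here is a minimal sketch of what that test might look like, based on the DictationRecognizer documentation. The class name and logging are placeholders of mine, not working project code:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal dictation test: unlike the Kinect keyword approach, this should
// return full sentences as free-form text along with a confidence level.
public class DictationTest : MonoBehaviour
{
    private DictationRecognizer dictation;

    void Start()
    {
        dictation = new DictationRecognizer();

        // Fires when Windows settles on a final transcription for an utterance.
        dictation.DictationResult += (text, confidence) =>
            Debug.Log("Heard: \"" + text + "\" (" + confidence + ")");

        // Fires repeatedly while the user is still speaking (rough guesses).
        dictation.DictationHypothesis += text =>
            Debug.Log("Guessing: " + text);

        dictation.Start();
    }

    void OnDestroy()
    {
        // Free the recognizer so the phrase recognition system shuts down cleanly.
        if (dictation != null)
            dictation.Dispose();
    }
}
```

One catch I have read about: dictation has to be enabled in the Windows 10 speech privacy settings, so that is worth checking before blaming the code.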

Next week’s sprint:

Try out one or all of the speech-to-text services within Unity and decide which one is right to use.


Side Project – Turn HopShock into an infinite auto-jump art piece

I am working on turning my game, HopShock, into an automated art piece. So far, I have removed the GUI and other interactivity. The only things left are automating the jump and having the character randomly get hit by the spinning circle; a rough sketch of the idea is below. I am planning to work with my cousin, who was the lead programmer on the project, to help create a working solution. I expect the edits to be done and fully working on an iPad by the end of the week. If not, then by this weekend.
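
As a starting point before I sit down with my cousin, here is that sketch of the auto-jump loop. TriggerJump(), the field names, and the timing values are hypothetical stand-ins for whatever HopShock’s real jump code exposes:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: replace player input with a timed loop that usually jumps in time,
// but occasionally "misses" so the spinning circle lands a hit.
public class AutoJumper : MonoBehaviour
{
    [Range(0f, 1f)]
    public float missChance = 0.1f;   // how often to let the circle hit us
    public float jumpInterval = 1.2f; // time between the circle's passes

    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForSeconds(jumpInterval);

            // Most of the time, jump like a player would; sometimes skip
            // the jump entirely so the character gets hit.
            if (Random.value > missChance)
                TriggerJump();
        }
    }

    void TriggerJump()
    {
        // Placeholder: call into the game's existing jump code here.
    }
}
```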


Blog Entry – Research and Project Updates 6

This week’s sprint required me to detect whether a person is present in the scene, and to detect/track a face and the facial features the Kinect sensor registers. I completed it using the Kinect v2 Examples with MS-SDK and Nuitrack SDK plugin for Unity. There are two main scripts to use. One is KinectManager.

The KinectManager script is required for the Kinect to initialize and run. It also contains numerous classes that are important for body tracking and could be useful later in the project. The other script is called FaceTrackingManager. This script is like the KinectManager but manages only the user’s face and head. With these scripts, it is easy to debug any issues. For instance, I have it set up to create a small window with the color camera feed; right now, that video displays a rectangle around my face, which lets me see whether face tracking is working. With those two scripts working, I edited one of the demo scripts. That script originally only detected a smile; I added in all the face properties Microsoft exposes. I should be able to read those variables once I start creating my database. These properties should indicate which script/animation needs to be played at a given time.
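
For reference, the face properties come back as a dictionary of detection results. Below is a minimal sketch of reading them all, using the naming from Microsoft’s Kinect Face API; the plugin wraps this differently, and how you obtain the frame result depends on its manager scripts, so treat this as an outline rather than plugin code:

```csharp
using System.Collections.Generic;
using Microsoft.Kinect.Face; // namespace from the Kinect for Windows v2 face plugin

// Sketch: dump every face property the SDK exposes. FaceProperties maps each
// FaceProperty (Happy, Engaged, MouthOpen, MouthMoved, LeftEyeClosed,
// RightEyeClosed, WearingGlasses, LookingAway) to a DetectionResult
// (Yes, No, Maybe, Unknown).
public static class FaceState
{
    public static void LogProperties(FaceFrameResult faceResult)
    {
        if (faceResult == null) return;

        foreach (KeyValuePair<FaceProperty, DetectionResult> entry in faceResult.FaceProperties)
        {
            UnityEngine.Debug.Log(entry.Key + ": " + entry.Value);
        }
    }
}
```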

Next week’s sprint:

Get speech recognition to work with some basic phrases. Then create a script that gathers all the input data and displays it.


Also, as I mentioned above, the KinectManager has some useful classes, so I wrote them down. Later, I discovered a website that indexes them all:

https://ratemt.com/k2gpapi/annotated.html


Blog Entry – Research and Project Updates 5

After months of working on Permadeath, the opera, I was given the opportunity to work backstage during the performance. Prior to this, I had never worked in a theater. I signed on as manager of the real-time facial animation system, Faceware Live. Luckily, the system setup, which I originally helped create, worked flawlessly. The setup was:

  • 2 computers – 1 for each actor
    • each computer captures one actor, with one machine passing its data over to the other
  • 2 cameras
  • Area lights
  • Unreal Engine 4
  • Faceware Live server

I had one day of training on the last day of rehearsals. Nothing was too different from what I remembered; there were only a few tweaks the director had made, and those updates solved most of the issues the system originally had. So what did I have to do? I watched over the actors’ faces in the software. Sometimes the face tracking would get lost and break: one actor’s nose would lose tracking, the other actor’s lips would lose tracking. I had to make sure everything was set before the CGI was called to be played. The actors wiggled or moved around to fix the issues. Besides those minor hiccups, the CGI ran smoothly. We did see one issue where the mouth of the character Apollo was flapping around oddly. The next day, we suggested a different way to act that scene out, and the problem was fixed. From what I was told, no one in the audience saw the issue; only the two of us who worked on the facial animations noticed it. I call that a win. The experience was great, and I got to meet and work with many talented individuals, something I would not otherwise have had the chance to do.


Project update

I got in touch with the developer who makes the custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). He kindly sent me the project assets to study and work with. So far, I have had little time to dissect the inner workings, but I am excited to get the chance to. At first, I couldn’t get the project examples to work, but that was simply because I had imported the original Kinect plugins. I created a new project, imported only the wrapper, and it worked. Now I have to create a guideline and sprints; without goals laid out, I find it hard to progress. The Kinect has so many things it can do! Once I get the ball rolling, I believe the project will start to move forward.


Blog Entry – Research and Project Updates 3

This week’s research has been held up because of a delivery issue. I purchased a Kinect v2 with all the accessories required to run it on a PC. Unfortunately, it was shipped across to the other side of the country. As I write this, I should be receiving the items tomorrow, 9/19/18.

Nevertheless, I decided to create a Unity project and started importing the required plugins. I also installed the Kinect SDK for Windows 10. Everything imported and installed without a hitch.

I did see an issue with a script called Joint, as Unity has a class with the same name; it shouldn’t be a problem to update the scripts and their references. Unity also wanted to update the plugin scripts that use deprecated calls. I let Unity do its thing; hopefully, I don’t regret that. I opened an example scene and ran it, and it didn’t throw an error! Whether that is good or bad news, who knows until I plug the device in.
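
If the clash is what I think it is, the plugin’s Windows.Kinect.Joint colliding with Unity’s physics Joint component, then a using-alias should resolve the ambiguity without renaming anything. A sketch, assuming that is the conflict:

```csharp
using UnityEngine;
using Windows.Kinect;
// With both namespaces imported, the bare name "Joint" is ambiguous between
// UnityEngine.Joint (the physics component) and Windows.Kinect.Joint (a
// body-tracking joint). An alias pins down which one this script means.
using KinectJoint = Windows.Kinect.Joint;

public class JointLogger : MonoBehaviour
{
    void LogJointPosition(KinectJoint joint)
    {
        // CameraSpacePoint holds the joint's position in meters.
        CameraSpacePoint p = joint.Position;
        Debug.Log("Joint at " + p.X + ", " + p.Y + ", " + p.Z);
    }
}
```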

FYI for anyone running the Kinect on a computer or laptop that switches GPUs: by default, when Windows 10 starts or reboots, the Kinect services always switch on the dedicated GPU. Without the NVIDIA notification icon, I would have been left wondering why my fans were constantly running and my battery life was terrible. The easy fix is to end the tasks in Task Manager. I haven’t found a way to stop them from starting by default; I will have to look into it some more, but I fear that if I mess with it now, I will cause more harm than good. So I am going to hold off until I can test the Kinect.

I am excited to play around with the Kinect, not only because of the project but also to expand my use of the device. I am curious whether I can use it to create animations as well as scan objects. If so, I want to use those features to build the art around my project and speed up production.


Barber Shop Video

I remember something like this video at Disney World’s Hollywood Studios. It was called “Sounds Dangerous! Starring Drew Carey”. It closed in 2012 and was replaced by numerous different shows, but the audio show had the same idea of binaural audio, using Dolby as the audio driver. Essentially, you watch a video while wearing a headset; the lights turn off and the audio show begins. There is a part where Drew Carey goes to the barber and gets a haircut. The video felt and sounded the same as I remember the show did.

In the part of the video where they put a bag over the microphones, I felt like there was a bag over my head; I started to panic and opened my eyes. I am surprised at how well the illusion works. The buzzing clippers sound lifelike as well. I am curious how many games and movies use this audio technique; I feel it would create better immersion in their worlds. Would it work with a surround sound system, or only with headphones?


Meow Wolf

Meow Wolf is an interesting place. At first, it looked underground and run down, but now they have expanded so much! It is quite impressive. I like how they expand to other cities and have local artists work on pieces of art to display in their “museum”. I use “museum” loosely.

It looks more like a modern take on a “museum”, where the rooms are the pieces, with further pieces inside each room’s world. One piece that was shown was the black-and-white room. I would have liked to see more of that piece; it reminded me of a comic book, or something Tim Burton would make. Another thing that caught my eye was the “doors” to the other rooms. One was a fireplace you walk through; another was a hole in the wall you crawl through. I am assuming there are regular doors too, but experiencing these “gateways” helps create the illusion that you are inside the art piece. Someday, I would like to visit one of these places, not only for the experience but to be inspired by these “rooms”.