Research & Project Updates – 2020 #9

This week I finished up the character retopo and UVing. I made sure I brought it into ZBrush and re-projected. I am looking forward to texturing the creature.


For my thesis project, the major struggle of finishing the code is now behind me. The only real test left is to hook up the Kinect and see if there are any bugs. I am guessing the only thing that could break is the switch that checks whether a person is present or not. The rest should work pretty well. I’m actually excited to move on and start working on the audio and animations. My mind is burnt from this rewrite… Here are all the functions that are required to make the magic happen.
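Since the presence switch is the part I trust the least, here is a rough sketch of how that check could look in Unity. It assumes the KinectManager singleton from the Kinect plugin I am using, and the method names (IsInitialized, IsUserDetected) are from memory, so they may not match the asset version exactly.

using UnityEngine;

// Sketch of the presence "switch": flips a flag when a user appears or leaves.
// KinectManager is the plugin's singleton; IsInitialized/IsUserDetected are
// assumed method names and may differ in the actual asset.
public class PresenceSwitch : MonoBehaviour
{
    public bool personPresent;   // current state of the switch

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;   // sensor not ready yet

        bool detected = manager.IsUserDetected();

        // Only react when the state actually flips, so audio/animation
        // triggers fire once per change instead of every frame.
        if (detected != personPresent)
        {
            personPresent = detected;
            Debug.Log(personPresent ? "Person entered the scene" : "Person left the scene");
        }
    }
}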

Spotter #7


Hardware Components Used

  • 1 Photo-resistor
  • 2 Green LEDs
  • 1 Cardboard cutout
  • 2 330Ω Resistors
  • 1 10kΩ Resistor
  • 1 Servo Motor
  • Arduino Uno Board
  • Breadboard
  • 5-volt power
  • 11 wires
  • Light

Concept

 

Description:

Spotter #7 is a little cardboard robot that endlessly sweeps 180° looking for a friend. When it finds a friend, it stops and blinks to show that it has spotted the person. If the person moves away quickly, it continues on its path and keeps sweeping. If the person stays for a longer time and then moves away, the robot shakes back and forth in disapproval. After the reaction, it resumes sweeping endlessly in search of a friend.

The emotional response expected from this piece is compassion towards the little robot. The personal value of the piece was the challenge of designing the interaction. Originally, it was designed to use a temperature sensor to detect the interaction, but plans changed when the only sensor I had was damaged and I had to rework the piece. I decided to swap the sensor out for a photo-resistor. Even with the redesign, the piece is enjoyable.

Other Technical Information:

This interactive piece uses a servo and a photo-resistor. It requires a spotlight to light the “friend”. Since lighting is important to this piece, the photo-resistor can be set up as either back-lit or front-lit to make it work correctly. The servo motor sweeps from 0° to 180°. The speed of the sweep is randomized after every completed rotation to break up the repetitiveness of the piece.

Blog Entry – Research and Project Updates – 2019 #7

This week I focused on completing my Substance Designer project. After a week of working on it, I realized that designing the concept for the “kit” should have been my first task. Designing on the fly with a node-based system… makes a mess.

I tried using other graphs, which helped clean things up a bit, but I ended up using one main graph so I could expose more parameters. Maybe with more time I can figure out a way to reference other graphs’ parameters.

After this weekend’s grind, I am in the final stretch. With the project due date being extended, I can put more time into coloring and give it a polish pass. A major issue I am having is with the height depth: everything looks flat even after tessellation. I might have to go back to the beginning of the graph, where I declared the basic shapes, and create a basic bevel to give it some dimension.

Here are some screenshots from Unity displaying the kit:

As for my other projects, I have not had the chance to drop the lip-sync plugin into Unity. The instructions seem intense, so I will dive into that this week. I have a feeling I will have to import a character with a facial rig (using blendshapes) to make sure it works in my Unity project. If I can get it imported into Unity and working (or seeming like it is working), I need to get it interacting with the Kinect. From there, I should be in a good place to start sculpting the character. Although…

I am worrying a little at this point. It is already HALFWAY through the semester!

I have to start shifting my time toward the semester paper. I have an idea for it, something along the lines of color and its effects on humans, but I am not sure. I need to create an outline or proposal first.

Blog Entry – Research and Project Updates – 2019 #1

This week I started to test two free A.I. behavior plug-ins for Unity. This is important as I needed to test them for a game studio and my own project. The two free plug-ins are PandaBT (https://assetstore.unity.com/packages/tools/ai/panda-bt-free-33057) and Behavior Bricks (https://assetstore.unity.com/packages/tools/visual-scripting/behavior-bricks-74816).

(AI plugin testing – custom demo “game” scene)

The “game” I made had the AI chasing after the ball (player). Once the AI is out of the red zone, it constantly loses health until its death. The AI picks whichever red zone is closer to go heal. Once topped off on health, it goes after the player again.


PandaBT is a minimalistic way to create behavior trees. It uses a basic script format called a BT script. To create the tasks for the AI, one can write a single C# script (or several).

PandaBT

There are many positives with using this plugin. One is ease of use: getting started is easy, and I got a custom demo “game” up and running after a few hours of messing around. Another major benefit of the plugin is the large amount of documentation. They have their own site (http://www.pandabehaviour.com/) full of examples and the information needed to get up and running. There are also many YouTube tutorial videos. I did enjoy using the plugin, but one issue I ran into was the BT script. As I am using Visual Studio, the structure of the script is key; if the spacing is off, it throws an error. It is minor but annoying. Another issue (not really an issue, but personal preference) is that the behavior is laid out as a list. I fear that once the AI becomes more complex, this list is going to become confusing and difficult to debug. This is where Behavior Bricks shines: node trees.
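To give an idea of what the BT script and the C# tasks look like, here is a rough sketch of the chase/heal behavior from my demo. It is written from memory of the PandaBT docs rather than copied from my project, so the task names, thresholds, and movement code are placeholders.

using UnityEngine;
using Panda;

// Rough PandaBT task sketch for the chase/heal demo. The matching BT script
// (a plain-text asset assigned to the Panda Behaviour component on the same
// GameObject) would look roughly like:
//
//   tree("Root")
//       fallback
//           sequence
//               IsHealthLow
//               GoHeal
//           ChasePlayer
//
public class AITasks : MonoBehaviour
{
    public Transform player;
    public Transform healZone;
    public float health = 100f;
    public float speed = 3f;

    [Task]
    void IsHealthLow()
    {
        // Condition task: succeed when the AI should go heal.
        if (health < 25f) Task.current.Succeed();
        else Task.current.Fail();
    }

    [Task]
    void GoHeal()
    {
        MoveTowards(healZone.position);
        health += 10f * Time.deltaTime;   // recover while heading to / sitting in the zone
        if (health >= 100f) Task.current.Succeed();
    }

    [Task]
    void ChasePlayer()
    {
        MoveTowards(player.position);     // keeps running until the tree interrupts it
    }

    void MoveTowards(Vector3 target)
    {
        transform.position = Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime);
    }
}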

“Down the stairs and near the safe, she found her space in which she escapes.”


Behavior Bricks is a behavior tree system with a visual editor. I got it up and running in about the same time as PandaBT, maybe a little longer, as it forces you to take a modular approach, which is a good thing!

Behavior Bricks

The visual editor made it easy to visualize what I needed to do, but it was buggy. The editor’s colors glitch while running the game, though I did not see any effect on the game itself. Another issue was the poor documentation. There is some, but not much. It does include examples, and their site has some API information (http://bb.padaonegames.com/doku.php). Regardless, I found it difficult to start off, so there is a learning curve. That said, I believe it is worth fighting through, as the visual editor helps a lot more than a list view. The task management isn’t too hard, but the API can be confusing for beginners. For a project, I suggest creating templates of the code; that way you save time changing the namespaces and such repeatedly.
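As an example of what such a template could look like, here is a custom action sketched from memory of the Behavior Bricks samples. The attribute names, the GOAction base class, and the TaskStatus values are how I remember the API, so double-check them against the current documentation before reusing this.

using UnityEngine;
using Pada1.BBCore;           // Action, InParam, Help attributes
using Pada1.BBCore.Tasks;     // TaskStatus
using BBUnity.Actions;        // GOAction base class

// Template action: move the agent toward a target at a fixed speed.
// "MyCollection/MoveToTarget" is just an example category/name.
[Action("MyCollection/MoveToTarget")]
[Help("Moves the game object toward a target at a fixed speed.")]
public class MoveToTarget : GOAction
{
    [InParam("target")]
    public GameObject target;

    [InParam("speed")]
    public float speed = 3f;

    public override TaskStatus OnUpdate()
    {
        // gameObject is the agent running the tree (provided by GOAction).
        gameObject.transform.position = Vector3.MoveTowards(
            gameObject.transform.position,
            target.transform.position,
            speed * Time.deltaTime);

        bool arrived = Vector3.Distance(gameObject.transform.position,
                                        target.transform.position) < 0.1f;
        return arrived ? TaskStatus.COMPLETED : TaskStatus.RUNNING;
    }
}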

What plugin do I suggest? They are very similar, and both are great and free! As I will be suggesting one plugin to use for a game studio, I want the students to learn the concept of behavior trees and use a modular approach. With that being said, I am favoring Behavior Bricks. The good thing about using it for our projects is that we can transfer to other behavior-tree systems with limited difficulty, which we might end up doing. Because it is free, development is not consistent and updates are put off. For this reason alone, we might move to a paid plugin such as Behavior Designer, NodeCanvas, or Playmaker. I have read many great things about them. They are all around the same price, roughly $70, although Playmaker is $45.

 

119.89. 137.47.89.77.29.131.29.107. 23.53.113.17.89.131.29.107.29.23. 119.47.53.113.: 5.137.29.113.89.77.29. 59.89.11.!!! 137.29.71.17.89.77.29. 119.89. 119.47.29. 77.35.5. 95.107.89.41.107.5.77.! 95.89.125.107. 149.89.125.107. 95.5.113.113.53.89.83. 53.83.119.89. 149.89.125.107. 17.107.29.5.119.53.89.83.113.. 71.29.119. 149.89.125.107. 5.107.119. 11.29. 149.89.125.107. 131.89.53.17.29.. 83.89.137., 41.89. 89.125.119. 119.47.29.107.29. 5.83.23. 47.5.131.29. 35.125.83.!

77.5.119.119.

Blog Entry – Research and Project Updates 8

This week, I worked on transforming my game, HopShock, into an art piece that has no interactivity from the user. As of right now, I am done with it. I do feel like I might need to adjust the wheel speed, as the character’s animation does not look that appealing. (Mind you, not many people can actually survive long enough in the game to see the animation looking weird.) Through the process of de-gaming it, I realized how connected the UI and the game are. I would change or remove a few pieces and break a bunch of other things. The process took longer to debug than making the autojumper. The autojumper uses basic collider detection: as the ring collides with the node, a true boolean is passed to the touch-control script, standing in for the player’s tap. I did not have to rewrite any prior code to get it to work.
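For anyone curious, the autojumper boils down to something like the sketch below. The TouchControl class and its jumpRequested flag are hypothetical stand-ins for the game's real input script, not the actual HopShock code.

using UnityEngine;

// Hypothetical stand-in for the existing touch-control script.
public class TouchControl : MonoBehaviour
{
    [HideInInspector] public bool jumpRequested;
}

// Autojumper sketch: when the ring's trigger overlaps a node, "press" jump
// exactly as a player tap would. Assumes 3D trigger colliders and a Rigidbody
// on the moving object.
public class AutoJumper : MonoBehaviour
{
    public TouchControl touchControl;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Node"))
        {
            touchControl.jumpRequested = true;
        }
    }
}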

The only worry I had with the de-gamed version was Unity crashing after a few hours of the game running continuously. If it did crash, I had a plan in place that would reset the game after so many hours. Thankfully, it did not, so there was no need for the resetter. I also spent some time upgrading some graphics and optimizing the game. I baked and removed some lights in the scene. The lighting in the foreground was off, so I adjusted that. Since Unity had been upgraded, I had to recreate the color grading and other post-processing effects. After all the updates and fixes, the game runs at 30-40 fps on an iPad Air 2, which I capped at 30.
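For reference, capping the frame rate in Unity is a one-liner with Application.targetFrameRate; a minimal version looks like this (not necessarily exactly how the shipped build does it).

using UnityEngine;

// Caps the de-gamed piece at 30 fps so the iPad does not run hot.
public class FrameRateCap : MonoBehaviour
{
    void Awake()
    {
        Application.targetFrameRate = 30;
    }
}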

 

Another project I have been working on at work is photogrammetry. My boss was interested in it for a project, and we have been testing it on this camera. Unfortunately, we have not had success; I believe it is because the camera we are trying to photograph has a smooth, black, glossy surface. I am trying a new approach with the same object by cross-polarizing the lighting. I am hoping it will pull some detail out and help the process.

 

As for the interactive frame, I am still reading about speech-to-text. I will be implementing at least one option next week for testing. I will focus on the default Microsoft one, as it is already embedded in Windows.

 


ALSO, for game studio class: we ran into issues with colliders breaking, and we had a mess of triggers and colliders. I found a function that would have solved some of these issues. So, for the future:

Physics.IgnoreCollision(col1, col2);

https://docs.unity3d.com/ScriptReference/Physics.IgnoreCollision.html
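A quick usage sketch, with illustrative field names: a projectile that ignores the collider of whatever spawned it.

using UnityEngine;

// Make two specific colliders ignore each other, e.g. a projectile and the
// collider of the object that fired it. Field names are just for illustration.
public class IgnoreOwnerCollider : MonoBehaviour
{
    public Collider ownerCollider;   // collider this object should pass through

    void Start()
    {
        Collider myCollider = GetComponent<Collider>();
        Physics.IgnoreCollision(myCollider, ownerCollider);
    }
}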

Blog Entry – Research and Project Updates 7

This week’s research dealt with speech recognition. I got the Kinect speech functionality working in Unity, displaying my command and the confidence percentage. I believe this is not going to work for my project, as I need more than only commands; it does not seem to understand a phrase when it is used inside a sentence. After some debate, it was recommended that I try a few different services that are tailored to my needs. – https://blogs.unity3d.com/2016/08/02/speech-recognition-and-vr/

 

IBM Watson:

https://www.ibm.com/watson/

https://github.com/watson-developer-cloud/unity-sdk

Google Cloud Speech:

https://cloud.google.com/speech-to-text/

https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625

 

Unity Built-in – Windows 10 – UnityEngine.Windows.Speech

https://docs.unity3d.com/ScriptReference/Windows.Speech.DictationRecognizer.html

 

Plugin for Unity that contains them all:

https://bitbucket.org/Unity-Technologies/speech-to-text

 

The only difference on paper between Google and IBM is the amount of free monthly usage: Google gives 60 minutes and IBM gives 100 minutes. After the free monthly allotment, the prices are comparable. I do not think I will pass the limit, but if I do, I will make sure there are measures in place. Side note: I believe the Windows speech recognition might do exactly what the Kinect was doing. I am going to test that one first, as it is already built into Unity.
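For reference, a minimal dictation test with the built-in UnityEngine.Windows.Speech API looks roughly like this (Windows 10 only, and the OS speech privacy settings have to allow it).

using UnityEngine;
using UnityEngine.Windows.Speech;

// Quick DictationRecognizer test: logs hypotheses while speaking and the
// final recognized phrase with its confidence level.
public class DictationTest : MonoBehaviour
{
    private DictationRecognizer recognizer;

    void Start()
    {
        recognizer = new DictationRecognizer();

        // Fired when a full phrase has been recognized.
        recognizer.DictationResult += (text, confidence) =>
        {
            Debug.Log("Heard: \"" + text + "\" (" + confidence + ")");
        };

        // Fired continuously while the user is still speaking.
        recognizer.DictationHypothesis += text =>
        {
            Debug.Log("Hypothesis: " + text);
        };

        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}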

Next week's sprint:

Try out one or all of the speech-to-text services within Unity and decide which one is the right one to use.


Side Project – Turn HopShock into an infinite auto-jump art piece

I am working on transforming my game, HopShock, into an automated art piece. As of right now, I have removed the GUI and other interactivity. The only things left to work on are automating the jump and having the character randomly get hit by the spinning circle. I am planning to work with my cousin, who was the lead programmer on the project, to help create a successful solution. I expect the edits to be done and fully working on an iPad by the end of the week. If not, then by this weekend.

Blog Entry – Research and Project Updates 6

This week’s sprint required me to detect if a person is present in the scene, and to detect/track a face and the face features the Kinect sensor registers. I completed it using the “Kinect v2 Examples with MS-SDK and Nuitrack SDK” plugin for Unity. There are two main scripts necessary to use. One is KinectManager.

The KinectManager script is required for the Kinect to initialize and run. It also contains numerous classes that are important for body tracking and could be useful later in the project. The other script is called FaceTrackingManager. This script is like the KinectManager but only manages the face and head of the user. With these scripts, it is easy to debug any issues. For instance, I have it set up to create a small window with a color-map video, which currently displays a rectangle around my face. It lets me see whether the face tracking is working. With those two scripts working, I edited one of the demo scripts, which originally only detected a smile, and added in all the face properties Microsoft allows. I should be able to call these variables once I start creating my database, and they should give a direction for which script/animation needs to be played at a given time.
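Once a face frame result is available, reading those properties is basically a loop over a dictionary. Below is a sketch using the Microsoft.Kinect.Face types as I remember them; acquiring the FaceFrameResult itself is left to the plugin's demo code, and the exact collection types may differ between SDK wrappers.

using UnityEngine;
using System.Collections.Generic;
using Microsoft.Kinect.Face;   // FaceProperty, DetectionResult, FaceFrameResult

// Logs every face property Microsoft exposes (Happy, Engaged, MouthOpen,
// LookingAway, etc.) with its DetectionResult (Yes / No / Maybe / Unknown).
public class FacePropertyLogger : MonoBehaviour
{
    public void LogProperties(FaceFrameResult result)
    {
        if (result == null) return;

        foreach (KeyValuePair<FaceProperty, DetectionResult> entry in result.FaceProperties)
        {
            Debug.Log(entry.Key + ": " + entry.Value);
        }
    }
}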

Next week’s sprint:

Get speech recognition to work with some basic phrases. Then create a script to collect all the input data and display it.

 

Also, as I mentioned above, the KinectManager has some useful classes, so I wrote them down. Later, I discovered a website that has them all indexed:

https://ratemt.com/k2gpapi/annotated.html

 

Blog Entry – Research and Project Updates 5

After months of working on Permadeath, the opera, I was given the opportunity to work backstage during the performance. Prior to this, I had never worked in a theater. I signed on as manager of the real-time facial animation system, Faceware Live. Luckily, the system setup, which I originally helped create, worked flawlessly. The system setup was:

  • 2 computers – 1 for each actor
    • 1 computer per actor, with one computer passing its data over to the other
  • 2 cameras
  • Area lights
  • Unreal Engine 4
  • Faceware Live server

I had one day of training on the last day of rehearsals. Nothing was too different from what I remembered; there were only a few tweaks that the director had made, and those updates solved most of the issues the system originally had. So what did I have to do? I watched over the actors’ faces in the software. Sometimes the face tracking would get lost and break: one actor’s nose would lose tracking, the other actor’s lips would lose tracking. I had to make sure everything was set before the CGI was called to be played. The actors wiggled or moved around to fix the issues. Besides those minor hiccups, the CGI ran smoothly. We did see one issue where the character Apollo’s mouth was weirdly flapping around. The next day, we suggested a different way to act that scene out and it was fixed. From what I was told, no one saw the issue; only the two of us who worked on the facial animations saw the problem. I say that is a win. The experience was great. I got to meet and work with many talented individuals, something I would not otherwise have had the chance to do.

 

Project update

I got in touch with the developer who makes the custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). He kindly sent me the project assets to study and work with. So far, I have had little time to dissect the inner workings, but I am excited to get the chance to. At first, I couldn’t get the project examples to work, but it was simply because I had imported the original Kinect plugins. I created a new project, imported the wrapper, and it worked. Now what I have to do is create a guideline or sprints. Without having goals laid out, I feel it is hard to progress. The Kinect has so many things it can do! Once I get the ball rolling, I believe the project will start to move forward.

 

 

Blog Entry – Research and Project Updates 4

As I progress with my research on using the Kinect in Unity, I have come to realize how difficult it is to use deprecated devices. I struggle to find documentation on classes and setups.

I have found a few YouTube videos and some tutorials. The main issue is trying to understand what each class is doing and what it is connected to. On the positive side, I did get the Kinect to register and run in Unity.

The Kinect fired up and worked without a problem. The face section is where I am struggling: once again I am restricted by limited documentation, but I did get a face to be detected. On the body side, one tutorial used the BodySourceManager to detect and track a joint and make a 3D GameObject move.
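Roughly, that joint-tracking setup looks like the sketch below. BodySourceManager and its GetData() method come from Microsoft's Unity sample scripts, so the names may differ depending on which example package is imported; the HandFollower class itself is just my own illustration.

using UnityEngine;
using Windows.Kinect;   // Body, JointType from the Kinect v2 Unity plugin

// Moves this GameObject to follow the right hand of the first tracked body.
public class HandFollower : MonoBehaviour
{
    public BodySourceManager bodyManager;   // the sample manager in the scene
    public float scale = 5f;                // metres-to-units fudge factor

    void Update()
    {
        if (bodyManager == null) return;

        Body[] bodies = bodyManager.GetData();
        if (bodies == null) return;

        foreach (Body body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            var pos = body.Joints[JointType.HandRight].Position;
            transform.position = new Vector3(pos.X, pos.Y, pos.Z) * scale;
            break;
        }
    }
}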

For the face section, this link helped: https://social.msdn.microsoft.com/Forums/en-US/20257887-4c2e-42a9-be77-926d91fbdae3/face-expressions?forum=kinectv2sdk

I haven’t figured out how to pull a basic expression result out, so I am going to have to dig into the plugin classes to figure things out. I did find documentation (https://ratemt.com/k2gpapi/annotated.html), but I believe it is for examples that use a custom wrapper (https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/). I am going to email the developer about it. If it is OK with him, I will study the examples and try to understand the basics before I start diving into the code.

Unfortunately, I will have limited time this week for research, as I will be working at the Permadeath opera. As the Technical Assistant Director of CGI, I am overseeing the Faceware software during the performance; if anything goes wrong, I will be there to react and fix the issue. Prior to this, I was doing some technical work, fixing skin weights for a few characters. One character, Aphrodite, has feathers on her arms as a shirt.

Aphrodite’s skin was poking through when she moved around. I fixed this by adjusting some skin weights. Unfortunately, I could only do so much since there were hundreds of feather planes, so I also tweaked her skin texture: I baked the planes onto her skin texture and then blurred the feathers to get an average color. Now, if any “skin” mesh pokes through, it is invisible to the viewer. Another character whose skin weights I adjusted was Apollo.

Apollo had some weird skin weights in his face. He was the first Faceware-rigged character, so it was to be expected that he would look rough. At first it seemed like an easy fix of pushing and pulling weights, but ultimately I had to adjust the transforms and rotations of the joints in Unreal Engine. It took a little more time than I thought, but the results show it was the right decision:

The last character I had to tweak was Adonis. The technical director decided the cape needed to be reworked. Due to time limitations, the cape simulations were scrapped for a shorter cape that would be bound to the skin. I modified the model to spec and worked on the binding. The bind was a little tricky, as the cape supports on his shoulders float: if the character’s shoulder joints move, the cape reacts and moves with them.

Next week, I will be spending more time researching and developing the Kinect in Unity. For the time being, I will be deciding what should be prioritized and what should be my starting point.