Sunday, 23 June 2013

A Brief Update

Recently I've been working on getting animations into my characters in a bid to finish off the game art. So I made a basic walking cycle and imported the model into JMonkey. This, however, didn't work: for some reason the animation control wasn't attached.

After examining the model I concluded that the UniHuman skeleton wasn't valid for JMonkey (there was no root bone at 0,0,0, and it may have used shape keys instead of bone animations). I don't know if it had bones at all, as Blender wouldn't display them (and yes, I did set the armature to X-ray). So I deleted the rig and started again, making my own skeleton, something there are plenty of resources on and which is dead easy.

So I re-rigged the model, animated, exported. Still nothing.

After some hunting around the JMonkey forums I found that the general consensus of the community is that the Blender importer doesn't support animations. There were one or two people who said it had worked for them, despite apparently following the same steps the rest of us were. So I tried to export using OgreXML instead.

The result made me want to gouge my eyes out. Bow-legged and stumbling, it made Peter Jackson's early work seem restrained and rather pretty. I tried to fix the rig but it didn't go well, so I deleted it and re-rigged. Forty-six bones later, with scale and rotation applied, I had a working animation.

Now animations work perfectly. Unfortunately the texture is really messed up: my .material file seems to be doing naff all except putting the face on the upper thigh.

I've also made some alterations to my autonomousMovementControl class, copying useful fields over to separate lists so that larger items can be garbage collected earlier. This also removes the overhead of extra method calls and makes the code more readable (path.get(i).getPosition() vs path.get(i)).
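The idea is roughly the following. The class and field names here are stand-ins for the real path-node type, which I don't have to hand, but the shape of the optimisation is the same: pull the positions out into their own list, drop the path, and let the heavier objects be collected.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical waypoint type standing in for the real path node class.
class Waypoint {
    private final float x, y, z;
    private final byte[] heavyData = new byte[1024]; // stands in for other large fields
    Waypoint(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    float[] getPosition() { return new float[] { x, y, z }; }
}

public class PathSlimming {
    // Copy just the positions out of the path. Once the Waypoint list is
    // dropped, the larger objects become eligible for garbage collection,
    // and iteration reads path.get(i) instead of path.get(i).getPosition().
    static List<float[]> extractPositions(List<Waypoint> path) {
        List<float[]> positions = new ArrayList<>(path.size());
        for (Waypoint w : path) {
            positions.add(w.getPosition());
        }
        return positions;
    }

    public static void main(String[] args) {
        List<Waypoint> path = new ArrayList<>();
        path.add(new Waypoint(0, 0, 0));
        path.add(new Waypoint(1, 0, 2));
        List<float[]> positions = extractPositions(path);
        System.out.println(positions.size());    // 2
        System.out.println(positions.get(1)[2]); // 2.0
    }
}
```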

I'm beginning to get frustrated with computer graphics, but it'll be worth it when I have a game under my belt.

Thursday, 13 June 2013

Enemy AI (Finite State Machine and Line of Sight)

So I feel that I've made great progress recently: characters are up (unfinished) and the AI is coming along great. I've taken my initial prototypes of various functionality (line of sight, path finding) and migrated them to a finite state machine.

Each state is a singleton, and they share a unified interface with the methods CheckConditions, Start, DoAction and End. They all take the spatial they are controlling as an argument. The actual state machine is implemented as a controller attached to the character, so through the spatial the states can get access to the state machine and other controllers.

The reason I chose the singleton pattern is that with a lot of characters I didn't want to have to create three state objects for each one; the singleton avoids this overhead, and since the spatial it is commanding is passed in as an argument, each state can do everything I need.

So without further ado I'll go into detail about the methods:

CheckConditions(Spatial spatial): this method checks whether the conditions to enter the state are true. So while in the patrolling state, AttackState.checkConditions(spatial) will be called, and if this returns true the next state will be the attack state, meaning the spatial will attack. This method is also used inside the AttackState class to check that it can still attack; if it can't, it will move back to patrol and set the place the player was last seen as its first patrol point.

Start(Spatial spatial): this method runs any start-up logic necessary to move into that state; this includes, but is not limited to, starting animations for the character.

DoAction(Spatial spatial): this works out what the spatial should do and sets it to do it; it's also responsible for moving to other states when the conditions are right.

End(Spatial spatial): runs any ending animations or logic.
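The shape of the machine described above can be sketched roughly like this. This is a guess at the structure, not the actual code: a plain Agent class stands in for the spatial, the singletons are modelled as enums, and the conditions and animations are simplified down to flags.

```java
// Minimal sketch of the singleton-state FSM described above.
// "Agent" stands in for the spatial so the example is self-contained.
class Agent {
    boolean canSeePlayer;
    State currentState = PatrolState.INSTANCE;
    String animation = "";
}

interface State {
    boolean checkConditions(Agent a); // may we enter (or stay in) this state?
    void start(Agent a);              // start-up logic, e.g. animations
    void doAction(Agent a);           // per-frame behaviour plus transitions
    void end(Agent a);                // tear-down logic
}

enum PatrolState implements State {
    INSTANCE; // one shared instance, however many characters use it
    public boolean checkConditions(Agent a) { return !a.canSeePlayer; }
    public void start(Agent a) { a.animation = "walk"; }
    public void doAction(Agent a) {
        // Patrol logic would go here; transition when attack conditions hold.
        if (AttackState.INSTANCE.checkConditions(a)) {
            end(a);
            a.currentState = AttackState.INSTANCE;
            AttackState.INSTANCE.start(a);
        }
    }
    public void end(Agent a) { }
}

enum AttackState implements State {
    INSTANCE;
    public boolean checkConditions(Agent a) { return a.canSeePlayer; }
    public void start(Agent a) { a.animation = "attack"; }
    public void doAction(Agent a) {
        if (!checkConditions(a)) { // lost sight: fall back to patrolling
            end(a);
            a.currentState = PatrolState.INSTANCE;
            PatrolState.INSTANCE.start(a);
        }
    }
    public void end(Agent a) { }
}
```

Using an enum is just one convenient way to get the singleton; the point is that each state exists once and operates purely on whatever agent is handed to it.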

Line of sight

Line of sight was one of the first things implemented in the AI. Using the AI's view direction, we find the angle between straight ahead of the AI and the player's location. If this is outside the field of view then the AI can't see the player, as demonstrated below:

The red line marks the field of view; the green and red triangles are the player and the AI respectively.
If the player is inside the AI's field of view, a ray test is then done from the centre of the AI to the player. This ray will collide with the AI's own geometry, so if the number of entities the ray collides with is greater than 1 there is geometry between the AI and the player, meaning the player is obscured from view.
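The field-of-view half of the test can be sketched like this, in plain 2D vector maths rather than the engine's types (the real version would follow a positive result with the ray test against scene geometry):

```java
// Sketch of the field-of-view check: is the angle between the AI's view
// direction and the direction to the player within half the FOV?
public class LineOfSight {
    // viewX/viewY is the AI's facing direction; angles are in radians.
    static boolean inFieldOfView(double aiX, double aiY,
                                 double viewX, double viewY,
                                 double playerX, double playerY,
                                 double fovRadians) {
        double toPlayerX = playerX - aiX;
        double toPlayerY = playerY - aiY;
        // Angle between the view direction and the direction to the player,
        // from the dot product: cos(theta) = (v . p) / (|v| |p|).
        double dot = viewX * toPlayerX + viewY * toPlayerY;
        double len = Math.hypot(viewX, viewY) * Math.hypot(toPlayerX, toPlayerY);
        double angle = Math.acos(dot / len);
        return angle <= fovRadians / 2.0;
    }

    public static void main(String[] args) {
        // AI at the origin looking along +X with a 90-degree field of view.
        System.out.println(inFieldOfView(0, 0, 1, 0, 5, 1, Math.toRadians(90))); // true: just off-centre
        System.out.println(inFieldOfView(0, 0, 1, 0, 0, 5, Math.toRadians(90))); // false: directly to the side
    }
}
```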

Right now this system just returns true or false; however, I am planning to implement a 2D Gaussian function so that the AI is more confident the closer the player is and the nearer they are to the centre of its vision. This will mean that when the player is some distance away and creeping around the edges, there will be a chance he/she remains unseen. The AI's response can then be dictated by numerous factors, such as aggression, fear or other emotional scores I could give it.
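Such a confidence function might look like the following. The sigma values are made-up tuning parameters, not anything from the game; the shape is the standard 2D Gaussian over distance and off-axis angle.

```java
public class VisionConfidence {
    // 2D Gaussian confidence: highest when the player is close and near the
    // centre of vision, falling off with distance and off-axis angle.
    // sigmaDist and sigmaAngle are hypothetical tuning parameters.
    static double confidence(double distance, double angleRadians,
                             double sigmaDist, double sigmaAngle) {
        double d = (distance * distance) / (2 * sigmaDist * sigmaDist);
        double a = (angleRadians * angleRadians) / (2 * sigmaAngle * sigmaAngle);
        return Math.exp(-(d + a));
    }

    public static void main(String[] args) {
        double sigmaDist = 10.0, sigmaAngle = Math.toRadians(30);
        // Player dead ahead and right on top of the AI: full confidence.
        System.out.println(confidence(0, 0, sigmaDist, sigmaAngle)); // 1.0
        // Player far away and near the edge of vision: far less certain.
        System.out.println(confidence(15, Math.toRadians(40), sigmaDist, sigmaAngle));
    }
}
```

The result can then be compared against a threshold, or fed into whatever aggression or fear scores end up shaping the response.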

This is a stretch goal, however my AI can see the player and move between states, it can effectively move between attack and patrol and respond in a semi-realistic manner. Now it's just a question of tailoring the system and augmenting it - something relatively easy with the highly modular system I'm using.

All in all very happy =D

Sunday, 9 June 2013

Texturing

So far in my game I've been representing characters with cubes; these cubes move around, shoot, and change colour when you enter their line of sight. But a big part of the game, and something I see being one of the most complicated parts for me, is graphics and animation. Because of this, a desire to get basic character animation and importing into the game has preyed on my mind.

Initially I was trying to use MakeHuman to make my game characters: it was fully rigged, textures included, and had a very open licensing agreement. Yes, it did make very high-poly characters, but I was sure that in Blender I could decrease the vertex count and optimise them. However, I had a lot of problems trying to move the models into JMonkey, as you can see below:

This took a lot of work to get to; initially I started off with a fragment of torso and some random pixel clusters. I know of people who have apparently just exported, put the model into Blender and imported it into JMonkey with little trouble, so I don't know if there's something I missed, or whether they're using the nightly builds or an older version.

I was beginning to reach my wit's end; at one point I half considered scrapping JMonkey and giving Unity a shot. Luckily, however, I came across UniHuman ( http://goo.gl/OrTl7 ). I opened up the blend file, selected the model and pressed Shift-F10. This unwraps the model to create a UV layout:

I exported this and whacked it into GIMP, where I painted it (albeit terribly). I then created a material and added a texture to it. The screenshot below shows the textured model along with the mapping settings. The texture is mapped to the UV layout because otherwise the image file is laid onto the back and the rest of the model is filled in with a solid colour:

I use the JMonkey Blender importer and add the material file to the model programmatically. When the model is finished and animated I'll be able to use it for nearly every character and just provide a different texture file.

The finished result in the game world is:

As you can see, the texture is a bit terrible (great art skills on my part), but I used to do a lot of Photoshop and eventually I'll take the time to properly texture the model. Right now I'm looking at animating the model either from scratch or using open-source mocap data.

Overall I'm quite happy with this. It might be overly optimistic, but I feel that characters are the hardest 40% of the artwork, and they are a part that I have near enough got down.