Tuesday, 26 November 2013

Such a long time (contains video)

So it's been ages.

Really, ages. My last post was near the start of September, and now we're coming up to the end of November. My silence obviously means I've gotten my head down and done tons of work. In fact I have an AAA game ready to launch and I'm giving you all a playable demo... Right? Well, not exactly...

I've actually spent long periods not touching my game: work's kept me busy, I've had a holiday, and I've chosen what I want to do next year for my final year project and started some prep work for that. (My prep work is an implementation of an Artificial Neural Network that learns using Simulated Annealing instead of Back Propagation, with the aim of using it for Online Learning.) And I've been playing around with python. And playing some games, which I'm of course going to call research. So it seems I've been really busy. But I haven't.

A lot of what I've been doing is trying to put GOAP into my game, and that kind of stalled. It's a big architecture change, and I wanted to add it in without removing anything, then strip out the stuff it was replacing afterwards. So right now I have two partial AI systems. And I'm beginning to question GOAP based on the overhead of implementation. A bit too late for that though, so I'm just going to stick to my plan for now.

However, while I stew on project strategies there is a completely neglected area I can work on... and that's art. Bloody god damn 3D freaking art. So this weekend I knuckled down, set up my model for animation, made a simple walking animation in 30 minutes, then put it into the game to make sure it worked.

I implemented a stub controller I called __FollowPlayerControl (the double underscores are my way of saying this is not production standard), and got it to follow me, playing the walk animation while moving and stopping it when standing still.
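It's nothing more than a throwaway control, roughly along these lines (a sketch rather than the real thing: the animation name, speed and stopping distance are placeholder values, and it assumes the model's AnimControl sits directly on the controlled spatial):

    import com.jme3.animation.AnimChannel;
    import com.jme3.animation.AnimControl;
    import com.jme3.math.Vector3f;
    import com.jme3.renderer.RenderManager;
    import com.jme3.renderer.ViewPort;
    import com.jme3.scene.Spatial;
    import com.jme3.scene.control.AbstractControl;

    public class __FollowPlayerControl extends AbstractControl {

        private static final float SPEED = 2f;    // placeholder walk speed
        private final Spatial player;
        private AnimChannel channel;

        public __FollowPlayerControl(Spatial player) {
            this.player = player;
        }

        @Override
        public void setSpatial(Spatial spatial) {
            super.setSpatial(spatial);
            if (spatial != null) {
                // Assumes the AnimControl is on this spatial, not on a child node
                channel = spatial.getControl(AnimControl.class).createChannel();
            }
        }

        @Override
        protected void controlUpdate(float tpf) {
            Vector3f toPlayer = player.getWorldTranslation()
                    .subtract(spatial.getWorldTranslation());
            toPlayer.setY(0f);
            boolean walking = toPlayer.length() > 1.5f; // stop short of the player

            if (walking) {
                spatial.move(toPlayer.normalizeLocal().multLocal(SPEED * tpf));
                if (!"Walk".equals(channel.getAnimationName())) {
                    channel.setAnim("Walk"); // placeholder animation name
                }
                channel.setSpeed(1f);
            } else {
                channel.setSpeed(0f); // pause the walk cycle while standing still
            }
        }

        @Override
        protected void controlRender(RenderManager rm, ViewPort vp) {
            // nothing to render for this test stub
        }
    }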

Now that meant I could only get a front view of the player walking, so I just removed the stop so I could give you all a video with side and front views. In hindsight I should have made it so I could specify a point the character should walk to and then follow it, getting all the angles I want. However it's too late for that, and it's just more test code to rip out, so I'm not going to go to the trouble for now.

Also, you may remember my last video - the laggy, low frame rate screen capture spectacular. Well, I found out JMonkey has a class called VideoRecorderAppState that, when attached to the AppStateManager, records video. So I've added a button to my game to start and stop recording, and the recordings are so much better. No more screen capture software for me, which is a relief, I can tell you.
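In case it's useful to anyone, the whole thing boils down to something like this (just a sketch; the key binding and the output file name are mine for illustration, not what's actually in the game):

    import java.io.File;

    import com.jme3.app.SimpleApplication;
    import com.jme3.app.state.VideoRecorderAppState;
    import com.jme3.input.KeyInput;
    import com.jme3.input.controls.ActionListener;
    import com.jme3.input.controls.KeyTrigger;

    public class RecordingExample extends SimpleApplication implements ActionListener {

        private VideoRecorderAppState recorder;

        @Override
        public void simpleInitApp() {
            inputManager.addMapping("ToggleRecording", new KeyTrigger(KeyInput.KEY_F12));
            inputManager.addListener(this, "ToggleRecording");
        }

        @Override
        public void onAction(String name, boolean isPressed, float tpf) {
            if ("ToggleRecording".equals(name) && isPressed) {
                if (recorder == null) {
                    recorder = new VideoRecorderAppState(new File("capture.avi"));
                    stateManager.attach(recorder);  // start capturing frames
                } else {
                    stateManager.detach(recorder);  // detach to finish writing the file
                    recorder = null;
                }
            }
        }

        public static void main(String[] args) {
            new RecordingExample().start();
        }
    }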

So without further ado:



In terms of the animation, on the return the foot drags too much and it's too bouncy. It's kind of like a bad caricature of a walk. Oh, and the green lines - that's a navmesh I generated on the fly, still there from debugging.

Hopefully I'll start getting back into the swing of things and be able to keep you entertained with tons of progress in the weeks and months to come!

Daniel

Saturday, 7 September 2013

GOAP (Goal Oriented Action Planner)

For the uninitiated, GOAP stands for Goal Oriented Action Planner. The AI basically chooses a goal from a set of goals and then finds a chain of actions that allows it to meet that goal.

Each goal has conditions that must be met for it to be a success; each action has preconditions that need to hold before it can execute, and effects of it being executed successfully. So, for example, take the following action:

Action: eat_cake
Preconditions: cakeCount>0
Effect: cakeCount--, hunger--

We can clearly see that eating a cake will lose us one cake and also decrease our hunger. However, we do need to have cake to eat cake, hence the precondition.
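In code, an action like that could look something along these lines (just a sketch against a simple map-of-integers world state, not my actual class hierarchy):

    import java.util.Map;

    // Illustrative only: the world state is a map of named integer values,
    // and this assumes cakeCount and hunger are already present in it.
    public class EatCakeAction {

        /** Precondition: cakeCount > 0 */
        public boolean canExecute(Map<String, Integer> worldState) {
            Integer cakes = worldState.get("cakeCount");
            return cakes != null && cakes > 0;
        }

        /** Effects: cakeCount--, hunger-- */
        public void applyEffects(Map<String, Integer> worldState) {
            worldState.put("cakeCount", worldState.get("cakeCount") - 1);
            worldState.put("hunger", worldState.get("hunger") - 1);
        }

        /** Cost used by the planner when chaining actions together. */
        public float getCost() {
            return 1f;
        }
    }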

GOAP traditionally chains these actions together using A*, so actions have costs associated with them. The heuristic cost, though, is a bit difficult. With A* used for path finding you can just draw a line from x to the end point and call that the heuristic distance. But with actions and goals it's not as clear how you can measure how far you are from completing the goal; after all, this is more abstract than distance. With my action planner I'm using the number of unresolved conditions - preconditions that have to be met for the actions I've chosen, plus the conditions needed to satisfy the goal that haven't been met - as the score.

The A* search works best regressively, moving from the goal to the current world state, as this narrows down the candidate actions. For the goal kill_entity, the final action in the plan has to have the effect entityDead==true, and only a limited number of actions will, whereas any action in the action set could come earlier in the plan to satisfy the relevant preconditions.
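To give a feel for that, here's a rough outline of the regressive expansion step, with conditions simplified down to plain strings and the heuristic being the unresolved-condition count from above (the Action and Node types are bare-bones stand-ins, not my real classes):

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public final class RegressivePlannerSketch {

        // Illustrative stand-in for a real action class.
        static class Action {
            String name;
            Set<String> preconditions = new HashSet<String>();
            Set<String> effects = new HashSet<String>();
            float cost = 1f;
        }

        // Illustrative stand-in for a node in the A* search.
        static class Node {
            Set<String> unresolved = new HashSet<String>(); // conditions still to satisfy
            List<Action> plan = new ArrayList<Action>();    // actions chosen so far
            float cost;

            // f = g + h, where h is just the number of unresolved conditions.
            float estimatedTotalCost() {
                return cost + unresolved.size();
            }
        }

        // Searching backwards from the goal: only consider actions whose effects
        // satisfy something we still need, then take on their preconditions.
        static List<Node> expand(Node node, List<Action> actionSet) {
            List<Node> successors = new ArrayList<Node>();
            for (Action action : actionSet) {
                Set<String> satisfied = new HashSet<String>(node.unresolved);
                satisfied.retainAll(action.effects);
                if (satisfied.isEmpty()) {
                    continue; // this action doesn't help with anything outstanding
                }
                Node next = new Node();
                next.unresolved.addAll(node.unresolved);
                next.unresolved.removeAll(action.effects);    // effects resolve conditions...
                next.unresolved.addAll(action.preconditions); // ...preconditions add new ones
                next.plan.addAll(node.plan);
                next.plan.add(0, action); // regressive, so this action goes earlier in the plan
                next.cost = node.cost + action.cost;
                successors.add(next);
            }
            return successors;
        }
    }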

Now, the reason I went from my finite state machines to GOAP is that the "next state" logic in GOAP is essentially kept in each action. If I want to change the AI or the game by adding new behaviours or skills, all I have to do is add the relevant actions and goals; there's no need to change existing classes, whereas with finite state machines you would have to go through the relevant states and change the transition logic.

I'm still in the process of integrating GOAP into my game; however, the planner is coded, the abstract classes have been created, and the main leap now is to implement some real actions and goals and integrate it all into my current framework.

But as well as being very versatile, GOAP is quite time-consuming to implement. I believe it will be worthwhile though, and hopefully I can start testing it in some basic situations in the next few months.

For anyone who wants to learn more about GOAP you can find a lot of useful information here: http://web.media.mit.edu/~jorkin/goap.html

Thursday, 15 August 2013

Using sketchup for level models (First game video inside)

Right now I'm trialling Sketchup as part of my asset creation pipeline, mainly as a way of quickly making rooms for further refinement (texturing, optimising etc.) in blender. This week I decided to see just how easily I could get it to make something usable as a game asset. First I started off by creating a floor plan.

Then with a collection of free plugins (which are listed at the end of the article) I set about bringing my dream into reality. This was a fairly quick process - I'd say it only took me 1-2 hours (including making the floor plan, which I didn't invest a lot of time in).



So with this done, what I spent my next 1-2 hours doing was hiding walls and removing coplanar faces. When I draw a wall that comes out of another one, the end of my second wall meets part of the face on my first wall. These two faces occupy the same space and are hidden geometry, so with the goal of improving the quality of the model I set about deleting them.

With that done I exported as a .obj, loaded it into blender, and found the stairs about ten times bigger, hovering quite far away from the room. To solve this I made the whole scene into one component, essentially binding the separate parts of the model together. I imported it into blender again and found everything was in order.

So I imported the .blend file into JMonkey to see the initial results. And my god was I surprised! The surfaces were all flickering and all the doorways seemed to flicker in and out of existence. This wasn't what I'd seen in blender and it was definitely a surprise. I'd seen some flicker on the floor in patches in blender, but nothing to this extent. (Unfortunately I didn't screenshot this monster.)

The flicker, I found, was down to duplicate faces; it seemed that Sketchup had created overlapping faces. Now I'm no expert on the 3D assets front, but any software that doubles the number of faces required seems like a no go to me. I had expected some tweaking in blender anyway, so I selected all the vertices in edit mode and then pressed W -> Remove Doubles. Over 500 vertices were removed, so in other words things were looking pretty bad.

I imported into JMonkey again. Less flicker, but still flicker, and more disturbingly the doorways were still obscured - something which I did manage to screenshot:

That rectangle on the right... That shouldn't be there.
In fact none of the black bits should

Okay. Time to google.

From my googling I found two programs: Meshlab and Netfabb Basic. I used Meshlab to convert my .obj to a .STL so I could load it into Netfabb. Netfabb seems to have quite a few uses (at least the paid-for version does), but one thing it can do is take a format suited to a 3D printer (STL) and find flaws in the mesh - things like hidden vertices, coplanar faces, inverted faces, or the lack of a closed volume.

And sure enough Netfabb showed the flaws that blender wouldn't:


The red parts are flaws in the mesh, and you can clearly see that the doorways are filled in and that part of the floor is red. The red floor is down to the face being upside down, making the normals wrong. So I went into Sketchup and flipped the upside-down faces the right way round. But the doors. The doors were still a problem.

So I decided to take a walk around the level with the flipped faces fixed. The room did look (slightly) better, but as I walked I noticed something about the flickering faces: the majority of them seemed to be triangles. And then it occurred to me. I hadn't triangulated the mesh!

So I went into Sketchup and checked the triangulate mesh options on the exporter and voila:

Perfecto
So with some quick texturing in blender (Sketchup is definitely not for my texturing needs - no UV unwrapping), I have my first piece of "game footage". You'll see the lighting isn't presentable yet (something I've still to look into) and the texture seems a bit pixelated; also, in the model I appear to have made the rooms, corridors and doorways too narrow, so things feel a bit claustrophobic. But that's something I'll address.

However it's exciting to have a room to walk around in instead of just a flat plane. One step closer to realising my dream!



Sketchup plugins used:

FredoScale
Cleanup
Joint Push Pull
1001bit tools
Buildedge

You can find all of these from the Sketchup plugin warehouse or Sketchucation.



Wednesday, 7 August 2013

Sound the alarm! A post on AI and spatial access methods

Preamble

Firstly, to avoid confusion: I know I often refer to entities in the game as Spatials (as that is the object I use), but in this post the "spatial" in the title refers to an object's location in 3D space.

So recently I've paused work on asset creation and have taken to working on something far more interesting: my game AI. I've programmed up some basic decision making to allow the AI to work out whether it's scared, confident or concerned, and to what level, but that's not what this post is about. This post is about what happens when an enemy NPC discovers the player and raises an alarm.

So raising an alarm could be done in numerous ways and could have varying consequences:

  1. A physical alarm is rung and, either through the alarm's location, a loudspeaker, or general flashing lights, the player's rough location is broadcast to all enemies in a large area.
  2. The enemy that spotted the player shouts over the radio and alerts all other enemies to the player's precise location - unless they lose sight of the player, in which case to where the player was last seen.
  3. The enemy shouts out, hoping that someone hears and comes to assist.
I've been toying with all three ideas and I'm thinking of doing a combination of 1 and 3, with the NPC having the choice of shouting and engaging in combat, or running scared to an alarm and hoping for backup. Then environmental properties such as distance to the nearest alarm would factor into the decision making and make the combat less predictable, something which is always good.

So now that's decided, and when it comes to implementation an important question presents itself.

How do I find all the enemies within a given radius?

This is important for the third method as the NPCs need to hear the shout, so what were my initial thoughts:
  • Using a spatial access method such as a binary space partition or a k-d tree.
  • Creating a bounding box centred on the location of the sound, extending out to its range, and checking for the NPCs that fall within the box.
Now the first method can require a fair bit of programming, and also every time an NPC moves the BSP or k-d tree would need to be regenerated. However it allows you to quickly determine which NPCs are within range.

The second method, on the other hand, is very simple to program and maintain but comes with a performance cost: it would iterate through every NPC checking for intersection, whereas the previous method would only look at a subset.

Now at this point hash maps had popped into my mind but I'd cringed and thought about iterating through them checking for every continuous point in 3D space within the range. Which is just icky.

So, still undecided, I did some googling and suddenly team hash map presented a winning pitch. The keys I'd use would be discrete points, and I'd split the world into cubes of a set size [source: http://goo.gl/dtXSfw ].

Now for my hash map, the key will be an array of integers and the value a list of spatials (the entity sort). And as non-primitives in Java are passed by reference rather than by value, to update the hash map I decided to:
  • Iterate through the values in the hash map (so loop through all the array lists)
  • Get each spatial's key based on its current position.
  • Check if the array list stored at that key is the same as the one I'm currently iterating over.
  • If not, add the spatial to the array list at that key and remove it from the array list it was in.
Now, worst case, that update should perform at O(n) where n is the number of entities. And getting all the entities within the range of a sound only has to look at the cells the radius covers, so we're dealing with a subset of the entities rather than brute forcing through every one.
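Roughly, the grid shapes up like this (a sketch with a made-up cell size rather than the real code - and one gotcha worth noting: a plain int[] can't be used directly as a HashMap key in Java, because arrays hash by identity rather than by value, so the sketch wraps the three cell indices in a small key class):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;

    import com.jme3.math.Vector3f;
    import com.jme3.scene.Spatial;

    public class SpatialHashGrid {

        private static final float CELL_SIZE = 10f; // assumption: 10 world units per cube

        private final HashMap<CellKey, List<Spatial>> cells =
                new HashMap<CellKey, List<Spatial>>();

        /** Work out which cube a world position falls into. */
        private CellKey keyFor(Vector3f pos) {
            return new CellKey((int) Math.floor(pos.x / CELL_SIZE),
                               (int) Math.floor(pos.y / CELL_SIZE),
                               (int) Math.floor(pos.z / CELL_SIZE));
        }

        /** Add a spatial to the bucket for the given cube. */
        public void put(CellKey key, Spatial s) {
            List<Spatial> bucket = cells.get(key);
            if (bucket == null) {
                bucket = new ArrayList<Spatial>();
                cells.put(key, bucket);
            }
            bucket.add(s);
        }

        /** The update loop described above: move anything that has changed cube. */
        public void refresh() {
            List<CellKey> movedKeys = new ArrayList<CellKey>();
            List<Spatial> moved = new ArrayList<Spatial>();
            for (Map.Entry<CellKey, List<Spatial>> entry : cells.entrySet()) {
                Iterator<Spatial> it = entry.getValue().iterator();
                while (it.hasNext()) {
                    Spatial s = it.next();
                    CellKey key = keyFor(s.getWorldTranslation());
                    if (!key.equals(entry.getKey())) {
                        it.remove();        // leave the old bucket...
                        movedKeys.add(key); // ...and re-add once iteration is done
                        moved.add(s);
                    }
                }
            }
            for (int i = 0; i < moved.size(); i++) {
                put(movedKeys.get(i), moved.get(i));
            }
        }

        /** Gather everything in the cubes that a sound's radius overlaps. */
        public List<Spatial> withinRange(Vector3f origin, float range) {
            List<Spatial> result = new ArrayList<Spatial>();
            CellKey min = keyFor(origin.subtract(range, range, range));
            CellKey max = keyFor(origin.add(range, range, range));
            for (int x = min.x; x <= max.x; x++) {
                for (int y = min.y; y <= max.y; y++) {
                    for (int z = min.z; z <= max.z; z++) {
                        List<Spatial> bucket = cells.get(new CellKey(x, y, z));
                        if (bucket != null) {
                            result.addAll(bucket);
                        }
                    }
                }
            }
            return result;
        }

        /** Immutable key for one cube of the grid. */
        public static final class CellKey {
            final int x, y, z;

            CellKey(int x, int y, int z) {
                this.x = x; this.y = y; this.z = z;
            }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof CellKey)) return false;
                CellKey k = (CellKey) o;
                return x == k.x && y == k.y && z == k.z;
            }

            @Override
            public int hashCode() {
                return 31 * (31 * x + y) + z;
            }
        }
    }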

All in all I think this was the best way to do it, but we'll see when I start testing it out, and nearer the end when I start stress testing my game to see what it can handle in terms of how many NPCs can be active at once, etc.

A word on sound intensity

So I've probably (hopefully) mentioned it before, at least in passing, but I'm aiming to use a form of utility in my AI, where scores are assigned to attributes and actions and used to determine the best result. From this, some new inputs to the AI's decision making process are born: sound intensity and context.

So my initial thought on sound intensity was that it would most likely be an exponential decay, like most things that drop off in nature. But in the name of science I decided to have a proper look into it.

This led me to this formula [Source: http://goo.gl/FQ9sEL ]:


Where L2 is the sound intensity we want to work out and r2 the distance it is from the sound, and L1 is a reference intensity measured at distance r1 from the sound.
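For reference, written out that distance law is (with the levels in dB):

    L_2 = L_1 - 20 \log_{10}(r_2 / r_1)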

I suppose the reference is there to try and account for the varying acoustic properties of rooms, but it's not really that relevant (after all, I posted an excellent link for those that want to learn more!). What is relevant is that from the graphs on the page and the formula we can see that I wasn't far off: I can model this with an exponential decay.

This decay will represent the intensity of the sound where the AI is, based on the distance from the sound's origin and how intense it is at the origin:



The perceived intensity will be normalised between 0 and 1, where 1 is definitely hearing something of note and 0 is hearing nothing of note. Whether or not the NPC responds to a sound at values between 0 and 1 depends on its own attributes, such as alertness.
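Something like this is the shape of what I have in mind (the decay constant is a made-up tuning value, not something I've settled on):

    import com.jme3.math.Vector3f;

    public final class SoundFalloff {

        private static final float DECAY = 0.15f; // assumption: drop-off per world unit

        public static float perceivedIntensity(Vector3f soundOrigin, Vector3f listenerPos,
                                               float intensityAtOrigin) {
            float distance = soundOrigin.distance(listenerPos);
            float intensity = intensityAtOrigin * (float) Math.exp(-DECAY * distance);
            // Clamp into [0, 1]: 1 is definitely something of note, 0 is nothing of note.
            return Math.max(0f, Math.min(1f, intensity));
        }
    }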

I should also take into account any instances of background noise that could dilute the intensity of the sound but that's a stretch goal for now.

Once I'm done fleshing out the AI I should go back to graphics and design, and maybe even be able to provide my first gameplay video... Not sure if that's a goal for the distant or not-too-distant future though.



Tuesday, 30 July 2013

Combat AI, Navigation and Screenshots

So recently I've been looking at my combat AI. So far I've made do with the AI getting within a set distance of the player and then sitting still and shooting until the player moved out of range, it lost sight of the player, or the player killed it. That's not good enough though; in terms of game play it is really predictable and lacks any sort of adrenaline or challenge. Well, unless about 20 of these enemies surrounded you and just opened fire.

Of course this was just a temporary combat mechanic designed more for testing other functionality such as line of sight, my finite state machine and weapon controls. But now the time comes for something more advanced.

Initially I thought about terrain analysis and tactical assessment on the fly, something I'd be forced to do if I was working with robotics or something more physical, but that idea rapidly gave way to storing valuable information on the map.

So to my scene I added a node named "Cover"; to this node all objects that could be used as cover would be attached - things like crates, sandbags, maybe a table. Things to obscure the line of sight but still allow the AI to shoot at the player. This list of cover objects would then be loaded and stored in an object designed to provide global information to the NPCs; an object which could also be used to relay communications between troops, alerting other allies if they have a man down and should be on alert for the player. But enough of that, I'm getting sidetracked (maybe if I updated more regularly I could have written about this in another post).

So I had my cover list, and I changed the finite state machine to get the AI to put cover between itself and the player upon sighting the player. And one thing became clear. The AI just walked straight through the cover.

So I regenerated my navigation mesh, trying to get it to account for static scenery, and found that the JMonkey navmesh generator (which in turn wraps a previous version of the critterai library) doesn't factor other models into the navmesh.

At this point I saw two choices:

  1. Edit the source code to fix this
  2. Use the navmesh pathfinder for a generalised path then tweak this dynamically to avoid obstacles
Upon looking at the source I decided to go for option 2. Plus, it would allow for objects to move from their start positions, which is definitely more flexible.

So how to do this...

Well here goes my method.

I cloned the bounding boxes of all cover objects, as you can't get dimensions from models directly. I then expanded these by the radius of my AI's collision shape. The reason for this is that when checking whether the AI can fit between two obstacles, it's easier to expand the bounds of the obstacle and shrink the AI down to a point with an infinitely small width.

Then I merged any overlapping bounding boxes, as the AI couldn't fit between those two obstacles anyway. I then performed ray casts between each pair of adjacent waypoints on the path, checking for intersections with the bounding boxes.

For any bounding box the ray hit, I took the position of the midpoint of the segment of the line that intersected the box. Then I created a vector starting at the box's centre and pointing at that midpoint, and extended it by the width of the bounding box to give a detour point.

For a tighter fit I might want to perform another raycast to find the edge of the box, where the character can walk and just brush the sides of the obstacle (remember I extended the bounding box), but for now I'm happy with this fix. It's still a work in progress and I've still got to fully test what I've built before optimising it.
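For the curious, the detour step boils down to something like this (a simplified sketch rather than the game code: it assumes the box has already been expanded by the AI's radius as described above, the segment/box intersection is done with a standard slab test, and how far the detour gets pushed out is a placeholder):

    import com.jme3.bounding.BoundingBox;
    import com.jme3.math.Vector3f;

    public final class DetourHelper {

        /** Returns a detour point if the segment from..to passes through the box, else null. */
        public static Vector3f detourAround(BoundingBox cover, Vector3f from, Vector3f to) {
            Vector3f min = cover.getMin(new Vector3f());
            Vector3f max = cover.getMax(new Vector3f());
            Vector3f dir = to.subtract(from);

            // Standard slab test: find the stretch of the segment inside the box.
            float tEnter = 0f, tExit = 1f;
            float[] f = { from.x, from.y, from.z };
            float[] d = { dir.x, dir.y, dir.z };
            float[] lo = { min.x, min.y, min.z };
            float[] hi = { max.x, max.y, max.z };
            for (int axis = 0; axis < 3; axis++) {
                if (Math.abs(d[axis]) < 1e-6f) {
                    if (f[axis] < lo[axis] || f[axis] > hi[axis]) {
                        return null; // parallel to this slab and outside it
                    }
                } else {
                    float t1 = (lo[axis] - f[axis]) / d[axis];
                    float t2 = (hi[axis] - f[axis]) / d[axis];
                    tEnter = Math.max(tEnter, Math.min(t1, t2));
                    tExit = Math.min(tExit, Math.max(t1, t2));
                    if (tEnter > tExit) {
                        return null; // the segment misses the box
                    }
                }
            }

            // Midpoint of the piece of the segment that lies inside the box.
            Vector3f midpoint = from.add(dir.mult((tEnter + tExit) * 0.5f));

            // Push out from the box centre through that midpoint by the box's width.
            Vector3f outward = midpoint.subtract(cover.getCenter()).normalizeLocal();
            return midpoint.add(outward.mult(cover.getXExtent() * 2f));
        }
    }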

So without further ado a screenshot. The blue box is the AI, the grey box some cover, the pink dot is set above the centre of the bounding box (which is invisible), the cyan dot above the midpoint. And finally the red dot is the detour point to avoid hitting the cover.

The green line I drew in to show the ray going between the end waypoint behind the cover and the one prior to it.

Cor' blimey check out that FPS!
Now, because I don't want anyone to think that, because of the ugly character I made last entry, I'd forsaken the human race and shifted my allegiance to the people of cube, here is a screenshot of my last player texturing effort:

I'm quite proud to say the skin was made in Photoshop by brush. I used these brushes and followed the tips at http://www.brusheezy.com/brushes/1752-skin-textures-v1 - they've also made my clone brushing look smoother; can't recommend them enough!


Thursday, 4 July 2013

Finally! A "Properly" Textured Human

Well I've done it, through the power of youtube I've managed to find a good way of texturing humans. No more trying to paint onto meshes or UV layouts desperately trying to achieve the accuracy needed despite the bumps and protrusions of a human body. No more labouring under the hot sun looking for a friendly face and despite my efforts being confronted by grotesque homunculi.

By projecting from view (with bounds) I create a UV layout of one face of my model. I then apply a normal picture of a person as the texture, texturing that side of the model. With the clone brush, that single side is painted onto the corresponding area of the full UV layout.

Tis truly a thing of beauty. I take this simple man: http://fanart.tv/fanart/tv/248834/characterart/last-man-standing-2011-4ffb175969e71.png

I then apply him as paint and I achieve:


Now naturally I won't use this man in the game, and all rights belong to whoever took that picture. And the texture was only just lined up, but it looks bloody good. I'll try and find some models to take photographs of, or create my own computer-graphicsy people of comparable quality to the resolution blender paints at, and then huzzah. The people are done.

Of course, credit where credit is due: I wouldn't have done it without this great tutorial: http://www.youtube.com/watch?v=p4ngVoGIj1Q

Now naturally this wasn't as easy as it seems. Something had royally messed up in blender and I couldn't see the clone brush, but by linking my scene to a blank project I was able to use the clone brush without having to burrow into blender's settings and find out what had gone wrong.

Monday, 1 July 2013

So Pretty.

Textures are wrapping correctly and animations are exporting correctly; the next stage in graphics is to make some nice textures for characters.

However my art skills may not be what they once were:

Hey good lookin', whatcha got cookin' ?
It's nice to see real progress being made; I feel half of the work left on the game is polishing it up to add that professional touch.