Tuesday, 26 November 2013

Such a long time (contains video)

So it's been ages.

Really, ages. My last post was near the start of September and now we're coming up to the end of November. My silence obviously means I've gotten my head down and done tons of work. In fact I have an AAA game ready to launch and I'm giving you all a playable demo... Right? Well, not exactly...

I've actually spent long periods not touching my game. Work's kept me busy, I've had a holiday, and I've chosen what I want to do next year for a final year project and started some prep work for that. (My prep work is an implementation of an Artificial Neural Network using Simulated Annealing to learn instead of Back Propagation, with the aim of using it for Online Learning.) I've also been playing around with Python. And playing some games, which I'm of course going to call research. So it seems I've been really busy. But I haven't.

A lot of what I've been doing is trying to put GOAP into my game, and that kind of stalled: it's a big architecture change and I wanted to add it in without removing anything, then strip out the stuff it was replacing. So right now I have two partial AI systems. And I'm beginning to question GOAP based on the overhead of implementation. A bit too late for that, so I'm just going to stick to my plan for now.

However, while I stew on project strategies there is a completely neglected area I can work on... and that's art. Bloody god damn 3D freaking art. So this weekend I knuckled down, set up my model for animation, made a simple walking animation in 30 minutes, then put it into the game to make sure it worked.

I implemented a stub controller I called __FollowPlayerControl (the double underscores are my way of saying this is not production standard) and got it to activate the animation when walking, stop it when not walking, and follow me.

Now, that meant I could only get a front view of the player walking, so I just removed the stop so I could give you all a video with side and front views. In hindsight I should have made it so I could specify a point the character should walk to and then follow it, getting all the angles I want. However it's too late for that, and it's just more test code to rip out, so I'm not going to go to the trouble for now.

Also, you may remember my last video, the laggy, low-frame-rate screen capture spectacular. Well, I found out JMonkey has a class called VideoRecorderAppState that records video when attached to the AppStateManager. So I've added buttons to my game to start and stop recording, and the recordings are so much better. No more screen capture software for me, which is a relief I can tell you.

So without further ado:



In terms of the animation, on the return the foot drags too much and it's too bouncy. It's kind of like a bad caricature of a walk. Oh, and the green lines? That's a navmesh I generated on the fly, still there from debugging.

Hopefully I'll start getting back into the swing of things and be able to keep you entertained with tons of progress in the weeks and months to come!

Daniel

Saturday, 7 September 2013

GOAP (Goal Oriented Action Planner)

For the uninitiated GOAP stands for Goal Oriented Action Planner. The AI basically chooses a goal from a set of goals and then finds a chain of actions that allow it to meet that goal.

Each goal will have conditions that must be met to make it a success; each action will have preconditions that must hold for it to execute, and effects of executing it successfully. So for example, take the following action:

Action: eat_cake
Preconditions: cakeCount>0
Effect: cakeCount--, hunger--

We can clearly see that eating a cake will lose us one cake, and also decrease our hunger. However we do need to have cake to eat cake hence the precondition.
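The action above can be sketched in code. This is a minimal, illustrative representation of my own, not from any GOAP library: the world state is a symbolic map of counters, preconditions are read as "key must be at least value", and effects are deltas applied to the state. All the names (WorldState as a plain map, GoapAction, eat_cake) are assumptions for the sake of the example.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal GOAP-style action: preconditions are checks on a symbolic
// world state, effects are deltas applied to it.
class GoapAction {
    final String name;
    final Map<String, Integer> preconditions; // world value must be >= this
    final Map<String, Integer> effects;       // delta added to world value

    GoapAction(String name, Map<String, Integer> pre, Map<String, Integer> eff) {
        this.name = name;
        this.preconditions = pre;
        this.effects = eff;
    }

    // Can this action run in the given world state?
    boolean canRun(Map<String, Integer> world) {
        return preconditions.entrySet().stream()
                .allMatch(e -> world.getOrDefault(e.getKey(), 0) >= e.getValue());
    }

    // Return a new world state with the effects applied.
    Map<String, Integer> apply(Map<String, Integer> world) {
        Map<String, Integer> next = new HashMap<>(world);
        effects.forEach((k, d) -> next.merge(k, d, Integer::sum));
        return next;
    }
}
```

So eat_cake would be `new GoapAction("eat_cake", Map.of("cakeCount", 1), Map.of("cakeCount", -1, "hunger", -1))`: it can't run with no cake, and running it decrements both cake and hunger.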

GOAP traditionally chains these actions together using A*, so actions have costs associated with them. The heuristic cost, however, is a bit difficult. With A* used for pathfinding you can just draw a line from x to the end point and call that the heuristic distance. But with actions and goals it's not as clear how you can measure how far you are from completing the goal; after all, this is more abstract than distance. In my action planner I'm using the number of unresolved conditions as the score: the preconditions that have to be met for any actions I've chosen, plus the conditions to satisfy the goal, that haven't been met yet.
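The unresolved-condition heuristic described above is simple to sketch. This uses the same made-up map representation as before (a condition `k -> v` reads "world value for k must be at least v"); it's my own illustration, not the planner's actual code.

```java
import java.util.Map;

// Heuristic for the GOAP A* search: count the conditions (goal conditions
// plus preconditions of chosen actions) not yet satisfied by the world state.
class GoapHeuristic {
    static int unresolvedConditions(Map<String, Integer> world,
                                    Map<String, Integer> conditions) {
        int unresolved = 0;
        for (Map.Entry<String, Integer> c : conditions.entrySet()) {
            if (world.getOrDefault(c.getKey(), 0) < c.getValue()) {
                unresolved++;
            }
        }
        return unresolved;
    }
}
```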

The A* search works best regressively, moving from the goal to the current world state, as this narrows down the actions. For the goal kill_entity, the final action in any plan has to have the effect entityDead==true, and there will be a limited number of those, whereas almost any action in the action set could come earlier in the plan to satisfy relevant preconditions.

Now, the reason I went from my finite state machines to GOAP is that the "next state" logic in GOAP is essentially kept in each action. If I want to change the AI or the game by adding new behaviours or skills, all I have to do is add the relevant actions and goals; there's no need to change existing classes. With finite state machines you would have to go through the relevant states and change the transition logic.

I'm still in the process of integrating GOAP into my game; the planner is coded, abstract classes have been created, and the main leap now is to implement some real actions and goals and integrate it all into my current framework.

But as well as being very versatile, GOAP is quite time-consuming to implement. I believe it will be worthwhile though, and hopefully I can start testing it in some basic situations in the next few months.

For anyone who wants to learn more about GOAP you can find a lot of useful information here: http://web.media.mit.edu/~jorkin/goap.html

Thursday, 15 August 2013

Using sketchup for level models (First game video inside)

Right now I'm trialling Sketchup as part of my asset creation pipeline, mainly as a way of quickly making rooms for further refinement (texturing, optimising etc.) in Blender. This week I decided to see just how easily I could get it to make something usable as a game asset, so first I started off by creating a floor plan.

Then with a collection of free plugins (which are listed at the end of the article) I set about bringing my dream into reality. This was a fairly quick process; I'd say it only took me 1-2 hours (including making the floor plan, which I didn't invest a lot of time in).



So with this done, what I spent my next 1-2 hours doing was hiding walls and removing coplanar faces. When I draw a wall that comes out of another one, the end of my second wall meets part of the face on my first wall. These two faces occupy the same space and are hidden geometry, so with the goal of improving the quality of the model I set about deleting them.

With that done I exported as a .obj, loaded it into Blender, and found the stairs about ten times bigger, hovering quite far away from the room. To solve this I made the whole scene into one component, essentially binding the separate parts of the model together. I imported into Blender again and found everything was in order.

So I imported the .blend file into JMonkey to see the initial results. And my god was I surprised! The surfaces were all flickering and all the doorways seemed to flicker in and out of existence. This wasn't what I'd seen in Blender and it was definitely a surprise. I'd seen some flicker on the floor in some patches in Blender, but nothing to this extent. (Unfortunately I didn't screenshot this monster.)

The flicker, I found, was down to duplicate faces; it seemed that Sketchup had overlaid faces on top of each other. Now I'm no expert on the 3D asset front, but any software that doubles the number of faces required seems like a no-go to me. Still, I expected some tweaking in Blender, so I selected all the vertices in edit mode and pressed: w -> remove doubles. Over 500 vertices were removed; in other words, things were looking pretty bad.

Imported into JMonkey again. Less flicker, but still flicker, and more disturbingly the doorways were still obscured. Something I did manage to screenshot:

That rectangle on the right... That shouldn't be there.
In fact none of the black bits should

Okay. Time to google.

From my googling I found two programs: Meshlab and Netfabb Basic. I used Meshlab to convert my .obj to a .stl so I could load it into Netfabb. Netfabb seems to have quite a few uses (at least the paid-for version does), but one thing it can do is take a format suited to a 3D printer (STL) and find flaws in the mesh: things like hidden vertices, coplanar faces, inverted faces, or the lack of a closed volume.

And sure enough, Netfabb showed the flaws that Blender wouldn't:


The red parts are flaws in the mesh: you can clearly see the doorways are filled in, and part of the floor is red too. The red floor is down to the face being upside down, making the normals wrong. So I went into Sketchup and flipped the upside-down faces the right way round. But the doors. The doors were still a problem.

So I decided to take a walk around the level with the flipped faces fixed. The room did look better (slightly), but as I walked I noticed something about the flickering faces: the majority of them seemed to be triangles. And then it occurred to me. I hadn't triangulated the mesh!

So I went into Sketchup, checked the triangulate mesh option on the exporter, and voila:

Perfecto
So with some quick texturing in Blender (Sketchup is definitely not for my texturing needs: no UV unwrapping), I have my first piece of "game footage". You'll see the lighting isn't presentable yet (something I'm still to look into) and the texture seems a bit pixelated. Also, in the model I appear to have made the rooms, corridors and doorways too narrow/small, so things feel a bit claustrophobic. But that's something I'll address.

However it's exciting to have a room to walk around in instead of just a flat plane. One step closer to realising my dream!



Sketchup plugins used:

FredoScale
Cleanup
Joint Push Pull
1001bit tools
Buildedge

You can find all of these in the Sketchup plugin warehouse or on Sketchucation.



Wednesday, 7 August 2013

Sound the alarm! A post on AI and spatial access methods

Preamble

Firstly, to avoid confusion: I know I often refer to entities in the game as Spatials (as that is the object I use), but in this post the "spatial" in the title refers to an object's location in 3D space.

So recently I've paused work on asset creation and have taken to working on something far more interesting: my game AI. I've programmed up some basic decision making to allow the AI to work out if it's scared, confident or concerned, and to what level, but that's not what this post is about. This post is about what happens when an enemy NPC discovers the player and raises an alarm.

So raising an alarm could be done in numerous ways and could have varying consequences:

  • A physical alarm is rung and, either through the alarm's location, a loudspeaker, or general flashing lights, the player's rough location is broadcast to all enemies in a large area.
  • The enemy that spotted the player shouts over the radio and alerts all other enemies to the player's precise location, or, if they lose sight of the player, to where the player was last seen.
  • The enemy shouts out, hoping that someone hears and comes to assist.
I've been toying with all three ideas and I'm thinking of doing a combination of 1 and 3, with the NPC having the choice of shouting and engaging in combat, or running scared to an alarm and hoping for backup. Environmental properties such as the distance to the nearest alarm would then factor into the decision making and make the combat less predictable, something which is always good.

So now that's decided and it comes to implementation an important question presents itself.

How do I find all the enemies within a given radius?

This is important for the third method as the NPCs need to hear the shout, so what were my initial thoughts:
  • Using a spatial access method such as a binary space partition or a k-d tree.
  • Creating a bounding box centred on the location of the sound and extending out to its range and checking for the NPCs that fall within the box
Now, the first method can require a fair bit of programming, and every time an NPC moves the BSP or k-d tree would need to be regenerated. However, it allows us to quickly determine which NPCs are within range.

The second method, on the other hand, is very simple to program and maintain but comes with a performance drop: it would iterate through every NPC checking for intersection, whereas the previous method would only look at a subset.

Now, at this point hash maps had popped into my mind, but I'd cringed and thought about iterating through them checking every continuous point in 3D space within the range. Which is just icky.

So, still undecided, I did some googling and suddenly team hash map presented a winning pitch: the keys I'd use would be discrete points, and I'd split the map into cubes of a set size [source: http://goo.gl/dtXSfw ].

Now, for my hash map the key will be an array of integers and the value a list of spatials (the entity sort). And as Java passes object references around rather than copies, the lists in the map are shared, so to update the hash map I decided to:
  • Iterate through the values in the hash map (so loop through all the array lists)
  • Get each spatial's key based on its current position
  • Check if the array list stored at that key is the same one I'm at in the iterator.
  • If not, add the spatial to the array list at that key and remove it from the array list it was in.
Now, worst case that update should perform at O(n) where n is the number of entities. And getting all the entities within the range of a sound only has to look at the cells the range covers, so we deal with a small subset of the entities rather than brute forcing through every entity.
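Here's a sketch of the cell-based hash map described above, under my own assumptions (the class and field names are invented, and entities are just string IDs rather than jME Spatials). One practical caveat if the keys really are int arrays: Java arrays use identity hashCode/equals, so a plain int[] key will never match on lookup; packing the cell coordinates into a single long (or using a small value class) avoids that.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Cell-based spatial hash: the world is split into cubes of cellSize,
// and each cube maps to the list of entities currently inside it.
class SpatialHash {
    final float cellSize;
    final Map<Long, List<String>> cells = new HashMap<>();

    SpatialHash(float cellSize) { this.cellSize = cellSize; }

    // Pack three 21-bit cell coordinates into one long key.
    long cellKey(long cx, long cy, long cz) {
        return ((cx & 0x1FFFFF) << 42) | ((cy & 0x1FFFFF) << 21) | (cz & 0x1FFFFF);
    }

    long key(float x, float y, float z) {
        return cellKey((long) Math.floor(x / cellSize),
                       (long) Math.floor(y / cellSize),
                       (long) Math.floor(z / cellSize));
    }

    void put(String id, float x, float y, float z) {
        cells.computeIfAbsent(key(x, y, z), k -> new ArrayList<>()).add(id);
    }

    // Gather everything in the cells that a sphere of the given radius touches.
    List<String> withinRange(float x, float y, float z, float radius) {
        List<String> result = new ArrayList<>();
        int r = (int) Math.ceil(radius / cellSize);
        int cx = (int) Math.floor(x / cellSize);
        int cy = (int) Math.floor(y / cellSize);
        int cz = (int) Math.floor(z / cellSize);
        for (int i = cx - r; i <= cx + r; i++)
            for (int j = cy - r; j <= cy + r; j++)
                for (int k = cz - r; k <= cz + r; k++) {
                    List<String> cell = cells.get(cellKey(i, j, k));
                    if (cell != null) result.addAll(cell);
                }
        return result;
    }
}
```

A shout at the origin with a 12-unit range and a 10-unit cell size would then only touch the cells within two cells of the origin, rather than every NPC in the level.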

All in all I think this was the best way to do it, but we'll see when I start testing it out, and nearer the end when I start stress testing my game to see what it can handle in terms of how many NPCs can be active at once, etc.

A word on sound intensity

So I've probably (hopefully) mentioned it before (at least in passing), but I'm aiming to use a form of utility AI, where scores are assigned to attributes and actions and used to determine the best result. From this, some new inputs to the AI's decision making process are born: sound intensity and context.

My initial thought on sound intensity was that it would most likely follow an exponential decay, like most things that drop off in nature. But in the name of science I decided to have a proper look into it.

This led me to this formula [Source: http://goo.gl/FQ9sEL ]:

L2 = L1 + 20·log10(r1/r2)

Where L2 is the sound intensity level we want to work out and r2 the distance it is from the sound;
L1 is a reference intensity level and r1 how far that is from the sound.

I suppose the reference is there to try and take account of the varying acoustic properties of rooms, but it's not really that relevant (after all, I posted an excellent link for those that want to learn more!). What is relevant is that from the graphs on the page and the formula we can see that I wasn't far off: I can model this with an exponential decay.

This decay will represent the intensity of the sound where the AI is, based on the distance from the sound's origin and how intense it is at the origin:



The perceived intensity will be normalised between 1 and 0 where 1 is definitely hearing something of note and 0 is hearing nothing of note. Whether or not the NPC responds to the sound at values between 0 and 1 depends on its own attributes such as alertness etc.
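The decay-plus-normalisation scheme above can be sketched like this. The decay constant k and all the names here are my own made-up tuning assumptions, not values from the game:

```java
// Perceived sound intensity: exponential decay with distance from the
// origin, clamped to the normalised [0, 1] range described above.
class SoundIntensity {
    // intensityAtSource in [0, 1]; k controls how fast sound falls off
    static float perceived(float intensityAtSource, float distance, float k) {
        float value = intensityAtSource * (float) Math.exp(-k * distance);
        return Math.max(0f, Math.min(1f, value)); // clamp to [0, 1]
    }
}
```

An NPC would then compare this value against its own alertness threshold to decide whether the sound is worth responding to.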

I should also take into account any instances of background noise that could dilute the intensity of the sound but that's a stretch goal for now.

Once I'm done with this fleshing out of the AI I should go back to graphics and design, and maybe even be able to provide my first gameplay video... Not sure if that's a goal for the distant or not too distant future though.



Tuesday, 30 July 2013

Combat AI, Navigation and Screenshots

So recently I've been looking at my combat AI. So far I've made do with it getting within a set distance of the player and sitting still, shooting until the player moved out of range, it lost sight of the player, or the player killed it. That's not good enough though; in terms of gameplay it is really predictable and lacks any sort of adrenaline or challenge. Well, unless about 20 of these enemies surrounded you and just opened fire.

Of course this was just a temporary combat mechanic designed more for testing other functionality such as line of sight, my finite state machine and weapon controls. But now the time comes for something more advanced.

Initially I thought about terrain analysis and tactical assessment on the fly, something I'd be forced to do if I was working with robotics or something more physical but that idea rapidly gave way to storing valuable information on the map.

So to my scene I added a node named "Cover". To this node all objects that could be used as cover would be attached: things like crates, sandbags, maybe a table. Things to obscure the line of sight but still allow the AI to shoot at the player. This list of cover objects would then be loaded and stored in an object designed to provide global information to the NPCs. An object which could also be used to relay communications between troops, alerting allies if they have a man down and should be on alert for the player. But enough of that, I'm getting sidetracked (maybe if I updated more regularly I could have written about this in another post).

So I had my cover list, and I changed the finite state machine to get the AI to put cover between itself and the player upon sighting the player. And one thing became clear: the AI just walked straight through the cover.

So I regenerated my navigation mesh trying to get it to account for static scenery, and found that the JMonkey navmesh generator (which in turn wraps a previous version of the critterai library) doesn't factor other models into the navmesh.

Right now I saw two choices:

  1. Edit the source code to fix this
  2. Use the navmesh pathfinder for a generalised path then tweak this dynamically to avoid obstacles
Upon looking at the source I decided to go for option 2. Plus, it allows for objects to move from their start positions, which is definitely more flexible.

So how to do this...

Well here goes my method.

I cloned the bounding boxes of all cover objects, as you can't get dimensions from models directly. I then added the radius of my AI's collision shape to these. The reason for this is that when checking if the AI can fit between two obstacles, it's easier to expand the bounds of the obstacles and shrink the AI down to a point with an infinitely small width.

Then any overlapping bounding boxes I merged, as the AI couldn't fit between those two obstacles. I then performed ray casts between each pair of adjacent waypoints on the path, checking for intersections with the bounding boxes.

For any bounding box I collided with, I took the midpoint of the segment of the line that intersected the box. Then I created a vector starting at the box's centre and pointing to the midpoint, and extended it by the width of the bounding box.

For a tighter fit I might want to perform another raycast to find the edge of the box where the character can walk and just brush the sides of the obstacle (remember I extended the bounding box), but for now I'm happy with this fix. It's still a work in progress and I've still got to fully test what I've built before optimising it.
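The detour computation above looks something like this. I'm using plain float arrays instead of jME's Vector3f so the sketch stands alone, and the class name is my own; it's an illustration of the midpoint-and-extend step, not the game's actual code:

```java
// Given the centre of the (expanded) bounding box and the midpoint of
// the path segment that crosses it, the detour point lies along the
// centre-to-midpoint direction, pushed out past the box by its width.
class Detour {
    static float[] detourPoint(float[] boxCentre, float[] segmentMid, float boxWidth) {
        float dx = segmentMid[0] - boxCentre[0];
        float dy = segmentMid[1] - boxCentre[1];
        float dz = segmentMid[2] - boxCentre[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        // extend the centre->midpoint vector by the box width so the
        // character clears the obstacle
        float scale = (len + boxWidth) / len;
        return new float[] {
            boxCentre[0] + dx * scale,
            boxCentre[1] + dy * scale,
            boxCentre[2] + dz * scale
        };
    }
}
```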

So, without further ado, a screenshot. The blue box is the AI, the grey box some cover, the pink dot is set above the centre of the bounding box (which is invisible), and the cyan dot above the midpoint. Finally, the red dot is the detour point to avoid hitting the cover.

The green line I drew in to show the ray going between the end way point behind the cover and the one prior to it.

Cor' blimey check out that FPS!
Now, because I don't want anyone to think that the ugly character I made last entry means I've forsaken the human race and shifted my allegiance to the people of cube, here is a screenshot of my latest player texturing effort:

I'm quite proud to say the skin was made in Photoshop by brush. I used these brushes and followed the tips at http://www.brusheezy.com/brushes/1752-skin-textures-v1 and they've also made my clone brushing look smoother. Can't recommend them enough!


Thursday, 4 July 2013

Finally! A "Properly" Textured Human

Well, I've done it: through the power of YouTube I've managed to find a good way of texturing humans. No more trying to paint onto meshes or UV layouts, desperately trying to achieve the accuracy needed despite the bumps and protrusions of a human body. No more labouring under the hot sun looking for a friendly face and, despite my efforts, being confronted by grotesque homunculi.

By projecting from view (with bounds) I create a UV layout of one face of my model. I then apply a normal picture of a person as the texture, texturing that side of the model. With the clone brush, that single side is painted onto the corresponding area of the full UV layout.

Tis truly a thing of beauty. I take this simple man: http://fanart.tv/fanart/tv/248834/characterart/last-man-standing-2011-4ffb175969e71.png

I then apply him paint and I achieve:


Now, naturally I won't use this man in the game, and all rights belong to whomever took that picture. The texture was only roughly lined up, but it looks bloody good. I'll try and find some models to take photographs of, or create my own computer-graphicsy people of comparable quality to the resolution Blender paints at, and then huzzah! The people are done.

Of course, credit where credit is due: I wouldn't have done it without this great tutorial: http://www.youtube.com/watch?v=p4ngVoGIj1Q

Now, naturally this wasn't as easy as it seems. Something had royally messed up in Blender and I couldn't see the clone brush, but by linking my scene into a blank project I was able to use the clone brush without having to burrow into Blender's settings and find out what had gone wrong.

Monday, 1 July 2013

So Pretty.

Textures are wrapping correctly, and animations exporting correctly, the next stage in graphics is to make some nice textures for characters.

However my art skills may not be what they once were:

Hey good lookin', whatcha got cookin' ?
It's nice to see real progress is being made, I feel half of the work left on the game is polishing it up to add that professional touch.

Sunday, 23 June 2013

A Brief Update

Recently I've been working on getting animations into my characters in a bid to finish off the game art. So I made a basic walking cycle and imported the model into JMonkey. This, however, didn't work; for some reason the animation control wasn't attached.

After examining the model I concluded that the UniHuman skeleton wasn't valid for JMonkey (no root bone at 0,0,0, and it may have used shape keys instead of bone animations). I don't know if it did have bones, as Blender wouldn't display them (and yes, I did set the armature to x-ray). So I deleted the rig and started again, making my own skeleton, something there are plenty of resources on and which is dead easy.

So I re-rigged the model, animated, exported. Still nothing.

After some hunting around the JMonkey forums I found that the general consensus of the community is that the Blender importer doesn't support animations. There were one or two people who said they did it and it worked, but they apparently followed the same steps the rest of us were following. So I tried to export using OgreXML.

The result made me want to gouge my eyes out. Bow legged, stumbling, it made Peter Jackson's early work seem restrained and rather pretty. I tried to fix the rig but it didn't go well so I deleted it and re-rigged, 46 bones later and scale and rotation applied I had a working animation.

Now animations work perfectly. Unfortunately the texture is really messed up; my .material file seems to be doing naff all except for putting the face on the upper thigh.

I've also made some alterations to my autonomousMovementControl class, copying useful fields over to separate lists to let the larger items be garbage collected earlier. It also removes the overhead of calling methods and makes the code more readable (path.get(i) vs path.get(i).getPosition()).

I'm beginning to get frustrated with computer graphics, but it'll be worth it when I have a game under my belt.

Thursday, 13 June 2013

Enemy AI (Finite State Machine and Line of Sight)

So I feel that I've made great progress recently, characters are up (unfinished) and the AI is coming along great. I've taken my initial prototypes of various functionality (line of sight, path finding) and migrated them to a finite state machine.

Each state is a singleton and they share a unified interface with the methods CheckConditions, Start, DoAction, End. They all take the spatial they are controlling as an argument. The actual state machine is implemented as a controller attached to the character, so using the spatial the states can get access to the state machine and other controllers.

The reason I chose the singleton pattern is that, when using a lot of characters, I didn't want to have to create three states for each character; the singleton avoids this overhead, and with the spatial it commands passed in as an argument it is able to do everything I need.

So without further ado I'll go into detail about the methods:

CheckConditions(Spatial spatial): this method checks whether the conditions to enter the state are true. So when in the patrolling state, AttackState.checkConditions(spatial) will be called, and if this returns true the next state will be the AttackState, meaning the spatial will attack. This method is also used inside the AttackState class to check that it can still attack; if it can't, it will move back to patrol and set the place the player was last seen as its first patrol point.

Start(Spatial spatial): this method runs any start-up logic necessary to move into that state; this includes, but is not limited to, animations for the character.

DoAction(Spatial spatial): this works out what the spatial should do and sets it to do it. It's also responsible for moving to other states if the conditions are correct.

End(Spatial spatial): runs any ending animations or logic.
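The state shape described above can be sketched like this. I've stubbed Spatial as a plain Object so the example stands alone (in the game it would be jME's Spatial), used Java's lowercase method convention, and the PatrolState body is a hypothetical placeholder rather than the game's actual logic:

```java
// Unified interface shared by every AI state.
interface AiState {
    boolean checkConditions(Object spatial); // can we enter (or stay in) this state?
    void start(Object spatial);              // start-up logic, e.g. animations
    void doAction(Object spatial);           // per-tick behaviour and transitions
    void end(Object spatial);                // tear-down logic
}

// Singleton state: one shared, stateless instance serves every character,
// since the spatial it commands is always passed in as an argument.
class PatrolState implements AiState {
    private static final PatrolState INSTANCE = new PatrolState();
    private PatrolState() {}
    static PatrolState getInstance() { return INSTANCE; }

    public boolean checkConditions(Object spatial) { return true; } // always able to patrol
    public void start(Object spatial) { /* play walk animation */ }
    public void doAction(Object spatial) { /* follow patrol route; check AttackState */ }
    public void end(Object spatial) { /* stop walk animation */ }
}
```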

Line of sight

Line of sight was one of the first things implemented in the AI. Using the AI's view direction, we find the angle between straight ahead of the AI and the player's location. If this is outside of the field of view then the AI can't see the player, as demonstrated below:

Red line dictates field of view, green and red triangles player and AI respectively
If the player is inside the AI's field of view, a ray test is then done from the centre of the AI to the player. This ray test will collide with the AI's own geometry, so if the number of entities the ray collides with is greater than 1, there is a geometry between the AI and the player, meaning the player is obscured from view.
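The field-of-view half of the check can be sketched as follows. Vectors are plain float arrays here instead of jME's Vector3f, and the class name is my own; the ray-test half would still go through the engine's collision system:

```java
// Is the player within the AI's cone of vision? Compute the angle between
// the view direction and the vector to the player, and compare it against
// half the field of view.
class FieldOfView {
    static boolean inFov(float[] viewDir, float[] aiPos, float[] playerPos,
                         float fovRadians) {
        float tx = playerPos[0] - aiPos[0];
        float ty = playerPos[1] - aiPos[1];
        float tz = playerPos[2] - aiPos[2];
        float tLen = (float) Math.sqrt(tx * tx + ty * ty + tz * tz);
        float vLen = (float) Math.sqrt(viewDir[0] * viewDir[0]
                + viewDir[1] * viewDir[1] + viewDir[2] * viewDir[2]);
        float cos = (viewDir[0] * tx + viewDir[1] * ty + viewDir[2] * tz)
                / (tLen * vLen);
        // clamp to guard against float error before acos
        return Math.acos(Math.max(-1f, Math.min(1f, cos))) <= fovRadians / 2;
    }
}
```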

Right now this system just returns true or false, however I am planning to implement a 2D Gaussian function so that the AI is more confident the closer the player is and the nearer to the centre of vision. This will mean that when the player is a distance away and creeping around the outside there will be a chance he/she remains unseen. Then the response of the AI can be dictated by numerous factors such as aggression, fear or other emotional scores I could give.
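Since this is a planned feature rather than implemented code, here is only a sketch of what that 2D Gaussian might look like: confidence peaks when the player is close and centred in view, and falls off with both distance and angle. The spreads (sigmaDist, sigmaAngle) are made-up tuning values.

```java
// Hypothetical 2D Gaussian sight confidence over (distance, angle):
// 1.0 at point blank, dead centre; falls off along both axes.
class SightConfidence {
    static double confidence(double distance, double angleFromCentre,
                             double sigmaDist, double sigmaAngle) {
        double d = (distance * distance) / (2 * sigmaDist * sigmaDist);
        double a = (angleFromCentre * angleFromCentre) / (2 * sigmaAngle * sigmaAngle);
        return Math.exp(-(d + a));
    }
}
```

The AI could then compare this value against a per-character alertness threshold, or roll against it, to decide whether the player was actually spotted.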

This is a stretch goal, however my AI can see the player and move between states, it can effectively move between attack and patrol and respond in a semi-realistic manner. Now it's just a question of tailoring the system and augmenting it - something relatively easy with the highly modular system I'm using.

All in all very happy =D

Sunday, 9 June 2013

Texturing

So far with my game I've been representing characters with cubes, these cubes move around, shoot, and change colour when you enter their line of sight. But a big part of the game and something I see being one of the most complicated for me is graphics and animation. Because of this a desire to get the basic animating of characters and importing into the game has preyed on my mind.

Initially I was trying to use MakeHuman to make my game characters: it was fully rigged, textures included, and had a very open licensing agreement. Yes, it did make very high-poly characters, but I was sure that in Blender I could decrease the vertex count and optimise them. However, I had a lot of problems trying to move the models into JMonkey, as you can see below:

This took a lot of work to get to; initially I started off with a fragment of torso with some random pixel clusters. I know of people who have apparently just exported, put it into Blender and imported into JMonkey with little trouble, so I don't know if there's something I missed, or whether they're using the nightly builds or an older version.

I was beginning to reach my wits' end; at one point I half considered scrapping JMonkey and giving Unity a shot. Luckily, however, I came into contact with UniHuman ( http://goo.gl/OrTl7 ). I opened up the blend file, selected the model and pressed shift-F10. This unwraps the model to create a UV layout:

I exported this and whacked it into GIMP, where I painted it (albeit terribly). I then created a material and added a texture to it. The screenshot below shows the textured model along with the mapping settings. The texture is mapped to the UV layout because otherwise the image file is laid onto the back and the rest of the model filled in with a solid colour:

I use the JMonkey Blender importer and add the material file to the model programmatically. When the model is finished and animated I'll be able to use it for nearly every character and just provide a different texture file.

The finished result in the game world is:

As you can see the texture is a bit terrible (great art skills on my part), but I used to do a lot of photoshop and eventually I'll take the time to properly texture up the model. Right now I'm looking at animating the model either from scratch or using open source mocap data.

Overall I'm quite happy with this. It might be overly optimistic but I feel that characters are the hardest 40% of the art work, and they are a part that I have near enough got down.


Friday, 19 April 2013

Spawn points and map data

So lately I've had a bit of a conundrum, how do I save level data so that when I load the scene I can get things like player spawn points, patrol areas etc. My options as I saw them were:

  • Find a way to place the data directly onto the map
  • Save it into another file i.e. <level_name>data.xml and create a map editor that synchronises the two.
Naturally the second one isn't the best option: there is always the chance the two could become out of sync and the level won't be correctly loaded, so the first is far better. And before I go on, a brief introduction should be given on Spatials vs Nodes vs Geometries:

  • A Spatial is an abstract class that contains all the data and base methods for nodes and geometries. You can store things such as position and rotation in a spatial.
  • A Geometry is an extension of Spatial, except it represents a visible object in the scene graph.
  • A Node extends Spatial and is a handle for other Spatials; it can be used to group objects in the scene graph.
So initially I wanted to add nodes to the graph to represent data; there would be a parent node called something like "Spawns", and all its children would have a local translation set. However, in the SceneComposer in the JMonkey SDK there isn't an option to add a node to the scene. But there is an easy workaround (credit goes to Normen from the JMonkey community for the inspiration behind this):

In the models folder in assets, save a new jMonkey scene (a .j3o file), but don't create a model for it; leave it blank. The empty model still has an origin, and we can place it in the scene and use it to store information. If that's unclear, double-click the file and look at the picture of the axes: when we place this empty model, the origin of those axes is its local translation. We can place these blank models around the scene, read back their locations, and that is how information gets saved into the level. I won't go into how to add these models to the scene; that's covered adeptly on the jMonkey website.

These models will be added as a child to a node added to the scene graph. I called this node "Spawn points" and loaded the list of spawn points with the following code:

Node spawnList = (Node) ((Node) level).getChild("Spawn points");
ArrayList<Vector3f> spawnPoints = new ArrayList<Vector3f>();
for (Spatial s : spawnList.getChildren())
{
    spawnPoints.add(s.getLocalTranslation().add(0, 4, 0));
}

The add(0, 4, 0) is there to prevent spawning a character at a point where it could fall through the map. There's no need to make additional kinds of blank model, of course: you can infer the purpose of a spatial from the name of its parent node.
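That parent-node-name idea generalises to one loader for every kind of level data. A sketch of what I mean is below; Marker and Group are simple stand-ins for jMonkey's Spatial and Node (and the group names are hypothetical), just to keep the example self-contained:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for a blank-model Spatial: all it carries is a position.
class Marker {
    final float[] translation;
    Marker(float x, float y, float z) { translation = new float[]{x, y, z}; }
}

// Stand-in for a named group Node such as "Spawn points" or "Patrol points".
class Group {
    final String name;
    final List<Marker> children = new ArrayList<Marker>();
    Group(String name) { this.name = name; }
}

class LevelData {
    // Collect every marker position under each named group node, so one
    // code path handles spawns, patrols, or any future data group.
    static Map<String, List<float[]>> collect(List<Group> groups) {
        Map<String, List<float[]>> out = new HashMap<String, List<float[]>>();
        for (Group g : groups) {
            List<float[]> points = new ArrayList<float[]>();
            for (Marker m : g.children) {
                points.add(m.translation.clone());
            }
            out.put(g.name, points);
        }
        return out;
    }
}
```

The consumer of the map then decides what each named list means (lifting spawn points by 4 units, building patrol areas, and so on).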

Of course this doesn't solve the patrolling issue, but I have a number of ideas:
  • Using non-blank models, with the patrol zone being the model's interior
  • Points defining the area, with the patrol area being a concave polygon encompassing all the points
  • Some other, more inventive method.
As I see it now, the point-based method has shortcomings around specifying an awkwardly shaped patrol zone, or multiple patrol zones. The model method has problems too: storing a large number of different shapes that aren't even loaded adds overhead to the project. I'm not attempting to prematurely optimise, but while this isn't a pressing concern there's no harm in mulling it over until an elegant enough solution presents itself.
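For what it's worth, the point-based option can be sketched with the standard even-odd (ray-casting) membership test, which handles concave polygons. This is illustrative only, and assumes the patrol markers are projected onto the x-z ground plane:

```java
// Hedged sketch of the point-based patrol zone: the markers' ground-plane
// coordinates form a (possibly concave) polygon, and membership is decided
// by the even-odd ray-casting rule.
class PatrolZone {
    final float[] xs, ys;    // polygon vertices, in order, on the ground plane

    PatrolZone(float[] xs, float[] ys) {
        this.xs = xs;
        this.ys = ys;
    }

    // Returns true if (px, py) lies inside the polygon: cast a ray to the
    // right and count edge crossings; an odd count means "inside".
    boolean contains(float px, float py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean crosses = (ys[i] > py) != (ys[j] > py);
            if (crosses && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```

This still doesn't address multiple zones per level, but it shows that an awkwardly shaped zone isn't a problem for the test itself, only for placing the markers.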

I've also successfully created and animated a human figure in Blender + MakeHuman. I'm going to work on some more pressing essentials before I start figuring out the best way to export/import these assets and control the animations; given my inexperience with graphics those are likely to be time-consuming, and I have other things to focus on.

Monday, 15 April 2013

An introduction

Developers don't do art. This is either going to be a name I love or one I sorely regret.

Right now I'm in the process of developing a 3D game, and I thought I'd start a development blog. The hope is that this will be informative on what to do and more than likely what not to do when developing your own game. This blog should also help me keep on track with my work, because although I don't do art I am being forced to by the nature of the project. A game needs graphics.

Technology wise I am using:
Programming:

  • Java as the language. I'm familiar with both it and C++, but graphics programming in C++ is not something I particularly want to delve into.
  • JMonkeyEngine as the engine. I chose it because it has less restrictive licensing than other engines, and I get to do a lot of the programming myself, not just a few scripts here and there.
Graphics and art:
  • Blender - basic modelling and animation
  • MakeHuman - human modelling
  • Mocapdata.com - great source for motion capture data.
Right now in the game, a level can be loaded (although not via a menu; there are no menus), and a character is introduced to it and walks around, jumps, shoots, etc. Cubes can be introduced to this world, and if they see you they run at you, shooting. Both you and the cubes have ammo, reload times and damage, and both can be killed (although you respawn). The cubes use a navmesh to navigate, and they can't see you if you're out of sight range and/or obscured by an object.

Now I'm working on making an animated character in Blender and finding out how to get it into my game. My aim is to replace the cubes with this character and make something that looks 100x better, because people just don't get as excited about evil cubes as they should.

At some point I'll get around to sharing how I implemented some of these features, and my ideas. However, I'm revising for exam season, so I'll be preoccupied.