Thursday, 13 June 2013

Enemy AI (Finite State Machine and Line of Sight)

So I feel I've made great progress recently: characters are up (unfinished) and the AI is coming along nicely. I've taken my initial prototypes of various pieces of functionality (line of sight, pathfinding) and migrated them into a finite state machine.

Each state is a singleton and they all share a unified interface with the methods CheckConditions, Start, DoAction and End, each of which takes the spatial being controlled as an argument. The state machine itself is implemented as a controller attached to the character, so through the spatial the states can get access to the state machine and the other controllers (there's a sketch of this after the method descriptions below).

The reason I chose the singleton pattern is that with a lot of characters I didn't want to create three state objects for each one, and singletons avoid that overhead; plus, with the spatial it is controlling passed in as an argument, each state can do everything I need.
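As a rough sketch of what this looks like, assuming jMonkeyEngine 3 (the class names here are illustrative rather than the exact ones in my project), the interface and one singleton state might be:

```java
import com.jme3.scene.Spatial;

// Unified interface shared by every AI state. Each method takes the spatial
// being controlled, so a single state instance can drive any number of characters.
interface AIState {
    boolean checkConditions(Spatial spatial); // should the character (still) be in this state?
    void start(Spatial spatial);              // entry logic, e.g. starting animations
    void doAction(Spatial spatial);           // per-frame behaviour and state transitions
    void end(Spatial spatial);                // exit logic, e.g. ending animations
}

// Each concrete state is a singleton, so no per-character state objects are needed.
class PatrolState implements AIState {

    private static final PatrolState INSTANCE = new PatrolState();

    public static PatrolState getInstance() { return INSTANCE; }

    private PatrolState() { }

    @Override
    public boolean checkConditions(Spatial spatial) {
        return true; // patrolling is the fallback when nothing better applies
    }

    @Override
    public void start(Spatial spatial) {
        // e.g. trigger the walk animation
    }

    @Override
    public void doAction(Spatial spatial) {
        // walk between patrol points; hand over to the attack state
        // when AttackState's conditions are met
    }

    @Override
    public void end(Spatial spatial) {
        // e.g. stop the walk animation
    }
}
```

Because a state instance holds no per-character data, everything it needs comes from the spatial passed into each call.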

So without further ado I'll go into detail about the methods:

CheckConditions(Spatial spatial): checks whether the conditions to enter the state are met. For example, while in the patrol state AttackState.checkConditions(spatial) is called, and if it returns true the next state will be the attack state, meaning the spatial will attack. The method is also used inside the AttackState class itself to check that it can still attack; if it can't, it moves back to patrol and sets the place the player was last seen as its first patrol point.

Start(Spatial spatial): runs any start-up logic needed to move into the state, including (but not limited to) starting the character's animations.

DoAction(Spatial spatial): works out what the spatial should do and sets it doing it; it's also responsible for moving to other states when their conditions are met.

End(Spatial spatial): runs any ending animations or logic.
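Tying those methods together, here's a minimal sketch of the state machine controller itself, again assuming jMonkeyEngine 3, where controllers are Controls attached to a Spatial:

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.control.AbstractControl;

// The state machine, attached to the character as a control. States reach it
// through spatial.getControl(AIStateMachine.class) when they want to transition.
public class AIStateMachine extends AbstractControl {

    private AIState currentState = PatrolState.getInstance();

    public void changeState(AIState newState) {
        currentState.end(spatial);    // run the old state's exit logic
        currentState = newState;
        currentState.start(spatial);  // run the new state's entry logic
    }

    @Override
    protected void controlUpdate(float tpf) {
        // The active state decides what the character does each frame,
        // and requests a transition itself when another state's conditions hold.
        currentState.doAction(spatial);
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // Nothing to do at render time.
    }
}
```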

Line of sight

Line of sight was one of the first things implemented in the AI. Using the AI's view direction, we find the angle between straight ahead of the AI and the direction to the player's location. If this angle is outside the field of view then the AI can't see the player, as demonstrated below:

Figure: the red line marks the field of view; the green and red triangles are the player and AI respectively.
If the player is inside the AI's field of view, a ray test is then done from the centre of the AI towards the player. The ray will collide with the AI's own geometry, so if the number of entities the ray collides with is greater than one there is geometry between the AI and the player, meaning the player is obscured from view.
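A minimal sketch of that check, assuming jMonkeyEngine 3; the 90° field of view, the canSee name and the node passed in are all illustrative, and it assumes the player's own geometry isn't part of the node being tested (otherwise it would count as a second hit):

```java
import com.jme3.collision.CollisionResults;
import com.jme3.math.FastMath;
import com.jme3.math.Ray;
import com.jme3.math.Vector3f;
import com.jme3.scene.Node;
import com.jme3.scene.Spatial;

public class LineOfSight {

    // Illustrative field of view: 45 degrees either side of straight ahead.
    private static final float HALF_FOV = FastMath.QUARTER_PI;

    public static boolean canSee(Spatial ai, Vector3f viewDirection,
                                 Vector3f playerPos, Node scene) {
        // Direction from the AI's centre to the player.
        Vector3f toPlayer = playerPos.subtract(ai.getWorldTranslation()).normalizeLocal();

        // 1. Field-of-view check: angle between straight ahead and the player.
        float angle = viewDirection.normalize().angleBetween(toPlayer);
        if (angle > HALF_FOV) {
            return false; // player is outside the field of view
        }

        // 2. Ray test from the AI's centre towards the player. The ray always
        //    hits the AI's own geometry, so more than one collision means
        //    something else sits between the AI and the player.
        Ray ray = new Ray(ai.getWorldTranslation(), toPlayer);
        CollisionResults results = new CollisionResults();
        scene.collideWith(ray, results);
        return results.size() <= 1;
    }
}
```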

Right now this system just returns true or false; however, I'm planning to implement a 2D Gaussian function so that the AI is more confident the closer the player is and the nearer they are to the centre of its vision. This will mean that when the player is some distance away and creeping around the edges there will be a chance he/she remains unseen. The AI's response could then be dictated by numerous factors such as aggression, fear or other emotional scores I could give it.
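For reference, the sort of confidence curve I have in mind would look something like this sketch (the class name and sigma values are placeholders and would need tuning):

```java
// A possible shape for that confidence score: a 2D Gaussian over distance and
// angle from the centre of vision.
public class DetectionConfidence {

    private static final double DISTANCE_SIGMA = 15.0;       // world units
    private static final double ANGLE_SIGMA = Math.PI / 4.0;  // radians

    /** Returns a value in (0, 1]: highest when the player is close and dead ahead. */
    public static float confidence(float distance, float angleFromCentre) {
        double d = (distance * distance) / (2.0 * DISTANCE_SIGMA * DISTANCE_SIGMA);
        double a = (angleFromCentre * angleFromCentre) / (2.0 * ANGLE_SIGMA * ANGLE_SIGMA);
        return (float) Math.exp(-(d + a));
    }
}
```

The AI could then compare this score against a threshold that shifts with its aggression or fear, rather than relying on a hard true/false.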

That's a stretch goal, though. For now my AI can see the player and move between states: it switches between attack and patrol and responds in a semi-realistic manner. Now it's just a question of tailoring the system and augmenting it, something relatively easy with the highly modular setup I'm using.

All in all very happy =D
