What led to creating a game demo? I have long been interested in math formulas and algorithms, AI, game logic, and programming. I find myself working on a lot of business applications that have required me to familiarize myself with many different formulas, such as the Haversine and the more accurate Vincenty’s formulae, 3D formulas, geospatial and other GPS data, and now LiDAR data is on the horizon as well. As a natural convergence of these interests, I find myself working a lot with 3D points and calculations, which has led to a growing tangent: my mind often drifts back to the days of 3D level design and games in general.
Navigation and Behavior – Heuristics and Psychology
I always have a desire to understand, and I believe that the best way to obtain understanding is to immerse yourself in the task or problem at hand. As part of a general desire to understand the mechanics behind things, I delved into a research paper on A* pathfinding and soon set out to create an algorithm of my own. To test this algorithm, I made a simple demo window to display small paths. As I refined the algorithm, I wanted more complex and variable maps to test with.
The solution I chose was to create maps in Excel in a paint by number fashion. This allowed custom map layouts and sizes to be created as well as an effortless way to change terrain.
These numbered cells correlate to several layers of data. At the top is graphic information: texture tiles. Cascading down are movement-speed data for each type of terrain, cost values, common pathways, and danger zones. Movement-speed data is used in scenarios where players should move quicker on a roadway than in grasslands. Cost values provide penalties for terrains; for example, roadways have a higher cost than crosswalks. Common pathways aid in navigation, and danger zones track areas of previous trouble. All of these help shape the ‘players’’ navigation behavior. As the ‘players’ travel along their paths, data is collected and analyzed to form pathways between certain nodes.
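To make the layered-data idea concrete, here is a minimal sketch of how a numbered spreadsheet cell might resolve into its stacked attributes. The structure names, file names, and specific numbers are my own illustrative assumptions, not the demo's actual values.

```python
from dataclasses import dataclass

@dataclass
class Terrain:
    texture: str         # top layer: which tile image to draw
    speed: float         # movement-speed multiplier (roads > grasslands)
    cost: int            # pathfinding penalty for stepping onto this tile
    danger: float = 0.0  # accumulated "previous trouble" in this area

# Hypothetical paint-by-number legend: spreadsheet cell value -> terrain.
LEGEND = {
    0: Terrain("grass.png",     speed=1.0, cost=40),
    1: Terrain("road.png",      speed=1.6, cost=30),   # faster, but costed
    2: Terrain("crosswalk.png", speed=1.2, cost=20),   # cheaper than road
}

def tile_at(grid, x, y):
    """Resolve a numbered cell to its full stack of terrain layers."""
    return LEGEND[grid[y][x]]

grid = [[0, 1, 1],
        [0, 2, 1]]
print(tile_at(grid, 1, 0).cost)  # 30: the road tile's cost layer
```

One spreadsheet number fanning out into several attributes is what makes the paint-by-number workflow effortless: repainting a cell reskins the graphics, speed, and cost layers all at once.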
Beyond the heuristic navigation, the ‘players’’ movements are affected by other factors as well. Needs and desires such as hunger, sleepiness, and resources are factored into decisions, as is the time of day.
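A simple way to picture this is a scoring function where each candidate action is weighed against current needs and the clock. The actions, weights, and thresholds below are invented for illustration; the demo's real decision logic is surely richer.

```python
def choose_action(hunger, sleepiness, hour):
    """Pick an action from needs (each 0.0-1.0) and the hour of day."""
    night = 1.0 if hour >= 22 or hour < 6 else 0.0
    scores = {
        "eat":   hunger,
        "sleep": sleepiness * 0.7 + night * 0.3,  # nighttime nudges sleep
        "work":  1.0 - max(hunger, sleepiness),   # free of pressing needs
    }
    return max(scores, key=scores.get)            # most urgent wins

print(choose_action(hunger=0.9, sleepiness=0.2, hour=14))  # "eat"
```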
Another design element I decided upon early in the process was psychology. Using Robert Plutchik’s theory as a basis for classifying emotions, I categorized the major emotions and their lesser variants. I coded emotional responses to work, interactions, eating, etc. These emotional responses help guide the ‘players’’ interactions with each other, as well as their work performance.
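Plutchik’s wheel groups emotions into eight primaries, each spanning a mild-to-intense range. The intensity tiers below follow Plutchik’s model; the event-to-emotion responses are my own hypothetical stand-ins for the demo’s coded reactions.

```python
# Plutchik's eight primary emotions, each as (mild, primary, intense).
PLUTCHIK = {
    "joy":          ("serenity", "joy", "ecstasy"),
    "trust":        ("acceptance", "trust", "admiration"),
    "fear":         ("apprehension", "fear", "terror"),
    "surprise":     ("distraction", "surprise", "amazement"),
    "sadness":      ("pensiveness", "sadness", "grief"),
    "disgust":      ("boredom", "disgust", "loathing"),
    "anger":        ("annoyance", "anger", "rage"),
    "anticipation": ("interest", "anticipation", "vigilance"),
}

def react(event):
    """Hypothetical mapping from a game event to an emotion at a tier."""
    responses = {
        "meal":     ("joy", 1),       # eating produces plain joy
        "overwork": ("anger", 0),     # too much work produces annoyance
        "praise":   ("trust", 2),     # praise produces admiration
    }
    primary, tier = responses.get(event, ("anticipation", 1))
    return PLUTCHIK[primary][tier]

print(react("meal"))  # "joy"
```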
Math and Graphics
In order to properly show the ‘players’, I learned about the GDI+ (graphics device interface) libraries. This allowed me to use images with transparency, create a tile-based map in memory, render players, data, and solids, and scroll through and highlight ‘players’. A highlighted character’s stats are displayed to the right, showing the ‘player’s’ real-time emotion, health, wealth, and world stats.
Learning the GDI+ libraries was also an immense help in other business applications. This exercise expanded my knowledge of the libraries and of basic concepts such as creating screen graphics in memory, image rotation, image scaling, and transparency.
Of course, a good deal of math is involved in calculating the data needed to draw a screen. In this exercise, I utilized formulas for radial searches (to locate nearby ‘players’ and entities), distance calculations, angular calculations, and weighted randoms, among other functions and formulas.
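Sketched out, those helpers look something like the following. The function names and signatures are mine, not the demo's, but the underlying formulas (Euclidean distance, atan2 bearing, radius filtering, weighted selection) are standard.

```python
import math
import random

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def angle_to(a, b):
    """Angle from a to b in degrees; 0 = east, counter-clockwise."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def within_radius(origin, entities, r):
    """Radial search: every entity within r units of origin."""
    return [e for e in entities if distance(origin, e) <= r]

def weighted_choice(options):
    """options: list of (item, weight); heavier items are picked more often."""
    items, weights = zip(*options)
    return random.choices(items, weights=weights, k=1)[0]

print(distance((0, 0), (3, 4)))                    # 5.0
print(within_radius((0, 0), [(1, 1), (9, 9)], 5))  # [(1, 1)]
```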
All of these advances introduced a number of difficulties along the way, most notably issues with the order of operations in regard to moving, rendering, applying game logic, and calculating paths for XX ‘players’ 30 times a second. The desire to keep a constant screen refresh rate (FPS) required me to learn to limit calculation times, and I am currently tinkering with dividing calculations between screen refreshes.
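One common approach to dividing calculations between refreshes is a per-frame time budget: work runs until the budget is spent, and the remainder carries over to the next frame. This sketch is my own construction under that assumption, not the demo's code.

```python
import time

def budgeted(tasks, budget_s=(1 / 30) * 0.5):
    """Run queued callables until half a 30 FPS frame is spent;
    return whatever did not fit so the next frame can continue."""
    start = time.perf_counter()
    pending = list(tasks)
    while pending and time.perf_counter() - start < budget_s:
        pending.pop(0)()   # e.g. one 'player's' path recalculation
    return pending         # leftovers roll over to the next refresh

leftover = budgeted([lambda: None] * 10)
print(len(leftover))       # trivial tasks easily fit one frame's budget
```

The payoff is a steady FPS: rendering never waits on a long pathfinding batch, because the batch is sliced to fit between refreshes.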
The actual A* (A-Star) algorithm has been interesting to work with and interesting to watch. The process is really split into two separate functions. The first function seeks out the goal point and can be thought of as a ‘player’ who has to pay for each step they take. This ‘player’ wants to pay the least to reach their goal.
To the left we can see each tile has a total cost in the upper left. In the image, the green tile is the ‘player’. Each possible step is assigned a value or cost.
We can see the cheapest step is directly right, the next cheapest step would be directly down. The next cheapest step after that would be directly down again. However, we can see that the most efficient and cheapest path over all is to step from the green tile diagonally down and to the right, then directly down.
The unrefined path costs 168 over 3 steps; the refined path costs 128 over 2 steps. Dividing cost by steps shows the trade: the unrefined path pays 56 cost units per step versus 64 for the refined path, but the refined path’s diagonal step covers more ground per move, so it completes the same journey for roughly 24% less total cost.
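The arithmetic above, spelled out:

```python
# Comparing the two candidate paths from the example figures.
unrefined_total, unrefined_steps = 168, 3
refined_total, refined_steps = 128, 2

print(unrefined_total / unrefined_steps)    # 56.0 cost units per step
print(refined_total / refined_steps)        # 64.0 cost units per step
print(1 - refined_total / unrefined_total)  # ~0.238: ~24% cheaper overall
```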
In basic terms, path creation is accomplished by moving in the general direction of the goal until you hit a wall, then going whichever way is still closest to the goal. A good deal of calculation provides insight into where to go and when. These insights can be as simple as direction and distance from the goal, or as complex as you want. For example, rocks cost more to walk on than grass, and grass costs more to walk on than the street.
The logic is as follows. A ‘player’ wants to reach a destination. The ‘player’ walks much as a real person would, constantly looking for obstacles and easier paths as the route is traversed. At each step the ‘player’ takes, there is a search for walls and cheaper tiles to travel on, and values are assigned to every possible step. All of those values are stored for the next step. Once the final step is reached, the second process starts.
In the second process, the function looks back at all of the valid steps that can complete the goal and locates the most efficient path among them. What is left is a workable, highly efficient solution that connects two random points across a field / maze of obstacles.
Graphically, the two processes look like a lightning strike: the first function seeks out a ground, and the second function sends the charge up the completed path from the ground. Much like the image to the left, some paths are dead ends and are not used in the final path back to the origin.
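The two-function process described above can be sketched as a compact grid A*: the forward search pays for each step onto a tile while recording where each tile was reached from, and once the goal is popped, the backtrace walks those records home. The grid encoding, heuristic, and movement rules here are simplified stand-ins for the demo's terrain layers, not its actual implementation.

```python
import heapq

def a_star(grid, start, goal):
    """grid[y][x] = cost to step onto that tile; 0 means a wall.
    Positions are (x, y). Returns the path start..goal, or None."""
    def h(p):  # heuristic: minimum step count to goal (diagonals allowed)
        return max(abs(goal[0] - p[0]), abs(goal[1] - p[1]))

    open_set = [(h(start), 0, start)]  # (estimated total, paid so far, tile)
    came_from, best = {}, {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                # second process: trace the path back
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dx in (-1, 0, 1):          # first process: pay for each step
            for dy in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                nxt = (cur[0] + dx, cur[1] + dy)
                if not (0 <= nxt[1] < len(grid) and 0 <= nxt[0] < len(grid[0])):
                    continue
                step = grid[nxt[1]][nxt[0]]
                if step == 0:          # wall: dead end for this branch
                    continue
                ng = g + step
                if ng < best.get(nxt, float("inf")):  # cheaper way found
                    best[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None                        # no path exists

grid = [[1, 1, 1],
        [0, 0, 1],   # a wall forces the path around the right side
        [1, 1, 1]]
print(a_star(grid, (0, 0), (0, 2)))
```

Branches that hit walls simply stop being extended, which is exactly the dead-end tendrils of the lightning-strike picture; only the tiles on the traced-back path survive into the final route.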
What More Can Be Tinkered With?
Moving forward from this project, I look forward to working with D* pathfinding. This process seems to be a natural evolution of what I am working on now in terms of segmented pathfinding, whereby a ‘player’ walks toward a goal until reaching an obstacle; at that point, the ‘player’ solves the navigation problem and resumes walking. By segmenting calculations to further increase response speeds, adopting D* pathfinding, and further developing pathways between extended nodes, I expect to find further efficiencies.