Efficiently pathfinding many flocking enemies around obstacles
I'm working on improving the pathfinding for my game's enemies. Right now, they constantly move towards the player's exact position by calculating the angle between themselves and the player and moving in that direction. I also have a flocking algorithm that prevents the enemies from stacking on top of each other, so they form up into groups rather than clipping through each other.
However, now that I've added a tile-based map, I also need the enemies to path around obstacles such as walls. I initially tried adding a separation value to "non-walkable" tiles so that the flocking algorithm would treat walls and obstacles as objects to move away from. I haven't worked out whether this is feasible: in my initial test, the enemies hit an invisible "wall" in an area with no non-walkable tiles and started jittering erratically.
I was wondering if it might be too performance-heavy to calculate a path to the player using A* and then use the flocking algorithm to prevent clumping. Originally my game was going to be a wave-based shooter, but I've decided to make it level-based in the vein of Hotline Miami instead, so I'll likely have fewer enemies, with the occasional horde, and just make them stronger.
Is this a viable solution, or is there a better algorithm that tackles both of these problems? I'm using Java with Slick2D as my game engine.
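For reference, the "angle toward the player" movement described above can be sketched like this (class and method names are illustrative, not from the actual game):

```java
public class SeekSketch {
    // Step taken toward the target each frame: compute the angle from the
    // enemy at (ex, ey) to the player at (px, py) and move along it at `speed`.
    static double[] seek(double ex, double ey, double px, double py, double speed) {
        double angle = Math.atan2(py - ey, px - ex);
        return new double[] { Math.cos(angle) * speed, Math.sin(angle) * speed };
    }
}
```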
java algorithm ai path-finding
$begingroup$
I'm working on trying to improve the pathfinding for my game's enemies. Right now, they basically just constantly move towards the player's exact position by calculating the angle between themselves and the players and moving in that direction. I also have a flocking algorithm which prevents the enemies from stacking on top of each other, so they will form up into groups rather than clip through each other.
However, now that I've added a tile-based map, I need the enemies to also be able to path around obstacles and walls for example. I initially tried adding a separation value to "non-walkable" tiles so that the flocking algorithm would consider the walls and obstacles as objects to move away from. I have yet to work out whether or not this is feasible because my initial test showed the enemies hitting an invisible "wall" where there are no non-walkable tiles, yet for some reason, they hit it and start spazzing out.
I was wondering if it might be too performance heavy to calculate a path to the player using A* and then use the flocking algorithm to prevent clumping. Originally my game was going to be a wave-based shooter, but I've decided instead to make it level-based in the vein of Hotline Miami, so it's likely I'll have fewer enemies, with the occasional horde, and just make them stronger.
Is this a viable solution? I'm using Java with Slick2D as my game engine. Or is there a better solution / algorithm that tackles both these problems?
java algorithm ai path-finding
$endgroup$
2
$begingroup$
As I described in the edit, "is this too heavy" is a question to ask your profiler, because it will depend on your implementation, target hardware, performance budget, and the context of your game — all stuff that you and your profiler know intimately but Internet strangers do not. If you want to get flocks pathfinding efficiently, we can suggest strategies to help with that, but only your own profiling can answer what's efficient enough for your needs. If you profile and identify a specific performance problem, we can also help you find how to solve that problem.
– DMGregory♦
12 hours ago
How you implement them affects performance. For instance, only running A* on leaders & relying on flocking for followers.
– Pikalek
12 hours ago
$begingroup$
I'm working on trying to improve the pathfinding for my game's enemies. Right now, they basically just constantly move towards the player's exact position by calculating the angle between themselves and the players and moving in that direction. I also have a flocking algorithm which prevents the enemies from stacking on top of each other, so they will form up into groups rather than clip through each other.
However, now that I've added a tile-based map, I need the enemies to also be able to path around obstacles and walls for example. I initially tried adding a separation value to "non-walkable" tiles so that the flocking algorithm would consider the walls and obstacles as objects to move away from. I have yet to work out whether or not this is feasible because my initial test showed the enemies hitting an invisible "wall" where there are no non-walkable tiles, yet for some reason, they hit it and start spazzing out.
I was wondering if it might be too performance heavy to calculate a path to the player using A* and then use the flocking algorithm to prevent clumping. Originally my game was going to be a wave-based shooter, but I've decided instead to make it level-based in the vein of Hotline Miami, so it's likely I'll have fewer enemies, with the occasional horde, and just make them stronger.
Is this a viable solution? I'm using Java with Slick2D as my game engine. Or is there a better solution / algorithm that tackles both these problems?
java algorithm ai path-finding
$endgroup$
I'm working on trying to improve the pathfinding for my game's enemies. Right now, they basically just constantly move towards the player's exact position by calculating the angle between themselves and the players and moving in that direction. I also have a flocking algorithm which prevents the enemies from stacking on top of each other, so they will form up into groups rather than clip through each other.
However, now that I've added a tile-based map, I need the enemies to also be able to path around obstacles and walls for example. I initially tried adding a separation value to "non-walkable" tiles so that the flocking algorithm would consider the walls and obstacles as objects to move away from. I have yet to work out whether or not this is feasible because my initial test showed the enemies hitting an invisible "wall" where there are no non-walkable tiles, yet for some reason, they hit it and start spazzing out.
I was wondering if it might be too performance heavy to calculate a path to the player using A* and then use the flocking algorithm to prevent clumping. Originally my game was going to be a wave-based shooter, but I've decided instead to make it level-based in the vein of Hotline Miami, so it's likely I'll have fewer enemies, with the occasional horde, and just make them stronger.
Is this a viable solution? I'm using Java with Slick2D as my game engine. Or is there a better solution / algorithm that tackles both these problems?
java algorithm ai path-finding
java algorithm ai path-finding
edited 19 mins ago by DMGregory♦
asked 14 hours ago by Darin Beaudreau
3 Answers
This sounds like a use case for Flow Fields.
In this technique, you do a single pathfinding query outward from your player object(s), marking each cell you encounter with the cell you reached it from.
If all your tiles/edges have equal traversal cost, then you can use a simple breadth-first search for this. Otherwise, Dijkstra's algorithm (like A* with no goal/heuristic) works.
This creates a flow field: a lookup table that associates each cell with the next step toward the closest player object from that position.
Now your enemies can each look up their current position in the flow field to find the next step in their shortest obstacle-avoiding path to the closest player object, without each doing their own pathfinding query.
This scales better and better the more enemies you have in your flock. For a single enemy it's more expensive than A*, because it searches the whole map (though you can stop early once you've reached all pathfinding agents). But as you add more enemies, they share more and more of the pathfinding cost by computing shared path segments once rather than over and over. You also gain an edge from the fact that BFS and Dijkstra's algorithm are simpler than A*, and typically cheaper to evaluate per cell inspected.
Exactly where the break-even point lies — from individual A* being cheaper, to A* with memoization being cheaper (where you re-use some results from a past pathfinding query to speed up the next one), to flow fields being cheaper — will depend on your implementation, the number of agents, and the size of your map. But if you ever plan a big swarm of enemies approaching from multiple directions in a confined area, one flow field will almost certainly be cheaper than iterated A*.
As an extreme example, you can see a video here with 20 000 agents all simultaneously pathfinding on a reasonably small grid.
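A minimal sketch of the uniform-cost (BFS) version, assuming a boolean walkability grid, might look like this (names are illustrative):

```java
import java.util.ArrayDeque;

public class FlowField {
    /**
     * Breadth-first flow field: next[y][x] holds the {x, y} of the cell an
     * agent standing on (x, y) should step to in order to move toward the
     * source (e.g. the player). Assumes uniform step cost; unreachable or
     * non-walkable cells are left null.
     */
    static int[][][] build(boolean[][] walkable, int srcX, int srcY) {
        int h = walkable.length, w = walkable[0].length;
        int[][][] next = new int[h][w][];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        next[srcY][srcX] = new int[] { srcX, srcY }; // source points at itself
        queue.add(new int[] { srcX, srcY });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            for (int[] d : dirs) {
                int nx = c[0] + d[0], ny = c[1] + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h
                        && walkable[ny][nx] && next[ny][nx] == null) {
                    // The cell we expanded from is one step closer to the source.
                    next[ny][nx] = new int[] { c[0], c[1] };
                    queue.add(new int[] { nx, ny });
                }
            }
        }
        return next;
    }
}
```

Each enemy then just reads `next[y][x]` for its current tile each frame instead of running its own search.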
edited 24 mins ago
answered 13 hours ago by DMGregory♦
This technique sounds really neat. I'll check it out.
– Darin Beaudreau
11 hours ago
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
– Ilmari Karonen
5 hours ago
A* is not performance-heavy. I would approach this by combining algorithms: run A* from time to time, and in between just check whether the next step is free to step onto or evasion is needed.
For example, track the player's distance from the current A* goal; if it exceeds a threshold, recalculate the A* path, and otherwise just update movement along the existing path. Most games use a combination of waypoints (e.g. a simplified grid for pathfinding) and logic that handles movement between waypoints with evasion steering using raycasts. Having agents run toward a distant waypoint while steering around obstacles in their immediate proximity is, in my opinion, the best approach.
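The distance-threshold repath check could be sketched like this (a sketch only; the threshold is something you'd tune for your game):

```java
public class RepathPolicy {
    // True when the player has moved farther than `threshold` from the goal
    // the current A* path was computed for, meaning the path is stale and
    // should be recomputed. Compares squared distances to avoid a sqrt.
    static boolean shouldRepath(double goalX, double goalY,
                                double playerX, double playerY, double threshold) {
        double dx = playerX - goalX, dy = playerY - goalY;
        return dx * dx + dy * dy > threshold * threshold;
    }
}
```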
It's best to work with finite state machines here and read the book "Programming Game AI By Example" by Mat Buckland. The book offers proven techniques for your problem and details the math required. Source code from the book is available on the web; the book is in C++ but some translations (including Java) are available.
With an infrequently-updating A* approach, it may be helpful to stagger your updates, maintaining a budget for how many enemies are allowed to re-path on a single frame. That way you can keep your peak pathfinding cost per frame capped, and more robustly handle many AI pathing by amortizing their total cost over several frames. An AI using a stale path for a frame or two when the budget for the frame has been exceeded, or falling back on dead reckoning if close, usually won't be disruptive.
– DMGregory♦
11 hours ago
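The per-frame budget described in this comment could be sketched as a simple queue that drains a fixed number of repath requests each frame (names are illustrative):

```java
import java.util.ArrayDeque;

public class RepathScheduler {
    private final ArrayDeque<Runnable> pending = new ArrayDeque<>();
    private final int budgetPerFrame;

    RepathScheduler(int budgetPerFrame) {
        this.budgetPerFrame = budgetPerFrame;
    }

    // An agent that wants a fresh path enqueues its repath work here.
    void request(Runnable repath) {
        pending.add(repath);
    }

    // Call once per frame: run at most budgetPerFrame queued repaths; the
    // rest wait for later frames, and those agents keep their stale paths.
    int runFrame() {
        int ran = 0;
        while (ran < budgetPerFrame && !pending.isEmpty()) {
            pending.poll().run();
            ran++;
        }
        return ran;
    }
}
```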
Not only is it feasible, I believe it was done in a commercial game in the 90s - BattleZone (1998).
That game had 3D units with free non-tile-based movement, and tile-based base construction.
This is how it seemed to work:
First, A* or something similar (likely a variant of A* with a strict limit on how long a path it can search, so it never takes too many resources to run, but doesn't always find a path all the way to the destination) would be used to find a path for a hovertank through the tile-based obstacles.
Then the tank would fly through untiled space as if it were attracted to the centre of a nearby tile on its path and repulsed by obstacles, other nearby tanks, and so on.
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
This sounds like a use case for Flow Fields.
In this technique, you do a single pathfinding query outward from your player object(s), marking each cell you encounter with the cell you reached it from.
If all your tiles/edges have equal traversal cost, then you can use a simple breadth-first search for this. Otherwise, Dijkstra's algorithm (like A* with no goal/heuristic) works.
This creates a flow field: a lookup table that associates each cell with the next step toward the closest player object from that position.
Now your enemies can each look up their current position in the flow field to find the next step in their shortest obstacle-avoiding path to the closest player object, without each doing their own pathfinding query.
This scales better and better the more enemies you have in your flock. For a single enemy, it's more expensive than A* because it searches the whole map (though you can early-out once you've reached all pathfinding agents). But as you add more enemies, they get to share more and more of the pathfinding cost by computing shared path segments once rather than over and over. You also gain an edge from the fact that BFS/Dijkdtra's are simpler than A*, and typically cheaper to evaluate per cell inspected.
Exactly where the break-even point hits, from individual A* being cheaper, to A* with memoization being cheaper (where you re-use some of the results for a past pathfinding query to speed up the next one), to flow fields being cheaper, will depend on your implementation, the number of agents, and the size of your map. But if you ever plan a big swarm of enemies approaching from multiple directions in a confined area, one flow field will almost certainly be cheaper than iterated A*.
As an extreme example, you can see a video here with 20 000 agents all simultaneously pathfinding on a reasonably small grid.
$endgroup$
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
3
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
add a comment |
$begingroup$
This sounds like a use case for Flow Fields.
In this technique, you do a single pathfinding query outward from your player object(s), marking each cell you encounter with the cell you reached it from.
If all your tiles/edges have equal traversal cost, then you can use a simple breadth-first search for this. Otherwise, Dijkstra's algorithm (like A* with no goal/heuristic) works.
This creates a flow field: a lookup table that associates each cell with the next step toward the closest player object from that position.
Now your enemies can each look up their current position in the flow field to find the next step in their shortest obstacle-avoiding path to the closest player object, without each doing their own pathfinding query.
This scales better and better the more enemies you have in your flock. For a single enemy, it's more expensive than A* because it searches the whole map (though you can early-out once you've reached all pathfinding agents). But as you add more enemies, they get to share more and more of the pathfinding cost by computing shared path segments once rather than over and over. You also gain an edge from the fact that BFS/Dijkdtra's are simpler than A*, and typically cheaper to evaluate per cell inspected.
Exactly where the break-even point hits, from individual A* being cheaper, to A* with memoization being cheaper (where you re-use some of the results for a past pathfinding query to speed up the next one), to flow fields being cheaper, will depend on your implementation, the number of agents, and the size of your map. But if you ever plan a big swarm of enemies approaching from multiple directions in a confined area, one flow field will almost certainly be cheaper than iterated A*.
As an extreme example, you can see a video here with 20 000 agents all simultaneously pathfinding on a reasonably small grid.
$endgroup$
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
3
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
add a comment |
$begingroup$
This sounds like a use case for Flow Fields.
In this technique, you do a single pathfinding query outward from your player object(s), marking each cell you encounter with the cell you reached it from.
If all your tiles/edges have equal traversal cost, then you can use a simple breadth-first search for this. Otherwise, Dijkstra's algorithm (like A* with no goal/heuristic) works.
This creates a flow field: a lookup table that associates each cell with the next step toward the closest player object from that position.
Now your enemies can each look up their current position in the flow field to find the next step in their shortest obstacle-avoiding path to the closest player object, without each doing their own pathfinding query.
This scales better and better the more enemies you have in your flock. For a single enemy, it's more expensive than A* because it searches the whole map (though you can early-out once you've reached all pathfinding agents). But as you add more enemies, they get to share more and more of the pathfinding cost by computing shared path segments once rather than over and over. You also gain an edge from the fact that BFS/Dijkdtra's are simpler than A*, and typically cheaper to evaluate per cell inspected.
Exactly where the break-even point hits, from individual A* being cheaper, to A* with memoization being cheaper (where you re-use some of the results for a past pathfinding query to speed up the next one), to flow fields being cheaper, will depend on your implementation, the number of agents, and the size of your map. But if you ever plan a big swarm of enemies approaching from multiple directions in a confined area, one flow field will almost certainly be cheaper than iterated A*.
As an extreme example, you can see a video here with 20 000 agents all simultaneously pathfinding on a reasonably small grid.
$endgroup$
This sounds like a use case for Flow Fields.
In this technique, you do a single pathfinding query outward from your player object(s), marking each cell you encounter with the cell you reached it from.
If all your tiles/edges have equal traversal cost, then you can use a simple breadth-first search for this. Otherwise, Dijkstra's algorithm (like A* with no goal/heuristic) works.
This creates a flow field: a lookup table that associates each cell with the next step toward the closest player object from that position.
Now your enemies can each look up their current position in the flow field to find the next step in their shortest obstacle-avoiding path to the closest player object, without each doing their own pathfinding query.
This scales better and better the more enemies you have in your flock. For a single enemy, it's more expensive than A* because it searches the whole map (though you can early-out once you've reached all pathfinding agents). But as you add more enemies, they get to share more and more of the pathfinding cost by computing shared path segments once rather than over and over. You also gain an edge from the fact that BFS/Dijkdtra's are simpler than A*, and typically cheaper to evaluate per cell inspected.
Exactly where the break-even point hits, from individual A* being cheaper, to A* with memoization being cheaper (where you re-use some of the results for a past pathfinding query to speed up the next one), to flow fields being cheaper, will depend on your implementation, the number of agents, and the size of your map. But if you ever plan a big swarm of enemies approaching from multiple directions in a confined area, one flow field will almost certainly be cheaper than iterated A*.
As an extreme example, you can see a video here with 20 000 agents all simultaneously pathfinding on a reasonably small grid.
edited 24 mins ago
answered 13 hours ago


DMGregory♦DMGregory
71k16 gold badges126 silver badges199 bronze badges
71k16 gold badges126 silver badges199 bronze badges
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
3
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
add a comment |
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
3
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
$begingroup$
This technique sounds really neat. I'll check it out.
$endgroup$
– Darin Beaudreau
11 hours ago
3
3
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
$begingroup$
It's possible to use a hybrid algorithm that constructs a partial flow field without searching more of the map than repeated calls to A* would, and never searching the same position twice. The basic idea is to pick an arbitrary enemy and start an A* search from the player towards that enemy, marking cells as you encounter them just like in normal flow field generation. Once the search finds that enemy, pick another enemy (that you haven't found yet) as the target, re-sort the open set according to the new heuristic and continue searching. Stop when you've found all enemies.
$endgroup$
– Ilmari Karonen
5 hours ago
add a comment |
$begingroup$
A* is not especially performance-heavy. I would approach this by mixing algorithms: run A* from time to time, and in between simply check whether the next step is free to move onto or needs evasion.
For example, track the player's distance from the A* target location; if it exceeds a threshold, recalculate A*, and otherwise just update movement along the existing path. Most games combine waypoints (a simplified grid for pathfinding) with logic that handles movement between waypoints using evasion steering and raycasts. In my opinion, the best approach is to have agents run toward a distant waypoint while maneuvering around obstacles in their immediate proximity.
Finite state machines work well here; see the book "Programming Game AI By Example" by Mat Buckland, which offers proven techniques for exactly this problem and details the required math. Source code from the book is available on the web; it's in C++, but translations (including Java) are available.
$endgroup$
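A minimal sketch of the threshold-based re-path described above (Python). The `find_path` callable stands in for whatever A* routine you already have; the agent re-plans only when the player has drifted far from the goal it last planned toward:

```python
import math

class Agent:
    """Re-runs A* only when the player has moved beyond a threshold
    from this agent's last planned goal; otherwise it follows the
    cached path. `find_path(start, goal)` is a hypothetical stand-in
    for your own A* routine."""
    def __init__(self, pos, repath_threshold=3.0):
        self.pos = pos
        self.path = []            # remaining waypoints
        self.planned_goal = None  # player position at last re-path
        self.repath_threshold = repath_threshold

    def update(self, player_pos, find_path):
        if (self.planned_goal is None or
                math.dist(self.planned_goal, player_pos) > self.repath_threshold):
            self.path = find_path(self.pos, player_pos)
            self.planned_goal = player_pos
        if self.path:
            # Local evasion/steering between waypoints would go here.
            self.pos = self.path.pop(0)
```

The threshold value is the tuning knob: larger values mean staler paths but far fewer A* calls per second across the whole horde.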
$begingroup$
With an infrequently-updating A* approach, it may be helpful to stagger your updates, maintaining a budget for how many enemies are allowed to re-path on a single frame. That way you cap your peak pathfinding cost per frame and handle many pathing AIs more robustly by amortizing their total cost over several frames. An AI using a stale path for a frame or two when the frame's budget has been exceeded, or falling back on dead reckoning when close, usually won't be disruptive.
$endgroup$
– DMGregory♦
11 hours ago
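The budgeting idea in the comment above can be sketched as a simple FIFO scheduler (Python; the class and names are illustrative, not from the comment). Agents request a re-path, and only `budget_per_frame` of them actually run pathfinding each frame:

```python
from collections import deque

class RepathScheduler:
    """Caps how many agents may re-run pathfinding per frame.
    Agents over budget keep their stale path until a later frame."""
    def __init__(self, budget_per_frame):
        self.budget = budget_per_frame
        self.queue = deque()
        self.queued = set()   # avoid queuing the same agent twice

    def request(self, agent_id):
        if agent_id not in self.queued:
            self.queue.append(agent_id)
            self.queued.add(agent_id)

    def run_frame(self, repath):
        """Call once per frame; `repath(agent_id)` does the actual A*."""
        for _ in range(min(self.budget, len(self.queue))):
            agent_id = self.queue.popleft()
            self.queued.discard(agent_id)
            repath(agent_id)
```

With, say, 50 enemies and a budget of 5, every enemy still re-paths within 10 frames, but the worst single-frame cost is a fifth of the naive all-at-once spike.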
– D3d_dev (answered 13 hours ago, edited 12 hours ago by Pikalek)
$begingroup$
Not only is it feasible, I believe it was done in a commercial game in the 90s - BattleZone (1998).
That game had 3D units with free non-tile-based movement, and tile-based base construction.
This is how it seemed to work:
First, A* or something similar (likely a variant of A* with strict limits on how long a path it can find, so it never consumes too many resources to run but doesn't always find a path all the way to the destination) would be used to find a route for a hovertank that avoided getting stuck on tile-based obstacles.
Then the tank would fly through untiled space as if it were attracted to the centre of a nearby tile on its path and repulsed by obstacles, other nearby tanks, and so on.
$endgroup$
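The attraction/repulsion movement described above can be sketched as a potential-field steering function (Python). This is my reconstruction of the general technique, not BattleZone's actual code, and the constants are illustrative guesses:

```python
import math

def steer(pos, waypoint, obstacles, attract=1.0, repulse=4.0, radius=3.0):
    """Steering force on an agent: unit attraction toward the next
    path waypoint, plus inverse-distance repulsion from any obstacle
    (or other unit) within `radius`. Positions are (x, y) tuples."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    fx, fy = attract * dx / d, attract * dy / d   # pull toward waypoint
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry)
        if 0 < r < radius:                        # push away, stronger when closer
            push = repulse * (radius - r) / (radius * r)
            fx += push * rx
            fy += push * ry
    return fx, fy
```

The coarse A* path keeps agents from getting trapped in concave tile layouts, while the local forces handle smooth avoidance of free-moving neighbours between waypoints.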
– Robyn (answered 2 hours ago, edited 36 mins ago)
Thanks for contributing an answer to Game Development Stack Exchange!
$begingroup$
As I described in the edit, "is this too heavy" is a question to ask your profiler, because it will depend on your implementation, target hardware, performance budget, and the context of your game — all stuff that you and your profiler know intimately but Internet strangers do not. If you want to get flocks pathfinding efficiently, we can suggest strategies to help with that, but only your own profiling can answer what's efficient enough for your needs. If you profile and identify a specific performance problem, we can also help you find how to solve that problem.
$endgroup$
– DMGregory♦
12 hours ago
$begingroup$
How you implement them affects performance. For instance, only running A* on leaders and relying on flocking for followers.
$endgroup$
– Pikalek
12 hours ago
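The leader/follower split mentioned in the comment above can be sketched in a few lines (Python; a crude stand-in for real flocking, which would also add separation and alignment terms). Only the leader follows an A*-planned waypoint; followers just drift toward it:

```python
def flock_step(leader_next, followers, cohesion=0.5):
    """Move each follower a fraction `cohesion` of the way toward the
    leader's next (A*-planned) position. Only the leader ever pays for
    pathfinding; followers use this cheap cohesion rule instead."""
    moved = []
    for fx, fy in followers:
        moved.append((fx + cohesion * (leader_next[0] - fx),
                      fy + cohesion * (leader_next[1] - fy)))
    return moved
```

With one leader per flock, pathfinding cost scales with the number of flocks rather than the number of enemies.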