Re: "out of Topnodes" vs "maximum number of tree-nodes reached"

From: Volker Springel <volker_at_MPA-Garching.MPG.DE>
Date: Sat, 05 Dec 2009 11:11:42 +0100

Hi Mark,

Yes, you could increase MAXTOPNODES or decrease TOPNODEFACTOR, or both.

However, you are also using far too many processors for the problem size of 10^6
particles. Going to a more reasonable number (say ~16) would help too.


Mark Baumann wrote:
> Hello,
> As I've steadily increased the number of particles in my runs, I've run
> into the following error:
> "We are out of Topnodes. Increasing the constant MAXTOPNODES might help"
> This sounds to me like I might've run out of processor memory. Is that
> the case, and if so what is the best course of action? Should I actually
> change MAXTOPNODES and recompile? Or should I just increase the number of
> processors?
> I've tried doing the latter (increasing the number of processors) because
> it seemed like the easiest solution. But then I ran into the "maximum
> number [number] of tree-nodes reached" error. I presume this is because
> my # of particles per processor dropped too low. So perhaps I've gone
> from having too few processors to too many processors.
> I've played with the number of processors and the TreeAllocFac to try to
> find a happy medium to make things run, without success. Here are details
> of the various combinations that I've tried:
> num particles = 1e6
>
> num processors = 256 up to 320, in increments of 16
>   TreeAllocFac = {0.8, 1.2, 1.5}
>   error: "out of Topnodes"
>
> num processors = 336 up to 512, in increments of 16
>   TreeAllocFac = {1.2, 1.5, 2.0}
>   error: "max number of tree-nodes reached" (in create empty nodes)
> I would be very appreciative of any suggestions that might help get me off
> the ground on a run containing 1e6 particles.
> Thank you for the help!
> Mark
Received on 2009-12-05 11:11:42

This archive was generated by hypermail 2.3.0 : 2023-01-10 10:01:31 CET