Re: work-load imbalance

From: Volker Springel <volker_at_MPA-Garching.MPG.DE>
Date: Fri, 03 Nov 2006 19:47:02 +0100

Hi Pedro,

the amount of work-load imbalance you experience will very much depend
on the type of problem you simulate, as well as on the particle
number/resolution and on the number of processors you're using. In
general, systems with only one or a few high-density regions (say an
individual halo, or a galaxy merger) are much harder in this respect
than a cosmological volume with lots of halos.

The scalability of Gadget2 is generally limited by work-load imbalance,
not communication times. Too small values of BufferSize and
PartAllocFactor can make things much worse, but once these parameters
are set large enough, a further increase won't make a difference, and
the scalability for certain isolated systems can remain quite poor. For
a larger particle number at fixed processor number things normally get
a bit better, but a substantial improvement on this requires
algorithmic changes in the code.
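For illustration, the relevant fragment of the parameterfile could look
like this (the values are only indicative; suitable numbers depend on
the available memory per task and on your particle load):

  BufferSize          100    % communication buffer size in MByte
  PartAllocFactor     2.5    % allowed memory imbalance per task

A larger PartAllocFactor gives the domain decomposition more freedom to
balance the work across tasks, at the price of a higher memory
footprint per processor.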


Pedro Colin wrote:
> Hi,
> Does anyone have a sort of recipe to optimize the work-load balance
> and thus improve multi-CPU performance? I just got a quad with
> dual-core 64-bit Opteron processors at 2.0 GHz, so 8 procs in total.
> When I compare the wall time of this computer with a dual Opteron at
> 2.4 GHz (single core) in a Gadget2 simulation (N-body only), I only
> get a factor of 2 gain. When I look at timings.txt I see that this is
> due to work-load imbalance. The procs in the quad run on average twice
> as slowly as those in the dual. I have made some changes to
> *BufferSize* and *PartAllocFactor* and have got some improvement, but
> I still think I am not getting the most out of it.
> Cheers,
> Pedro
Received on 2006-11-03 19:47:02

This archive was generated by hypermail 2.3.0 : 2023-01-10 10:01:30 CET