Re: Problems with treebuild -- setting the TREE_NUM_BEFORE_NODESPLIT

From: Robin Booth <robin.booth_at_sussex.ac.uk>
Date: Thu, 7 Oct 2021 09:28:58 +0000

Hi Volker

Thanks for that explanation on the use of ErrTolTheta. It is now clear to me how to use this parameter for specifying SUBFIND accuracy.
We did have the LIGHTCONE_PARTICLES_GROUPS config option enabled for our run so, as you pointed out, this in combination with a too-low value for ErrTolTheta would account for the excessive run times we encountered.

With your explanation of the bug in restarting from a restart file created immediately after a snapshot dump, I went back and re-analysed the scale factors of the snapshot files from our GADGET4 run, and it was then easy to spot where the extra snapshots had been inserted. Stripping these out, the scale factors for the remaining snapshots do indeed line up very closely with the time (a) values specified in the snapshot list file, so no need for any further investigation on your part. Thanks for identifying and resolving this issue.

Although I am continuing to analyse the various outputs from our recent GADGET4 run, I believe that we have learned enough from this exercise to be reasonably confident about proceeding with a more extensive suite of simulations as part of the ongoing Virgo program.

Thanks again for your help in getting us this far.

Regards

Robin
________________________________
From: Volker Springel <vspringel_at_MPA-Garching.MPG.DE>
Sent: 05 October 2021 22:31
To: Gadget General Discussion <gadget-list_at_MPA-Garching.MPG.DE>
Subject: Re: [gadget-list] Problems with treebuild -- setting the TREE_NUM_BEFORE_NODESPLIT


Hi Robin,

> On 22. Sep 2021, at 14:11, Robin Booth <robin.booth_at_sussex.ac.uk> wrote:
>
> Hi Volker
>
> Many thanks for the comments and explanations.
>
> I can confirm that my recent run did not encounter the same bug as the one that affected Weiguang as
> SUBFIND: Number of FOF halos treated with collective SubFind algorithm was always =0 in my case.
>
> Based on your comments on the setting of the ErrTolTheta parameter, I think I have found the root of the problem. In my simulation this was set to 0.5, which was by design, as part of the objective was to do a comparison between the output from a previous GADGET-2 run and this run with the new GADGET-4 code, using the same ICs and parameters. However, in GADGET-2 there is a separate parameter, ErrTolThetaSubfind, which was set to the lower accuracy of 0.7 in that earlier simulation. This parameter does not appear to exist in GADGET-4, and hence SUBFIND seems to have been run at the unreasonably high accuracy of ErrTolTheta = 0.5 by default, thus largely accounting for the high CPU hours for this step. Was there any specific rationale behind the removal of ErrTolThetaSubfind in GADGET-4? If not, would it be easy enough to reinstate it in a future update?


ErrTolThetaSubfind was removed to reduce redundancy in the parameter set and to simplify it. Note that normally ErrTolTheta is not in use for the force computation, as the better approach is to use the relative opening criterion. In that case, ErrTolTheta is used only for an initial estimate of the force. As this estimate is subsequently discarded and not used to push particle momenta, its accuracy doesn't need to be high, so it really doesn't matter whether you compute the first force estimate with 0.5 or 0.7 - or with whatever value you want to use for Subfind, for that matter. This is why ErrTolTheta effectively takes over the role of the old ErrTolThetaSubfind, and two separate parameters are not really needed.
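For reference, the two criteria work roughly as follows (a minimal sketch of the textbook forms, not the actual Gadget-4 implementation; the function names and the exact shape of the relative criterion are illustrative only, with the gravitational constant set to 1 in code units):

```cpp
#include <cmath>

// Fixed-angle (Barnes-Hut) criterion: open a node of side length 'len' seen at
// distance 'r' if it subtends more than the angle ErrTolTheta.
bool open_node_fixed_theta(double len, double r, double ErrTolTheta)
{
  return len > ErrTolTheta * r;
}

// Relative criterion: open the node if the estimated multipole truncation error
// exceeds a fraction ErrTolForceAcc of the particle's previous acceleration.
// Schematically: M * len^2 / r^4 > ErrTolForceAcc * |a_old|
bool open_node_relative(double node_mass, double len, double r,
                        double a_old, double ErrTolForceAcc)
{
  return node_mass * len * len > ErrTolForceAcc * a_old * r * r * r * r;
}
```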


> Looking at the GADGET-4 paper, would I be right in assuming that specifying lightcone output with the SUBFIND flag set will cause the SUBFIND process to run whenever a new batch of particles is dumped to the lightcone, and not solely on snapshot output? If so, presumably this would exacerbate the ErrTolTheta issue?

This will only happen if you activate LIGHTCONE_PARTICLES_GROUPS in addition to LIGHTCONE_PARTICLES. But if you do, this will indeed exacerbate the group finding costs for a low value of ErrTolTheta quite a bit.


> One (hopefully!) final question relating to this run. I specified OutputListOn = 1 and provided a file containing an input list of 65 scale factor (a) values for which snapshot output was required, whereas in the run itself, 69 snapshots were generated, with none of the a values corresponding to the ones in the input list,

Gadget4 by default only outputs at times that are synchronized with global timesteps (i.e. when all timesteps of the hierarchy have completed), and that are aligned with the power-of-two subdivision of the simulated timespan induced by MaxSizeTimestep. If you specify an output list, every time in this list is mapped to the closest such possible time, and thus the difference from your desired output time will be at most +/- 0.5 * MaxSizeTimestep. (In cosmological integrations, this refers to differences in log(a).)
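In other words, the desired output times are snapped to a discrete grid in log(a). Schematically (a simplified sketch assuming a uniform grid of spacing MaxSizeTimestep in log(a), not the actual integer-timeline code):

```cpp
#include <cmath>

// Map a desired output scale factor a_out onto the nearest synchronization
// point of the timestep grid starting at TimeBegin.
double snap_output_time(double a_out, double TimeBegin, double MaxSizeTimestep)
{
  double dloga = MaxSizeTimestep;  // maximum step, in log(a) for cosmological runs
  double n = std::round((std::log(a_out) - std::log(TimeBegin)) / dloga);
  // the snapped time differs from a_out by at most 0.5 * dloga in log(a)
  return std::exp(std::log(TimeBegin) + n * dloga);
}
```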

It is possible in principle to override this behaviour with the OUTPUT_NON_SYNCHRONIZED_ALLOWED option. Then outputs occur precisely at the prescribed output times, with particles being (linearly) drifted to this time, while velocities stay at the values they had after the last kick (which is what Gadget2 did).

The fact that you got 69 instead of 65 outputs stems from a hideous bug that I have just fixed in the code. If a restart set was written right after creating a snapshot dump, and the code was later resumed from this restart, a spurious extra snapshot dump not specified in the list could be created, with a temporal spacing of MaxSizeTimestep after resuming the run. The reason for this is somewhat contrived. After a restart, the code always re-determines the next desired snapshot time (because the user may have added/removed desired output times...). While the code logic tried to correctly omit an output time that had just been produced, the intended corresponding line in begrun2(),
All.Ti_nextoutput = find_next_outputtime(All.Ti_Current + 1);
was unfortunately not executed; instead, the code executed the other if-branch,
All.Ti_nextoutput = find_next_outputtime(All.Ti_Current);
allowing a closely spaced, unintended "double output" to be created under the above circumstances. This is most likely what happened in your case and gave you the four extra dumps...
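(For completeness, the intended logic in begrun2() is essentially of the following form - my own schematic rendering, not a verbatim excerpt, and the flag name is made up:)

```cpp
// after resuming from a restart file, determine the next desired output time
if(restart_followed_snapshot_dump)   /* hypothetical flag: a dump was written just before the restart set */
  All.Ti_nextoutput = find_next_outputtime(All.Ti_Current + 1);  // skip the output time already produced
else
  All.Ti_nextoutput = find_next_outputtime(All.Ti_Current);      // the branch that was taken by mistake
```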

> even accounting for the fact that the code uses the timestep that is closest to the specified a value when doing a snapshot dump. Nor were the actual a values following any apparent regular sequence, e.g. equal increments of log(a).

This I don't quite believe. If you send me your TimeBegin, TimeMax, MaxSizeTimestep settings, as well as your input output times, plus the list of actual output times created, I'm happy to check this and show that the output times created (well, 65 of them), are directly induced by your output list in the above sense.

> I was wondering whether this was due to the requirement for lightcone particle output overriding the specified scale factor values?

No, this should have no influence on this.

Best,
Volker

>
> Regards
>
> Robin
> From: Volker Springel <vspringel_at_MPA-Garching.MPG.DE>
> Sent: 18 September 2021 10:02
> To: Gadget General Discussion <gadget-list_at_MPA-Garching.MPG.DE>
> Subject: Re: [gadget-list] Problems with treebuild -- setting the TREE_NUM_BEFORE_NODESPLIT
>
>
> Hi Robin,
>
> > On 14. Sep 2021, at 15:32, Robin Booth <robin.booth_at_sussex.ac.uk> wrote:
> >
> > Thankfully my Gadget4 run has now completed after many thousands of CPU hours, and the results I have analysed so far are all looking good. Before doing any further runs, I need to find a solution to the SUBFIND issue referred to in my email of 2 September. Is there any possibility that the bug referred to in your recent email to Weiguang could have any bearing on the problems I have been encountering with long SUBFIND FoF steps, as I too have been using output to lightcone? If not, any suggestions for parameter settings that would alleviate the problem?
>
> As I wrote in my previous note, I think the problem was caused by the setting for ErrTolTheta that you had used.
>
> The bug that Weiguang encountered should not have affected you, unless the collective Subfind algorithm was invoked when the code worked on lightcone particles. You can check in the output log-files for this line:
> SUBFIND: Number of FOF halos treated with collective SubFind algorithm = 0
> If the number given here was always 0 for you, you have not been affected by the bug. If it was non-zero anywhere, it depends on whether the corresponding Subfind run was for a regular timeslice or for particles on the lightcone. Only in the latter case was the bug active.
>
> >
> > On the subject of lightcone halo outputs, I was somewhat disconcerted to find that the simulation did not generate discrete lightcone group directories for each (of the two) lightcones defined in my run. The output appears to be a combination of the two sets of halo data - one for a full sky lightcone and one for an octant - see image below:
> > <Halo lightcone 2.jpg>
> >
> > Further analysis indicated that the output halo file is a union of the two datasets (as opposed to a sum), so the appropriate data can relatively easily be extracted. Nevertheless, a mention of this somewhere in the documentation would, I think, be helpful to future users thinking about enabling lightcone halo output.
> >
>
> Yes, this is right. The group finding on the lightcone particles is currently done on the geometric union of all the lightcones that you specify, and there is only one group catalogue produced for this at the moment. The group finding feature is mostly meant for all-sky lightcones because if you have deeper particle lightcones that are not covering the full sky, there is the issue of what happens with groups that intersect the geometric boundaries of the lightcone (transverse to the line of sight). Note that the code will only find those parts of groups that fall within the geometric boundaries of the lightcone(s), i.e. groups intersected by lightcone boundaries can be incomplete, and this may need to be taken into account in quantitative analysis, for example by discarding halos that touch the corresponding boundary. I'll add a note about this in the documentation.
>
> Regards,
> Volker
>
>
> > Regards
> >
> > Robin
> > From: Volker Springel <vspringel_at_MPA-Garching.MPG.DE>
> > Sent: 13 September 2021 18:06
> > To: Gadget General Discussion <gadget-list_at_MPA-Garching.MPG.DE>
> > Subject: Re: [gadget-list] Problems with treebuild -- setting the TREE_NUM_BEFORE_NODESPLIT
> >
> >
> > Dear Weiguang,
> >
> > I have now found the cause of the problem you experienced with SUBFIND. It originated in a small bug in src/subfind/subfind_distribute.cc, which is used only when a halo must be processed by more than one core ("collective subfind"). The routine was correct for normal particle data, but did not work correctly for particles stored on a lightcone, because of a missing templating of a sizeof() statement in an MPI call. It turns out that your run triggered the bug, yielding corrupted particle positions (all zeros), which then led to a failed tree construction. The suggestion by the code's error message to increase TREE_NUM_BEFORE_NODESPLIT to fix this was a red herring...
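> > (Schematically, the class of bug is of this kind - a made-up minimal example, not the actual subfind_distribute.cc code: the exchange routine is templated over the particle type, but a sizeof() inside the MPI call still named the default type, so the byte count was wrong once the routine got instantiated for lightcone particles.)
> >
> > ```cpp
> > #include <mpi.h>
> >
> > struct particle_data  { double pos[3]; double vel[3]; long long id; };  // hypothetical
> > struct lightcone_data { double pos[3]; float  vel[3]; long long id; };  // hypothetical
> >
> > template <typename partset>
> > void exchange(partset *send_buf, partset *recv_buf, int count,
> >               int dest, int src, MPI_Comm comm)
> > {
> >   // buggy variant: byte count computed as count * sizeof(particle_data),
> >   // which is only correct for the default instantiation; the templated
> >   // form below is right for lightcone_data as well
> >   MPI_Sendrecv(send_buf, count * (int)sizeof(partset), MPI_BYTE, dest, 0,
> >                recv_buf, count * (int)sizeof(partset), MPI_BYTE, src, 0,
> >                comm, MPI_STATUS_IGNORE);
> > }
> > ```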
> >
> > When increasing TREE_NUM_BEFORE_NODESPLIT, you then ran, as a distraction, into another issue in the fmm routine, due to the sizing of some communication buffers. While you could circumvent this with the changes you tried below, these are not good fixes for the problem. So I have now fixed this more universally in the code.
> >
> > I note that setting TREE_NUM_BEFORE_NODESPLIT to a large number like 96 is not normally recommended. This will be quite slow (as you found, I think), because it reduces the number of node-level interactions that can be computed, while driving up the number of particle-particle interactions. In the limit of an extremely large TREE_NUM_BEFORE_NODESPLIT, one gets ever closer to direct summation.
> >
> > Cheers,
> > Volker
> >
> >
> > > On 9. Sep 2021, at 17:26, Weiguang Cui <cuiweiguang_at_gmail.com> wrote:
> > >
> > > Dear Volker,
> > >
> > > For the problem of the MPI_Sendrecv call in SUBFIND, I think it happened in this function `SubDomain->particle_exchange_based_on_PS(SubComm);` -- line 101 in subfind_processing.cc. After replacing "MPI_Sendrecv" with "myMPI_Sendrecv" within this function in the file `domain_exchange.cc`, the code no longer reports the MPI_Sendrecv error.
> > >
> > > However, the original SUBFIND problem shows up again:
> > > ```
> > > Code termination on task=2, function treebuild_insert_group_of_points(), file src/tree/tree.cc, line 489: It appears we have reached the bottom of the tree because there are more than TREE_NUM_BEFORE_NODESPLIT=96 particles in the smallest tree node representable for BITS_FOR_POSITIONS=64 .
> > > Either eliminate the particles at (nearly) indentical coordinates, increase the setting for TREE_NUM_BEFORE_NODESPLIT, or possibly enlarge BITS_FOR_POSITIONS if you have really not enough dynamic range
> > > ```
> > > As you can see, I have increased TREE_NUM_BEFORE_NODESPLIT to 96. Increasing this value to 128 requires an enlargement of MaxOnFetchStack in fmm.cc, which caused memory problems. Here is my current setting:
> > > `MaxOnFetchStack = std::max<int>(50 * (Tp->NumPart + NumPartImported), 9 * TREE_MIN_WORKSTACK_SIZE);`
> > > If you suggest an even larger value, I can only restart from a snapshot and change the number of nodes.
> > > By the way, the code runs a little bit slowly with a large value of TREE_MIN_WORKSTACK_SIZE.
> > >
> > > I have rsynced the recent run slurm.3774931.out to the m200n2048-dm/ for your reference.
> > >
> > > Thank you for the comment on the lightcone thickness; my mistake for failing to notice the unit. I hope that is not connected to the SUBFIND problem, and I have increased the value to 2 Mpc/h.
> > >
> > > Thank you for your help!
> > >
> > > Best,
> > > Weiguang
> > >
> > > -------------------------------------------
> > > https://weiguangcui.github.io/
> > >
> > >
> > > On Wed, Sep 8, 2021 at 8:33 PM Volker Springel <vspringel_at_mpa-garching.mpg.de> wrote:
> > >
> > > Dear Weiguang,
> > >
> > > Sorry for my sluggish answer. Too many other things on my plate.
> > >
> > > I think the crash you experienced in an MPI_Sendrecv call in SUBFIND most likely happens in line 270 of the file src/subfind/subfind_distribute.cc, because this call is not yet protected against transfer sizes that exceed 2 GB in total... For the particle number and setup you're using, you actually have a particle storage of ~1.52 GB or so per task on average. With a memory imbalance of ~30% (which you just about reach according to your log file), it is possible that you exceed the 2 GB at this place, causing the native call of MPI_Sendrecv to fail.
> > >
> > > If this is indeed the problem, then replacing "MPI_Sendrecv" with "myMPI_Sendrecv" in lines 270 to 274 should fix it. I have also made this change in the code repository.
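> > > (The generic idea behind such a wrapper is simply to split the transfer into chunks whose byte count fits into the 32-bit int count argument of MPI_Sendrecv - a rough sketch of that idea, not the actual myMPI_Sendrecv implementation:)
> > >
> > > ```cpp
> > > #include <mpi.h>
> > > #include <algorithm>
> > > #include <cstddef>
> > >
> > > // 2GB-safe exchange: both ranks chunk their streams identically, so the
> > > // per-message counts always match on the sending and receiving side
> > > void sendrecv_large(const char *sendbuf, size_t sendbytes, int dest,
> > >                     char *recvbuf, size_t recvbytes, int src,
> > >                     int tag, MPI_Comm comm)
> > > {
> > >   const size_t chunk = ((size_t)1) << 30;  // 1 GB pieces, well below 2 GB
> > >   size_t soff = 0, roff = 0;
> > >
> > >   while(soff < sendbytes || roff < recvbytes)
> > >     {
> > >       int scount = (int)std::min(chunk, sendbytes - soff);
> > >       int rcount = (int)std::min(chunk, recvbytes - roff);
> > >
> > >       MPI_Sendrecv(sendbuf + soff, scount, MPI_BYTE, dest, tag,
> > >                    recvbuf + roff, rcount, MPI_BYTE, src, tag,
> > >                    comm, MPI_STATUS_IGNORE);
> > >
> > >       soff += scount;
> > >       roff += rcount;
> > >     }
> > > }
> > > ```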
> > >
> > >
> > > Thanks for letting me know that you had to change the default size of TREE_MIN_WORKSTACK_SIZE to get around the bookkeeping buffer problem you experienced in fmm.cc. I guess I need to think about how this setting can be adjusted automatically so that it works in conditions like the one you created in your run.
> > >
> > > Best,
> > > Volker
> > >
> > >
> > >
> > >
> > > > On 2. Sep 2021, at 11:23, Weiguang Cui <cuiweiguang_at_gmail.com> wrote:
> > > >
> > > > Hi Volker,
> > > >
> > > > Did you find some time to look at the problem? I would like to have this run finished a.s.a.p. So I further modified the code (see the Gadget4 folder for changes with git diff):
> > > > FMM factor is increased to 50
> > > > - MaxOnFetchStack = std::max<int>(0.1 * (Tp->NumPart + NumPartImported), TREE_MIN_WORKSTACK_SIZE);
> > > > + MaxOnFetchStack = std::max<int>(50 * (Tp->NumPart + NumPartImported), 10 * TREE_MIN_WORKSTACK_SIZE);
> > > > and the tree_min_workstack_size in gravtree.h is also increased:
> > > > -#define TREE_MIN_WORKSTACK_SIZE 100000
> > > > +#define TREE_MIN_WORKSTACK_SIZE 400000
> > > >
> > > > With these modifications, the code did not show the `Can't even process a single particle` problem in fmm, but crashed with an MPI_Sendrecv problem in subfind. See the job slurm.3757634 for details. Maybe this is connected to the previous SUBFIND tree construction problem - just too many particles in the halo??
> > > > If there is no easy fix, I will probably exclude the SUBFIND part to finish the run, which is a pity, as the full merger tree would then need to be redone.
> > > >
> > > > Thank you.
> > > >
> > > > Best,
> > > > Weiguang
> > > >
> > > > -------------------------------------------
> > > > https://weiguangcui.github.io/
> > > >
> > > >
> > > > On Sun, Aug 29, 2021 at 5:57 PM Volker Springel <vspringel_at_mpa-garching.mpg.de> wrote:
> > > >
> > > > Hi Weiguang,
> > > >
> > > > The tree construction problem in subfind is odd and still bothers me. Could you perhaps make the run available to me on cosma7 so that I can investigate this myself?
> > > >
> > > > I agree that there should be enough total memory for FMM, but the termination of the code looks to be caused by an insufficient size allocation of internal bookkeeping buffers related to the communication parts of the algorithm. While you're at it, you could also make this setup available to me; then I can take a look at why this happens.
> > > >
> > > > Regards,
> > > > Volker
> > > >
> > > > > On 24. Aug 2021, at 12:29, Weiguang Cui <cuiweiguang_at_gmail.com> wrote:
> > > > >
> > > > > Hi Volker,
> > > > >
> > > > > This is a pure dark-matter particle run. The problem happened when the simulation had run to z~0.3.
> > > > > As you can see from the attached config options, this simulation used an old IC file, and double-precision output is not enabled either.
> > > > >
> > > > > I increased the factor from 0.1 to 0.5, which still resulted in the same error in fmm.cc. I don't think memory is an issue here. As shown in memory.txt, the maximum occupied memory (in the whole file) is
> > > > > ```MEMORY: Largest Allocation = 11263.9 Mbyte | Largest Allocation Without Generic = 11263.9 Mbyte``` and the parameter ```MaxMemSize 18000 % in MByte``` is in agreement with the machine's memory (cosma7). I will increase the factor to an even higher value to see if that works.
> > > > >
> > > > > If the single-precision position is not an issue, could it be caused by the `FoFGravTree.treebuild(num, d);` or `FoFGravTree.treebuild(num_removed, dremoved);` calls in subfind_unbind, where an FoF group may have too many particles in a very small volume to build the tree?
> > > > >
> > > > > Any suggestions are welcome. Many thanks!
> > > > >
> > > > > ==================================
> > > > > ALLOW_HDF5_COMPRESSION
> > > > > ASMTH=1.2
> > > > > DOUBLEPRECISION=1
> > > > > DOUBLEPRECISION_FFTW
> > > > > FMM
> > > > > FOF
> > > > > FOF_GROUP_MIN_LEN=32
> > > > > FOF_LINKLENGTH=0.2
> > > > > FOF_PRIMARY_LINK_TYPES=2
> > > > > FOF_SECONDARY_LINK_TYPES=1+16+32
> > > > > GADGET2_HEADER
> > > > > IDS_64BIT
> > > > > LIGHTCONE
> > > > > LIGHTCONE_IMAGE_COMP_HSML_VELDISP
> > > > > LIGHTCONE_MASSMAPS
> > > > > LIGHTCONE_PARTICLES
> > > > > LIGHTCONE_PARTICLES_GROUPS
> > > > > MERGERTREE
> > > > > MULTIPOLE_ORDER=3
> > > > > NTAB=128
> > > > > NTYPES=6
> > > > > PERIODIC
> > > > > PMGRID=4096
> > > > > RANDOMIZE_DOMAINCENTER
> > > > > RCUT=4.5
> > > > > SELFGRAVITY
> > > > > SUBFIND
> > > > > SUBFIND_HBT
> > > > > TREE_NUM_BEFORE_NODESPLIT=64
> > > > > ===========================================================
> > > > >
> > > > >
> > > > > Best,
> > > > > Weiguang
> > > > >
> > > > > -------------------------------------------
> > > > > https://weiguangcui.github.io/
> > > > >
> > > > >
> > > > > On Mon, Aug 23, 2021 at 1:49 PM Volker Springel <vspringel_at_mpa-garching.mpg.de> wrote:
> > > > >
> > > > > Hi Weiguang,
> > > > >
> > > > > The code termination you experienced in the tree construction during subfind is quite puzzling to me, especially since you used BITS_FOR_POSITIONS=64... In principle, this situation should only arise if you have a small group of particles (~16) in a region about 10^18 times smaller than the boxsize. Has this situation occurred during a simulation run, or in postprocessing? If you have used single precision for storing positions in a snapshot file, or if you have dense blobs of gas with intense star formation, then you can get occasional coordinate collisions of two or several particles, but ~16 seems increasingly unlikely. So I'm not sure what's really going on here. Have things actually worked when setting TREE_NUM_BEFORE_NODESPLIT=64?
> > > > >
> > > > > The issue in FMM is a memory issue. It should be possible to resolve it with a higher setting of MaxMemSize, or by enlarging the factor 0.1 in line 1745 of fmm.cc,
> > > > > MaxOnFetchStack = std::max<int>(0.1 * (Tp->NumPart + NumPartImported), TREE_MIN_WORKSTACK_SIZE);
> > > > >
> > > > > Best,
> > > > > Volker
> > > > >
> > > > >
> > > > > > On 21. Aug 2021, at 10:10, Weiguang Cui <cuiweiguang_at_gmail.com> wrote:
> > > > > >
> > > > > > Dear all,
> > > > > >
> > > > > > I recently met another problem with the 2048^3, 200 Mpc/h run.
> > > > > >
> > > > > > treebuild in SUBFIND requires a higher value for TREE_NUM_BEFORE_NODESPLIT:
> > > > > > ==========================================================
> > > > > > SUBFIND: We now execute a parallel version of SUBFIND.
> > > > > > SUBFIND: Previous subhalo catalogue had approximately a size 2.42768e+09, and the summed squared subhalo size was 8.42698e+16
> > > > > > SUBFIND: Number of FOF halos treated with collective SubFind algorithm = 1
> > > > > > SUBFIND: Number of processors used in different partitions for the collective SubFind code = 2
> > > > > > SUBFIND: (The adopted size-limit for the collective algorithm was 9631634 particles, for threshold size factor 0.6)
> > > > > > SUBFIND: The other 10021349 FOF halos are treated in parallel with serial code
> > > > > > SUBFIND: subfind_distribute_groups() took 0.044379 sec
> > > > > > SUBFIND: particle balance=1.10537
> > > > > > SUBFIND: subfind_exchange() took 30.2562 sec
> > > > > > SUBFIND: particle balance for processing=1
> > > > > > SUBFIND: root-task=0: Collectively doing halo 0 of length 10426033 on 2 processors.
> > > > > > SUBFIND: subdomain decomposition took 8.54527 sec
> > > > > > SUBFIND: serial subfind subdomain decomposition took 6.0162 sec
> > > > > > SUBFIND: root-task=0: total number of subhalo coll_candidates=1454
> > > > > > SUBFIND: root-task=0: number of subhalo candidates small enough to be done with one cpu: 1453. (Largest size 81455)
> > > > > > Code termination on task=0, function treebuild_insert_group_of_points(), file src/tree/tree.cc, line 489: It appears we have reached the bottom of the tree because there are more than TREE_NUM_BEFORE_NODESPLIT=16 particles in the smallest tree node representable for BITS_FOR_POSITIONS=64.
> > > > > > Either eliminate the particles at (nearly) indentical coordinates, increase the setting for TREE_NUM_BEFORE_NODESPLIT, or possibly enlarge BITS_FOR_POSITIONS if you have really not enough dynamic range
> > > > > > ==============================================
> > > > > >
> > > > > > But if I increase TREE_NUM_BEFORE_NODESPLIT to 64, FMM seems not to work:
> > > > > > =============================================================
> > > > > > Sync-Point 19835, Time: 0.750591, Redshift: 0.332284, Systemstep: 5.27389e-05, Dloga: 7.02657e-05, Nsync-grv: 31415, Nsync-hyd: 0
> > > > > > ACCEL: Start tree gravity force computation... (31415 particles)
> > > > > > TREE: Full tree construction for all particles. (presently allocated=7626.51 MB)
> > > > > > GRAVTREE: Tree construction done. took 13.4471 sec <numnodes>=206492 NTopnodes=115433 NTopleaves=101004 tree-build-scalability=0.441627
> > > > > > FMM: Begin tree force. timebin=13 (presently allocated=0.5 MB)
> > > > > > Code termination on task=0, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=887, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=40, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=888, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=889, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=3, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=890, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=6, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=891, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=9, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=892, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=893, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=894, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > Code termination on task=20, function gravity_fmm(), file src/fmm/fmm.cc, line 1879: Can't even process a single particle
> > > > > > ======================================
> > > > > >
> > > > > > I don't think fine-tuning the value for TREE_NUM_BEFORE_NODESPLIT is a solution.
> > > > > > I can try to use BITS_FOR_POSITIONS=128 by setting POSITIONS_IN_128BIT, but I am afraid that the code may not be able to run from restart files.
> > > > > > Any suggestions?
> > > > > > Many thanks.
> > > > > >
> > > > > > Best,
> > > > > > Weiguang
> > > > > >
> > > > > > -------------------------------------------
> > > > > > https://weiguangcui.github.io/
> > > > > >




-----------------------------------------------------------

If you wish to unsubscribe from this mailing, send mail to
minimalist_at_MPA-Garching.MPG.de with a subject of: unsubscribe gadget-list
A web-archive of this mailing list is available here:
http://www.mpa-garching.mpg.de/gadget/gadget-list<http://www.mpa-garching.mpg.de/gadget/gadget-list>
Received on 2021-10-07 11:29:15
