Re: modifying the particle list on multiple processors

From: Aldo Alberto Batta Marquez <>
Date: Tue, 15 Mar 2011 12:09:16 -0600

Hi Mark,

Sorry for not responding earlier, but I've had tons of work. I'm glad it
helped.

>> So at the end it should be enough to deal with the arrays locally and to
>> update the global variables like All.TotN_gas.
> That's good news! Do you think the particle counts are the only global
> variables that need to be updated? That's what I'm doing currently but I
> wasn't sure if I was overlooking anything.
Yes, I think those are the only global variables you need to update, and it
seems you're already updating the gravitational tree and the neighbor list
correctly.

>>> Would I need to check for "collisions" between a STAR particle
>>> on the local processor and GAS particles on other processors, or
>>> is it impossible for such a thing to happen? And conversely, do
>>> I need to check for collisions between a GAS particle on the
>>> local processor and STAR particles on the other processors?
>> Yes, what you're doing right now is to check only collisions or accretion
>> within local particles (on the same processor). Remember that this doesn't
>> necessarily mean that these particles are close in distance, so you could
>> have particles on other processors that lie within your predefined
>> distance for accretion.
>> If you want to do it on several processors you should first look for sink
>> particles within each processor and compute the local gas swallowing, then
>> you should export the information about these sink particles to the other
>> processors so each processor can compute the accretion of its own gas
>> particles onto such sink particles. You can figure out how to do this by
>> looking at the density_evaluate() function. Hope this helps.
> Thank you for this information.
> For obvious reasons I was trying to avoid doing N^2 comparisons at every
> step (where N = number of particles in the entire simulation) to check for
> collisions. I wonder if there's a faster way to do it. For instance, I'm
> wondering if I could use ngb_treefind_variable() to get a list of nearby
> particles from other processors that might collide with a sink particle on
> the current process (this would mirror what is done in density_evaluate()).
> I could pass the sink particle's position as the first parameter to
> ngb_treefind_variable() and the "accretion radius" of the sink particle as
> the second parameter. One thing I'm unclear on is: what is the purpose of
> the third parameter ("startnode") of ngb_treefind_variable()? In
> density_evaluate, the third parameter is set to All.MaxPart before being
> passed.

The good thing about this code is that the variable names usually suggest
what they stand for, and fortunately this is one of those cases: the pointer
startnode passes the initial value of the node (All.MaxPart) at which
ngb_treefind_variable() starts its search. Since this function is built on
the tree constructed for the gravitational calculation, it also uses the
nodes (groups of particles) constructed there. The way the nodes are labeled
is a bit tricky, but you can check it out in the file forcetree.c. There's a
comment like this:


/* The index convention for accessing tree nodes is the following: the
 * indices 0...NumPart reference single particles, the indices
 * All.MaxPart.... All.MaxPart+nodes-1 reference tree nodes. `Nodes_base'
 * points to the first tree node, while `nodes' is shifted such that
 * nodes[All.MaxPart] gives the first tree node. Finally, node indices with
 * values 'All.MaxPart + MaxNodes' and larger indicate "pseudo particles",
 * i.e. multipole moments of top-level nodes that lie on different CPUs. If
 * such a node needs to be opened, the corresponding particle must be
 * exported to that CPU. The 'Extnodes' structure parallels that of
 * 'Nodes'. Its information is only needed for the SPH part of the
 * computation, but a merger with 'Nodes' would generate somewhat bigger
 * nodes also for gravity, which would reduce cache utilization slightly. */


So in the end, ngb_treefind_variable() must use this startnode to begin with
and then walk through the other nodes to check for possible neighbors. Hope
this gives you a hint about that pointer.

And I guess you should be able to use this function to get a list of the
closest particles, but I haven't tried it yet. You could also write a
similar function in which you specify how close your neighbors must be in
order to be included in the list. But for this you must first compute the
domain decomposition (since it's used by the tree construction), then
construct the tree (done in forcetree.c), then calculate the collisions, and
after all that update everything. I'm not sure whether you also have to
update the domain decomposition or whether it's enough to just update the
tree. I'd have to check on that.

What I think is that this ngb_treefind function gives you the neighbor list
of the local particles, so you must export the information of the sink
particles to the other processors so that each one calculates collisions
with its own local particles.

> Also, using the above method, if I do find a collision then I would need to
> export this information to other processes so that they can update their
> particle lists accordingly.

If, for example, you have a sink particle on processor 1, you first check
whether there are any collisions with your local gas particles and update
the mass (and any other variable you change) of your sink particle. Then you
have to send the info (position, velocity, mass or whatever is needed) of
the sink particle to the other processors. Let's say that you send it to
processor 2; now, with your list of neighbors around the imported sink
particle, you compute whether there are any collisions. If there are, you
should export the new info (position, velocity, mass, and so on) of the sink
particle back to the original processor (processor 1 in this case).

There's an example of how to do it in density.c. Look for pointers with
names like DensDataPartialResult[source], or *PartialResult[] variables in
general, as those hold the results obtained from other processors. This
should give you a hint of how this works.

> On that note, maybe it would be simpler (and/or faster?) to export the
> location of each sink particle to all of the other processes, and let them
> deal with collisions and deletions locally.

Sure, that's exactly what you have to do.

> Your thoughts?
> Best,
> Mark



Aldo Alberto Batta Márquez
Instituto de Astronomia
Universidad Nacional Autonoma de Mexico (UNAM)
Received on 2011-03-15 19:09:19
