Re: Need help with error(1313)

From: sandeep.kataria <sandeep.kataria_at_iiap.res.in>
Date: Tue, 28 Jul 2015 17:50:02 +0530

Hello Antonio,

I am using PartAllocFactor=3.
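
For reference, error 1313 is the "too many particles" check in GADGET-2's
read_ic.c: each task may hold at most
All.MaxPart = PartAllocFactor * (TotNumPart / NTask) particles, where
TotNumPart is derived from the npartTotal fields of the IC header. Below is
a minimal standalone sketch of that arithmetic using the numbers from this
thread (factor 3, 15000 particles, 2 tasks). The variable names follow the
GADGET-2 source, but the driver program itself is illustrative, not GADGET
code; a companion header-dump sketch follows the quoted thread below.

/* Standalone sketch of the allocation arithmetic behind endrun(1313).
 * PartAllocFactor, TotNumPart, NTask and MaxPart follow the names in
 * GADGET-2's read_ic.c; the surrounding program is illustrative only. */
#include <stdio.h>

int main(void)
{
  double PartAllocFactor = 3.0;    /* value reported in this thread    */
  long long TotNumPart   = 15000;  /* total particles in the IC header */
  int NTask              = 2;      /* MPI tasks in the log below       */

  /* Per-task limit, as set in read_ic():
   *   All.MaxPart = All.PartAllocFactor * (All.TotNumPart / NTask);   */
  int MaxPart = (int) (PartAllocFactor * (TotNumPart / NTask));

  /* With two tasks, each side receives roughly half of the file. */
  int n_this_task = 7500;

  printf("MaxPart per task = %d\n", MaxPart);
  if (n_this_task > MaxPart)
    printf("too many particles -> endrun(1313)\n");
  else
    printf("fits: %d <= %d\n", n_this_task, MaxPart);
  return 0;
}

With these values MaxPart is 22500 per task, well above the roughly 7500
particles each task receives, so the check should not fire. If it fires
anyway, the npartTotal (and npartTotalHighWord) fields written into the
NEMO-generated header are worth comparing against the per-file npart counts.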

Regards,

Sandeep

On 2015-07-28 17:46, Antonio Bibiano wrote:

> Hello,
> What is your PartAllocFactor parameter?
>
> Maybe that is too low.
>
> Cheers,
> Antonio
> On Jul 28, 2015 2:00 PM, "Sandeep Kumar Kataria" <sandeep.kataria_at_iiap.res.in> wrote:
>
>> Hello Everyone,
>>
>> I am getting the following error while running the code with initial
>> conditions generated by the NEMO package. I have checked that the initial
>> conditions can be read with splash and with the snapread.c file.
>>
>> This is Gadget, version `2.0'.
>>
>> Running on 2 processors.
>>
>> Allocated 25 MByte communication buffer per processor.
>>
>> Communication buffer has room for 504122 particles in gravity computation
>> Communication buffer has room for 204800 particles in density computation
>> Communication buffer has room for 163840 particles in hydro computation
>> Communication buffer has room for 163840 particles in domain decomposition
>>
>> Hubble (internal units) = 0.1
>> G (internal units) = 43007.1
>> UnitMass_in_g = 1.989e+43
>> UnitTime_in_s = 3.08568e+16
>> UnitVelocity_in_cm_per_s = 100000
>> UnitDensity_in_cgs = 6.76991e-22
>> UnitEnergy_in_cgs = 1.989e+53
>>
>> reading file `/home/sandeep/Gadget-2.0.7/ICs/Galaxy_checkg' on task=0 (contains 15000 particles.)
>> distributing this file to tasks 0-1
>> Type 0 (gas): 0 (tot= 0000000000) masstab=0
>> Type 1 (halo): 10000 (tot= 0000010000) masstab=0
>> Type 2 (disk): 5000 (tot= 0000005000) masstab=0.0005
>> Type 3 (bulge): 0 (tot= 0000000000) masstab=0
>> Type 4 (stars): 0 (tot= 0000000000) masstab=0
>> Type 5 (bndry): 0 (tot= 0000000000) masstab=0
>>
>> too many particles
>> task 0: endrun called with an error level of 1313
>>
>> too many particles
>> task 1: endrun called with an error level of 1313
>>
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
>> with errorcode 1313.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>>
>> Regards,
>> Sandeep Kumar
>> Indian Institute of Astrophysics
>> Bangalore,India
>>
>> -----------------------------------------------------------
>> If you wish to unsubscribe from this mailing, send mail to
>> minimalist_at_MPA-Garching.MPG.de with a subject of: unsubscribe gadget-list
>> A web-archive of this mailing list is available here:
>> http://www.mpa-garching.mpg.de/gadget/gadget-list
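
Since the per-task limit computed above looks ample, the header that NEMO
wrote is worth inspecting directly. The sketch below dumps a format-1
snapshot header so the per-file npart[] counts can be compared against the
npartTotal[] fields that determine TotNumPart. The struct layout follows
GADGET-2's allvars.h; the reader itself is a diagnostic sketch that assumes
format-1, single-file ICs with matching endianness, and keeps error handling
minimal.

/* Minimal dumper for a format-1 GADGET snapshot header.  The io_header
 * layout follows GADGET-2's allvars.h; everything else is a diagnostic
 * sketch (format 1, single file, native endianness assumed). */
#include <stdio.h>

struct io_header {
  int npart[6];
  double mass[6];
  double time, redshift;
  int flag_sfr, flag_feedback;
  unsigned int npartTotal[6];
  int flag_cooling, num_files;
  double BoxSize, Omega0, OmegaLambda, HubbleParam;
  int flag_stellarage, flag_metals;
  unsigned int npartTotalHighWord[6];
  int flag_entropy_instead_u;
  char fill[60];                /* pads the block to 256 bytes */
};

int main(int argc, char *argv[])
{
  if (argc != 2) {
    fprintf(stderr, "usage: %s <IC file>\n", argv[0]);
    return 1;
  }

  FILE *fp = fopen(argv[1], "rb");
  if (!fp) { perror("fopen"); return 1; }

  int blksize;                  /* leading Fortran-style block size, 256 */
  struct io_header h;
  if (fread(&blksize, sizeof blksize, 1, fp) != 1 ||
      fread(&h, sizeof h, 1, fp) != 1) {
    fprintf(stderr, "short read\n");
    fclose(fp);
    return 1;
  }
  fclose(fp);

  printf("header block size = %d (expect 256)\n", blksize);
  for (int i = 0; i < 6; i++)
    printf("type %d: npart=%d  npartTotal=%u  highword=%u  mass=%g\n",
           i, h.npart[i], h.npartTotal[i], h.npartTotalHighWord[i],
           h.mass[i]);
  printf("num_files = %d\n", h.num_files);
  return 0;
}

Compiled with e.g. "gcc -o dumphead dumphead.c" and run on the
Galaxy_checkg file from the log, this shows whether npartTotal disagrees
with npart or the high words contain garbage; either would make GADGET-2
size MaxPart from the wrong total and abort with endrun(1313).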
