<div>Axel,</div>
<div> </div>
<div>Is this related to OPENMPI gobbling up tons of memory?<br><br> </div>
<div><span class="gmail_quote">On 3/10/08, <b class="gmail_sendername">Axel</b> <<a href="mailto:akoh...@gmail.com">akoh...@gmail.com</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid"><br><br><br>On Mar 10, 5:49 pm, "Nichols A. Romero" <<a href="mailto:naro...@gmail.com">naro...@gmail.com</a>> wrote:<br>
> Teo,<br>><br>> I was just able to reproduce this on another machine: <a href="http://www.mhpcc.hpc.mil/doc/jaws.html">http://www.mhpcc.hpc.mil/doc/jaws.html</a><br>><br>> I just ran it on 256 processors. Compiled it with ifort 9.1.045 and mvapich<br>> 1.2.7.<br>
> I attach the arch file.<br><br>nick,<br><br>here's another caveat which most likely has nothing to do<br>with the immediate error you are seeing, but may bite<br>you later.<br><br>when running on large infiniband clusters, you may have to limit<br>the number of processes per node. due to the way openfabrics seems<br>to work (at least at the moment), you need _physical_ memory<br>as "backing store" for each RDMA connection, i.e. for each MPI<br>task you'll lose some physical memory regardless of the memory<br>requirements of your job. i've seen this on the NCSA 'abe' cluster,<br>where i ran out of memory for rather small jobs despite having 1GB/core,<br>simply by increasing the requested number of cpus. also, you<br>may get better performance by using only half of the cpu cores you request.<br>i had to go down to a quarter (abe is dual quad-core, though) for<br>really big jobs. :-(<br><br>cheers,<br> axel.<br><br>><br>> Here is the error that I am seeing.<br>
><br>> Out of memory ...<br>><br>> *<br>> *** ERROR in get_my_tasks ***<br>> *<br>><br>> *** The memory allocation for the data object <send_buf_r> failed. The ***<br>> *** requested memory size is 1931215 Kbytes ***<br>
><br><br>[...]<br><br><br>> --<br>> Nichols A. Romero, Ph.D.<br>> DoD User Productivity Enhancement and Technology Transfer (PET) Group<br>> High Performance Technologies, Inc.<br>> Reston, VA<br>> 443-567-8328 (C)<br>
> 410-278-2692 (O)<br>><br>> Linux-x86-64-intel.popt<br><br clear="all"><br>-- <br>Nichols A. Romero, Ph.D.<br>DoD User Productivity Enhancement and Technology Transfer (PET) Group<br>High Performance Technologies, Inc.<br>Reston, VA<br>443-567-8328 (C)<br>410-278-2692 (O)
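<div> </div>
<div>[Editor's note: Axel's advice about leaving cores idle so that the pinned RDMA "backing store" memory fits in physical RAM can be applied at job launch. A minimal sketch follows; the flag names and the binary name <code>app.x</code> are assumptions here, not taken from this thread, so check the mpirun documentation for your particular MPI stack.]</div>
<div><pre>
# sketch: run 64 MPI tasks spread as 4 per node (half of the 8 cores
# on a dual quad-core node), leaving physical memory free for the
# per-connection RDMA backing store.

# Open MPI style launcher: -npernode caps tasks per node
mpirun -np 64 -npernode 4 ./app.x

# hydra-based launchers (e.g. MVAPICH2's mpiexec): -ppn does the same
mpiexec -np 64 -ppn 4 ./app.x
</pre></div>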