c++ - 'new' runs out of memory with very little memory allocated


I'm working on a large scientific code that runs non-interactively on a cluster. The code uses a singleton scheme to store large matrices in memory for repeated future multiplications. Lately the code has been crashing after the allocation of one of these matrices, with our exception handler returning output like this:

## std::bad_alloc caught in terminatehandler ##########

new ran out of memory.
peak rss memory = 905179136 bytes
current rss memory = 905179136 bytes

#######################################################

Each process ought to have at least 16 GB of RAM available to it, so it is difficult to see how it could be running out of memory at 905179136 bytes; furthermore, modifying the code to use more or less memory results in the program still crashing, with a correspondingly smaller (larger) value of 'peak rss memory' reported.
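For reference, on a Linux node the two RSS figures in that handler output can be obtained roughly as follows. This is only a minimal sketch of the kind of bookkeeping involved, assuming Linux semantics (ru_maxrss in kilobytes, /proc/self/statm in pages); the function names are hypothetical and not taken from our actual handler.

#include <sys/resource.h>   // getrusage
#include <unistd.h>         // sysconf
#include <cstdio>

// Peak RSS in bytes (on Linux, ru_maxrss is reported in kilobytes).
long peak_rss_bytes() {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_maxrss * 1024L;
}

// Current RSS in bytes, read from /proc/self/statm (second field, in pages).
long current_rss_bytes() {
    long pages = 0;
    FILE* f = std::fopen("/proc/self/statm", "r");
    if (f) {
        long total = 0;
        if (std::fscanf(f, "%ld %ld", &total, &pages) != 2) pages = 0;
        std::fclose(f);
    }
    return pages * sysconf(_SC_PAGESIZE);
}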

I can think of two possibilities:

  1. Memory fragmentation (see the sketch after this list).
  2. A large 'new' request being made for some reason, such that it would exceed 16 GB if serviced.
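If fragmentation is the suspect, glibc can report the state of its heap arenas. The following is just a sketch of one way to peek at the allocator, assuming glibc 2.33 or later for mallinfo2; it is not something the code currently does.

#include <malloc.h>   // mallinfo2, malloc_info (glibc)
#include <cstdio>

void dump_heap_state() {
    // Summary counters: total arena size versus bytes actually in use.
    struct mallinfo2 mi = mallinfo2();
    std::printf("heap: arena=%zu bytes, in use=%zu bytes, free=%zu bytes\n",
                mi.arena, mi.uordblks, mi.fordblks);

    // Full XML dump of every arena, useful for spotting fragmentation.
    malloc_info(0, stderr);
}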

It may be relevant that this is a CUDA application, and the matrices are being built on the CPU in order to be sent to the GPU and then deleted. The GPU reports no errors, however.
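For context, the host-to-device pattern in question looks roughly like the sketch below. The names (upload_matrix, n_elems) are made up for illustration and the real code differs; the point is only that the failing host allocation is a temporary staging buffer.

#include <cuda_runtime.h>
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical illustration: build on the CPU, copy to the GPU, free the host copy.
double* upload_matrix(std::size_t n_elems) {
    std::vector<double> host(n_elems);            // built on the CPU (the allocation that fails)
    // ... fill host ...

    double* dev = nullptr;
    cudaError_t err = cudaMalloc(&dev, n_elems * sizeof(double));
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMalloc: %s\n", cudaGetErrorString(err));
        return nullptr;
    }
    err = cudaMemcpy(dev, host.data(), n_elems * sizeof(double), cudaMemcpyHostToDevice);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemcpy: %s\n", cudaGetErrorString(err));
        cudaFree(dev);
        return nullptr;
    }
    return dev;                                   // host buffer is freed when 'host' goes out of scope
}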

Unfortunately, I am unable to reproduce the bug in tests, and running within valgrind or gdb slows the crash down sufficiently (i.e. days of wallclock time) that the queue system kills the job. My question is: how can I distinguish between these possibilities, or whatever others there might be? In particular, is there a way to report the size of calls to 'new'?
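One way to see the size of each 'new' request without valgrind's overhead might be to replace the global operator new and log unusually large requests before servicing them. A minimal sketch follows; the 1 GB threshold is an arbitrary choice of mine, and a real replacement would also cover operator new[] and respect the installed new_handler.

#include <cstdio>
#include <cstdlib>
#include <new>

// Log any allocation larger than a (hypothetical) threshold before servicing it,
// then fall back to throwing std::bad_alloc on failure.
void* operator new(std::size_t size) {
    if (size > (std::size_t{1} << 30))            // > 1 GB: report before attempting
        std::fprintf(stderr, "operator new: request of %zu bytes\n", size);
    if (void* p = std::malloc(size))
        return p;
    std::fprintf(stderr, "operator new failed for %zu bytes\n", size);
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}

Linking something like this into the build would at least show whether a single enormous request (possibility 2) is what triggers the handler, as opposed to a gradual exhaustion of fragmented memory.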

