Operational Defect Database

BugZero found this defect 2419 days ago.

MongoDB | 403414

[SERVER-30120] InMemory engine: 'inMemorySizeGB' doesn't limit the process memory usage

Last update date:

10/27/2023

Affected products:

MongoDB Server

Affected releases:

3.2.15

Fixed releases:

No fixed releases provided.

Description:

Info

Using MongoDB Enterprise v3.2.15 on Ubuntu 16.04. Starting mongod as a service using:

/usr/bin/mongod --storageEngine inMemory --inMemorySizeGB 2 --quiet --config /etc/mongod.conf

It seems that the memory limit doesn't really limit the process memory; the mongod process can take more than 3GB of memory. Note that in our implementation we work with a few hundred DBs. Specifically, our auto tests create a large number of databases and drop them after the test ends. However, the memory isn't released. Also, after a while the process starts to take ~100% CPU even without any client load.

Thanks, Assaf
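
For context, a minimal sketch of the equivalent /etc/mongod.conf settings for this setup, assuming the YAML configuration file format (dbPath here is a placeholder, not taken from the report):

storage:
  engine: inMemory
  dbPath: /var/lib/mongodb            # placeholder; use your actual data directory
  inMemory:
    engineConfig:
      inMemorySizeGB: 2               # caps the in-memory storage engine only

As mark.agarunov notes in the comments below, inMemorySizeGB limits the storage engine's data size, not the total memory footprint of the mongod process.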

Top User Comments

assaf-oren commented on Thu, 3 Aug 2017 11:47:12 +0000:
Hi, is there any update on this? Thanks, Assaf

assaf-oren commented on Wed, 26 Jul 2017 19:37:15 +0000:
Hi Mark, I think there were about 4700 DBs created. So the memory leak is on index allocations/releases? Is index memory never released? Spacing these out won't help; even when we delete all DBs and close all connections to the process, the memory won't go down. BTW, we tried the latest enterprise release, v3.4.6, and we can see this issue there as well. Thanks, Assaf

mark.agarunov commented on Wed, 26 Jul 2017 18:29:23 +0000:
Hello assaf-oren, thank you for providing the data and logs. After looking over this, it seems that the memory usage is mostly due to index builds. According to the logs, there were 199321 index builds during this duration. Spacing these out or reducing the number of index rebuilds may alleviate the memory usage. Thanks, Mark

assaf-oren commented on Tue, 25 Jul 2017 06:03:47 +0000:
Hi, is there any update on this? Also, is there any way to limit the process memory (not only the engine/cache size)? Thanks, Assaf

assaf-oren commented on Wed, 19 Jul 2017 11:48:00 +0000:
Hi Mark, I uploaded two files (see those with '2017_07_19' in the filename); they include the diagnostics and the full log. The run from today is using --setParameter heapProfilingEnabled=true. Thanks, Assaf

mark.agarunov commented on Mon, 17 Jul 2017 19:07:15 +0000:
Hello assaf-oren, thank you for providing this data. After looking over this I agree that the memory is growing beyond what it should be using; this may be indicative of a memory leak. So that we can see exactly where the memory may be leaking, please do the following:
1. Start mongod with the additional flag --setParameter heapProfilingEnabled=true.
2. Run mongod until reaching the high memory usage condition.
3. Archive and upload the $dbpath/diagnostic.data so that we can examine the data.
4. Provide the complete mongod logs while this issue is present.
Thanks, Mark

assaf-oren commented on Mon, 17 Jul 2017 16:01:38 +0000:
Thanks Mark, just uploaded the file 'JIRA-30120--....tar.gz' including the diagnostic data for Jul 12 and 13, where mongod's memory got above 4GB while limited to 2GB.

mark.agarunov commented on Mon, 17 Jul 2017 15:37:33 +0000:
Hello assaf-oren, I've generated a secure upload portal so that you can send us the diagnostic data privately. The number of collections/indexes may cause overhead in some circumstances; however, the diagnostic data may allow us to diagnose whether this is the case. Thanks, Mark

assaf-oren commented on Sun, 16 Jul 2017 06:20:51 +0000:
Thank you for your reply. Is there a way to send you the diagnostic data through a private channel? I suspect the cause of this is the number of DBs we are using. Is there an overhead per DB (or collection/index) even if there is no client connected to that DB? Thanks, Assaf

mark.agarunov commented on Thu, 13 Jul 2017 21:45:44 +0000:
Hello assaf-oren, thank you for the report. The --inMemorySizeGB option sets the size for the storage engine, not the entire process, so it is possible for the total memory usage to be greater than the value set for inMemorySizeGB. To better investigate this, please archive and upload the $dbpath/diagnostic.data directory so that we can get some insight into what may be causing the higher memory and high CPU usage. Thanks, Mark
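
As a rough illustration of mark.agarunov's diagnostic steps (17 Jul 2017 comment above), the following shell sketch shows one way to collect the requested data; the dbpath and log path are placeholders and should match the dbPath and systemLog settings in your configuration:

# 1. Start mongod with heap profiling enabled, alongside the usual options
/usr/bin/mongod --storageEngine inMemory --inMemorySizeGB 2 --quiet \
    --config /etc/mongod.conf --setParameter heapProfilingEnabled=true

# 2-3. Once the high memory usage is reproduced, archive $dbpath/diagnostic.data for upload
tar -czf diagnostic.data.tar.gz -C /var/lib/mongodb diagnostic.data

# 4. Keep the complete mongod log covering the same period
cp /var/log/mongodb/mongod.log mongod-SERVER-30120.log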

BugZero Risk Score

Coming soon

Status

Closed
