Operational Defect Database

BugZero found this defect 2435 days ago.

MongoDB | 398503

[SERVER-29871] Resident Memory higher than cacheSizeGB

Last update date:

1/7/2018

Affected products:

MongoDB Server

Affected releases:

3.2.11

Fixed releases:

No fixed releases provided.

Description:

Info

I have a MongoDB server running v3.2.11 with memory consumption much higher than what I configured in cacheSizeGB. Currently it is using 3.9 GB, but the cacheSizeGB parameter is set to 1 (1 GB). I expected it to be a little over 1 GB due to active connections and other overhead, but right now it is almost 4x that and seems to be increasing.

Transparent huge pages are disabled:

grep AnonHugePages /proc/meminfo
AnonHugePages: 0 kB
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

I'm running CentOS:

uname -a
Linux XXXXXX 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release
CentOS release 6.8 (Final)

I attached the information I was able to collect:

top
serverStatus
mongostat
mongo_oms_1.conf --> the configuration file

Is this normal behaviour? Is there another way to limit the memory used by the mongod process?
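For context, here is a minimal sketch of how the cache limit described above is typically expressed in a mongod YAML configuration file for 3.x. The dbPath value is illustrative; only the storage.wiredTiger.engineConfig.cacheSizeGB option is taken from the ticket:

```yaml
# Sketch of a mongod.conf fragment (dbPath is illustrative).
storage:
  dbPath: /var/lib/mongo
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1   # caps only the WiredTiger cache, not total resident memory
```

Note that cacheSizeGB bounds the WiredTiger cache alone; per-connection stacks, allocator overhead, and other server data structures are not counted against it, so resident memory is expected to sit somewhat above this value. That alone, however, does not explain the steady 4x growth reported here.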

Top User Comments

mark.agarunov commented on Fri, 10 Nov 2017 21:18:16 +0000:
Hello ejsfreitas, We still need additional information to diagnose the problem. If this is still an issue for you, would you please provide the extended diagnostic data? Thanks, Mark

mark.agarunov commented on Fri, 8 Sep 2017 18:15:49 +0000:
Hello ejsfreitas, Unfortunately the diagnostic data is not pointing at any causes for this behavior. If this is still an issue, please collect the diagnostic data with these parameters for a longer period of time. This issue appears to develop slowly, and the collection has not yet run long enough to show any significant indicators in the diagnostic data. I suspect that a longer collection period may allow us to correlate the cause of the issue in the diagnostic data. Thanks, Mark

mark.agarunov commented on Wed, 30 Aug 2017 21:20:58 +0000:
Hello ejsfreitas, Thank you for providing the additional information. My apologies for the delay in response. We are still investigating this behavior but have unfortunately not yet determined the cause of it. Thanks, Mark

ejsfreitas commented on Tue, 29 Aug 2017 09:33:53 +0000:
Hi Mark, Did you have the chance to check this issue? Thanks!

ejsfreitas commented on Thu, 3 Aug 2017 09:07:12 +0000:
Hello Mark, sorry for the delay. I uploaded a new diagnostic.data with the parameters you asked for, using the upload portal. Thanks, Emanuel

mark.agarunov commented on Fri, 7 Jul 2017 21:32:14 +0000:
Hello ejsfreitas, Thank you for providing this data. Unfortunately, due to the size of the data, it looks like it was truncated to a short timeframe. If possible, please increase the diagnostic data size and reduce the sampling rate to 10 seconds with the following parameters:
diagnosticDataCollectionDirectorySizeMB=1000
diagnosticDataCollectionPeriodMillis=10000
I've generated an upload portal so that you can send us this data. Thanks, Mark

ejsfreitas commented on Mon, 3 Jul 2017 09:19:23 +0000:
Hello Mark, I'm sorry for the late response. I was trying to replicate the problem in our lab environment because I'm not sure about the implications of enabling that flag in production. I attached (heapProfilingEnabled.tar.gz) the information that you asked for. As you can see, it's already using 1072.0 MiB (I configured 1 GB). I know that the difference is very small, but I hope it can help you. Meanwhile I will leave this instance running to see if it keeps growing.

mark.agarunov commented on Tue, 27 Jun 2017 22:06:03 +0000:
Hello ejsfreitas, Thank you for providing this data. It appears that this may be due to a memory leak in mongod. To investigate what may be causing this leak, I'd like to request heap profiler data from mongod. To obtain this, please do the following:
1. Start mongod with the additional flag --setParameter heapProfilingEnabled=true
2. Run mongod until reaching the high memory usage condition.
3. Archive and upload the $dbpath/diagnostic.data so that we can examine the data.
4. Provide the complete mongod logs while this issue is present.
This should provide some insight into which component is causing this leak. Thanks, Mark

ejsfreitas commented on Tue, 27 Jun 2017 14:56:43 +0000:
Hello Bruce, I attached the diagnostic.data directory. Thanks for your help. Kind regards, Emanuel

bruce.lucas@10gen.com commented on Tue, 27 Jun 2017 14:10:46 +0000:
Hi Emanuel, In order for us to continue diagnosing this, can you please archive and attach to this ticket the content of the $dbpath/diagnostic.data directory? Thanks, Bruce
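The diagnostic settings requested in the thread (the heap profiler flag and the extended FTDC collection parameters) can also be combined in the server's YAML configuration rather than passed on the command line. The parameter names below are the ones quoted in the comments; placing them under a setParameter section is a sketch, not the exact file the reporter used:

```yaml
# Sketch: diagnostic settings from the ticket, expressed as a
# mongod.conf fragment instead of repeated --setParameter flags.
setParameter:
  heapProfilingEnabled: true                    # heap profiler data for the leak investigation
  diagnosticDataCollectionDirectorySizeMB: 1000 # larger FTDC window so data is not truncated
  diagnosticDataCollectionPeriodMillis: 10000   # sample every 10 seconds
```

After running until the high-memory condition reproduces, the $dbpath/diagnostic.data directory is what gets archived and uploaded, as Bruce and Mark describe above.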



Status

Closed

