Operational Defect Database

BugZero found this defect 2435 days ago.

MongoDB | 398402

[SERVER-29868] Swap keeps growing even though there is plenty of available RAM

Last update date:


Affected products:

MongoDB Server

Affected releases:

No affected releases provided.

Fixed releases:

No fixed releases provided.



We have a MongoDB sharded cluster consisting of one router server, one config server, and two shard servers. Swap usage on the shard servers keeps growing even though there is plenty of available RAM, at a rate of tens of megabytes per week.

Shard server system: Ubuntu 16, 4-core 3.4 GHz CPU, 16 GB RAM, MongoDB 3.4.2 Community edition.

There are no errors in the MongoDB log file, and swap usage on the router and config servers is always zero. Please let me know anything else you need. I appreciate your support! John Wang
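To confirm which process actually owns the swapped-out pages, the per-process VmSwap counters in /proc can be summed. This is a minimal Linux-only sketch (it assumes a standard /proc layout; the helper names are illustrative, not a MongoDB tool):

```python
import glob
import re

def vm_swap_kb(status_text):
    """Parse the VmSwap line from a /proc/<pid>/status blob; return kB, or 0 if absent."""
    m = re.search(r"^VmSwap:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else 0

def swap_by_process():
    """Return {pid: swap_kb} for all readable processes, largest consumers first."""
    usage = {}
    for path in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(path) as f:
                kb = vm_swap_kb(f.read())
        except OSError:
            continue  # process exited or is not readable
        if kb:
            usage[path.split("/")[2]] = kb
    return dict(sorted(usage.items(), key=lambda kv: -kv[1]))
```

Running `swap_by_process()` on an affected shard server would show whether the growing swap is attributed to the mongod process or to something else on the host.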

Top User Comments

john_wang commented on Wed, 2 Aug 2017 21:51:43 +0000: Hi Ramon, thanks for the update! Best regards, John Wang

ramon.fernandez commented on Wed, 2 Aug 2017 21:01:35 +0000: The contents of diagnostic.data are explained in this comment, which also points to the source code if you're interested in the details of the format (which is specially formatted and compressed). Unfortunately I'm not aware of any publicly available tools to read and parse the diagnostic.data output, but the details are public, so a tool can be written for it. Regards, Ramón.

john_wang commented on Thu, 29 Jun 2017 18:37:09 +0000: Hi Mark, I agree with your point. Most of the swap space is used by MongoDB; I suspect the kernel is swapping out old mapped files belonging to MongoDB. What tool can read the diagnostic.data file? Thanks for your support! John Wang

mark.agarunov commented on Wed, 28 Jun 2017 19:41:44 +0000: Hello john_wang, thank you for providing this data. Looking it over, I don't believe the swap usage is related to mongod itself. This is likely the Linux kernel swapping out memory that hasn't been recently accessed. The kernel will generally not fill swap to the point of causing an out-of-memory condition; however, you could try setting the vm.swappiness value lower (Ubuntu recommends 10 for servers), which should cause the system to use swap only when needed rather than swapping out infrequently accessed pages. Please note that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. For MongoDB-related support discussion, please post on the mongodb-user group or Stack Overflow with the mongodb tag; a question like this that involves more discussion would be best posted on the mongodb-user group. Thanks, Mark

john_wang commented on Wed, 28 Jun 2017 03:20:42 +0000: Hi Mark, I appreciate your support! The vm.swappiness value is 60, which is the default in Ubuntu. I uploaded the two diagnostic.data.1&2.rar files.
What tools can read the diagnostic.data? Thanks for your time. We have 1,000 clients continuously writing about 3 MB/minute of data into the MongoDB cluster. If swap keeps growing until it is full, could that cause an out-of-memory problem on the shard server, since there would be no more space to swap to? Please let me know anything you need. Thanks so much! Best, John Wang

john_wang commented on Wed, 28 Jun 2017 03:03:54 +0000: diagnostic data in the primary server

mark.agarunov commented on Tue, 27 Jun 2017 21:58:26 +0000: Hello john_wang, thank you for the report. From your description of the issue, I believe this may be due to the vm.swappiness value causing the Linux kernel to swap out pages that haven't been recently accessed. However, to get a better idea of what may be causing this, please archive and upload the $dbPath/diagnostic.data directory. This should give a bit more information about what is causing this. Thanks, Mark
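The vm.swappiness tuning discussed in the comments can be checked programmatically. A minimal sketch, assuming a standard Linux /proc layout (the function name is illustrative):

```python
def read_swappiness(path="/proc/sys/vm/swappiness"):
    """Return the kernel's current vm.swappiness setting as an int (0 = avoid swap,
    higher values = swap more aggressively; Ubuntu's default is 60)."""
    with open(path) as f:
        return int(f.read().strip())

# To lower it to the server-recommended 10:
#   sudo sysctl -w vm.swappiness=10                          # takes effect immediately
#   echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf   # persists across reboots
```

On the reporter's setup this would return 60, the Ubuntu default mentioned in the thread.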

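Since no off-the-shelf reader for diagnostic.data existed at the time, here is a sketch of the first decoding step, based on the publicly described FTDC format Ramón points to: the files are a stream of BSON documents, and each type-1 metric document carries a binary payload whose first four bytes give the little-endian uncompressed length, followed by a zlib stream (a reference BSON document plus delta-packed samples). The function name is illustrative, and full BSON/delta decoding is not shown:

```python
import struct
import zlib

def inflate_ftdc_chunk(payload):
    """Inflate the binary 'data' field of an FTDC type-1 metric chunk.

    Layout (per the public format): a 4-byte little-endian uncompressed
    length, then a zlib stream. Returns the raw decompressed bytes, which
    still need BSON and delta decoding to recover the metric samples.
    """
    (expected_len,) = struct.unpack("<I", payload[:4])
    raw = zlib.decompress(payload[4:])
    if len(raw) != expected_len:
        raise ValueError("corrupt FTDC chunk: length mismatch")
    return raw
```

A synthetic round-trip (compressing known bytes and inflating them) exercises the framing without needing a real diagnostic.data file.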


