Operational Defect Database

BugZero found this defect 2412 days ago.

MongoDB | 406322

[SERVER-30238] High disk usage and query blocking on database drops

Affected products:

MongoDB Server

Affected releases:

3.4.6

Fixed releases:

No fixed releases provided.



This case is being raised from WT-3207: in the latest 3.4.6 release, the bottleneck appears to have moved from CPU to disk. Image 1: the database is dropped (09:02:47) and queries and updates drop off. Image 2: recovery; at 09:04:45 the primary recovers and starts processing requests. Note the listDatabases command that returned only after a long period of time. A new run with diagnostic.data to follow. Can I have a secure link to submit it to?
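As the comment thread below works out, the stall coincided with the filesystem's "discard" mount option, which issues TRIM commands to the device as the dropped database's files are removed. A quick way to spot the option is to scan the mount table. This is a sketch: the helper name is ours, and the sample line fed to it is the mount line reported later in this thread.

```shell
# Print mount points whose mount options include "discard".
# Reads /proc/mounts-style lines on stdin so it can be tried on a sample.
check_discard() {
  awk '$4 ~ /(^|,)discard(,|$)/ {print $2}'
}

# The mount line reported later in this thread:
echo "/dev/nvme0n1 /srv/mongodb xfs rw,noatime,discard 0 0" | check_discard
# prints: /srv/mongodb

# Against the live system:
# check_discard < /proc/mounts
```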

Top User Comments

richardyates commented on Mon, 24 Jul 2017 08:02:09 +0000:
Hi Bruce, Bad news I'm afraid. After disabling discard on both primary and secondary nodes in a replica set, I deleted another database around the same size as before (we do this on a weekly basis) and it still stalled the primary. The good news is it only stalled it for 30 seconds or so, which is the quickest I've seen, so I think disabling discard has helped. I've uploaded another metrics.interim file to the secure URL provided above if you're able to take another look? Thanks, Richard.

bruce.lucas@10gen.com commented on Fri, 21 Jul 2017 15:48:22 +0000:
Hi Richard, We will indeed want to add this to our production notes once we have confirmation from you that it was the cause of your issue. I'll put this ticket in "waiting for customer" state pending that confirmation. Thanks, Bruce

richardyates commented on Fri, 21 Jul 2017 15:26:11 +0000:
Hi Bruce, I'm one of David's colleagues. Thanks for the pointers around discard and SSD TRIM. I've done some more testing, and forcing TRIM to run manually via fstrim caused identical behavior to what we've been reporting, so it definitely looks like you're onto something. We'll disable discard over the coming days (we have multiple replica sets, so we need a little time to demote primaries, remount, etc.) and let you know if that appears to be the cause. Since it was AWS's recommendation to enable discard/TRIM support for instance volumes like we're using (although I can't find that reference now, so they may have revised their advice!), can I suggest you amend your production notes to include specific mention of not using the discard mount option. We'll now plan to run fstrim manually periodically during a maintenance window to avoid any production impact, and would probably recommend others do the same.
David also found this relevant thread from mongodb-user, which sounds similar: https://groups.google.com/forum/#!topic/mongodb-user/Mj0x6m-02Ms

bruce.lucas@10gen.com commented on Fri, 21 Jul 2017 14:38:07 +0000:
Hi David, The diagnostic data that you uploaded showed about 31 GB of data being written to the device during the drop operation, which is about the compressed size of the db, that is, its size on disk, as you reported. However, the metrics showed that the storage engine was not doing any i/o. I suspect that the "discard" mount option may be responsible for the i/o. This option causes trim commands to be issued to the device to discard unused blocks and, per the referenced article, can have a severe performance impact. I believe this is causing the file removes involved in the dropDatabase operation to be very slow and to generate a lot of i/o. Would it be possible to re-mount the filesystem without the "discard" option and test whether performance improves for dropDatabase? Thanks, Bruce

david.henderson@triggeredmessaging.com commented on Fri, 21 Jul 2017 14:02:47 +0000:
/dev/nvme0n1 on /srv/mongodb type xfs (rw,noatime,discard)
On i3.2xlarge on AWS, Ubuntu 14.04.5 LTS, kernel 4.4.0-78-generic

bruce.lucas@10gen.com commented on Fri, 21 Jul 2017 13:53:57 +0000:
Thanks for that info David. Can you tell us what filesystem you are using and what mount options?

david.henderson@triggeredmessaging.com commented on Fri, 21 Jul 2017 13:24:55 +0000:
It was ~26 GB compressed, around 107 collections, 2 indexes in each collection (_id, 1 compound index on _id, "data.c"). Estimated collection sizes: biggest collection 26.9 GB (uncompressed), 13.3 million docs; 3 more collections between 1 and 2.3 million docs; then the rest under 1 million each.

bruce.lucas@10gen.com commented on Fri, 21 Jul 2017 13:15:34 +0000:
Thanks David.
Can you tell us about the database being dropped: approximately how big is it (compressed size on disk, if you know), and how many collections and indexes?

david.henderson@triggeredmessaging.com commented on Fri, 21 Jul 2017 08:24:58 +0000:
Thanks - I've uploaded the relevant files for a new instance of the issue. It seems that this issue is much easier to trigger than the previous version in https://jira.mongodb.org/browse/WT-3207, which tended to require multiple drops.

thomas.schubert commented on Thu, 20 Jul 2017 17:33:09 +0000:
Hi david.henderson@triggeredmessaging.com, I've created a secure upload for you to use to provide the diagnostic.data. Thank you, Thomas
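The workaround the reporters settle on (remove the "discard" mount option and run fstrim manually during a maintenance window) could look like the following configuration fragments. The device, mount point, and filesystem type come from the mount line quoted above; the weekly cron schedule is an illustrative assumption, not something specified in the ticket.

```
# /etc/fstab -- MongoDB data volume remounted without "discard"
# (device and mount point as reported in this ticket)
/dev/nvme0n1  /srv/mongodb  xfs  rw,noatime  0  2

# /etc/cron.d/fstrim-mongodb -- run TRIM weekly during a maintenance window
# (schedule is illustrative; pick a low-traffic time for your deployment)
0 3 * * 0  root  /sbin/fstrim /srv/mongodb
```

With this setup, file deletes (including those issued by dropDatabase) no longer trigger synchronous TRIM traffic; unused blocks are reclaimed in batch by the scheduled fstrim run instead.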
