Operational Defect Database

BugZero found this defect 2460 days ago.

MongoDB | 389796

[SERVER-29435] Latencies in Top exclude lock acquisition time

Last update date:


Affected products:

MongoDB Server

Affected releases:

No affected releases provided.

Fixed releases:

No fixed releases provided.



Due to the introduction of the AutoStatsTracker for reporting latencies to Top in SERVER-22541 (in particular, see commit 584ca76de9e), these latencies now exclude time spent acquiring locks. The issue can be demonstrated by applying the following patch, which adds a sleep to simulate super long lock acquisition times:

diff --git a/src/mongo/db/db_raii.cpp b/src/mongo/db/db_raii.cpp
index 9098e3b..977d291 100644
--- a/src/mongo/db/db_raii.cpp
+++ b/src/mongo/db/db_raii.cpp
@@ -107,6 +107,9 @@ AutoStatsTracker::~AutoStatsTracker() {
 AutoGetCollectionForRead::AutoGetCollectionForRead(OperationContext* opCtx,
                                                    const NamespaceString& nss,
                                                    AutoGetCollection::ViewMode viewMode) {
+    // Simulate a 2 second lock acquisition.
+    sleepmillis(2000);
+
     _autoColl.emplace(opCtx, nss, MODE_IS, MODE_IS, viewMode);  // Note: this can yield.

Then run the following:

> db.c.drop()
true
> db.c.insert({_id: 1})
WriteResult({ "nInserted" : 1 })
> db.c.aggregate([{$collStats: {latencyStats: {}}}]).pretty()
{
    "ns" : "test.c",
    "localTime" : ISODate("2017-06-02T21:17:22.267Z"),
    "latencyStats" : {
        "reads" : { "latency" : NumberLong(163), "ops" : NumberLong(2) },
        "writes" : { "latency" : NumberLong(143438), "ops" : NumberLong(1) },
        "commands" : { "latency" : NumberLong(0), "ops" : NumberLong(0) }
    }
}
> db.c.find()
{ "_id" : 1 }
> db.c.aggregate([{$collStats: {latencyStats: {}}}]).pretty()
{
    "ns" : "test.c",
    "localTime" : ISODate("2017-06-02T21:17:35.243Z"),
    "latencyStats" : {
        "reads" : { "latency" : NumberLong(401), "ops" : NumberLong(4) },
        "writes" : { "latency" : NumberLong(143438), "ops" : NumberLong(1) },
        "commands" : { "latency" : NumberLong(0), "ops" : NumberLong(0) }
    }
}

You should observe that the find commands take a few seconds to return, but the cumulative read latency reported in $collStats only increases by tens or hundreds of microseconds. Note that latency reported in slow query log lines is not affected by this bug.
Re-using the example above, the log line for the find command reports a latency of 2000ms, which of course is dominated by the simulated 2 second lock acquisition:

2017-06-02T17:18:01.340-0400 I COMMAND [conn1] command test.c appName: "MongoDB Shell" command: find { find: "c", filter: {}, $db: "test" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:100 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_msg 2000ms

Top User Comments

david.storch commented on Wed, 14 Jun 2017 23:02:20 +0000:
I ended up fixing this as part of the change for SERVER-29304. The fix was to use the CurOp timer for Top, so that our various debug mechanisms which report latencies all use the same timer. The CurOp timer is now used by the slow query logs, the profiler, db.currentOp(), Top, and operation latency histograms. I'm resolving this ticket as a duplicate.

david.storch commented on Mon, 5 Jun 2017 14:18:13 +0000:
charlie.swanson, pretty sure, yeah. Unfortunately, CurOp::ensureStarted() is used for reporting latency in the log lines, but we have a separate Timer instance for reporting latencies in Top. Prior to this patch, the Timer was constructed before the AutoGetCollectionForRead. I'm not sure whether there is a good reason why we can't use the same timing code for both CurOp and Top, but I wasn't planning on changing it as part of fixing this issue.

charlie.swanson commented on Mon, 5 Jun 2017 13:48:14 +0000:
david.storch, are you sure it was that commit? It looks like both before and after that commit we acquire the collection lock before calling CurOp::ensureStarted(), which as far as I can tell is responsible for the latency reporting.
