OutOfMemory error occurs when the cleaner runs

Hello,
I am using BDB JE 5.0.58 HA (a group of two nodes, with a 6 GB JVM heap on each node).
First, I put 15,000,000 records (10-byte keys, 8000-byte values) into the BDB group.
Then I updated some of these records; unfortunately, this results in the last xxxx.jdb files having to be cleaned.
At that point, I hit an OutOfMemory error.
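A minimal, non-HA sketch of this kind of workload (the path, database name and exact counts below are placeholders for illustration, not the poster's actual code): load many records with small keys and roughly 8 KB values, then overwrite a subset, which leaves obsolete versions in the log for the cleaner to reclaim.

//----------------------------------------------------------------------------------------------------------------------------
import java.io.File;

import com.sleepycat.bind.tuple.LongBinding;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class LoadAndUpdate {
    public static void main(String[] args) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(true);
        Environment env = new Environment(new File("/tmp/je-env"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        Database db = env.openDatabase(null, "testDb", dbConfig);

        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry value = new DatabaseEntry(new byte[8000]);

        /* Initial load: many records with large values. */
        for (long i = 0; i < 15000000L; i++) {
            LongBinding.longToEntry(i, key);
            db.put(null, key, value);
        }

        /*
         * Update a subset; the superseded versions become obsolete log
         * entries that the background cleaner must later reclaim.
         */
        for (long i = 0; i < 1000000L; i++) {
            LongBinding.longToEntry(i, key);
            db.put(null, key, value);
        }

        db.close();
        env.close();
    }
}
----------------------------------------------------------------------------------------------------------------------------//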

After debugging, I think the following code from VLSNIndex.java is suspicious.

//----------------------------------------------------------------------------------------------------------------------------
/*
 * Remove all VLSN->LSN mappings <= deleteEnd
 */
private void pruneDatabaseHead(VLSN deleteEnd,
                               long deleteFileNum,
                               Txn txn)
    throws DatabaseException {

    Cursor cursor = null;

    try {
        cursor = makeCursor(txn);

        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        if (!positionBeforeOrEqual(cursor, deleteEnd, key, data)) {
            /* Nothing to do. */
            return;
        }

        /* Remove this bucket and everything before this bucket. */

        /* Avoid fetching the bucket itself, since we don't need it. */
        final DatabaseEntry noData = new DatabaseEntry();
        noData.setPartial(0, 0, true);
        int deleteCount = 0;
        do {
            long keyValue = LongBinding.entryToLong(key);
            if (keyValue == VLSNRange.RANGE_KEY) {
                break;
            }

            OperationStatus status = cursor.delete();
            deleteCount++;
            if (status != OperationStatus.SUCCESS) {
                throw EnvironmentFailureException.unexpectedState
                    (envImpl, "Couldn't delete, got status of " + status +
                     " for delete of bucket " + keyValue +
                     " deleteEnd=" + deleteEnd);
            }
        } while (cursor.getPrev(key, noData, LockMode.DEFAULT) ==
                 OperationStatus.SUCCESS);
----------------------------------------------------------------------------------------------------------------------------//
The pruneDatabaseHead() method tries to delete all VLSN->LSN mappings <= deleteEnd.
In my case, deleteEnd is the last VLSN, so I think the mappings up to VLSN 15,000,000 will be removed by "OperationStatus status = cursor.delete();".
Because every delete acquires a lock in memory (approximately 500 bytes each), we end up with a huge number of locks taking up a lot of memory.
That is how I get the OutOfMemory error.
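For illustration, here is a hypothetical sketch (not JE's internal code, and the class and method names are made up) of the same pattern: a mass delete inside a single transaction, where every cursor.delete() adds another write lock that is only released when the transaction ends, so the lock table grows with the number of records deleted.

//----------------------------------------------------------------------------------------------------------------------------
import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.Transaction;

public class SingleTxnDelete {
    static void deleteAllInOneTxn(Environment env, Database db) {
        Transaction txn = env.beginTransaction(null, null);
        Cursor cursor = db.openCursor(txn, null);
        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        data.setPartial(0, 0, true);   /* do not fetch the record data */

        /*
         * Every successful delete adds another write lock to this one
         * transaction; none of them are released until commit or abort.
         */
        while (cursor.getNext(key, data, LockMode.DEFAULT) ==
               OperationStatus.SUCCESS) {
            cursor.delete();
        }
        cursor.close();
        txn.commit();                  /* locks are only released here */
    }
}
----------------------------------------------------------------------------------------------------------------------------//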

Thank you for your bug report and analysis. I have opened SR [#21786] to track this.

You are right and have identified a flaw in the implementation of this method. It does indeed try to do the whole deletion in a single transaction and is therefore vulnerable to accumulating a huge set of changes, which requires a lot of memory. We will certainly have to change this to be more bounded, and the deletion will have to be done as a series of transactions. The change is slightly more complicated than it looks, because we would have to walk the metadata to be deleted in the opposite order: at present it is removed from the end towards the beginning, and for this fix we would need to go from the beginning towards the end. That may have dependencies, so we will have to think it through and test it.
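A sketch of the general bounded-transaction pattern described above (an illustration of the idea, not the actual JE fix): delete in small batches, each batch in its own transaction, so the per-transaction lock footprint stays bounded instead of growing with the total number of records.

//----------------------------------------------------------------------------------------------------------------------------
import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;
import com.sleepycat.je.Transaction;

public class BatchedDelete {
    static void deleteInBatches(Environment env, Database db, int batchSize) {
        boolean more = true;
        while (more) {
            Transaction txn = env.beginTransaction(null, null);
            Cursor cursor = db.openCursor(txn, null);
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            data.setPartial(0, 0, true);   /* skip fetching record data */

            int deleted = 0;
            more = false;
            while (deleted < batchSize &&
                   cursor.getNext(key, data, LockMode.DEFAULT) ==
                   OperationStatus.SUCCESS) {
                cursor.delete();
                deleted++;
                more = true;               /* something was deleted, loop again */
            }
            cursor.close();
            txn.commit();                  /* releases this batch's locks */
        }
    }
}
----------------------------------------------------------------------------------------------------------------------------//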

The reason we have never noticed this before is that it can only happen when a very large amount of data has to be removed at once. The log cleaner is designed to operate in the background, constantly cleaning in small increments. This kind of burst of cleaning is unexpected. The VLSNIndex problem is a bug we will fix, but this kind of heavy log cleaning will certainly cause application performance problems, in that disk space is not reclaimed incrementally. Does it make sense to you that there was a burst of log cleaning? Is there some characteristic of your application that could cause this?

In the meantime, do you need to recover this environment? I think it may take a little while to work out a fix, and perhaps there is a way to force additional cleaning of your log so that the removal from the VLSNIndex also ends up being incremental. I would have to do some research to find out whether there is a workaround; let me know if you need one.
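For reference, one possible direction (an assumption on the editor's part, not a confirmed workaround for this particular bug) is JE's manual batch-cleaning pattern, where the application drives the cleaner itself so that log files are processed and reclaimed in small increments rather than in one large burst.

//----------------------------------------------------------------------------------------------------------------------------
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Environment;

public class IncrementalClean {
    static void cleanIncrementally(Environment env) {
        boolean anyCleaned = false;

        /* cleanLog() processes a limited amount of log per call. */
        while (env.cleanLog() > 0) {
            anyCleaned = true;
        }

        /*
         * A forced checkpoint allows the cleaned files to actually be
         * deleted, so disk space is reclaimed before the next round.
         */
        if (anyCleaned) {
            CheckpointConfig force = new CheckpointConfig();
            force.setForce(true);
            env.checkpoint(force);
        }
    }
}
----------------------------------------------------------------------------------------------------------------------------//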

Thank you for the bug report,

Linda

Tags: Database
