fix possible block db breakage during re-index

When re-indexing, there are a few cases where garbage data may be skipped in
the block files. In these cases, the indices are correctly written to the index
db; however, the pointer to the next write position in the current block
file is calculated by adding up the sizes of the valid blocks found.

As a result, when the re-index is finished, the index db is correct for all
existing blocks, but the next block will be written to an incorrect offset,
likely overwriting existing blocks.
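
To make the arithmetic concrete, here is a minimal illustrative C++ sketch;
the offsets and sizes are made up and are not taken from the patch:

    #include <cstdio>

    int main()
    {
        // Hypothetical blk file layout seen during a re-index:
        //   block A: offset    0, 1000 bytes
        //   garbage: offset 1000,  500 bytes (skipped by the re-index)
        //   block B: offset 1500, 2000 bytes
        const unsigned int sizeA = 1000;
        const unsigned int posB = 1500, sizeB = 2000;

        // Old accounting: next write position = sum of the valid block sizes.
        const unsigned int nextBySum = sizeA + sizeB;   // 3000

        // Actual end of the data in the file: end of the last block written.
        const unsigned int realEnd = posB + sizeB;      // 3500

        // The sum is 500 bytes short of the real end, so the first block
        // written after the re-index would start inside block B and
        // overwrite its tail.
        std::printf("sum-based offset: %u, real end of file data: %u\n",
                    nextBySum, realEnd);
        return 0;
    }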

Rather than using the sum of the sizes of all valid blocks to determine the next
write position, use the end of the last block written to the file. Don't assume that
the current block is the last one in the file, since they may be read
out-of-order.

Rebased-From: bb6acff079
Github-Pull: #5864
Author: Cory Fields
Committer: Wladimir J. van der Laan
parent 200f29363b
commit 002c8a2411

@@ -2362,8 +2362,11 @@ bool FindBlockPos(CValidationState &state, CDiskBlockPos &pos, unsigned int nAdd
     }
 
     nLastBlockFile = nFile;
-    vinfoBlockFile[nFile].nSize += nAddSize;
     vinfoBlockFile[nFile].AddBlock(nHeight, nTime);
+    if (fKnown)
+        vinfoBlockFile[nFile].nSize = std::max(pos.nPos + nAddSize, vinfoBlockFile[nFile].nSize);
+    else
+        vinfoBlockFile[nFile].nSize += nAddSize;
 
     if (!fKnown) {
         unsigned int nOldChunks = (pos.nPos + BLOCKFILE_CHUNK_SIZE - 1) / BLOCKFILE_CHUNK_SIZE;
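
For reference, a minimal standalone sketch of the corrected bookkeeping. The
names nSize, nPos, nAddSize and fKnown mirror the patch above, but the block
list, its out-of-order layout and the driver loop are hypothetical, added only
to show that taking the maximum of the block end positions yields the true end
of the file regardless of the order in which blocks are read:

    #include <algorithm>
    #include <cstdio>

    int main()
    {
        // Hypothetical (offset, size) pairs for block records already present
        // in one blk file, listed out of order to mimic out-of-order reads.
        const unsigned int blocks[][2] = { {1500, 2000}, {0, 1000} };

        unsigned int nSize = 0;   // stands in for vinfoBlockFile[nFile].nSize
        const bool fKnown = true; // re-index: the block's position is already known

        for (const auto& block : blocks) {
            const unsigned int nPos = block[0];     // pos.nPos in the patch
            const unsigned int nAddSize = block[1]; // size of this block record
            if (fKnown)
                // Advance to the end of the furthest block seen so far,
                // rather than keeping a running sum of block sizes.
                nSize = std::max(nPos + nAddSize, nSize);
            else
                nSize += nAddSize; // normal operation: append at the current end
        }

        // Prints 3500: the end of the last block in the file, whatever the
        // order in which the blocks were encountered.
        std::printf("next write position: %u\n", nSize);
        return 0;
    }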
