Merge #19478: Remove CTxMempool::mapLinks data structure member

296be8f58e Get rid of unused functions CTxMemPool::GetMemPoolChildren, CTxMemPool::GetMemPoolParents (Jeremy Rubin)
46d955d196 Remove mapLinks in favor of entry inlined structs with iterator type erasure (Jeremy Rubin)

Pull request description:

  Currently we have a peculiar data structure in the mempool called mapLinks. Its job is to track the in-pool children and parents of each transaction. This PR can primarily be understood and reviewed as a simple refactoring that removes this extra data structure, although it comes with a nice memory and performance improvement for free.

  mapLinks is particularly peculiar because removing it is not as simple as just moving its inner structure into the owning CTxMemPoolEntry. Because TxLinks (the struct storing the setEntries for parents and children) stores a txiter for each in-mempool parent or child, the TxLinks type is "aware" of the boost multi_index (mapTx) it comes from, which is in turn aware of the entry type stored in mapTx. Thus we used mapLinks to keep this entry-associated data in an entirely separate data structure, just to avoid the circular type reference that storing a txiter inside a CTxMemPoolEntry would cause.

  It turns out we can kill this circular reference by making use of the multi_index iterator_to function and std::reference_wrapper. This allows us to get rid of the mapLinks data structure and move ownership of the parent/child sets to the entries themselves.
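
  As a rough illustration of the pattern (a minimal sketch with simplified, hypothetical names such as `Entry`, `CompareByIdRef` and `Mempool`, not the actual mempool types): the entry stores reference_wrapper links, so its type never mentions the container's iterator, and iterator_to recovers an iterator when one is needed.

  ```cpp
  #include <boost/multi_index_container.hpp>
  #include <boost/multi_index/member.hpp>
  #include <boost/multi_index/ordered_index.hpp>
  #include <functional>
  #include <set>

  // Comparator templated on the wrapped type, so it can be used before Entry is complete.
  struct CompareByIdRef {
      template <typename T>
      bool operator()(const std::reference_wrapper<T>& a, const std::reference_wrapper<T>& b) const
      {
          return a.get().id < b.get().id;
      }
  };

  struct Entry {
      using EntryRef = std::reference_wrapper<const Entry>;
      int id;
      // The links live in the entry itself. They hold reference_wrappers rather
      // than iterators, so Entry never names the container type defined below:
      // no circular type reference.
      mutable std::set<EntryRef, CompareByIdRef> parents;
      mutable std::set<EntryRef, CompareByIdRef> children;
  };

  using Mempool = boost::multi_index_container<
      Entry,
      boost::multi_index::indexed_by<
          boost::multi_index::ordered_unique<
              boost::multi_index::member<Entry, int, &Entry::id>>>>;

  // When a real iterator is needed (e.g. to call Mempool::modify), a stored
  // reference converts back via iterator_to; conceptually a pointer conversion.
  Mempool::iterator ToIter(Mempool& pool, const Entry& e)
  {
      return pool.iterator_to(e);
  }
  ```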

  The benefit of this is good all around: any one of the reasons given below would make the change worthwhile on its own, and it doesn't make the code harder to reason about or worse in any respect (as far as I can tell, there's no tradeoff).

  ### Simpler ownership model
  We no longer have to consistency-check that mapLinks has a record for each CTxMemPoolEntry, and it is impossible for a mapLinks entry to outlive, or incorrectly die before, its CTxMemPoolEntry.

  ### Memory Usage
  We get rid of an O(Transactions)-sized map in the mempool, which is a long-lived data structure.

  ### Performance
  If you have a CTxMemPoolEntry, you immediately know the address of its children/parents, rather than having to do an O(log(Transactions)) lookup via mapLinks (which we do very often). We do it in *so many* places that a true benchmark has to look at a full running node, but it is easy enough to show an improvement here.
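
  For concreteness, the per-access cost being removed is the mapLinks lookup inside the old accessor, shown here lightly trimmed from the diff below; the replacement is a direct member read on the entry (shown as a comment, actual signatures are in the diff):

  ```cpp
  // Old: every parent/child access paid an O(log(Transactions)) mapLinks lookup,
  // each comparison being a txid comparison (CompareIteratorByHash).
  const CTxMemPool::setEntries& CTxMemPool::GetMemPoolParents(txiter entry) const
  {
      txlinksMap::const_iterator it = mapLinks.find(entry);
      assert(it != mapLinks.end());
      return it->second.parents;
  }

  // New: the links live on the entry, so the same information is one member read:
  //     const CTxMemPoolEntry::Parents& parents = entry->GetMemPoolParentsConst();
  ```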

  The ComplexMemPool benchmark serves as a good coherence check: we see the expected result, about 12.5% (1.14x) faster.
  ```
  Before:
  # Benchmark, evals, iterations, total, min, max, median
  ComplexMemPool, 5, 1, 1.40462, 0.277222, 0.285339, 0.279793

  After:
  # Benchmark, evals, iterations, total, min, max, median
  ComplexMemPool, 5, 1, 1.22586, 0.243831, 0.247076, 0.244596
  ```
  The ComplexMemPool benchmark only exercises addUnchecked and TrimToSize for 800 transactions. While this bench does a good job of hammering the relevant kinds of function, it doesn't test everything.

  Subbing in 5000 transactions shows that the advantage isn't completely wiped out by other asymptotic factors (this isn't the only bottleneck in growing the mempool); the improvement is only a bit smaller proportionally (10.8%, 1.12x), which adds evidence that this will be a good change for performance-minded users.

  ```
  # Benchmark, evals, iterations, total, min, max, median
  ComplexMemPool, 5, 1, 59.1321, 11.5919, 12.235, 11.7068

  # Benchmark, evals, iterations, total, min, max, median
  ComplexMemPool, 5, 1, 52.1307, 10.2641, 10.5206, 10.4306
  ```
  I don't think it's possible to come up with an example where a mapLinks-based design would perform better, but it's something for reviewers to consider.

  # Discussion
  ## Why mapLinks in the first place?

  I spoke with the author of mapLinks (sdaftuar) a while back, and my recollection from our conversation was that it was implemented because he did not know how to resolve the circular dependency at the time, and there was no other reason for making it a separate map.

  ## Is iterator_to weird?

  iterator_to is expressly for this purpose; see https://www.boost.org/doc/libs/1_51_0/libs/multi_index/doc/tutorial/indices.html#iterator_to

  >  iterator_to provides a way to retrieve an iterator to an element from a pointer to the element, thus making iterators and pointers interchangeable for the purposes of element pointing (not so for traversal) in many situations. This notwithstanding, it is not the aim of iterator_to to promote the usage of pointers as substitutes for real iterators: the latter are specifically designed for handling the elements of a container, and not only benefit from the iterator orientation of container interfaces, but are also capable of exposing many more programming bugs than raw pointers, both at compile and run time. iterator_to is thus meant to be used in scenarios where access via iterators is not suitable or desirable:
  >
  >     - Interoperability with preexisting APIs based on pointers or references.
  >     - Publication of pointer-based interfaces (for instance, when designing a C-compatible library).
  >     - The exposure of pointers in place of iterators can act as a type erasure barrier effectively decoupling the user of the code from the implementation detail of which particular container is being used. Similar techniques, like the famous Pimpl idiom, are used in large projects to reduce dependencies and build times.
  >     - Self-referencing contexts where an element acts upon its owner container and no iterator to itself is available.

  In other words, by the last reason given, iterator_to is the perfect tool for the job. Under the hood it should just be a simple pointer conversion with no major runtime overhead (depending on whether the function call is inlined).
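
  A minimal, self-contained example (toy `Item`/`ItemSet` types, not Bitcoin Core code) of what iterator_to does:

  ```cpp
  #include <boost/multi_index_container.hpp>
  #include <boost/multi_index/member.hpp>
  #include <boost/multi_index/ordered_index.hpp>
  #include <cassert>
  #include <string>

  struct Item {
      int id;
      std::string name;
  };

  using ItemSet = boost::multi_index_container<
      Item,
      boost::multi_index::indexed_by<
          boost::multi_index::ordered_unique<
              boost::multi_index::member<Item, int, &Item::id>>>>;

  int main()
  {
      ItemSet items;
      ItemSet::iterator it = items.insert({1, "foo"}).first;

      // Given only a reference to an element already stored in the container...
      const Item& ref = *it;

      // ...iterator_to recovers an iterator to it without any key lookup.
      ItemSet::iterator it2 = items.iterator_to(ref);
      assert(it2 == it);
  }
  ```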

  Edit by laanwj: removed at sign from the description

ACKs for top commit:
  jonatack:
    re-ACK 296be8f per `git range-diff ab338a19 3ba1665 296be8f`, sanity check gcc 10.2 debug build is clean.
  hebasto:
    re-ACK 296be8f58e, only rebased since my [previous](https://github.com/bitcoin/bitcoin/pull/19478#pullrequestreview-482400727) review (verified with `git range-diff`).

Tree-SHA512: f5c30a4936fcde6ae32a02823c303b3568a747c2681d11f87df88a149f984a6d3b4c81f391859afbeb68864ef7f6a3d8779f74a58e3de701b3d51f78e498682e

@ -84,7 +84,7 @@ struct CompareTxIterByAncestorCount {
{
if (a->GetCountWithAncestors() != b->GetCountWithAncestors())
return a->GetCountWithAncestors() < b->GetCountWithAncestors();
return CTxMemPool::CompareIteratorByHash()(a, b);
return CompareIteratorByHash()(a, b);
}
};

@ -1778,11 +1778,11 @@ void static ProcessGetData(CNode& pfrom, const CChainParams& chainparams, CConnm
LOCK(mempool.cs);
auto txiter = mempool.GetIter(tx->GetHash());
if (txiter) {
const CTxMemPool::setEntries& parents = mempool.GetMemPoolParents(*txiter);
const CTxMemPoolEntry::Parents& parents = (*txiter)->GetMemPoolParentsConst();
parent_ids_to_add.reserve(parents.size());
for (CTxMemPool::txiter parent_iter : parents) {
if (parent_iter->GetTime() > now - UNCONDITIONAL_RELAY_DELAY) {
parent_ids_to_add.push_back(parent_iter->GetTx().GetHash());
for (const CTxMemPoolEntry& parent : parents) {
if (parent.GetTime() > now - UNCONDITIONAL_RELAY_DELAY) {
parent_ids_to_add.push_back(parent.GetTx().GetHash());
}
}
}

@ -463,9 +463,9 @@ static void entryToJSON(const CTxMemPool& pool, UniValue& info, const CTxMemPool
UniValue spent(UniValue::VARR);
const CTxMemPool::txiter& it = pool.mapTx.find(tx.GetHash());
const CTxMemPool::setEntries& setChildren = pool.GetMemPoolChildren(it);
for (CTxMemPool::txiter childiter : setChildren) {
spent.push_back(childiter->GetTx().GetHash().ToString());
const CTxMemPoolEntry::Children& children = it->GetMemPoolChildrenConst();
for (const CTxMemPoolEntry& child : children) {
spent.push_back(child.GetTx().GetHash().ToString());
}
info.pushKV("spentby", spent);

@ -55,45 +55,45 @@ size_t CTxMemPoolEntry::GetTxSize() const
}
// Update the given tx for any in-mempool descendants.
// Assumes that setMemPoolChildren is correct for the given tx and all
// Assumes that CTxMemPool::m_children is correct for the given tx and all
// descendants.
void CTxMemPool::UpdateForDescendants(txiter updateIt, cacheMap &cachedDescendants, const std::set<uint256> &setExclude)
{
setEntries stageEntries, setAllDescendants;
stageEntries = GetMemPoolChildren(updateIt);
CTxMemPoolEntry::Children stageEntries, descendants;
stageEntries = updateIt->GetMemPoolChildrenConst();
while (!stageEntries.empty()) {
const txiter cit = *stageEntries.begin();
setAllDescendants.insert(cit);
stageEntries.erase(cit);
const setEntries &setChildren = GetMemPoolChildren(cit);
for (txiter childEntry : setChildren) {
cacheMap::iterator cacheIt = cachedDescendants.find(childEntry);
const CTxMemPoolEntry& descendant = *stageEntries.begin();
descendants.insert(descendant);
stageEntries.erase(descendant);
const CTxMemPoolEntry::Children& children = descendant.GetMemPoolChildrenConst();
for (const CTxMemPoolEntry& childEntry : children) {
cacheMap::iterator cacheIt = cachedDescendants.find(mapTx.iterator_to(childEntry));
if (cacheIt != cachedDescendants.end()) {
// We've already calculated this one, just add the entries for this set
// but don't traverse again.
for (txiter cacheEntry : cacheIt->second) {
setAllDescendants.insert(cacheEntry);
descendants.insert(*cacheEntry);
}
} else if (!setAllDescendants.count(childEntry)) {
} else if (!descendants.count(childEntry)) {
// Schedule for later processing
stageEntries.insert(childEntry);
}
}
}
// setAllDescendants now contains all in-mempool descendants of updateIt.
// descendants now contains all in-mempool descendants of updateIt.
// Update and add to cached descendant map
int64_t modifySize = 0;
CAmount modifyFee = 0;
int64_t modifyCount = 0;
for (txiter cit : setAllDescendants) {
if (!setExclude.count(cit->GetTx().GetHash())) {
modifySize += cit->GetTxSize();
modifyFee += cit->GetModifiedFee();
for (const CTxMemPoolEntry& descendant : descendants) {
if (!setExclude.count(descendant.GetTx().GetHash())) {
modifySize += descendant.GetTxSize();
modifyFee += descendant.GetModifiedFee();
modifyCount++;
cachedDescendants[updateIt].insert(cit);
cachedDescendants[updateIt].insert(mapTx.iterator_to(descendant));
// Update ancestor state for each descendant
mapTx.modify(cit, update_ancestor_state(updateIt->GetTxSize(), updateIt->GetModifiedFee(), 1, updateIt->GetSigOpCost()));
mapTx.modify(mapTx.iterator_to(descendant), update_ancestor_state(updateIt->GetTxSize(), updateIt->GetModifiedFee(), 1, updateIt->GetSigOpCost()));
}
}
mapTx.modify(updateIt, update_descendant_state(modifySize, modifyFee, modifyCount));
@ -119,7 +119,7 @@ void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashes
// Iterate in reverse, so that whenever we are looking at a transaction
// we are sure that all in-mempool descendants have already been processed.
// This maximizes the benefit of the descendant cache and guarantees that
// setMemPoolChildren will be updated, an assumption made in
// CTxMemPool::m_children will be updated, an assumption made in
// UpdateForDescendants.
for (const uint256 &hash : reverse_iterate(vHashesToUpdate)) {
// calculate children from mapNextTx
@ -128,8 +128,8 @@ void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashes
continue;
}
auto iter = mapNextTx.lower_bound(COutPoint(hash, 0));
// First calculate the children, and update setMemPoolChildren to
// include them, and update their setMemPoolParents to include this tx.
// First calculate the children, and update CTxMemPool::m_children to
// include them, and update their CTxMemPoolEntry::m_parents to include this tx.
// we cache the in-mempool children to avoid duplicate updates
{
const auto epoch = GetFreshEpoch();
@ -151,7 +151,7 @@ void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashes
bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString, bool fSearchForParents /* = true */) const
{
setEntries parentHashes;
CTxMemPoolEntry::Parents staged_ancestors;
const CTransaction &tx = entry.GetTx();
if (fSearchForParents) {
@ -161,8 +161,8 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
for (unsigned int i = 0; i < tx.vin.size(); i++) {
Optional<txiter> piter = GetIter(tx.vin[i].prevout.hash);
if (piter) {
parentHashes.insert(*piter);
if (parentHashes.size() + 1 > limitAncestorCount) {
staged_ancestors.insert(**piter);
if (staged_ancestors.size() + 1 > limitAncestorCount) {
errString = strprintf("too many unconfirmed parents [limit: %u]", limitAncestorCount);
return false;
}
@ -172,16 +172,17 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
// If we're not searching for parents, we require this to be an
// entry in the mempool already.
txiter it = mapTx.iterator_to(entry);
parentHashes = GetMemPoolParents(it);
staged_ancestors = it->GetMemPoolParentsConst();
}
size_t totalSizeWithAncestors = entry.GetTxSize();
while (!parentHashes.empty()) {
txiter stageit = *parentHashes.begin();
while (!staged_ancestors.empty()) {
const CTxMemPoolEntry& stage = staged_ancestors.begin()->get();
txiter stageit = mapTx.iterator_to(stage);
setAncestors.insert(stageit);
parentHashes.erase(stageit);
staged_ancestors.erase(stage);
totalSizeWithAncestors += stageit->GetTxSize();
if (stageit->GetSizeWithDescendants() + entry.GetTxSize() > limitDescendantSize) {
@ -195,13 +196,15 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
return false;
}
const setEntries & setMemPoolParents = GetMemPoolParents(stageit);
for (txiter phash : setMemPoolParents) {
const CTxMemPoolEntry::Parents& parents = stageit->GetMemPoolParentsConst();
for (const CTxMemPoolEntry& parent : parents) {
txiter parent_it = mapTx.iterator_to(parent);
// If this is a new ancestor, add it.
if (setAncestors.count(phash) == 0) {
parentHashes.insert(phash);
if (setAncestors.count(parent_it) == 0) {
staged_ancestors.insert(parent);
}
if (parentHashes.size() + setAncestors.size() + 1 > limitAncestorCount) {
if (staged_ancestors.size() + setAncestors.size() + 1 > limitAncestorCount) {
errString = strprintf("too many unconfirmed ancestors [limit: %u]", limitAncestorCount);
return false;
}
@ -213,10 +216,10 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
void CTxMemPool::UpdateAncestorsOf(bool add, txiter it, setEntries &setAncestors)
{
setEntries parentIters = GetMemPoolParents(it);
CTxMemPoolEntry::Parents parents = it->GetMemPoolParents();
// add or remove this tx as a child of each parent
for (txiter piter : parentIters) {
UpdateChild(piter, it, add);
for (const CTxMemPoolEntry& parent : parents) {
UpdateChild(mapTx.iterator_to(parent), it, add);
}
const int64_t updateCount = (add ? 1 : -1);
const int64_t updateSize = updateCount * it->GetTxSize();
@ -242,9 +245,9 @@ void CTxMemPool::UpdateEntryForAncestors(txiter it, const setEntries &setAncesto
void CTxMemPool::UpdateChildrenForRemoval(txiter it)
{
const setEntries &setMemPoolChildren = GetMemPoolChildren(it);
for (txiter updateIt : setMemPoolChildren) {
UpdateParent(updateIt, it, false);
const CTxMemPoolEntry::Children& children = it->GetMemPoolChildrenConst();
for (const CTxMemPoolEntry& updateIt : children) {
UpdateParent(mapTx.iterator_to(updateIt), it, false);
}
}
@ -257,9 +260,9 @@ void CTxMemPool::UpdateForRemoveFromMempool(const setEntries &entriesToRemove, b
// updateDescendants should be true whenever we're not recursively
// removing a tx and all its descendants, eg when a transaction is
// confirmed in a block.
// Here we only update statistics and not data in mapLinks (which
// we need to preserve until we're finished with all operations that
// need to traverse the mempool).
// Here we only update statistics and not data in CTxMemPool::Parents
// and CTxMemPoolEntry::Children (which we need to preserve until we're
// finished with all operations that need to traverse the mempool).
for (txiter removeIt : entriesToRemove) {
setEntries setDescendants;
CalculateDescendants(removeIt, setDescendants);
@ -282,24 +285,26 @@ void CTxMemPool::UpdateForRemoveFromMempool(const setEntries &entriesToRemove, b
// should be a bit faster.
// However, if we happen to be in the middle of processing a reorg, then
// the mempool can be in an inconsistent state. In this case, the set
// of ancestors reachable via mapLinks will be the same as the set of
// ancestors whose packages include this transaction, because when we
// add a new transaction to the mempool in addUnchecked(), we assume it
// has no children, and in the case of a reorg where that assumption is
// false, the in-mempool children aren't linked to the in-block tx's
// until UpdateTransactionsFromBlock() is called.
// of ancestors reachable via GetMemPoolParents()/GetMemPoolChildren()
// will be the same as the set of ancestors whose packages include this
// transaction, because when we add a new transaction to the mempool in
// addUnchecked(), we assume it has no children, and in the case of a
// reorg where that assumption is false, the in-mempool children aren't
// linked to the in-block tx's until UpdateTransactionsFromBlock() is
// called.
// So if we're being called during a reorg, ie before
// UpdateTransactionsFromBlock() has been called, then mapLinks[] will
// differ from the set of mempool parents we'd calculate by searching,
// and it's important that we use the mapLinks[] notion of ancestor
// transactions as the set of things to update for removal.
// UpdateTransactionsFromBlock() has been called, then
// GetMemPoolParents()/GetMemPoolChildren() will differ from the set of
// mempool parents we'd calculate by searching, and it's important that
// we use the cached notion of ancestor transactions as the set of
// things to update for removal.
CalculateMemPoolAncestors(entry, setAncestors, nNoLimit, nNoLimit, nNoLimit, nNoLimit, dummy, false);
// Note that UpdateAncestorsOf severs the child links that point to
// removeIt in the entries for the parents of removeIt.
UpdateAncestorsOf(false, removeIt, setAncestors);
}
// After updating all the ancestor sizes, we can now sever the link between each
// transaction being removed and any mempool children (ie, update setMemPoolParents
// transaction being removed and any mempool children (ie, update CTxMemPoolEntry::m_parents
// for each direct child of a transaction being removed).
for (txiter removeIt : entriesToRemove) {
UpdateChildrenForRemoval(removeIt);
@ -359,7 +364,6 @@ void CTxMemPool::addUnchecked(const CTxMemPoolEntry &entry, setEntries &setAnces
// Used by AcceptToMemoryPool(), which DOES do
// all the appropriate checks.
indexed_transaction_set::iterator newit = mapTx.insert(entry).first;
mapLinks.insert(make_pair(newit, TxLinks()));
// Update transaction for any feeDelta created by PrioritiseTransaction
// TODO: refactor so that the fee delta is calculated before inserting
@ -430,15 +434,14 @@ void CTxMemPool::removeUnchecked(txiter it, MemPoolRemovalReason reason)
totalTxSize -= it->GetTxSize();
cachedInnerUsage -= it->DynamicMemoryUsage();
cachedInnerUsage -= memusage::DynamicUsage(mapLinks[it].parents) + memusage::DynamicUsage(mapLinks[it].children);
mapLinks.erase(it);
cachedInnerUsage -= memusage::DynamicUsage(it->GetMemPoolParentsConst()) + memusage::DynamicUsage(it->GetMemPoolChildrenConst());
mapTx.erase(it);
nTransactionsUpdated++;
if (minerPolicyEstimator) {minerPolicyEstimator->removeTx(hash, false);}
}
// Calculates descendants of entry that are not already in setDescendants, and adds to
// setDescendants. Assumes entryit is already a tx in the mempool and setMemPoolChildren
// setDescendants. Assumes entryit is already a tx in the mempool and CTxMemPoolEntry::m_children
// is correct for tx and all descendants.
// Also assumes that if an entry is in setDescendants already, then all
// in-mempool descendants of it are already in setDescendants as well, so that we
@ -457,8 +460,9 @@ void CTxMemPool::CalculateDescendants(txiter entryit, setEntries& setDescendants
setDescendants.insert(it);
stage.erase(it);
const setEntries &setChildren = GetMemPoolChildren(it);
for (txiter childiter : setChildren) {
const CTxMemPoolEntry::Children& children = it->GetMemPoolChildrenConst();
for (const CTxMemPoolEntry& child : children) {
txiter childiter = mapTx.iterator_to(child);
if (!setDescendants.count(childiter)) {
stage.insert(childiter);
}
@ -584,7 +588,6 @@ void CTxMemPool::removeForBlock(const std::vector<CTransactionRef>& vtx, unsigne
void CTxMemPool::_clear()
{
mapLinks.clear();
mapTx.clear();
mapNextTx.clear();
totalTxSize = 0;
@ -633,12 +636,9 @@ void CTxMemPool::check(const CCoinsViewCache *pcoins) const
checkTotal += it->GetTxSize();
innerUsage += it->DynamicMemoryUsage();
const CTransaction& tx = it->GetTx();
txlinksMap::const_iterator linksiter = mapLinks.find(it);
assert(linksiter != mapLinks.end());
const TxLinks &links = linksiter->second;
innerUsage += memusage::DynamicUsage(links.parents) + memusage::DynamicUsage(links.children);
innerUsage += memusage::DynamicUsage(it->GetMemPoolParentsConst()) + memusage::DynamicUsage(it->GetMemPoolChildrenConst());
bool fDependsWait = false;
setEntries setParentCheck;
CTxMemPoolEntry::Parents setParentCheck;
for (const CTxIn &txin : tx.vin) {
// Check that every mempool transaction's inputs refer to available coins, or other mempool tx's.
indexed_transaction_set::const_iterator it2 = mapTx.find(txin.prevout.hash);
@ -646,7 +646,7 @@ void CTxMemPool::check(const CCoinsViewCache *pcoins) const
const CTransaction& tx2 = it2->GetTx();
assert(tx2.vout.size() > txin.prevout.n && !tx2.vout[txin.prevout.n].IsNull());
fDependsWait = true;
setParentCheck.insert(it2);
setParentCheck.insert(*it2);
} else {
assert(pcoins->HaveCoin(txin.prevout));
}
@ -657,7 +657,11 @@ void CTxMemPool::check(const CCoinsViewCache *pcoins) const
assert(it3->second == &tx);
i++;
}
assert(setParentCheck == GetMemPoolParents(it));
auto comp = [](const CTxMemPoolEntry& a, const CTxMemPoolEntry& b) -> bool {
return a.GetTx().GetHash() == b.GetTx().GetHash();
};
assert(setParentCheck.size() == it->GetMemPoolParentsConst().size());
assert(std::equal(setParentCheck.begin(), setParentCheck.end(), it->GetMemPoolParentsConst().begin(), comp));
// Verify ancestor state is correct.
setEntries setAncestors;
uint64_t nNoLimit = std::numeric_limits<uint64_t>::max();
@ -680,17 +684,18 @@ void CTxMemPool::check(const CCoinsViewCache *pcoins) const
assert(it->GetModFeesWithAncestors() == nFeesCheck);
// Check children against mapNextTx
CTxMemPool::setEntries setChildrenCheck;
CTxMemPoolEntry::Children setChildrenCheck;
auto iter = mapNextTx.lower_bound(COutPoint(it->GetTx().GetHash(), 0));
uint64_t child_sizes = 0;
for (; iter != mapNextTx.end() && iter->first->hash == it->GetTx().GetHash(); ++iter) {
txiter childit = mapTx.find(iter->second->GetHash());
assert(childit != mapTx.end()); // mapNextTx points to in-mempool transactions
if (setChildrenCheck.insert(childit).second) {
if (setChildrenCheck.insert(*childit).second) {
child_sizes += childit->GetTxSize();
}
}
assert(setChildrenCheck == GetMemPoolChildren(it));
assert(setChildrenCheck.size() == it->GetMemPoolChildrenConst().size());
assert(std::equal(setChildrenCheck.begin(), setChildrenCheck.end(), it->GetMemPoolChildrenConst().begin(), comp));
// Also check to make sure size is greater than sum with immediate children.
// just a sanity check, not definitive that this calc is correct...
assert(it->GetSizeWithDescendants() >= child_sizes + it->GetTxSize());
@ -920,7 +925,7 @@ bool CCoinsViewMemPool::GetCoin(const COutPoint &outpoint, Coin &coin) const {
size_t CTxMemPool::DynamicMemoryUsage() const {
LOCK(cs);
// Estimate the overhead of mapTx to be 15 pointers + an allocation, as no exact formula for boost::multi_index_contained is implemented.
return memusage::MallocUsage(sizeof(CTxMemPoolEntry) + 15 * sizeof(void*)) * mapTx.size() + memusage::DynamicUsage(mapNextTx) + memusage::DynamicUsage(mapDeltas) + memusage::DynamicUsage(mapLinks) + memusage::DynamicUsage(vTxHashes) + cachedInnerUsage;
return memusage::MallocUsage(sizeof(CTxMemPoolEntry) + 15 * sizeof(void*)) * mapTx.size() + memusage::DynamicUsage(mapNextTx) + memusage::DynamicUsage(mapDeltas) + memusage::DynamicUsage(vTxHashes) + cachedInnerUsage;
}
void CTxMemPool::RemoveUnbroadcastTx(const uint256& txid, const bool unchecked) {
@ -969,10 +974,10 @@ void CTxMemPool::addUnchecked(const CTxMemPoolEntry &entry, bool validFeeEstimat
void CTxMemPool::UpdateChild(txiter entry, txiter child, bool add)
{
AssertLockHeld(cs);
setEntries s;
if (add && mapLinks[entry].children.insert(child).second) {
CTxMemPoolEntry::Children s;
if (add && entry->GetMemPoolChildren().insert(*child).second) {
cachedInnerUsage += memusage::IncrementalDynamicUsage(s);
} else if (!add && mapLinks[entry].children.erase(child)) {
} else if (!add && entry->GetMemPoolChildren().erase(*child)) {
cachedInnerUsage -= memusage::IncrementalDynamicUsage(s);
}
}
@ -980,30 +985,14 @@ void CTxMemPool::UpdateChild(txiter entry, txiter child, bool add)
void CTxMemPool::UpdateParent(txiter entry, txiter parent, bool add)
{
AssertLockHeld(cs);
setEntries s;
if (add && mapLinks[entry].parents.insert(parent).second) {
CTxMemPoolEntry::Parents s;
if (add && entry->GetMemPoolParents().insert(*parent).second) {
cachedInnerUsage += memusage::IncrementalDynamicUsage(s);
} else if (!add && mapLinks[entry].parents.erase(parent)) {
} else if (!add && entry->GetMemPoolParents().erase(*parent)) {
cachedInnerUsage -= memusage::IncrementalDynamicUsage(s);
}
}
const CTxMemPool::setEntries & CTxMemPool::GetMemPoolParents(txiter entry) const
{
assert (entry != mapTx.end());
txlinksMap::const_iterator it = mapLinks.find(entry);
assert(it != mapLinks.end());
return it->second.parents;
}
const CTxMemPool::setEntries & CTxMemPool::GetMemPoolChildren(txiter entry) const
{
assert (entry != mapTx.end());
txlinksMap::const_iterator it = mapLinks.find(entry);
assert(it != mapLinks.end());
return it->second.children;
}
CFeeRate CTxMemPool::GetMinFee(size_t sizelimit) const {
LOCK(cs);
if (!blockSinceLastRollingFeeBump || rollingMinimumFeeRate == 0)
@ -1089,12 +1078,12 @@ uint64_t CTxMemPool::CalculateDescendantMaximum(txiter entry) const {
txiter candidate = candidates.back();
candidates.pop_back();
if (!counted.insert(candidate).second) continue;
const setEntries& parents = GetMemPoolParents(candidate);
const CTxMemPoolEntry::Parents& parents = candidate->GetMemPoolParentsConst();
if (parents.size() == 0) {
maximum = std::max(maximum, candidate->GetCountWithDescendants());
} else {
for (txiter i : parents) {
candidates.push_back(i);
for (const CTxMemPoolEntry& i : parents) {
candidates.push_back(mapTx.iterator_to(i));
}
}
}

@ -49,6 +49,20 @@ struct LockPoints
LockPoints() : height(0), time(0), maxInputBlock(nullptr) { }
};
struct CompareIteratorByHash {
// SFINAE for T where T is either a pointer type (e.g., a txiter) or a reference_wrapper<T>
// (e.g. a wrapped CTxMemPoolEntry&)
template <typename T>
bool operator()(const std::reference_wrapper<T>& a, const std::reference_wrapper<T>& b) const
{
return a.get().GetTx().GetHash() < b.get().GetTx().GetHash();
}
template <typename T>
bool operator()(const T& a, const T& b) const
{
return a->GetTx().GetHash() < b->GetTx().GetHash();
}
};
/** \class CTxMemPoolEntry
*
* CTxMemPoolEntry stores data about the corresponding transaction, as well
@ -63,8 +77,16 @@ struct LockPoints
class CTxMemPoolEntry
{
public:
typedef std::reference_wrapper<const CTxMemPoolEntry> CTxMemPoolEntryRef;
// two aliases, should the types ever diverge
typedef std::set<CTxMemPoolEntryRef, CompareIteratorByHash> Parents;
typedef std::set<CTxMemPoolEntryRef, CompareIteratorByHash> Children;
private:
const CTransactionRef tx;
mutable Parents m_parents;
mutable Children m_children;
const CAmount nFee; //!< Cached to avoid expensive parent-transaction lookups
const size_t nTxWeight; //!< ... and avoid recomputing tx weight (also used for GetTxSize())
const size_t nUsageSize; //!< ... and total memory usage
@ -127,6 +149,11 @@ public:
CAmount GetModFeesWithAncestors() const { return nModFeesWithAncestors; }
int64_t GetSigOpCostWithAncestors() const { return nSigOpCostWithAncestors; }
const Parents& GetMemPoolParentsConst() const { return m_parents; }
const Children& GetMemPoolChildrenConst() const { return m_children; }
Parents& GetMemPoolParents() const { return m_parents; }
Children& GetMemPoolChildren() const { return m_children; }
mutable size_t vTxHashesIdx; //!< Index in mempool's vTxHashes
mutable uint64_t m_epoch; //!< epoch when last touched, useful for graph algorithms
};
@ -547,26 +574,12 @@ public:
using txiter = indexed_transaction_set::nth_index<0>::type::const_iterator;
std::vector<std::pair<uint256, txiter>> vTxHashes GUARDED_BY(cs); //!< All tx witness hashes/entries in mapTx, in random order
struct CompareIteratorByHash {
bool operator()(const txiter &a, const txiter &b) const {
return a->GetTx().GetHash() < b->GetTx().GetHash();
}
};
typedef std::set<txiter, CompareIteratorByHash> setEntries;
const setEntries & GetMemPoolParents(txiter entry) const EXCLUSIVE_LOCKS_REQUIRED(cs);
const setEntries & GetMemPoolChildren(txiter entry) const EXCLUSIVE_LOCKS_REQUIRED(cs);
uint64_t CalculateDescendantMaximum(txiter entry) const EXCLUSIVE_LOCKS_REQUIRED(cs);
private:
typedef std::map<txiter, setEntries, CompareIteratorByHash> cacheMap;
struct TxLinks {
setEntries parents;
setEntries children;
};
typedef std::map<txiter, TxLinks, CompareIteratorByHash> txlinksMap;
txlinksMap mapLinks;
void UpdateParent(txiter entry, txiter parent, bool add) EXCLUSIVE_LOCKS_REQUIRED(cs);
void UpdateChild(txiter entry, txiter child, bool add) EXCLUSIVE_LOCKS_REQUIRED(cs);
