Keep a global counter, nOutbound, protected by its own waitable
critical section, and wait when all outbound slots are filled rather
than polling.
This removes the 1-second average delay between losing a connection
and attempting a new one, and may speed up shutdown.
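A minimal sketch of the pattern, using the standard library's mutex and
condition variable in place of the Boost primitives the tree actually
uses; the class and member names here are hypothetical:

    #include <condition_variable>
    #include <mutex>

    class COutboundSlots {
    public:
        explicit COutboundSlots(int nMaxIn) : nMax(nMaxIn), nOutbound(0) {}

        // Block until an outbound slot is free, then claim it.
        void Acquire() {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [this] { return nOutbound < nMax; });
            ++nOutbound;
        }

        // Release a slot (e.g. on a lost connection) and wake one waiter
        // immediately, instead of it noticing up to a second later.
        void Release() {
            {
                std::lock_guard<std::mutex> lock(mutex);
                --nOutbound;
            }
            cond.notify_one();
        }

    private:
        std::mutex mutex;              // non-recursive: required for waiting
        std::condition_variable cond;
        const int nMax;
        int nOutbound;
    };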
This commit simplifies the locking system: CCriticalSection becomes a
simple typedef for boost::interprocess::interprocess_recursive_mutex,
and CCriticalBlock and CTryCriticalBlock are replaced by a templated
CMutexLock, which wraps boost::interprocess::scoped_lock.
By making the lock type a template parameter, some critical sections
can now be changed to non-recursive locks, which support waiting via
condition variables. These are implemented in CWaitableCriticalSection
and WAITABLE_CRITICAL_BLOCK.
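A condensed sketch of the arrangement described above; the real
definitions may differ in detail:

    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/interprocess_recursive_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    typedef boost::interprocess::interprocess_recursive_mutex CCriticalSection;
    typedef boost::interprocess::interprocess_mutex CWaitableCriticalSection;

    // One lock wrapper for both mutex types: the mutex flavor is a
    // template parameter, so recursive and waitable sections share code.
    template <typename Mutex>
    class CMutexLock
    {
        boost::interprocess::scoped_lock<Mutex> lock;
    public:
        explicit CMutexLock(Mutex& mutexIn) : lock(mutexIn) {}
        bool GetLock() const { return lock.owns(); }
    };

    typedef CMutexLock<CCriticalSection> CCriticalBlock;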
CWaitableCriticalSection is a wrapper for a different Boost mutex,
one that supports waiting/notification via condition variables. This
should let us remove much of the polling code currently in use.
Importantly, this mutex is not recursive, so functions that perform
the locking must not call each other.
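For illustration, a producer/consumer queue on top of a waitable
section might look like the following hypothetical sketch (cs_queue,
cond_queue, Produce and Consume are invented names, not code from this
commit):

    #include <deque>

    #include <boost/interprocess/sync/interprocess_condition.hpp>
    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    typedef boost::interprocess::interprocess_mutex CWaitableCriticalSection;

    CWaitableCriticalSection cs_queue;                   // NOT recursive
    boost::interprocess::interprocess_condition cond_queue;
    std::deque<int> workQueue;

    void Produce(int item)
    {
        // Must not be called while cs_queue is already held: a
        // non-recursive mutex deadlocks if the same thread relocks it.
        {
            boost::interprocess::scoped_lock<CWaitableCriticalSection> lock(cs_queue);
            workQueue.push_back(item);
        }
        cond_queue.notify_one();
    }

    int Consume()
    {
        boost::interprocess::scoped_lock<CWaitableCriticalSection> lock(cs_queue);
        while (workQueue.empty())
            cond_queue.wait(lock);   // releases cs_queue while blocked
        int item = workQueue.front();
        workQueue.pop_front();
        return item;
    }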
Because boost::interprocess::scoped_lock does not support assignment
or copying, I had to revert to the older CRITICAL_BLOCK macros, which
use a nested for loop instead of a simple if.
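The macro's shape is roughly as follows (simplified; the real macro
does more, such as tracking file/line information): the nested for
loops let it declare the lock object in place and run the body exactly
once, where an if would need to copy or assign the lock.

    #define CRITICAL_BLOCK(cs) \
        for (bool fRunOnce = true; fRunOnce; fRunOnce = false) \
            for (CCriticalBlock criticalblock(cs); fRunOnce; fRunOnce = false)

    // Usage: the body runs exactly once, with cs held for its duration.
    // CRITICAL_BLOCK(cs_main)
    // {
    //     ... code that needs cs_main ...
    // }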
- Rename wxMessageBox, and remove redundant arguments from the noui/qtui calls
- Also add a flag to force a blocking, modal dialog box (for the disk space warning, etc.)
- Clarify function naming
- No special MessageBox is needed from AppInit2 anymore, as the window object is created before AppInit2 is called
- Overall, this is a better design
- This fixes problems with the address book UI not updating when the address book is changed through RPC
- Move status bar change detection responsibility to ClientModel
The "synchronizing" status indicator was too hyperactive.
gmaxwell: I mean that right now, when the block gap goes over an hour, it starts showing "synchronizing". Increasing that threshold to 90 minutes or so would make it happen only about 6.4 times per year.
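A back-of-the-envelope check of that figure, assuming block arrivals
form a Poisson process with a 10-minute mean interval (an assumption
not stated above):

    P(gap > 90 min) = e^(-90/10) = e^(-9) ≈ 1.23e-4
    blocks per year ≈ 6 * 24 * 365 = 52,560
    expected gaps > 90 min ≈ 52,560 * 1.23e-4 ≈ 6.5 per year

which is consistent with the quoted 6.4.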