0) Adjust BIP30 enforcement values
1) Reduce the amount by which peers can adjust our time, to eliminate an attack vector. Thanks to
coblee for this fix.
2) Zeitgeist2 patch - thanks to Lolcust and ArtForz. This fixes an issue where a
51% attacker could change the difficulty at will. Go back the full period unless
it's the first retarget after genesis (see the sketch after this list).
3) Avoid overflow in CalculateNextWorkRequired(). Thanks to pooler for the overflow fix.
4) Zeitgeist2 bool fShift bnNew.bits() fix. Thanks to romanornr for this patch.
5) SegWit ContextualCheckBlockHeader adjustment and extra coverage.
6) Reject peer proto version below 70002. Thanks to wtogami for this patch.
7) Send final alert message to nodes warning about removal of the alert system. Thanks to coblee for this patch.
8) Adjust default settings for Litecoin.
9) Adjust STALE_CHECK_INTERVAL value
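A hedged C++ sketch of the lookback rule from item 2; the names, the 2016-block interval, and the standalone function are illustrative and not taken from the Litecoin source:
```
// Sketch only: when retargeting, go back the full period, except on the very
// first retarget after genesis, where one fewer block exists before the boundary.
#include <cstdint>

constexpr int64_t kRetargetInterval = 2016; // blocks per difficulty period (assumed)

// Height of the block whose timestamp anchors the retarget window, given the
// height of the last block before the new retarget boundary.
int64_t RetargetWindowStart(int64_t heightLast)
{
    int64_t blocksToGoBack = kRetargetInterval;     // the full period (the fix)
    if (heightLast + 1 == kRetargetInterval) {      // first retarget after genesis
        blocksToGoBack = kRetargetInterval - 1;     // genesis has no predecessor
    }
    return heightLast - blocksToGoBack;
}
```
For the first retarget this lands on genesis (height 0); for every later retarget it lands on the block at the previous retarget boundary, so a full period of block spacing is measured.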
b6121edf70 swapped "is" for "==" in literal comparison (Tyler Chambers)
Pull request description:
In Python 3.8+, literal comparisons using "is" instead of "==" produce a SyntaxWarning [source](https://docs.python.org/3.8/whatsnew/3.8.html#changes-in-python-behavior).
I checked the entire devtools directory; this seems to be the only occurrence.
This is a small fix, but it removes the SyntaxWarning.
Fixes: #20338
ACKs for top commit:
hebasto:
re-ACK b6121edf70, only squashed since my [previous](https://github.com/bitcoin/bitcoin/pull/20346#pullrequestreview-525934568) review.
practicalswift:
re-ACK b6121edf70a8d50fd16ddbba0c3168e5e49bfc2e: patch still looks correct
theStack:
utACK b6121edf70
Tree-SHA512: 82a43495d6552fbaa3b02b58f0930b049d27aa937fe44b47714e3c059f844cc494de20674557371cbccf24fb8873ecb7376fb965ae326847eed2b855ed2d59c6
faa2f06f5e scripted-diff: [build] Ensure source tarball has leading directory name (MarcoFalke)
Pull request description:
This has been fixed in 0.20, so it needs to be fixed on master as well to avoid a regression.
See #18945.
ACKs for top commit:
laanwj:
ACK faa2f06f5e
hebasto:
ACK faa2f06f5e, tested gitian builds only.
promag:
ACK faa2f06f5e.
Tree-SHA512: e3b025c29c45b025002abc35262bb5d771f6cbd807f1c256c477c243685e93cd43ad9f642b38e3cf218590912abe6ea0ddfec3bfbef36f99080aad74ed6cc0af
Stats:
```
  IPv4   IPv6 Onion Pass
426728  59523  7900 Initial
426728  59523  7900 Skip entries with invalid address
426728  59523  7900 After removing duplicates
426727  59523  7900 Skip entries from suspicious hosts
123226  51785  7787 Enforce minimal number of blocks
121710  51322  7586 Require service bit 1
  4706   1427  3749 Require minimum uptime
  4124   1098  3681 Require a known and recent user agent
  4033   1075  3681 Filter out hosts with multiple bitcoin ports
   512    140   512 Look up ASNs and limit results per ASN and per net
```
33a84e8f40 build: Update and sort package list in gitian-linux.yml (Hennadii Stepanov)
95051682be build: Drop old hack which is unneeded now (Hennadii Stepanov)
Pull request description:
The hack was aimed at fixing an issue in Ubuntu Trusty 14.04 (see #8188).
The current hack implementation was added in #8315.
On master (8db23349fe) this hack is effectively a no-op, and it is no longer needed.
I see this PR as a step to removing `libfaketime` from gitian builds.
ACKs for top commit:
dongcarl:
tACK 33a84e8f40
laanwj:
Code review ACK 33a84e8f40
Tree-SHA512: 90036c555a500649ccc3d108bf11f09a9cfd2c92c0b598f7e0c0df63a713ae7abaf78f350b68c025470619c967223f45f6a235ad37a6ce1d1a0341ed34963ba0
78c312c983 Replace current benchmarking framework with nanobench (Martin Ankerl)
Pull request description:
Replace current benchmarking framework with nanobench
This replaces the current benchmarking framework with nanobench [1], an
MIT licensed single-header benchmarking library, of which I am the
author. In my opinion this has several advantages, especially on Linux:
* fast: running all benchmarks takes ~6 seconds instead of 4m13s on
an Intel i7-8700 CPU @ 3.20GHz.
* accurate: I ran e.g. the benchmark for SipHash_32b 10 times and
calculated standard deviation / mean = coefficient of variation:
* 0.57% CV for the old benchmarking framework
* 0.20% CV for nanobench
So the benchmark results with nanobench seem to vary less than with
the old framework.
* It automatically determines the runtime based on clock precision, so
there is no need to specify the number of evaluations.
* It measures instructions, cycles, branches, instructions per cycle, and
branch misses (Linux only, when performance counters are available).
* Output is in markdown table format.
* It warns about an unstable environment (frequency scaling, turbo, ...).
* For better profiling, it is possible to set the environment variable
NANOBENCH_ENDLESS to force endless running of a particular benchmark
without the need to recompile. This makes it possible to e.g. run
"perf top" and look at hotspots.
Here is an example copied & pasted from the terminal output:
| ns/byte | byte/s | err% | ins/byte | cyc/byte | IPC | bra/byte | miss% | total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
| 2.52 | 396,529,415.94 | 0.6% | 25.42 | 8.02 | 3.169 | 0.06 | 0.0% | 0.03 | `bench/crypto_hash.cpp RIPEMD160`
| 1.87 | 535,161,444.83 | 0.3% | 21.36 | 5.95 | 3.589 | 0.06 | 0.0% | 0.02 | `bench/crypto_hash.cpp SHA1`
| 3.22 | 310,344,174.79 | 1.1% | 36.80 | 10.22 | 3.601 | 0.09 | 0.0% | 0.04 | `bench/crypto_hash.cpp SHA256`
| 2.01 | 496,375,796.23 | 0.0% | 18.72 | 6.43 | 2.911 | 0.01 | 1.0% | 0.00 | `bench/crypto_hash.cpp SHA256D64_1024`
| 7.23 | 138,263,519.35 | 0.1% | 82.66 | 23.11 | 3.577 | 1.63 | 0.1% | 0.00 | `bench/crypto_hash.cpp SHA256_32b`
| 3.04 | 328,780,166.40 | 0.3% | 35.82 | 9.69 | 3.696 | 0.03 | 0.0% | 0.03 | `bench/crypto_hash.cpp SHA512`
[1] https://github.com/martinus/nanobench
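For orientation, a minimal standalone usage sketch in C++, following the nanobench README; this is an illustration, not the wrapper the Bitcoin Core bench harness puts around the library:
```
// Minimal nanobench example, following the library's README; not the
// project's bench harness. One translation unit defines the implementation.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <cstdint>

int main() {
    uint64_t x = 1;
    // run() measures the lambda repeatedly and prints a markdown table row;
    // on Linux it also reports instructions, cycles and branch misses.
    ankerl::nanobench::Bench().run("shift-xor mix", [&] {
        x ^= x << 13;
        x ^= x >> 7;
        ankerl::nanobench::doNotOptimizeAway(x);
    });
}
```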
ACKs for top commit:
laanwj:
ACK 78c312c983
Tree-SHA512: 9e18770b18b6f95a7d0105a4a5497d31cf4eb5efe6574f4482f6f1b4c88d7e0946b9a4a1e9e8e6ecbf41a3f2d7571240677dcb45af29a6f0584e89b25f32e49e
Check that sections are appropriately separated in virtual memory,
based on their (expected) permissions. This checks for missing
-Wl,-z,separate-code and potentially other problems.
Co-authored-by: fanquake <fanquake@gmail.com>
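A rough C++ sketch of the underlying idea, assuming a 64-bit ELF and 4 KiB pages; the project's actual check is a script that inspects tool output rather than raw headers, so treat this only as an illustration:
```
// Sketch: read the ELF program headers and flag any two loadable segments
// with different permissions whose page-rounded address ranges overlap,
// which is what -Wl,-z,separate-code is meant to prevent.
#include <elf.h>

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 2) { std::fprintf(stderr, "usage: %s <elf-binary>\n", argv[0]); return 2; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 2; }

    Elf64_Ehdr ehdr{};
    if (std::fread(&ehdr, sizeof(ehdr), 1, f) != 1 ||
        std::memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
        std::fprintf(stderr, "not an ELF file\n");
        return 2;
    }

    std::vector<Elf64_Phdr> phdrs(ehdr.e_phnum);
    std::fseek(f, static_cast<long>(ehdr.e_phoff), SEEK_SET);
    const size_t got = std::fread(phdrs.data(), sizeof(Elf64_Phdr), phdrs.size(), f);
    std::fclose(f);
    if (got != phdrs.size()) { std::fprintf(stderr, "truncated program headers\n"); return 2; }

    constexpr uint64_t kPage = 4096; // assumed page size
    const auto lo = [](uint64_t a) { return a & ~(kPage - 1); };
    const auto hi = [](uint64_t a) { return (a + kPage - 1) & ~(kPage - 1); };

    std::vector<Elf64_Phdr> loads;
    for (const auto& ph : phdrs) if (ph.p_type == PT_LOAD) loads.push_back(ph);

    bool ok = true;
    for (size_t i = 0; i < loads.size(); ++i) {
        for (size_t j = i + 1; j < loads.size(); ++j) {
            if (loads[i].p_flags == loads[j].p_flags) continue; // same perms may share pages
            const uint64_t a0 = lo(loads[i].p_vaddr), a1 = hi(loads[i].p_vaddr + loads[i].p_memsz);
            const uint64_t b0 = lo(loads[j].p_vaddr), b1 = hi(loads[j].p_vaddr + loads[j].p_memsz);
            if (a0 < b1 && b0 < a1) { // page ranges overlap despite differing permissions
                std::fprintf(stderr, "segments with different permissions share a page near 0x%llx\n",
                             static_cast<unsigned long long>(std::max(a0, b0)));
                ok = false;
            }
        }
    }
    return ok ? 0 : 1;
}
```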
The RandomOrphan function and the function ecdsa_signature_parse_der_lax
in pubkey.cpp were causing non-deterministic test coverage.
Force the seed at the beginning of the test to make it deterministic.
The seed is selected carefully so that all branches of the function
ecdsa_signature_parse_der_lax are executed. Prior to this fix, the test
was exhibiting non-deterministic coverage since none of the ECDSA
signatures that were generated during the test had leading zeroes in
either R, S, or both, resulting in some branches of said function not
being executed. The seed ensures that both conditions are hit.
Also removed the denialofservice_tests entry from the list of non-deterministic
tests in the coverage script.
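A hedged C++ illustration of the general technique (not the project's actual test code): with the seed pinned at the start, the generated inputs, and therefore the branches exercised, are identical on every run.
```
// Illustration only: a fixed seed makes a randomized test hit the same
// branches every time, so line/branch coverage becomes deterministic.
#include <cassert>
#include <cstdint>
#include <random>

int main() {
    std::mt19937_64 rng(0xC0FFEEULL); // fixed seed chosen up front
    bool saw_leading_zero = false;    // stands in for "value has a leading zero byte"
    bool saw_no_zero = false;
    for (int i = 0; i < 10000 && !(saw_leading_zero && saw_no_zero); ++i) {
        const bool leading_zero = (rng() >> 56) == 0; // top byte of the draw is zero
        if (leading_zero) saw_leading_zero = true; else saw_no_zero = true;
    }
    // With the seed pinned, the same branches are reached on every run.
    assert(saw_leading_zero && saw_no_zero);
    return 0;
}
```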
Previously, we did not include the macOS SDK libc++ headers in our SDK
creation process and instead used whichever libc++ headers shipped with
the clang package we downloaded in depends.
This change adds a script (which works on both GNU/Linux and macOS) to
correctly generate the macOS SDK including the libc++ headers. This can
be thought of as a simplified rewrite of tpoechtrager's script:
d3392f4eae/tools/gen_sdk_package.sh
The location within the SDK where we place the libc++ headers is chosen
such that clang's search path detection logic for sysroots would pick up
the headers properly.
We also document this change.
47b49a05ea contrib: Fix SyntaxWarning in Python base58 implementation (Alex Willmer)
Pull request description:
In Python, integers should be compared for equality (`i == j`), not identity (`i is j`). Recent versions of CPython 3.x emit a SyntaxWarning when they encounter this incorrect usage, e.g.
```
$ python3 base58.py
base58.py:110: SyntaxWarning: "is" with a literal. Did you mean "=="?
assert get_bcaddress_version('15VjRaDX9zpbA8LVnbrCAFzrVzN7ixHNsC') is 0
Tests passed
```
ACKs for top commit:
MarcoFalke:
ACK 47b49a05ea
Tree-SHA512: 9f8962025dcdfa062c0515c68a1864f5bbeb86bd0510c0ec0e413a5edb6afbfd5f41b4c0255784e53db8eaf39c68b7cfa7cc8a33a2e5214aae463fda374f8719
* Adds support for asymptotes
This adds support for calculating the asymptotic complexity of a benchmark.
This is similar to #17375, but currently only one asymptote is
supported, and I have added support in the benchmark `ComplexMemPool`
as an example.
Usage is e.g. like this:
```
./bench_bitcoin -filter=ComplexMemPool -asymptote=25,50,100,200,400,600,800
```
This runs the benchmark `ComplexMemPool` several times, each with a
different complexityN setting. The benchmark can extract that number
and use it accordingly; here, it is used for `childTxs`. The output is
this:
| complexityN | ns/op | op/s | err% | ins/op | cyc/op | IPC | total | benchmark
|------------:|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|----------:|:----------
| 25 | 1,064,241.00 | 939.64 | 1.4% | 3,960,279.00 | 2,829,708.00 | 1.400 | 0.01 | `ComplexMemPool`
| 50 | 1,579,530.00 | 633.10 | 1.0% | 6,231,810.00 | 4,412,674.00 | 1.412 | 0.02 | `ComplexMemPool`
| 100 | 4,022,774.00 | 248.58 | 0.6% | 16,544,406.00 | 11,889,535.00 | 1.392 | 0.04 | `ComplexMemPool`
| 200 | 15,390,986.00 | 64.97 | 0.2% | 63,904,254.00 | 47,731,705.00 | 1.339 | 0.17 | `ComplexMemPool`
| 400 | 69,394,711.00 | 14.41 | 0.1% | 272,602,461.00 | 219,014,691.00 | 1.245 | 0.76 | `ComplexMemPool`
| 600 | 168,977,165.00 | 5.92 | 0.1% | 639,108,082.00 | 535,316,887.00 | 1.194 | 1.86 | `ComplexMemPool`
| 800 | 310,109,077.00 | 3.22 | 0.1% | 1,149,134,246.00 | 984,620,812.00 | 1.167 | 3.41 | `ComplexMemPool`
| coefficient | err% | complexity
|--------------:|-------:|------------
| 4.78486e-07 | 4.5% | O(n^2)
| 6.38557e-10 | 21.7% | O(n^3)
| 3.42338e-05 | 38.0% | O(n log n)
| 0.000313914 | 46.9% | O(n)
| 0.0129823 | 114.4% | O(log n)
| 0.0815055 | 133.8% | O(1)
The best-fitting curve is O(n^2), so the algorithm seems to scale
quadratically with `childTxs` in the range 25 to 800.
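A minimal C++ sketch of feeding different problem sizes into nanobench's complexity fit, following the nanobench documentation; the std::set lookup benchmarked here is just a stand-in, not the ComplexMemPool benchmark:
```
// Sketch based on the nanobench docs: record the problem size with
// complexityN() for each run, then ask the library to fit candidate curves.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <cstdint>
#include <iostream>
#include <set>

int main() {
    ankerl::nanobench::Bench bench;
    ankerl::nanobench::Rng rng;
    for (auto n : {100U, 400U, 1600U, 6400U, 25600U}) {
        std::set<uint64_t> s;
        while (s.size() < n) s.insert(rng()); // pre-fill with n random keys
        bench.complexityN(n).run("std::set find", [&] {
            ankerl::nanobench::doNotOptimizeAway(s.find(rng()));
        });
    }
    // Prints each candidate curve (O(1), O(n), O(n log n), ...) with its fit
    // error, analogous to the coefficient table above; the best fit here
    // should come out as O(log n).
    std::cout << bench.complexityBigO() << std::endl;
}
```
The `-asymptote` option described above drives the same mechanism from the command line, re-running the benchmark once per listed complexityN value.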