Stats:
```
IPv4 IPv6 Onion Pass
426728 59523 7900 Initial
426728 59523 7900 Skip entries with invalid address
426728 59523 7900 After removing duplicates
426727 59523 7900 Skip entries from suspicious hosts
123226 51785 7787 Enforce minimal number of blocks
121710 51322 7586 Require service bit 1
4706 1427 3749 Require minimum uptime
4124 1098 3681 Require a known and recent user agent
4033 1075 3681 Filter out hosts with multiple bitcoin ports
512 140 512 Look up ASNs and limit results per ASN and per net
```
33a84e8f40 build: Update and sort package list in gitian-linux.yml (Hennadii Stepanov)
95051682be build: Drop old hack which is unneeded now (Hennadii Stepanov)
Pull request description:
The hack was aimed at fixing an issue in Ubuntu Trusty 14.04 (see #8188).
The current hack implementation was added in #8315.
On master (8db23349fe) this hack is effectively a no-op, and it is no longer needed.
I see this PR as a step toward removing `libfaketime` from gitian builds.
ACKs for top commit:
dongcarl:
tACK 33a84e8f40
laanwj:
Code review ACK 33a84e8f40
Tree-SHA512: 90036c555a500649ccc3d108bf11f09a9cfd2c92c0b598f7e0c0df63a713ae7abaf78f350b68c025470619c967223f45f6a235ad37a6ce1d1a0341ed34963ba0
78c312c983 Replace current benchmarking framework with nanobench (Martin Ankerl)
Pull request description:
Replace current benchmarking framework with nanobench
This replaces the current benchmarking framework with nanobench [1], an
MIT licensed single-header benchmarking library, of which I am the
author. In my opinion this has several advantages, especially on Linux:
* fast: Running all benchmarks takes ~6 seconds instead of 4m13s on
an Intel i7-8700 CPU @ 3.20GHz.
* accurate: I ran e.g. the benchmark for SipHash_32b 10 times and
calculated standard deviation / mean = coefficient of variation:
* 0.57% CV for the old benchmarking framework
* 0.20% CV for nanobench
So the benchmark results with nanobench seem to vary less than with
the old framework.
* It automatically determines the runtime based on clock precision, so
there is no need to specify the number of evaluations.
* It measures instructions, cycles, branches, instructions per cycle, and
branch misses (Linux only, when performance counters are available).
* It outputs results in markdown table format.
* It warns about unstable environments (frequency scaling, turbo, ...).
* For better profiling, it is possible to set the environment variable
NANOBENCH_ENDLESS to force endless running of a particular benchmark
without the need to recompile. This makes it possible to e.g. run
"perf top" and look at hotspots.
Here is an example copied & pasted from the terminal output:
| ns/byte | byte/s | err% | ins/byte | cyc/byte | IPC | bra/byte | miss% | total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
| 2.52 | 396,529,415.94 | 0.6% | 25.42 | 8.02 | 3.169 | 0.06 | 0.0% | 0.03 | `bench/crypto_hash.cpp RIPEMD160`
| 1.87 | 535,161,444.83 | 0.3% | 21.36 | 5.95 | 3.589 | 0.06 | 0.0% | 0.02 | `bench/crypto_hash.cpp SHA1`
| 3.22 | 310,344,174.79 | 1.1% | 36.80 | 10.22 | 3.601 | 0.09 | 0.0% | 0.04 | `bench/crypto_hash.cpp SHA256`
| 2.01 | 496,375,796.23 | 0.0% | 18.72 | 6.43 | 2.911 | 0.01 | 1.0% | 0.00 | `bench/crypto_hash.cpp SHA256D64_1024`
| 7.23 | 138,263,519.35 | 0.1% | 82.66 | 23.11 | 3.577 | 1.63 | 0.1% | 0.00 | `bench/crypto_hash.cpp SHA256_32b`
| 3.04 | 328,780,166.40 | 0.3% | 35.82 | 9.69 | 3.696 | 0.03 | 0.0% | 0.03 | `bench/crypto_hash.cpp SHA512`
[1] https://github.com/martinus/nanobench
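For readers unfamiliar with the library, a minimal stand-alone benchmark written directly against nanobench looks roughly like the sketch below. It is based on the upstream README at [1] and is not the wrapper API used by bench_bitcoin; the benchmark name and workload are made up for illustration.
```
// Minimal nanobench usage sketch (assumes nanobench.h from [1] is available).
#include <nanobench.h>

#include <cstdint>

int main()
{
    uint64_t x = 1;
    // run() keeps iterating the lambda until the clock precision allows a
    // stable measurement, so no iteration count has to be specified.
    ankerl::nanobench::Bench().run("x += x", [&] {
        x += x;
        // Keep the compiler from optimizing the benchmarked work away.
        ankerl::nanobench::doNotOptimizeAway(x);
    });
}
```
By default the results are printed to the terminal as a markdown table like the one shown above.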
ACKs for top commit:
laanwj:
ACK 78c312c983
Tree-SHA512: 9e18770b18b6f95a7d0105a4a5497d31cf4eb5efe6574f4482f6f1b4c88d7e0946b9a4a1e9e8e6ecbf41a3f2d7571240677dcb45af29a6f0584e89b25f32e49e
Check that sections are appropriately separated in virtual memory,
based on their (expected) permissions. This checks for missing
-Wl,-z,separate-code and potentially other problems.
Co-authored-by: fanquake <fanquake@gmail.com>
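As a much-simplified illustration of the kind of information such a check inspects (this is not the project's actual check, which lives in the Python security-check script and maps sections to segments), the sketch below dumps the permission flags of each PT_LOAD program header of a 64-bit ELF binary. With -Wl,-z,separate-code the executable code ends up in its own R+X load segment, separate from read-only and writable data segments.
```
// Hypothetical sketch: print R/W/X flags of each PT_LOAD segment (64-bit ELF,
// Linux headers assumed). Not the actual security-check implementation.
#include <elf.h>

#include <cstdio>

int main(int argc, char** argv)
{
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) {
        std::perror("fopen");
        return 1;
    }
    Elf64_Ehdr ehdr{};
    if (std::fread(&ehdr, sizeof(ehdr), 1, f) != 1) return 1;

    for (int i = 0; i < ehdr.e_phnum; ++i) {
        Elf64_Phdr phdr{};
        std::fseek(f, static_cast<long>(ehdr.e_phoff + i * sizeof(Elf64_Phdr)), SEEK_SET);
        if (std::fread(&phdr, sizeof(phdr), 1, f) != 1) return 1;
        if (phdr.p_type != PT_LOAD) continue;
        // Each LOAD segment should have exactly the permissions its contents
        // need: R for headers/rodata, R+X for code, R+W for data.
        std::printf("LOAD %d: %c%c%c\n", i,
                    (phdr.p_flags & PF_R) ? 'R' : '-',
                    (phdr.p_flags & PF_W) ? 'W' : '-',
                    (phdr.p_flags & PF_X) ? 'X' : '-');
    }
    std::fclose(f);
    return 0;
}
```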
The RandomOrphan function and the function ecdsa_signature_parse_der_lax
in pubkey.cpp were causing non-deterministic test coverage.
Force the seed at the beginning of the test to make it deterministic.
The seed is selected carefully so that all branches of the function
ecdsa_signature_parse_der_lax are executed. Prior to this fix, the test
exhibited non-deterministic coverage because none of the ECDSA
signatures generated during the test happened to have leading zeroes in
R or in S, so some branches of said function were not executed. The
chosen seed ensures that both cases are hit.
Also remove the denialofservice_tests entry from the list of
non-deterministic tests in the coverage script.
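The principle can be illustrated with a minimal, hypothetical sketch (this is not the actual test code; the real test seeds Bitcoin Core's test RNG, and the seed value below is made up): fixing the seed once at the start makes every drawn value, and therefore every branch taken on those values, identical across runs.
```
// Hypothetical illustration of deterministic coverage via a fixed seed.
#include <cstdint>
#include <cstdio>
#include <random>

int main()
{
    // Seed fixed up front (the value is arbitrary here). With a time-based
    // seed, the branch below would be hit a different number of times on
    // each run, producing non-deterministic coverage reports.
    std::mt19937_64 rng{0x1337};

    int leading_zero_draws = 0;
    for (int i = 0; i < 1000; ++i) {
        const uint64_t r = rng();
        // Stand-in for "the signature's R (or S) component has a leading zero
        // byte", the rarely-taken path in ecdsa_signature_parse_der_lax.
        if ((r >> 56) == 0) ++leading_zero_draws;
    }
    // With the fixed seed this count, and hence the set of exercised code
    // paths, is identical on every run.
    std::printf("leading-zero draws: %d\n", leading_zero_draws);
    return 0;
}
```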
Previously, we did not include the macOS SDK libc++ headers in our SDK
creation process and instead used whichever libc++ headers shipped with
the clang package we downloaded in depends.
This change adds a script (which works on both GNU/Linux and macOS) to
correctly generate the macOS SDK including the libc++ headers. This can
be thought of as a simplified rewrite of tpoechtrager's script:
d3392f4eae/tools/gen_sdk_package.sh
The location within the SDK where we place the libc++ headers is chosen
such that clang's search path detection logic for sysroots would pick up
the headers properly.
We also document this change.
47b49a05ea contrib: Fix SyntaxWarning in Python base58 implementation (Alex Willmer)
Pull request description:
In Python, integers should be compared for equality (`i == j`), not identity (`i is j`). Recent versions of CPython 3.x emit a SyntaxWarning when they encounter this incorrect usage, e.g.
```
$ python3 base58.py
base58.py:110: SyntaxWarning: "is" with a literal. Did you mean "=="?
assert get_bcaddress_version('15VjRaDX9zpbA8LVnbrCAFzrVzN7ixHNsC') is 0
Tests passed
```
ACKs for top commit:
MarcoFalke:
ACK 47b49a05ea
Tree-SHA512: 9f8962025dcdfa062c0515c68a1864f5bbeb86bd0510c0ec0e413a5edb6afbfd5f41b4c0255784e53db8eaf39c68b7cfa7cc8a33a2e5214aae463fda374f8719
* Adds support for asymptotes
This adds support for calculating the asymptotic complexity of a benchmark.
This is similar to #17375, but currently only one asymptote is
supported, and I have added support in the benchmark `ComplexMemPool`
as an example.
Usage is e.g. like this:
```
./bench_bitcoin -filter=ComplexMemPool -asymptote=25,50,100,200,400,600,800
```
This runs the benchmark `ComplexMemPool` several times but with
different complexityN settings. The benchmark can extract that number
and use it accordingly. Here, it's used for `childTxs`. The output is
this:
| complexityN | ns/op | op/s | err% | ins/op | cyc/op | IPC | total | benchmark
|------------:|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|----------:|:----------
| 25 | 1,064,241.00 | 939.64 | 1.4% | 3,960,279.00 | 2,829,708.00 | 1.400 | 0.01 | `ComplexMemPool`
| 50 | 1,579,530.00 | 633.10 | 1.0% | 6,231,810.00 | 4,412,674.00 | 1.412 | 0.02 | `ComplexMemPool`
| 100 | 4,022,774.00 | 248.58 | 0.6% | 16,544,406.00 | 11,889,535.00 | 1.392 | 0.04 | `ComplexMemPool`
| 200 | 15,390,986.00 | 64.97 | 0.2% | 63,904,254.00 | 47,731,705.00 | 1.339 | 0.17 | `ComplexMemPool`
| 400 | 69,394,711.00 | 14.41 | 0.1% | 272,602,461.00 | 219,014,691.00 | 1.245 | 0.76 | `ComplexMemPool`
| 600 | 168,977,165.00 | 5.92 | 0.1% | 639,108,082.00 | 535,316,887.00 | 1.194 | 1.86 | `ComplexMemPool`
| 800 | 310,109,077.00 | 3.22 | 0.1% |1,149,134,246.00 | 984,620,812.00 | 1.167 | 3.41 | `ComplexMemPool`
| coefficient | err% | complexity
|--------------:|-------:|------------
| 4.78486e-07 | 4.5% | O(n^2)
| 6.38557e-10 | 21.7% | O(n^3)
| 3.42338e-05 | 38.0% | O(n log n)
| 0.000313914 | 46.9% | O(n)
| 0.0129823 | 114.4% | O(log n)
| 0.0815055 | 133.8% | O(1)
The best-fitting curve is O(n^2), so the algorithm seems to scale
quadratically with `childTxs` in the range 25 to 800.
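For reference, the mechanism in the nanobench library that makes this possible looks roughly like the sketch below (based on the upstream nanobench documentation; `bench_bitcoin`'s `-asymptote` option presumably wires up the same `complexityN()` / `complexityBigO()` calls, and the workload here is a made-up `std::set` lookup rather than `ComplexMemPool`):
```
// Sketch of nanobench's asymptotic-complexity fitting (upstream API).
#include <nanobench.h>

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <set>

int main()
{
    ankerl::nanobench::Bench bench;
    ankerl::nanobench::Rng rng;

    // Run the same benchmark for several problem sizes, tagging each run with
    // its size via complexityN(); nanobench later fits the measured times
    // against candidate complexity curves.
    for (std::size_t n : {100U, 200U, 400U, 800U, 1600U}) {
        std::set<uint64_t> s;
        while (s.size() < n) s.insert(rng());
        bench.complexityN(n).run("std::set::find", [&] {
            ankerl::nanobench::doNotOptimizeAway(s.find(rng()));
        });
    }

    // Prints a table of fitted curves (O(1), O(n), O(n log n), ...) ordered
    // by goodness of fit, like the coefficient/err%/complexity table above.
    std::cout << bench.complexityBigO() << std::endl;
    return 0;
}
```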
f852761aec guix: Add clarifying documentation for V env var (Carl Dong)
85f4a4b082 guix: Make V=1 more powerful for debugging (Carl Dong)
Pull request description:
```
- Print commands in both unexpanded and expanded forms
- Set VERBOSE=1 for CMake
```
Ping MarcoFalke: hopefully you already use `V=1` for the Guix builds on DrahtBot?
ACKs for top commit:
fanquake:
ACK f852761aec. Ran a Windows Guix build and compared the output from master and this PR when using `V=1`, i.e. `HOSTS=x86_64-w64-mingw32 PATH="/root/.config/guix/current/bin${PATH:+:}$PATH" V=1 ./contrib/guix/guix-build.sh`.
Tree-SHA512: 8bc466fa7b869618bbd5a0a91c6b23d4785009289f8dfb93b0349317463a9ab9ece128c72436e02a0819722a63e703100aed15807867a716fda891292fcb9d9d
bfe1ba2f5b rel-builds: Specify core.abbrev for git-rev-parse (Carl Dong)
27e63e01cc build: Accommodate makensis v2.x (Carl Dong)
1f2c39a30e guix: Remove logical cores requirement (Carl Dong)
a4f6ffa71e lint: Also enable source statements for non-gitian (Carl Dong)
d256f91cb1 rel-builds: Directly deploy win installer to OUTDIR (Carl Dong)
fa791da02f nsis: Specify OutFile path only once (Carl Dong)
14701604d0 guix: Expose GIT_COMMON_DIR in container as readonly (Carl Dong)
f5a6ac4f48 guix: Make source tarball using git-archive (Carl Dong)
395c1137f6 gitian: Limit sourced script to just assignments (Carl Dong)
Pull request description:
Based on: #18556
Related: https://github.com/bitcoin/bitcoin/pull/17595#discussion_r399728721
ACKs for top commit:
fanquake:
ACK bfe1ba2f5b - I agree with Carl, and am going to merge this. I'd like for Linux Guix builds to be working again, and we can rebase #18818.
Tree-SHA512: c87ada7e3de17ca0b692a91029b86573442ded5780fc081c214773f6b374a0cdbeaf6f6898c36669c2e247ee32aa7f82defb1180f8decac52c65f0c140f18674
fa13090d20 contrib: Remove optimize-pngs.py script, which lives in the maintainer repo (MarcoFalke)
Pull request description:
Moved to https://github.com/bitcoin-core/bitcoin-maintainer-tools/blob/master/optimize-pngs.py
Bitcoin Core should focus on the full node implementation, not on scripts to compress png images.
This script is only used when new PNG files are added to the repo, which happens about once every two years, so fetching the script from the other repo should not be a burden. Removing it from this repo cuts down on the meta files we need to maintain in the main repo.
ACKs for top commit:
practicalswift:
ACK fa13090d20 -- `+0 lines, -82 lines` :)
promag:
ACK fa13090d20.
hebasto:
ACK fa13090d20, verified that script is already [moved](https://github.com/bitcoin-core/bitcoin-maintainer-tools/pull/56).
Tree-SHA512: 37d111adae769bcddc6ae88041032d5a2b8b228fec67f555c8333c38de3992f5138b30bea868d7d6d6b7f966a47133e5853134373b149ab23cba3b8b560ecb31
When using worktrees or submodules, you'll see a plain-text `.git' file
at the root of your working tree instead of the usual `.git' directory.
This plain text file will point to the real GIT_DIR, under the
GIT_COMMON_DIR. From experimentation, the full GIT_COMMON_DIR is
required to exist for operations such as git-archive(1), so we expose it
as readonly inside the container.