Because only macOS was mentioned, I was unsure whether this would be a macOS-specific tool. I guess Linux is more widely used than macOS, so a Linux guide should be there, too.
remove fix_configure_mac.patch
Fixed upstream: https://bugreports.qt.io/browse/QTBUG-67286
remove fix_riscv64_arch.patch
Was fixed upstream in 6a39e49a6cdeb28a04a3657bb6a22f848d5dfa9d
remove fix_rcc_determinism.patch
Fixed upstream in https://bugreports.qt.io/browse/QTBUG-62511
remove freetype_back_compat.patch
By the time we ship a release with Qt 5.12, we'll certainly no longer be
supporting Ubuntu 14.04, and Ubuntu 16.04 ships with FreeType 2.6.1,
which is new enough that using the symbol is no longer an issue.
The renaming of FT_Get_X11_Font_Format() happened in FreeType 2.6.
remove xkb-default.patch
This was removed upstream in d5abf545971da717014d316127045fc19edbcd65
Co-authored-by: Hennadii Stepanov <32963518+hebasto@users.noreply.github.com>
a0a771843f contrib: Changes to checks for PowerPC64 (Luke Dashjr)
634f6ec4eb contrib: Parse ELF directly for symbol and security checks (Wladimir J. van der Laan)
Pull request description:
Instead of the ever-messier text parsing of the output of the readelf tool (which is clearly meant for human consumption, not to be machine-parsed), parse the ELF binaries directly.
Add a small dependency-less ELF parser specific to the checks.
This is slightly more secure, too, because it removes potential ambiguity due to misparsing and changes in the output format of `readelf`. It also allows for stricter and more specific ELF format checks in the future.
This removes the build-time dependency for `readelf`.
It passes the test-security-check for me locally, ~~though I haven't checked on all platforms~~. I've checked that this works on the cross-compile output for all ELF platforms supported by Bitcoin Core at the moment, as well as PPC64 LE and BE.
Top commit has no ACKs.
Tree-SHA512: 7f9241fec83ee512642fecf5afd90546964561efd8c8c0f99826dcf6660604a4db2b7255e1afb1e9bb0211fd06f5dbad18a6175dfc03e39761a40025118e7bfc
6690adba08 Warn when binaries are built from a dirty branch. (Tyler Chambers)
Pull request description:
- Adjusted the `--version` flag in bitcoind and bitcoin-wallet to have the same behavior.
- Added `--version` flag to bitcoin-tx to match.
- Added functionality in gen-manpages.sh to error when attempting to generate man pages for binaries built from a dirty branch.
Mitigates the problem described in issue #20412.
ACKs for top commit:
laanwj:
Tested ACK 6690adba08
Tree-SHA512: b5ca509f1a57f66808c2bebc4b710ca00c6fec7b5ebd7eef58018e28e716f5f2358e36551b8a4df571bf3204baed565a297aeefb93990e7a99add502b97ee1b8
Check both failure cases:
- Use a glibc symbol from a version that is too new
- Use a symbol from a library that is not in the allowlist
And also check a conforming binary.
Adding a similar check for Windows PE can be done in a separate PR.
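A hedged illustration of what a fixture for the first failure case could look like (not the repository's actual test fixture): getrandom() first appeared in glibc 2.25, so if the check's maximum allowed glibc version predates that, linking this program should make the check fail. The second case would be a similar program that links and calls into a shared library that is not on the allowlist.
```cpp
// Hypothetical fixture (illustration only): calling getrandom() pulls in a
// glibc symbol versioned 2.25 or newer, which should trip a maximum-version
// check whose allowed baseline is older than glibc 2.25.
#include <sys/random.h>

int main()
{
    unsigned char buf[8];
    (void)getrandom(buf, sizeof(buf), 0);
    return 0;
}
```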
Adjusted version flag behavior in bitcoin-tx, bitcoin-wallet, and
bitcoind to match. Added functionality in gen-manpages.sh to warn when
attempting to generate man pages for binaries built from a dirty
branch.
Instead of the ever-messier text parsing of the output of the readelf
tool (which is clearly meant for human consumption, not to be
machine-parsed), parse the ELF binaries directly.
Add a small dependency-less ELF parser specific to the checks.
This is slightly more secure, too, because it removes potential
ambiguity due to misparsing and changes in the output format of `readelf`. It
also allows for stricter and more specific ELF format checks in the future.
This removes the build-time dependency for `readelf`.
It passes the test-security-check for me locally, though I haven't
checked on all platforms.
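As a rough, minimal sketch of the direction (the parser added in contrib handles headers, sections, and symbols; this only reads the identification bytes), an ELF file's class and byte order can be read directly without shelling out to readelf:
```cpp
// Minimal sketch: read the ELF identification bytes directly, no readelf needed.
#include <cstdio>
#include <fstream>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    std::ifstream f(argv[1], std::ios::binary);
    unsigned char e_ident[16] = {};
    if (!f.read(reinterpret_cast<char*>(e_ident), sizeof(e_ident))) return 1;
    // Magic bytes: 0x7f 'E' 'L' 'F'
    if (!(e_ident[0] == 0x7f && e_ident[1] == 'E' && e_ident[2] == 'L' && e_ident[3] == 'F')) {
        std::puts("not an ELF file");
        return 1;
    }
    std::printf("class: %s, byte order: %s\n",
                e_ident[4] == 2 ? "ELF64" : "ELF32",               // EI_CLASS
                e_ident[5] == 1 ? "little-endian" : "big-endian"); // EI_DATA
    return 0;
}
```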
78c312c983 Replace current benchmarking framework with nanobench (Martin Ankerl)
Pull request description:
Replace current benchmarking framework with nanobench
This replaces the current benchmarking framework with nanobench [1], an
MIT licensed single-header benchmarking library, of which I am the
author. In my opinion this has several advantages, especially on Linux:
* fast: Running all benchmarks takes ~6 seconds instead of 4m13s on
an Intel i7-8700 CPU @ 3.20GHz.
* accurate: I ran e.g. the benchmark for SipHash_32b 10 times and
calculated standard deviation / mean = coefficient of variation:
* 0.57% CV for old benchmarking framework
* 0.20% CV for nanobench
So the benchmark results with nanobench seem to vary less than with
the old framework.
* It automatically determines runtime based on clock precision, so there is
no need to specify the number of evaluations.
* measures instructions, cycles, branches, instructions per cycle, and
branch misses (Linux only, when performance counters are available)
* outputs results in markdown table format.
* warns about an unstable environment (frequency scaling, turbo, ...)
* For better profiling, it is possible to set the environment variable
NANOBENCH_ENDLESS to force endless running of a particular benchmark
without the need to recompile. This makes it possible to e.g. run "perf top"
and look at hotspots.
Here is an example copy & pasted from the terminal output:
| ns/byte | byte/s | err% | ins/byte | cyc/byte | IPC | bra/byte | miss% | total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
| 2.52 | 396,529,415.94 | 0.6% | 25.42 | 8.02 | 3.169 | 0.06 | 0.0% | 0.03 | `bench/crypto_hash.cpp RIPEMD160`
| 1.87 | 535,161,444.83 | 0.3% | 21.36 | 5.95 | 3.589 | 0.06 | 0.0% | 0.02 | `bench/crypto_hash.cpp SHA1`
| 3.22 | 310,344,174.79 | 1.1% | 36.80 | 10.22 | 3.601 | 0.09 | 0.0% | 0.04 | `bench/crypto_hash.cpp SHA256`
| 2.01 | 496,375,796.23 | 0.0% | 18.72 | 6.43 | 2.911 | 0.01 | 1.0% | 0.00 | `bench/crypto_hash.cpp SHA256D64_1024`
| 7.23 | 138,263,519.35 | 0.1% | 82.66 | 23.11 | 3.577 | 1.63 | 0.1% | 0.00 | `bench/crypto_hash.cpp SHA256_32b`
| 3.04 | 328,780,166.40 | 0.3% | 35.82 | 9.69 | 3.696 | 0.03 | 0.0% | 0.03 | `bench/crypto_hash.cpp SHA512`
[1] https://github.com/martinus/nanobench
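For context, a minimal stand-alone benchmark against nanobench's documented single-header API might look like the sketch below (not code from this PR; the benchmarked body is an arbitrary placeholder):
```cpp
#define ANKERL_NANOBENCH_IMPLEMENT // single-header usage: implementation in exactly one TU
#include <nanobench.h>

#include <cstdint>
#include <random>

int main()
{
    std::mt19937_64 rng(1234);
    uint64_t sink = 0;

    // nanobench picks the number of iterations itself based on clock precision,
    // so nothing is hard-coded here; results are printed as a markdown table.
    ankerl::nanobench::Bench().title("example").run("mt19937_64 xor", [&] {
        sink ^= rng();
        ankerl::nanobench::doNotOptimizeAway(sink);
    });
    return 0;
}
```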
ACKs for top commit:
laanwj:
ACK 78c312c983
Tree-SHA512: 9e18770b18b6f95a7d0105a4a5497d31cf4eb5efe6574f4482f6f1b4c88d7e0946b9a4a1e9e8e6ecbf41a3f2d7571240677dcb45af29a6f0584e89b25f32e49e
Check that sections are appropriately separated in virtual memory,
based on their (expected) permissions. This checks for missing
-Wl,-z,separate-code and potentially other problems.
Co-authored-by: fanquake <fanquake@gmail.com>
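A much simplified sketch of the section-separation idea above (assuming a 64-bit little-endian ELF; the real contrib check inspects section and segment permissions in more detail): walk the program headers and flag any PT_LOAD segment mapped both writable and executable.
```cpp
#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    std::ifstream f(argv[1], std::ios::binary);
    std::vector<unsigned char> d((std::istreambuf_iterator<char>(f)), std::istreambuf_iterator<char>());
    if (d.size() < 0x40 || std::memcmp(d.data(), "\x7f" "ELF", 4) != 0) return 1;
    if (d[4] != 2 || d[5] != 1) return 1; // this sketch handles ELFCLASS64, little-endian only

    auto rd = [&](size_t off, size_t len) { // read a little-endian integer field
        uint64_t v = 0;
        for (size_t i = 0; i < len; ++i) v |= uint64_t(d[off + i]) << (8 * i);
        return v;
    };
    const uint64_t phoff     = rd(0x20, 8); // e_phoff
    const uint64_t phentsize = rd(0x36, 2); // e_phentsize
    const uint64_t phnum     = rd(0x38, 2); // e_phnum

    bool ok = true;
    for (uint64_t i = 0; i < phnum; ++i) {
        const uint64_t ph = phoff + i * phentsize;
        if (ph + 8 > d.size()) return 1;
        const uint64_t p_type  = rd(ph, 4);     // PT_LOAD == 1
        const uint64_t p_flags = rd(ph + 4, 4); // PF_X == 1, PF_W == 2
        if (p_type == 1 && (p_flags & 1) && (p_flags & 2)) {
            std::cout << "writable and executable LOAD segment found\n";
            ok = false;
        }
    }
    return ok ? 0 : 1;
}
```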
The RandomOrphan function and the function ecdsa_signature_parse_der_lax
in pubkey.cpp were causing non-deterministic test coverage.
Force the seed at the beginning of the test to make it deterministic.
The seed is selected carefully so that all branches of the function
ecdsa_signature_parse_der_lax are executed. Prior to this fix, the test
was exhibiting non-deterministic coverage since none of the ECDSA
signatures that were generated during the test had leading zeroes in
either R, S, or both, resulting in some branches of said function not
being executed. The seed ensures that both conditions are hit.
Removed the denialofservice_tests entry from the list of non-deterministic
tests in the coverage script.
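The pattern, sketched generically below (this is not Bitcoin Core's test code or RNG API; HitsRareBranch() is a hypothetical stand-in for the rarely-taken branches in the code under test):
```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Hypothetical stand-in: "true" mimics an input with a leading zero byte.
static bool HitsRareBranch(uint64_t v) { return (v >> 56) == 0; }

int main()
{
    std::mt19937_64 rng;
    rng.seed(0x1234); // fixed seed: every run draws the same values

    int hits = 0;
    for (int i = 0; i < 1000; ++i) {
        if (HitsRareBranch(rng())) ++hits;
    }
    // With the seed pinned, this count is identical on every run, so the set of
    // executed branches (and therefore the coverage report) is stable.
    std::cout << "rare branch hit " << hits << " times\n";
    return 0;
}
```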
This replaces the current benchmarking framework with nanobench [1], an
MIT licensed single-header benchmarking library, of which I am the
author. In my opinion this has several advantages, especially on Linux:
* fast: Running all benchmarks takes ~6 seconds instead of 4m13s on
an Intel i7-8700 CPU @ 3.20GHz.
* accurate: I ran e.g. the benchmark for SipHash_32b 10 times and
calculated standard deviation / mean = coefficient of variation:
* 0.57% CV for old benchmarking framework
* 0.20% CV for nanobench
So the benchmark results with nanobench seem to vary less than with
the old framework.
* It automatically determines runtime based on clock precision, so there is
no need to specify the number of evaluations.
* measures instructions, cycles, branches, instructions per cycle, and
branch misses (Linux only, when performance counters are available)
* outputs results in markdown table format.
* warns about an unstable environment (frequency scaling, turbo, ...)
* For better profiling, it is possible to set the environment variable
NANOBENCH_ENDLESS to force endless running of a particular benchmark
without the need to recompile. This makes it possible to e.g. run "perf top"
and look at hotspots.
Here is an example copy & pasted from the terminal output:
| ns/byte | byte/s | err% | ins/byte | cyc/byte | IPC | bra/byte | miss% | total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
| 2.52 | 396,529,415.94 | 0.6% | 25.42 | 8.02 | 3.169 | 0.06 | 0.0% | 0.03 | `bench/crypto_hash.cpp RIPEMD160`
| 1.87 | 535,161,444.83 | 0.3% | 21.36 | 5.95 | 3.589 | 0.06 | 0.0% | 0.02 | `bench/crypto_hash.cpp SHA1`
| 3.22 | 310,344,174.79 | 1.1% | 36.80 | 10.22 | 3.601 | 0.09 | 0.0% | 0.04 | `bench/crypto_hash.cpp SHA256`
| 2.01 | 496,375,796.23 | 0.0% | 18.72 | 6.43 | 2.911 | 0.01 | 1.0% | 0.00 | `bench/crypto_hash.cpp SHA256D64_1024`
| 7.23 | 138,263,519.35 | 0.1% | 82.66 | 23.11 | 3.577 | 1.63 | 0.1% | 0.00 | `bench/crypto_hash.cpp SHA256_32b`
| 3.04 | 328,780,166.40 | 0.3% | 35.82 | 9.69 | 3.696 | 0.03 | 0.0% | 0.03 | `bench/crypto_hash.cpp SHA512`
[1] https://github.com/martinus/nanobench
* Adds support for asymptotes
This adds support to calculate asymptotic complexity of a benchmark.
This is similar to #17375, but currently only one asymptote is
supported, and I have added support in the benchmark `ComplexMemPool`
as an example.
Usage is e.g. like this:
```
./bench_bitcoin -filter=ComplexMemPool -asymptote=25,50,100,200,400,600,800
```
This runs the benchmark `ComplexMemPool` several times but with
different complexityN settings. The benchmark can extract that number
and use it accordingly. Here, it's used for `childTxs`. The output is
this:
| complexityN | ns/op | op/s | err% | ins/op | cyc/op | IPC | total | benchmark
|------------:|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|----------:|:----------
| 25 | 1,064,241.00 | 939.64 | 1.4% | 3,960,279.00 | 2,829,708.00 | 1.400 | 0.01 | `ComplexMemPool`
| 50 | 1,579,530.00 | 633.10 | 1.0% | 6,231,810.00 | 4,412,674.00 | 1.412 | 0.02 | `ComplexMemPool`
| 100 | 4,022,774.00 | 248.58 | 0.6% | 16,544,406.00 | 11,889,535.00 | 1.392 | 0.04 | `ComplexMemPool`
| 200 | 15,390,986.00 | 64.97 | 0.2% | 63,904,254.00 | 47,731,705.00 | 1.339 | 0.17 | `ComplexMemPool`
| 400 | 69,394,711.00 | 14.41 | 0.1% | 272,602,461.00 | 219,014,691.00 | 1.245 | 0.76 | `ComplexMemPool`
| 600 | 168,977,165.00 | 5.92 | 0.1% | 639,108,082.00 | 535,316,887.00 | 1.194 | 1.86 | `ComplexMemPool`
| 800 | 310,109,077.00 | 3.22 | 0.1% |1,149,134,246.00 | 984,620,812.00 | 1.167 | 3.41 | `ComplexMemPool`
| coefficient | err% | complexity
|--------------:|-------:|------------
| 4.78486e-07 | 4.5% | O(n^2)
| 6.38557e-10 | 21.7% | O(n^3)
| 3.42338e-05 | 38.0% | O(n log n)
| 0.000313914 | 46.9% | O(n)
| 0.0129823 | 114.4% | O(log n)
| 0.0815055 | 133.8% | O(1)
The best-fitting curve is O(n^2), so the algorithm seems to scale
quadratically with `childTxs` in the range 25 to 800.
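A self-contained sketch of feeding nanobench's documented complexityN/complexityBigO API, analogous to what the `-asymptote` option drives (not the ComplexMemPool benchmark itself; the workload is a trivial placeholder):
```cpp
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    ankerl::nanobench::Bench bench;
    for (auto n : {100U, 200U, 400U, 800U, 1600U}) {
        std::vector<int> v(n, 1);
        // complexityN(n) records the problem size next to the measured runtime.
        bench.complexityN(n).run("sum of n ints", [&] {
            int sum = std::accumulate(v.begin(), v.end(), 0);
            ankerl::nanobench::doNotOptimizeAway(sum);
        });
    }
    // Fit the recorded (n, runtime) pairs against O(1), O(log n), O(n), O(n log n), ...
    std::cout << bench.complexityBigO() << std::endl;
    return 0;
}
```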
fa13090d20 contrib: Remove optimize-pngs.py script, which lives in the maintainer repo (MarcoFalke)
Pull request description:
Moved to https://github.com/bitcoin-core/bitcoin-maintainer-tools/blob/master/optimize-pngs.py
Bitcoin Core should focus on the full node implementation, not on scripts to compress PNG images.
This script is only used when new PNG files are added to the repo, which happens about once every two years, so fetching it from the other repo should not be a burden. Removing it from this repo cuts down on the meta files we need to maintain in the main repo.
ACKs for top commit:
practicalswift:
ACK fa13090d20 -- `+0 lines, -82 lines` :)
promag:
ACK fa13090d20.
hebasto:
ACK fa13090d20, verified that script is already [moved](https://github.com/bitcoin-core/bitcoin-maintainer-tools/pull/56).
Tree-SHA512: 37d111adae769bcddc6ae88041032d5a2b8b228fec67f555c8333c38de3992f5138b30bea868d7d6d6b7f966a47133e5853134373b149ab23cba3b8b560ecb31
332f373a9d [scripts] previous_release: improve failed download error message (Sebastian Falbesoner)
Pull request description:
Currently, if the earlier release build/fetch script `previous_release.sh` is invoked with the option `-b` (intending to fetch a binary package from `https://bitcoin.org`) and the download fails, the user sees the following confusing output:
```
$ contrib/devtools/previous_release.sh -r -b v0.9.5
[...]
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
```
This implies that the download worked but the archive is corrupted, when in reality the HTML document describing why delivery failed (most likely 404 Not Found) is saved and then handed to tar for unpacking. In contrast to wget, curl is a bit stubborn and needs an explicit instruction to react to server errors, via the flag `-f` (it prints an error message and returns an error code, ideal for scripts): https://curl.haxx.se/docs/manpage.html#-f
On the PR branch, the output on a failed download now looks like this:
```
$ contrib/devtools/previous_release.sh -r -b v0.9.5
[...]
curl: (22) The requested URL returned error: 404 Not Found
Download failed.
```
ACKs for top commit:
fanquake:
ACK 332f373a9d
Tree-SHA512: 046c931ad9e78aeb2d13faa4866d46122ed325aa142483547c2b04032d03223ed2411783b00106fcab0cd91b2f78691531ac526ed7bb3ed7547b6e2adbfb2e93
before:
------------------------------------------------------------
$ contrib/devtools/previous_release.sh -r -b v0.9.5
[...]
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
------------------------------------------------------------
now:
------------------------------------------------------------
$ contrib/devtools/previous_release.sh -r -b v0.9.5
[...]
curl: (22) The requested URL returned error: 404 Not Found
Download failed.
------------------------------------------------------------