gen_context was unable to find the required headers without some
autotools fixups, and `make dist` was also broken without the extra
sources for the host-side table-builder utility.
This vastly shrinks the size of the context required for signing on devices with
memory-mapped Flash.
The tables are generated into a header by the new gen_context tool.
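A hypothetical illustration of the resulting layout (the real header, its
element type and its dimensions are produced by gen_context and the build
configuration; example_ge_storage and example_ecmult_gen_table are
stand-ins): the precomputed multiples of G become a const array, so they
can stay in read-only, memory-mapped storage instead of being rebuilt in
RAM when the signing context is created.

    /* Illustrative stand-ins only; the real table is emitted by gen_context. */
    typedef struct { unsigned char data[64]; } example_ge_storage;

    static const example_ge_storage example_ecmult_gen_table[2] = {
        {{ 0x01 /* ... remaining bytes filled in by the generator ... */ }},
        {{ 0x02 /* ... */ }}
    };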
5a43124 Save 1 _fe_negate since s1 == -s2 (Peter Dettman)
a5d796e Update code comments (Peter Dettman)
7d054cd Refactor to save a _fe_negate (Peter Dettman)
b28d02a Refactor to remove a local var (Peter Dettman)
55e7fc3 Perf. improvement in _gej_add_ge (Peter Dettman)
7657420 Add tests for adding P+Q with P.x!=Q.x and P.y=-Q.y (Pieter Wuille)
8c5d5f7 tests: Add failing unit test for #257 (bad addition formula) (Andrew Poelstra)
5de4c5d gej_add_ge: fix degenerate case when computing P + (-lambda)P (Andrew Poelstra)
bcf2fcf gej_add_ge: rearrange algebra (Andrew Poelstra)
If two points (x1, y1) and (x2, y2) are given to gej_add_ge with
x1 != x2 but y1 = -y2, the function gives a wrong answer since
this causes it to compute "lambda = 0/0" during an intermediate
step. (Here lambda refers to an auxiliary variable in the point
addition formula, not the cube-root of 1 used by the endomorphism
optimization.)
This commit catches the 0/0 and replaces it with an alternate
expression for lambda, cmov'ing it in place if necessary.
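A sketch of why the 0/0 arises, assuming the Brier-Joye-style unified
addition formula that gej_add_ge is based on (specialized to a = 0):

    lambda = (x1^2 + x1*x2 + x2^2) / (y1 + y2)

Both points lie on y^2 = x^3 + 7, so y1 = -y2 implies y1^2 = y2^2 and
hence x1^3 = x2^3. Since x1 != x2, x2 = beta*x1 for a nontrivial cube
root of unity beta (exactly the P and (-lambda)P case), which makes the
numerator (x1^3 - x2^3)/(x1 - x2) = 0 while the denominator y1 + y2 = 0
as well.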
There are no functionality or opcount changes here; I need to do
this to make sure both R and M are computed before they are used,
since a future patch will replace either none or both of them.
Also compute r->y directly in terms of r->x, which again will be
used in a future patch.
Right now `secp256k1_ec_pubkey_decompress` takes an in/out pointer to
a public key and replaces the input key with its decompressed variant.
This forces users who store compressed keys in small (<65-byte)
fixed-size buffers (for example, the Rust bindings do this) to explicitly
and wastefully copy their key to a larger buffer.
[API BREAK]
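A sketch of the copy this forces, with the pre-change prototype written
out as an assumption (it may differ, e.g. by a context argument); the
helper below is purely illustrative:

    #include <string.h>

    /* Assumed shape of the old in/out API; not copied from the header. */
    int secp256k1_ec_pubkey_decompress(unsigned char *pubkey, int *pubkeylen);

    /* Illustrative caller that only stores 33-byte compressed keys. */
    static int decompress_from_small_buffer(const unsigned char compressed[33]) {
        unsigned char buf[65];        /* oversized scratch buffer */
        int len = 33;
        memcpy(buf, compressed, 33);  /* the wasteful copy described above */
        return secp256k1_ec_pubkey_decompress(buf, &len);
    }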
* Make secp256k1_gej_add_var and secp256k1_gej_double return the
Z ratio to go from a.z to r.z.
* Use these Z ratios to speed up batch point conversion to affine
coordinates, and to speed up batch conversion of points to a
common Z coordinate (see the sketch below).
* Add a point addition function that takes a point with a known
Z inverse.
* Due to secp256k1's endomorphism, all additions in the EC
multiplication code can work on affine coordinates (with an
implicit common Z coordinate), correcting the Z coordinate of
the result afterwards.
Refactoring by Pieter Wuille:
* Move more global-z logic into the group code.
* Separate the code for computing the odd multiples from the code that
brings them to either storage or globalz format.
* Rename functions.
* Make all addition operations return Z ratios, and test them.
* Make the zr table format compatible with future batch chaining
(the first entry in zr becomes the ratio between the input and the
first output).
Original idea and code by Peter Dettman.
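A toy sketch of the Z-ratio bookkeeping described above: the tiny prime
field, the point types and the function names here are illustrative
stand-ins, not the library's internal API. Given a chain of Jacobian
points where zr[i] holds the ratio a[i].z / a[i-1].z, the whole chain can
be brought to the common Z of the last point using only multiplications
by accumulated ratios, with no per-point inversion.

    #include <stddef.h>
    #include <stdint.h>

    #define P 1000003ULL  /* toy prime, stand-in for the real field modulus */

    typedef struct { uint64_t x, y, z; } jac_pt;  /* Jacobian: (x/z^2, y/z^3) */
    typedef struct { uint64_t x, y; } num_pt;     /* numerators w.r.t. a shared Z */

    static uint64_t fmul(uint64_t a, uint64_t b) { return (a * b) % P; }

    /* Rescale a[0..n-1] (n >= 1) so that every entry shares the implicit Z
     * of a[n-1]. zr[i] must hold a[i].z / a[i-1].z for i >= 1; zr[0] is not
     * used here (the commit reserves it as the input-to-first-output ratio
     * for batch chaining). */
    static void to_global_z(num_pt *r, uint64_t *globalz,
                            const jac_pt *a, const uint64_t *zr, size_t n) {
        uint64_t zs = 1;  /* running product zr[i+1]*...*zr[n-1] = z[n-1]/z[i] */
        size_t i = n - 1;

        *globalz = a[i].z;  /* the last point is already at the global Z */
        r[i].x = a[i].x;
        r[i].y = a[i].y;
        while (i-- > 0) {
            zs = fmul(zs, zr[i + 1]);
            r[i].x = fmul(a[i].x, fmul(zs, zs));            /* x * zs^2 */
            r[i].y = fmul(a[i].y, fmul(fmul(zs, zs), zs));  /* y * zs^3 */
        }
    }

The rescaling works because a[i].z * zs equals the global Z, so
multiplying the numerators by zs^2 and zs^3 re-expresses each point over
that shared denominator.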
This computes (n-b)G + bG, with a random value b, in place of nG in
ecmult_gen() for signing.
This is intended to reduce exposure to potential power/EMI sidechannels
during signing and pubkey generation by blinding the secret value with
another value which is hopefully unknown to the attacker.
It may not be very helpful if the attacker is able to observe the setup
or if even the scalar addition has an unacceptable leak, but it has low
overhead in any case and the security should be purely additive on top
of the existing defenses against sidechannels.
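A toy sketch of the blinding algebra, with modular exponentiation
standing in for scalar multiplication; gmul, gpow, blinded_pow and the
toy group are illustrative, not the library's ecmult_gen internals. The
point is that the secret exponent n never reaches the table-driven
routine directly, only n - b does, and the precomputed g^b is folded
back in afterwards.

    #include <stdint.h>

    #define MOD 1000003ULL    /* toy prime */
    #define ORD (MOD - 1ULL)  /* order of the toy group */

    static uint64_t gmul(uint64_t a, uint64_t b) { return (a * b) % MOD; }

    static uint64_t gpow(uint64_t g, uint64_t e) {  /* g^e mod MOD */
        uint64_t r = 1;
        while (e) {
            if (e & 1) r = gmul(r, g);
            g = gmul(g, g);
            e >>= 1;
        }
        return r;
    }

    /* Returns g^n, computed as g^(n-b) * g^b. The caller draws a random
     * blinding value b, keeps g_to_b = gpow(g, b), and re-randomizes both
     * from time to time. */
    static uint64_t blinded_pow(uint64_t g, uint64_t n, uint64_t b,
                                uint64_t g_to_b) {
        uint64_t e = ((n % ORD) + ORD - (b % ORD)) % ORD;  /* n - b mod order */
        return gmul(gpow(g, e), g_to_b);
    }

The commit applies the same split to scalars modulo the curve order, with
a point addition in place of the final multiplication.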
Use a conditional move of the same kind we use for the affine points
in the storage type, instead of multiplying by the infinity flag
and adding. This results in fewer constructions to worry about for
sidechannel behavior.
It might also be faster: it doesn't appear to benchmark as slower for
me, at least. I think the CMOV is faster than the mul_int + add but
slower than the set+add, making it a wash.
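A minimal sketch of this kind of constant-time conditional move on a
word array (cmov_words is an illustrative stand-in; the library's own
cmov helpers operate on its internal field and point representations):
both outcomes execute the same instructions, so the selection never
shows up as a secret-dependent branch.

    #include <stddef.h>
    #include <stdint.h>

    /* If flag is 1, copy a into r; if flag is 0, leave r unchanged. */
    static void cmov_words(uint32_t *r, const uint32_t *a, size_t n, int flag) {
        uint32_t mask0 = (uint32_t)flag + ~((uint32_t)0);  /* flag ? 0x00000000 : 0xFFFFFFFF */
        uint32_t mask1 = ~mask0;                           /* flag ? 0xFFFFFFFF : 0x00000000 */
        size_t i;
        for (i = 0; i < n; i++) {
            r[i] = (r[i] & mask0) | (a[i] & mask1);
        }
    }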