Unfortunately, the benchmark function cannot return
a corrected parallel cost, so it must fail.
Note that some backends (like OpenSSL) also limit the maximal thread count,
so it was currently clamped at 4 for luksFormat and 8 for benchmark.
This patch sets both to the PBKDF internal parallel limit.
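The clamping then reduces to something like this (sketch only;
PBKDF_THREADS_MAX is an illustrative name, not the actual cryptsetup macro):

    #include <stdint.h>

    #define PBKDF_THREADS_MAX 4 /* illustrative internal parallel limit */

    /* Clamp the requested thread count to the PBKDF internal limit
     * instead of the former hardcoded 4 (luksFormat) / 8 (benchmark). */
    static uint32_t adjusted_pbkdf_threads(uint32_t requested)
    {
            return requested > PBKDF_THREADS_MAX ? PBKDF_THREADS_MAX : requested;
    }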
In both the pbkdf2 and argon2* benchmarks, the key variable
is a pointer to benchmark data and does not need to be erased
securely as regular key data would.
This patch switches the code to SPDX one-line license identifiers according to
https://spdx.dev/learn/handling-license-info/
replacing the long license text headers.
I used the C++ comment format on the first line, in the style
// SPDX-License-Identifier: <id>
except for the exported libcryptsetup.h, where only C comments are used.
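For example (the concrete identifiers here are illustrative; every file
keeps the license declared by the header it replaces):

    // SPDX-License-Identifier: GPL-2.0-or-later

and in libcryptsetup.h:

    /* SPDX-License-Identifier: LGPL-2.1-or-later */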
The only additional changes are:
- switch backend utf8.c from LGPL2+ to LGPL2.1+ (as in systemd)
- add some additional formatting lines.
The Argon2 draft defines suggested parameters for the disk encryption use
case, but the LUKS2 approach is slightly different. We need to provide
platform-independent values. The values in the draft expect 64-bit systems
(suggesting the use of 6 GiB of RAM), while we need to provide compatibility
with all 32-bit systems, so allocating more than 4 GiB of memory is not an
option for LUKS2.
The maximal limit in LUKS2 stays at 4 GiB, and by default LUKS2
PBKDF benchmarking sets the maximum to 1 GiB, preferring an increase of the
CPU cost. But for the minimal memory cost we had a quite low limit of 32 MiB.
This patch increases the benchmarking value to 64 MiB (the minimal value
suggested in the Argon2 RFC). For compatibility reasons we still
allow the older limit if it is set by a parameter.
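The resulting limits could be sketched like this (constant and function
names are illustrative, not the real cryptsetup macros):

    #include <stdint.h>

    /* Illustrative constants, all in KiB. */
    #define ARGON2_MEMORY_MAX_KB    (4U * 1024 * 1024) /* 4 GiB hard LUKS2 limit */
    #define BENCH_MEMORY_MAX_KB     (1U * 1024 * 1024) /* 1 GiB benchmark ceiling */
    #define BENCH_MEMORY_MIN_KB     (64U * 1024)       /* new 64 MiB minimum */
    #define BENCH_MEMORY_MIN_OLD_KB (32U * 1024)       /* legacy 32 MiB minimum */

    static uint32_t bench_memory_clamp(uint32_t requested_kb, int allow_old_minimum)
    {
            uint32_t min_kb = allow_old_minimum ? BENCH_MEMORY_MIN_OLD_KB
                                                : BENCH_MEMORY_MIN_KB;

            if (requested_kb < min_kb)
                    return min_kb;
            if (requested_kb > BENCH_MEMORY_MAX_KB)
                    return BENCH_MEMORY_MAX_KB;
            return requested_kb;
    }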
There is a memory leak when PBKDF2_temp > UINT32_MAX. Here we change
the early return to a goto out, so that key is freed.
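The pattern of the fix, simplified (the error code and cleanup call are
illustrative, not a verbatim quote of the function):

    /* before: the early return leaked the allocated key buffer */
    if (PBKDF2_temp > UINT32_MAX)
            return -EINVAL;          /* error code illustrative */

    /* after: take the common cleanup path instead */
    if (PBKDF2_temp > UINT32_MAX) {
            r = -EINVAL;
            goto out;
    }
    /* ... */
    out:
            free(key);               /* cleanup call illustrative */
            return r;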
Signed-off-by: Lixiaokeng <lixiaokeng@huawei.com>
Signed-off-by: Linfeilong <linfeilong@huawei.com>
1) If the calculated costs were the same, it ran forever.
2) If the calculation returned the final values in the first step,
the output costs were not updated and the benchmark returned too low values.
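The shape of the fixed loop, as a sketch (all identifiers are
illustrative, not the actual cryptsetup code):

    #include <stdint.h>

    extern uint32_t next_cost_estimate(uint32_t cost, uint32_t ms, uint32_t target_ms);
    extern uint32_t measure(uint32_t cost);
    extern int close_enough(uint32_t ms, uint32_t target_ms);

    static void benchmark_loop(uint32_t target_ms, uint32_t *out_cost)
    {
            uint32_t cost = 1, old_cost, ms = measure(cost);

            do {
                    old_cost = cost;
                    cost = next_cost_estimate(cost, ms, target_ms);
                    *out_cost = cost;       /* 2) propagate result every step */
                    if (cost == old_cost)
                            break;          /* 1) converged; do not spin forever */
                    ms = measure(cost);
            } while (!close_enough(ms, target_ms));
    }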
When we have measured time smaller than target time, we are decreasing
the parameters. Thus, we should first try to decrease t_cost and only
if that is not possible should we try to decrease m_cost instead. The
original logic was only valid for the case where parameters are being
increased. Most notably this caused unusual parameter combinations for
iteration time < 250 ms.
In this commit we also factor out the now heavily nested parameter
update formula.
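A sketch of the corrected direction handling and the factored-out update
formula (identifiers are illustrative):

    #include <stdint.h>

    /* Factored-out update: scale a cost linearly toward the target time. */
    static uint32_t scale_cost(uint32_t cost, uint32_t ms, uint32_t target_ms)
    {
            uint64_t next;

            if (!ms)
                    ms = 1;
            next = (uint64_t)cost * target_ms / ms;
            if (next > UINT32_MAX)
                    next = UINT32_MAX;
            return next ? (uint32_t)next : 1;
    }

    static void next_params(uint32_t *t_cost, uint32_t *m_cost,
                            uint32_t ms, uint32_t target_ms,
                            uint32_t t_min, uint32_t m_min, uint32_t m_max)
    {
            if (ms > target_ms) {
                    /* Decreasing: try t_cost first, m_cost only as a fallback. */
                    if (*t_cost > t_min)
                            *t_cost = scale_cost(*t_cost, ms, target_ms);
                    else if (*m_cost > m_min)
                            *m_cost = scale_cost(*m_cost, ms, target_ms);
            } else {
                    /* Increasing: the original order (m_cost first) stays valid. */
                    if (*m_cost < m_max)
                            *m_cost = scale_cost(*m_cost, ms, target_ms);
                    else
                            *t_cost = scale_cost(*t_cost, ms, target_ms);
            }
    }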
Code based on patch by Ondrej Mosnacek
The new benchmark works as follows:
Phase 1:
It searches for the smallest parameters such that the duration is 250 ms
(this part is quite fast).
Then it uses that data point to estimate the parameters that will have
the desired duration (and fulfill the basic constraints).
Phase 2:
The candidate parameters are then measured and if their duration falls
within +-5% of the target duration, they are accepted.
Otherwise, new candidate parameters are estimated based on the last
measurement and phase 2 is repeated.
When measuring the duration for given parameters, the measurement
is repeated 3 or 4 times and a minimum of the measured durations
is used as the final duration (to reduce variance in measurements).
A minimum is taken instead of the mean, because the measurements definitely
have a certain lower bound, but no upper bound (therefore the mean value
would tend to be higher than the value with the highest probability density).
The actual "most likely" duration is going to be somewhere just above
the minimum measurable value, so minimum over the observations is
a better estimate than mean.
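A heavily simplified sketch of the two phases (all names are illustrative;
the extern helpers stand in for the real measurement and estimation code):

    #include <stdint.h>

    extern uint32_t measure_once(uint32_t t_cost, uint32_t m_cost);
    extern uint32_t find_smallest_params_250ms(uint32_t *t_cost, uint32_t *m_cost);
    extern void extrapolate(uint32_t *t_cost, uint32_t *m_cost,
                            uint32_t measured_ms, uint32_t target_ms);

    /* Repeat the measurement and keep the minimum: noise (scheduling,
     * cache misses) can only add time, so the minimum best estimates
     * the true duration. */
    static uint32_t measure_min(uint32_t t_cost, uint32_t m_cost, int samples)
    {
            uint32_t best = UINT32_MAX, ms;

            while (samples-- > 0) {
                    ms = measure_once(t_cost, m_cost);
                    if (ms < best)
                            best = ms;
            }
            return best;
    }

    static void benchmark(uint32_t target_ms, uint32_t *t_cost, uint32_t *m_cost)
    {
            uint32_t ms;

            /* Phase 1: cheap search for ~250 ms parameters, then one
             * extrapolation to candidates for the full target time. */
            ms = find_smallest_params_250ms(t_cost, m_cost);
            extrapolate(t_cost, m_cost, ms, target_ms);

            /* Phase 2: accept candidates within +-5% of the target,
             * otherwise re-extrapolate from the last measurement. */
            for (;;) {
                    ms = measure_min(*t_cost, *m_cost, 3);
                    if (ms >= target_ms - target_ms / 20 &&
                        ms <= target_ms + target_ms / 20)
                            break;
                    extrapolate(t_cost, m_cost, ms, target_ms);
            }
    }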
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Prepare the API for PBKDF that can set three costs:
- time (similar to iterations in PBKDF2)
- memory (required memory for memory-hard function)
- threads (required number of threads/CPUs).
This patch also removes the wrongly designed API call
crypt_benchmark_kdf and replaces it with the new call
crypt_benchmark_pbkdf.
Two functions for per-context PBKDF setting
are introduced: crypt_set_pbkdf_type and crypt_get_pbkdf_type.
The patch should be backward compatible when using
the crypt_set_iteration_time function (which works only for PBKDF2).
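Typical usage of the new per-context setting looks like this (values are
illustrative; field names follow the struct crypt_pbkdf_type declaration
in the current libcryptsetup.h):

    #include <libcryptsetup.h>

    static int set_argon2_pbkdf(struct crypt_device *cd)
    {
            struct crypt_pbkdf_type pbkdf = {
                    .type = "argon2i",
                    .hash = "sha256",
                    .time_ms = 800,               /* time cost */
                    .max_memory_kb = 1024 * 1024, /* memory cost, 1 GiB */
                    .parallel_threads = 4,        /* required threads/CPUs */
            };

            return crypt_set_pbkdf_type(cd, &pbkdf);
    }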
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Argon2i/id is a password hashing function that
won the Password Hashing Competition.
It will be (optionally) used in LUKS2 for password-based
key derivation.
We have to bundle the code for now (as with PBKDF2 years ago)
because there is not yet a usable implementation in common
crypto libraries.
(Once there is a native implementation, cryptsetup
will switch to the crypto library version.)
For now, we use the reference (not optimized but portable) implementation.
This patch contains bundled Argon2 algorithm library copied from
https://github.com/P-H-C/phc-winner-argon2
For more info see Password Hashing Competition site:
https://password-hashing.net/
and draft of RFC document
https://datatracker.ietf.org/doc/draft-irtf-cfrg-argon2/
Signed-off-by: Milan Broz <gmazyland@gmail.com>
If the measurement function returns exactly 500 ms, the iteration
calculation loop doubles the iteration count, but instead of repeating
the measurement it uses this value directly.
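The fixed shape of the loop, sketched (identifiers illustrative): every
adjusted iteration count is measured again before it can be used.

    #include <stdint.h>

    extern uint32_t measure_ms(uint64_t iterations); /* illustrative helper */

    static uint64_t find_iterations(void)
    {
            uint64_t iterations = 1000;
            uint32_t ms = measure_ms(iterations);

            while (ms < 500) {
                    iterations *= 2;
                    /* never let a doubled, unmeasured count escape the loop */
                    ms = measure_ms(iterations);
            }
            return iterations;
    }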
Thanks to Ondrej Mosnacek for bug report.
The previous PBKDF2 benchmark code did not take the
output key length into account.
For SHA1 (with 160-bit output) and 256-bit keys (and longer)
this means that the final value was higher than it should have been.
For other hash algorithms (like SHA256 or SHA512) it caused
the iteration count to be smaller (in comparison to SHA1) than
expected for the requested time period.
This patch fixes the code to use the key size of the formatted device
(or the default LUKS key size if running in informational benchmark mode).
Thanks to A.Visconti, S.Bossi, A.Calo and H.Ragab
(http://www.club.di.unimi.it/) for pointing this out.
(Based on the "What users should know about Full Disk Encryption
based on LUKS" paper to be presented at CANS2015.)
(Or it rehashes the key in every iteration.)
- Kernel backends seem not to support >20480 HMAC keys.
- NSS is slow (without a proper key reset).
Add some test vectors (commented out by default).