It is good practice to mark public APIs as extern "C" so that
projects written in C++ can use our library.
[mbroz] This is not a public API in cryptsetup, but we use this backend
in other projects; this aligns the code changes.
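For illustration, a typical guard in a public header looks like this
(a minimal sketch; the function name is hypothetical):

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* hypothetical exported function, for illustration only */
    int crypt_backend_example(void);

    #ifdef __cplusplus
    }
    #endif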
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch switches the code to SPDX one-line license identifiers according to
https://spdx.dev/learn/handling-license-info/
replacing the long license text headers.
I used the C++ comment format on the first line, in the style
// SPDX-License-Identifier: <id>
except for the exported libcryptsetup.h, where only C comments are used.
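For example (the license identifiers shown here are only illustrative):

    regular source file:
    // SPDX-License-Identifier: GPL-2.0-or-later

    exported libcryptsetup.h:
    /* SPDX-License-Identifier: LGPL-2.1-or-later */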
The only additional changes are:
- switch backend utf8.c from LGPL2+ to LGPL2.1+ (as in systemd)
- add some additional formatting lines.
For Argon2, crypto backends with possible native code (gcrypt, OpenSSL)
now print a flag in the debug output. If libargon2 is used, either
[cryptsetup libargon2] (embedded code) or [external libargon2]
(dynamic external library) is printed:
# Crypto backend (OpenSSL 3.0.11 19 Sep 2023 [default][legacy] [external libargon2])
or
# Crypto backend (OpenSSL 3.0.11 19 Sep 2023 [default][legacy] [cryptsetup libargon2])
Fixes: #851
For OpenSSL2, we use the PKCS5_PBKDF2_HMAC() function.
Unfortunately, its iteration count is defined as a signed integer
(unlike the unsigned one in the OpenSSL3 PARAMS KDF API).
This can lead to an overflow and a decrease of the actual iteration count.
In reality this can happen only if pbkdf-force-iterations is used.
This patch adds a check against INT_MAX if linked to older OpenSSL and
disallows such a setting.
Note that this is a flaw in the OpenSSL2 API; cryptsetup internally
uses uint32_t for the iteration count.
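A minimal sketch of the added check, with simplified names (the real
code lives in the OpenSSL crypto backend):

    #include <errno.h>
    #include <limits.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <openssl/evp.h>

    static int pbkdf2_sketch(const char *password, size_t password_length,
                             const unsigned char *salt, size_t salt_length,
                             uint32_t iterations, const EVP_MD *hash_id,
                             unsigned char *key, size_t key_length)
    {
        /* Older OpenSSL takes the iteration count as a signed int;
         * reject values that would overflow and silently decrease it. */
        if (iterations > INT_MAX)
            return -EINVAL;

        if (!PKCS5_PBKDF2_HMAC(password, (int)password_length,
                               salt, (int)salt_length,
                               (int)iterations, hash_id,
                               (int)key_length, key))
            return -EINVAL;

        return 0;
    }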
Reported by wangzhiqiang <wangzhiqiang95@huawei.com> on the cryptsetup list.
The system FIPS mode check no longer depends on the /etc/system-fips
file. The change should be compatible with older distributions since
we now depend on a crypto backend internal routine.
This commit affects only FIPS-enabled systems (with FIPS-enabled
builds). In case this causes any regression in current distributions,
feel free to drop the patch.
For reference see https://bugzilla.redhat.com/show_bug.cgi?id=2080516
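A minimal sketch of such a backend routine, using the libgcrypt call
as an example (the function name here is illustrative; each backend
uses its own library call):

    #include <stdbool.h>
    #include <gcrypt.h>

    /* Ask the crypto library itself instead of checking /etc/system-fips. */
    static bool backend_fips_mode(void)
    {
        return gcry_fips_mode_active() != 0;
    }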
The Argon2 draft defines suggested parameters for disk encryption use, but the
LUKS2 approach is slightly different. We need to provide platform-independent
values. The values in the draft expect 64-bit systems (suggesting the use of
6 GiB of RAM), while we need to provide compatibility with all 32-bit systems,
so allocating more than 4 GiB of memory is not an option for LUKS2.
The maximal limit in LUKS2 stays at 4 GiB, and by default LUKS2
PBKDF benchmarking sets the maximum to 1 GiB, preferring an increase
of the CPU cost.
But for the minimal memory cost we had a quite low limit of 32 MiB.
This patch increases the benchmarking value to 64 MiB (the minimal
value suggested in the Argon2 RFC). For compatibility reasons we still
allow the older limit if set explicitly by a parameter.
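A minimal sketch of the resulting limits (constants per this commit;
the function name and the user_set flag are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define ARGON2_MEMORY_MIN_KB   (64 * 1024)        /* 64 MiB benchmark minimum */
    #define ARGON2_MEMORY_BENCH_KB (1024 * 1024)      /* 1 GiB benchmark maximum */
    #define ARGON2_MEMORY_MAX_KB   (4U * 1024 * 1024) /* 4 GiB LUKS2 hard limit */

    static uint32_t clamp_memory_cost(uint32_t memory_kb, bool user_set)
    {
        if (memory_kb > ARGON2_MEMORY_MAX_KB)
            return ARGON2_MEMORY_MAX_KB;
        /* An explicitly set parameter may still use the older, lower limit. */
        if (user_set)
            return memory_kb;
        if (memory_kb < ARGON2_MEMORY_MIN_KB)
            return ARGON2_MEMORY_MIN_KB;
        if (memory_kb > ARGON2_MEMORY_BENCH_KB)
            return ARGON2_MEMORY_BENCH_KB;
        return memory_kb;
    }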
- rename sector_start -> iv_start (it is now an IV shift for subsequent
  en/decrypt operations)
- rename count -> length. We accept the length in bytes now and perform sanity
  checks in crypt_storage_init and crypt_storage_decrypt (or encrypt),
  respectively.
- rename sector -> offset. It is in bytes as well, with sanity checks inside
  the crypt_storage functions (see the illustrative prototypes below).
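Illustrative prototypes after the rename (simplified, not the exact
declarations from the crypto backend):

    #include <stddef.h>
    #include <stdint.h>

    struct crypt_storage;

    /* iv_start: IV shift applied to all subsequent en/decrypt operations */
    int crypt_storage_init(struct crypt_storage **context, uint64_t iv_start,
                           const char *cipher, const char *cipher_mode,
                           const void *key, size_t key_length);

    /* offset and length are in bytes; both are sanity-checked inside */
    int crypt_storage_decrypt(struct crypt_storage *context,
                              uint64_t offset, uint64_t length, char *buffer);
    int crypt_storage_encrypt(struct crypt_storage *context,
                              uint64_t offset, uint64_t length, char *buffer);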
Code based on a patch by Ondrej Mosnacek.
The new benchmark works as follows:
Phase 1:
It searches for the smallest parameters such that the duration is 250 ms
(this part is quite fast).
Then it uses that data point to estimate the parameters that will have
the desired duration (and fulfill the basic constraints).
Phase 2:
The candidate parameters are then measured, and if their duration falls
within +-5% of the target duration, they are accepted.
Otherwise, new candidate parameters are estimated based on the last
measurement and phase 2 is repeated.
When measuring the duration for given parameters, the measurement
is repeated 3 or 4 times, and the minimum of the measured durations
is used as the final duration (to reduce variance in measurements).
The minimum is taken instead of the mean because the measurements definitely
have a certain lower bound, but no upper bound (therefore the mean
would tend to be higher than the value with the highest probability density).
The actual "most likely" duration is going to be somewhere just above
the minimum measurable value, so the minimum over the observations is
a better estimate than the mean.
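A minimal sketch of the measurement strategy (the single-run function is
hypothetical and stands for one PBKDF run with the candidate parameters):

    #include <stdint.h>

    /* Hypothetical: run the PBKDF once and return the duration in ms. */
    extern uint32_t pbkdf_run_once_ms(uint32_t cost);

    /* Repeat the measurement and keep the minimum: durations have a hard
     * lower bound but no upper bound, so the minimum is a better estimate
     * of the "most likely" duration than the mean. */
    static uint32_t measure_ms(uint32_t cost, int repeats)
    {
        uint32_t best = UINT32_MAX;

        for (int i = 0; i < repeats; i++) {
            uint32_t ms = pbkdf_run_once_ms(cost);
            if (ms < best)
                best = ms;
        }
        return best ? best : 1;
    }

    /* Phase 2 sketch: accept within +-5% of target, else rescale linearly. */
    static uint32_t tune_cost(uint32_t cost, uint32_t target_ms)
    {
        for (;;) {
            uint32_t ms = measure_ms(cost, 4);

            if (ms >= target_ms - target_ms / 20 &&
                ms <= target_ms + target_ms / 20)
                return cost;

            cost = (uint32_t)((uint64_t)cost * target_ms / ms);
            if (!cost)
                cost = 1;
        }
    }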
Signed-off-by: Milan Broz <gmazyland@gmail.com>