1) If the calculated costs were the same, it ran forever.
2) If the calculation returned final values in the first step,
our costs were not updated and the benchmark returned values that
were too low.
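A minimal sketch of the fixed loop shape (illustrative only;
calculate_costs stands in for the real cost calculation):

    #include <stdint.h>

    /* Hypothetical next-step calculation, updates candidate costs. */
    void calculate_costs(uint32_t *t_cost, uint32_t *m_cost);

    static void benchmark_loop(uint32_t *out_t, uint32_t *out_m)
    {
        uint32_t t_cost = *out_t, m_cost = *out_m;

        for (;;) {
            uint32_t prev_t = t_cost, prev_m = m_cost;

            calculate_costs(&t_cost, &m_cost);

            /* Bug 2 fix: propagate results even when the very
             * first step already returns the final values. */
            *out_t = t_cost;
            *out_m = m_cost;

            /* Bug 1 fix: identical costs previously looped forever. */
            if (t_cost == prev_t && m_cost == prev_m)
                break;
        }
    }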
If the keyfile size is explicitly given, then allocate a suitably sized
buffer right from the start instead of growing it in 4k steps. This
speeds up reading larger keyfiles.
When reading a keyfile, use bulk read operations instead of reading one
character at a time. This speeds up reading larger keyfiles.
If the read should stop at an EOL, then fall back to reading one
character at a time so that nothing beyond the EOL character is read.
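A sketch of the reading strategy (simplified, error handling omitted):

    #include <unistd.h>

    /* Bulk reads by default; byte-at-a-time only when the read must
     * stop exactly at EOL, so nothing past '\n' is consumed. */
    static ssize_t keyfile_read(int fd, char *buf, size_t len,
                                int stop_at_eol)
    {
        size_t i = 0;

        if (!stop_at_eol)
            return read(fd, buf, len);  /* bulk read */

        while (i < len) {
            if (read(fd, buf + i, 1) != 1)
                break;
            if (buf[i] == '\n')
                break;                  /* EOL: stop, do not store it */
            i++;
        }
        return i;
    }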
Also cache its value in the active context, so we run the benchmark
only once.
The patch also changes the calculated value for the LUKS1 key digest
to 125 milliseconds (meaning that for all 8 keyslots in use,
the additional slow-down is circa 1 second).
Note that there is no need for a very high iteration count for
the key digest; if it is too computationally expensive, an attacker
will do better decrypting one sector with the candidate key anyway
(and checking for a known signature).
The reason to have some delay for the key digest check was
to complicate a brute-force search for the volume key with the LUKS
header only (feasible if the RNG used to generate the volume key was
flawed, allowing such a search in reasonable time).
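An illustration of the 125 ms target (the constant and function names
here are hypothetical, not the actual cryptsetup code):

    #include <stdint.h>

    #define LUKS_MKD_ITERATIONS_MS 125  /* hypothetical name */

    /* Derive the digest iteration count from a benchmarked PBKDF2
     * speed; opening tries the digest check once per candidate
     * keyslot, hence roughly 1 second for all 8 slots. */
    static uint32_t digest_iterations(uint64_t iters_per_second)
    {
        uint64_t iters = iters_per_second * LUKS_MKD_ITERATIONS_MS / 1000;

        return iters > UINT32_MAX ? UINT32_MAX : (uint32_t)iters;
    }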
When the measured time is larger than the target time, we need to
decrease the parameters. In that case we should first try to decrease
t_cost, and only if that is not possible should we try to decrease
m_cost instead. The original logic was only valid for the case where
the parameters are being increased. Most notably, this caused unusual
parameter combinations for iteration time < 250 ms.
In this commit we also factor out the now heavily nested parameter
update formula.
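A sketch of the factored-out update step for the decrease direction
(the constants and the exact scaling are illustrative):

    #include <stdint.h>

    #define T_COST_MIN 4        /* illustrative minimum values */
    #define M_COST_MIN 1024

    /* Decrease direction: scale t_cost down first; touch m_cost
     * only when t_cost already hit its minimum. (Rough sketch;
     * the real formula distributes the ratio differently.) */
    static void decrease_costs(uint32_t *t_cost, uint32_t *m_cost,
                               uint32_t measured_ms, uint32_t target_ms)
    {
        uint64_t t = (uint64_t)*t_cost * target_ms / measured_ms;

        if (t >= T_COST_MIN) {
            *t_cost = (uint32_t)t;
            return;
        }
        *t_cost = T_COST_MIN;
        *m_cost = (uint32_t)((uint64_t)*m_cost * target_ms / measured_ms);
        if (*m_cost < M_COST_MIN)
            *m_cost = M_COST_MIN;
    }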
Code is written by Ondrej Kozina.
This patch adds the ability to store the volume key in the kernel
keyring (a feature available in recent kernels), avoiding setting
the key through dm-ioctl and keeping the key out of the dm table
mapping.
Will be used in LUKS2.
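For illustration, the key can be uploaded with the add_key(2) syscall
using the "logon" key type, which the kernel can read but user space
cannot; dm-crypt then references the key by description instead of
receiving the key bytes over dm-ioctl. A minimal sketch (the
description string is only an example):

    #include <keyutils.h>
    #include <stddef.h>

    /* Upload a volume key as a "logon" key into the thread keyring;
     * the dm-crypt table can then use
     * ":<key_size>:logon:<description>" in place of the hex key. */
    static key_serial_t load_vk(const void *vk, size_t vk_size)
    {
        return add_key("logon", "cryptsetup:example-volume-key",
                       vk, vk_size, KEY_SPEC_THREAD_KEYRING);
    }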
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Code based on patch by Ondrej Mosnacek
The new benchmark works as follows:
Phase 1:
It searches for the smallest parameters such that the duration is
250 ms (this part is quite fast).
Then it uses that data point to estimate the parameters that will have
the desired duration (and fulfill the basic constraints).
Phase 2:
The candidate parameters are then measured, and if their duration falls
within +-5% of the target duration, they are accepted.
Otherwise, new candidate parameters are estimated based on the last
measurement and phase 2 is repeated.
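A sketch of the phase 2 loop (measure_min is sketched after the next
paragraph; estimate stands in for the estimation step):

    #include <stdint.h>

    uint32_t measure_min(uint32_t t_cost, uint32_t m_cost);
    void estimate(uint32_t *t_cost, uint32_t *m_cost,
                  uint32_t measured_ms, uint32_t target_ms);

    /* Accept candidates within +-5% of the target, otherwise
     * re-estimate from the last measurement and try again. */
    static void phase2(uint32_t *t_cost, uint32_t *m_cost,
                       uint32_t target_ms)
    {
        for (;;) {
            uint32_t ms = measure_min(*t_cost, *m_cost);

            if (ms >= target_ms * 95 / 100 && ms <= target_ms * 105 / 100)
                return;     /* within +-5%: accepted */

            estimate(t_cost, m_cost, ms, target_ms);
        }
    }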
When measuring the duration for given parameters, the measurement
is repeated 3 or 4 times and the minimum of the measured durations
is used as the final duration (to reduce variance in the measurements).
The minimum is taken instead of the mean because the measurements
definitely have a certain lower bound, but no upper bound (therefore
the mean would tend to be higher than the value with the highest
probability density).
The actual "most likely" duration is going to be somewhere just above
the minimum measurable value, so the minimum over the observations is
a better estimate than the mean.
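The measurement helper from the sketch above might look like this
(run_argon2_ms is an assumed timing primitive):

    #include <stdint.h>

    uint32_t run_argon2_ms(uint32_t t_cost, uint32_t m_cost); /* assumed */

    /* Repeat the measurement and keep the minimum: durations have a
     * hard lower bound but no upper bound, so the minimum tracks the
     * most likely value better than the mean. */
    uint32_t measure_min(uint32_t t_cost, uint32_t m_cost)
    {
        uint32_t i, ms, min_ms = UINT32_MAX;

        for (i = 0; i < 4; i++) {
            ms = run_argon2_ms(t_cost, m_cost);
            if (ms < min_ms)
                min_ms = ms;
        }
        return min_ms;
    }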
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Prepare API for PBKDF that can set three costs
- time (similar to iterations in PBKDF2)
- memory (required memory for memory-hard function)
- threads (required number of threads/CPUs).
This patch also removes the wrongly designed API call
crypt_benchmark_kdf and replaces it with the new call
crypt_benchmark_pbkdf.
Two functions for per-context PBKDF setting are introduced:
crypt_set_pbkdf_type and crypt_get_pbkdf_type.
The patch should be backward compatible when using the
crypt_set_iteration_time function (which works only for PBKDF2).
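A sketch of the intended per-context usage (field names as proposed
here; the final API may differ):

    #include <libcryptsetup.h>

    /* Set all three PBKDF costs on a context before formatting. */
    static int set_pbkdf(struct crypt_device *cd)
    {
        const struct crypt_pbkdf_type pbkdf = {
            .type = "argon2i",
            .hash = "sha256",
            .time_ms = 800,               /* time cost */
            .max_memory_kb = 1024 * 1024, /* memory cost */
            .parallel_threads = 4,        /* thread/CPU cost */
        };

        return crypt_set_pbkdf_type(cd, &pbkdf);
    }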
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Argon2i/Argon2id is a password hashing function that
won the Password Hashing Competition.
It will be (optionally) used in LUKS2 for password-based
key derivation.
We have to bundle the code for now (as with PBKDF2 years ago)
because there is not yet a usable implementation in common
crypto libraries.
(Once there is a native implementation, cryptsetup
will switch to the crypto library version.)
For now, we use the reference (not optimized but portable)
implementation.
This patch contains bundled Argon2 algorithm library copied from
https://github.com/P-H-C/phc-winner-argon2
For more info see Password Hashing Competition site:
https://password-hashing.net/
and the draft RFC document
https://datatracker.ietf.org/doc/draft-irtf-cfrg-argon2/
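A minimal usage sketch against the bundled reference API (a fixed salt
is shown for brevity only; real code must use a random salt):

    #include <string.h>
    #include "argon2.h"   /* bundled reference implementation */

    /* Derive a 32-byte raw key with Argon2i. */
    static int derive_key(const char *pwd, unsigned char key[32])
    {
        const unsigned char salt[16] = { 0 };   /* example only! */

        return argon2i_hash_raw(4, 1 << 16 /* KiB */, 2 /* lanes */,
                                pwd, strlen(pwd),
                                salt, sizeof(salt), key, 32);
    }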
Signed-off-by: Milan Broz <gmazyland@gmail.com>
In some specific situations we do not want to read the devices
before initialization.
Here it is integrity checking that would produce a warning, because
the device is not yet initialized.
Used only in the wipe function (where we must use direct-io anyway),
which expects the device to be capable of direct-io.
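A sketch of the direct-io wipe access (alignment handling simplified;
O_DIRECT requires aligned buffers, offsets and sizes):

    #define _GNU_SOURCE     /* O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Wipe one block via direct-io so the page cache never sees
     * (or serves) stale, uninitialized device content. */
    static int wipe_block(const char *device, size_t block_size)
    {
        void *buf = NULL;
        ssize_t w = -1;
        int fd = open(device, O_WRONLY | O_DIRECT);

        if (fd < 0)
            return -1;
        if (!posix_memalign(&buf, 4096, block_size)) {
            memset(buf, 0, block_size);
            w = write(fd, buf, block_size);
            free(buf);
        }
        close(fd);
        return w == (ssize_t)block_size ? 0 : -1;
    }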
This error code means an invalid value, so there is no point in
repeating the whole sequence.
(If there is a situation that requires a repeat, it should not return
EINVAL.)
Initially, cryptsetup expected an underlying device that was, by
definition, always aligned to a sector size (and whose length was
always a multiple of sectors).
For images in a file, we can now access the image directly.
The expectation that the image is always aligned to a whole block is
now false (the last block of a file image can be incomplete).
Moreover, we cannot easily detect the underlying block device sector
(block) size (the storage stack can be complex, with various RAID and
loop block sizes), so the code uses the system PAGE_SIZE in this
situation (which should be the safest way).
Unfortunately, PAGE_SIZE can be bigger (1MB) than the device sector
(4k), and the blockwise functions then fail because the image in the
file is not aligned to a PAGE_SIZE multiple.
Fix it by checking that the read/write for the last part of an image
is of the exact requested size and not a full block.
(The problem appears, for example, for an unaligned hidden TrueCrypt
header on PPC64LE systems, where the page size is 64k.)
With a big page size and an image in a file, this can actually happen.
The command works in this situation, but the code will be quite
inefficient (due to the blockwise handling).
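A sketch of the fixed check for a trailing partial block (names
illustrative):

    #include <unistd.h>

    /* For the image tail, require exactly the remaining bytes instead
     * of a full block: demanding a full block spuriously fails when
     * the file image is not aligned to the block size. */
    static int read_tail(int fd, void *buf, size_t block_size,
                         size_t remaining)
    {
        size_t want = remaining < block_size ? remaining : block_size;

        return read(fd, buf, want) == (ssize_t)want ? 0 : -1;
    }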