This patch switches the code to SPDX one-line license identifiers
according to
https://spdx.dev/learn/handling-license-info/
and replaces the long license text headers.
I used the C++ comment format on the first line, in the style
// SPDX-License-Identifier: <id>
except for the exported libcryptsetup.h, where only C comments are used.
The only additional changes are:
- switch backend utf8.c from LGPL2+ to LGPL2.1+ (as in systemd)
- add some additional formatting lines.
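For illustration, a C source file now begins with a single line such
as (the exact identifier depends on the file's license):

    // SPDX-License-Identifier: GPL-2.0-or-later

while the exported libcryptsetup.h uses the C comment form:

    /* SPDX-License-Identifier: LGPL-2.1-or-later */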
If sysconf is lying, then anything can happen, but check for overflow
anyway. An overflow of the device/partition offset used for the IV can
only cause bad decryption (which is expected).
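A minimal sketch of the overflow check, assuming the memory size is
derived from sysconf page counts (the helper name is illustrative):

    #include <stdint.h>
    #include <unistd.h>

    /* Physical memory in kB; 0 on sysconf failure or overflow. */
    static uint64_t phys_memory_kb(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        long pages = sysconf(_SC_PHYS_PAGES);
        uint64_t kb_per_page, kb;

        if (pagesize <= 0 || pages <= 0)
            return 0;

        kb_per_page = (uint64_t)pagesize / 1024;
        kb = kb_per_page * (uint64_t)pages;

        /* sysconf may lie; reject a product that wrapped around. */
        if (kb_per_page && kb / kb_per_page != (uint64_t)pages)
            return 0;

        return kb;
    }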
Benchmarking a memory-hard KDF is tricky; it seems that limiting it to
at most half of physical memory is not enough.
Let's allow only the free available physical memory if there is no swap.
This should not cause changes on normal systems, at least.
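A sketch of the idea, assuming the free-page count comes from
sysconf(_SC_AVPHYS_PAGES) (the exact policy in the code may differ):

    #include <stdint.h>
    #include <unistd.h>

    /* Benchmark memory limit in kB: half of physical RAM, further
     * capped to currently free pages when no swap is available. */
    static uint64_t benchmark_memory_limit_kb(int have_swap)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        long phys = sysconf(_SC_PHYS_PAGES);
        long avail = sysconf(_SC_AVPHYS_PAGES);
        uint64_t limit_kb, avail_kb;

        if (pagesize <= 0 || phys <= 0)
            return 0;

        limit_kb = (uint64_t)pagesize / 1024 * (uint64_t)phys / 2;

        if (!have_swap && avail > 0) {
            avail_kb = (uint64_t)pagesize / 1024 * (uint64_t)avail;
            if (avail_kb < limit_kb)
                limit_kb = avail_kb;
        }

        return limit_kb;
    }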
The AC_SYS_LARGEFILE autoconf macro is used in the configure script;
it adds the needed feature macros on the command line to enable a
64-bit off_t.
Also replace lseek64 with lseek, since they are the same when
_FILE_OFFSET_BITS=64 is defined on the relevant platforms via
AC_SYS_LARGEFILE.
This fixes the build with latest musl, where the LFS64 interfaces were
moved out of the _GNU_SOURCE feature test macro namespace [1].
[1] https://git.musl-libc.org/cgit/musl/commit/?id=25e6fee27f4a293728dd15b659170e7b9c7db9bc
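The replacement itself is mechanical; with _FILE_OFFSET_BITS=64 (set
by AC_SYS_LARGEFILE) off_t is 64-bit even on 32-bit platforms, so
plain lseek covers the old lseek64 uses (sketch):

    #include <sys/types.h>
    #include <unistd.h>
    #include <errno.h>

    /* off_t is 64-bit because configure defines _FILE_OFFSET_BITS=64. */
    static int seek_to(int fd, off_t offset)
    {
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1) /* was: lseek64() */
            return -errno;
        return 0;
    }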
Signed-off-by: Khem Raj <raj.khem@gmail.com>
This patch locks all memory ranges in safe allocations.
While the crypto backend can provide some secure memory calls, these
are usually limited by its initial configuration.
For our use it is enough to keep keys in memory and prevent them from
being swapped out.
If the lock fails (because of resource limits), we quietly fall back
to plain malloc.
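A minimal sketch of the locked allocation with the quiet fallback
(names are illustrative; the real safe-alloc code also tracks the size
so it can wipe the buffer on free):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Allocate memory for key material and try to lock it in RAM. */
    static void *safe_alloc(size_t size)
    {
        void *p = malloc(size);

        if (p)
            (void)mlock(p, size); /* on failure, quietly stay unlocked */

        return p;
    }

    static void safe_free(void *p, size_t size)
    {
        if (!p)
            return;
        /* Real code must use a wipe the compiler cannot optimize out
         * (e.g. explicit_bzero); memset is only for this sketch. */
        memset(p, 0, size);
        munlock(p, size); /* harmless if the lock never succeeded */
        free(p);
    }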
This patch has no functional impact. It only renames the misleading
parameter 'keyfile_size_max' to 'key_size', because that is how it
has actually been interpreted since the beginning. The API
documentation is updated accordingly.
When loading the first dm-crypt table (or on an action that triggers
a dm-crypt module load), we do not know the dm-crypt version yet.
Let's assume all kernels before 4.15.0 are flawed and reject the VK
load via the kernel keyring service.
When dm-crypt is already in the kernel, check for the correct target
version instead (v1.18.1 or later).
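An illustrative version gate (the real code reads the target version
from device-mapper; the constants mirror the text above):

    #include <stdbool.h>

    /* Compare a dotted version triple against a required minimum. */
    static bool version_ge(unsigned a, unsigned b, unsigned c,
                           unsigned ra, unsigned rb, unsigned rc)
    {
        if (a != ra) return a > ra;
        if (b != rb) return b > rb;
        return c >= rc;
    }

    /* With dm-crypt loaded, require target >= 1.18.1; otherwise fall
     * back to the kernel version and require >= 4.15.0. */
    static bool keyring_vk_load_supported(bool dmcrypt_loaded,
                                          unsigned v[3], unsigned k[3])
    {
        return dmcrypt_loaded ? version_ge(v[0], v[1], v[2], 1, 18, 1)
                              : version_ge(k[0], k[1], k[2], 4, 15, 0);
    }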
The keyfile interface was designed, well, for keyfiles.
Unfortunately, a keyfile can be placed on a device and the size_t offset
can overflow.
We have to introduce a new set of functions that allows 64-bit
offsets even on 32-bit systems:
- crypt_resume_by_keyfile_device_offset
- crypt_keyslot_add_by_keyfile_device_offset
- crypt_activate_by_keyfile_device_offset
- crypt_keyfile_device_read
The new functions have _device_ added to their names; the old
functions are now just internal wrappers around them.
Also cryptsetup --keyfile-offset and --new-keyfile-offset must now
process 64bit offsets.
For more info see issue 359.
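A usage sketch with the new uint64_t offset (keyfile_size 0 means
read the whole keyfile; parameters as declared in libcryptsetup.h):

    #include <libcryptsetup.h>
    #include <stdint.h>

    /* The offset is uint64_t, so a key stored deep on a block device
     * works even on 32-bit builds, where size_t would overflow. */
    static int activate_with_deep_key(struct crypt_device *cd,
                                      const char *name,
                                      const char *keyfile,
                                      uint64_t offset)
    {
        return crypt_activate_by_keyfile_device_offset(cd, name,
                        CRYPT_ANY_SLOT, keyfile,
                        0 /* whole keyfile */, offset, 0 /* flags */);
    }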
On some systems the requested amount of memory causes the OOM killer
to kill the process (instead of the allocation returning ENOMEM).
For now, we never try to use more than half of the available physical
memory.
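A sketch of the clamp (illustrative helper; the physical memory size
is detected as described earlier):

    #include <stdint.h>

    /* Never request more than half of physical memory, to avoid
     * waking the OOM killer instead of getting a clean ENOMEM. */
    static uint32_t clamp_kdf_memory_kb(uint32_t requested_kb,
                                        uint64_t phys_kb)
    {
        if (phys_kb && (uint64_t)requested_kb > phys_kb / 2)
            return (uint32_t)(phys_kb / 2);
        return requested_kb;
    }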
If the keyfile size is explicitly given, allocate a suitably sized
buffer right from the start instead of growing it in 4k steps. This
speeds up reading larger keyfiles.
When reading a keyfile, use bulk read operations instead of reading
one character at a time. This also speeds up reading larger keyfiles.
If the read should stop at an EOL, fall back to reading one character
at a time so that nothing beyond the EOL character is consumed.
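A sketch of the reading strategy (error handling such as EINTR
retries is omitted):

    #include <stddef.h>
    #include <unistd.h>

    /* Bulk reads by default; when the read must stop at EOL, go
     * byte-by-byte so nothing past '\n' is consumed from the fd. */
    static ssize_t keyfile_read(int fd, char *buf, size_t count,
                                int stop_at_eol)
    {
        size_t done = 0;
        ssize_t r;

        while (done < count) {
            r = read(fd, buf + done, stop_at_eol ? 1 : count - done);
            if (r < 0)
                return -1;
            if (r == 0)
                break;                    /* EOF */
            if (stop_at_eol && buf[done] == '\n')
                break;                    /* EOL is not part of the key */
            done += (size_t)r;
        }
        return (ssize_t)done;
    }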
Prepare the API for a PBKDF that can set three costs:
- time (similar to iterations in PBKDF2)
- memory (required memory for memory-hard function)
- threads (required number of threads/CPUs).
This patch also removes the wrongly designed API call
crypt_benchmark_kdf and replaces it with the new call
crypt_benchmark_pbkdf.
Two functions for per-context PBKDF settings are introduced:
crypt_set_pbkdf_type and crypt_get_pbkdf_type.
The patch should be backward compatible when using the
crypt_set_iteration_time function (it works only for PBKDF2).
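A minimal usage sketch (cost values are illustrative; the field names
follow struct crypt_pbkdf_type in libcryptsetup.h):

    #include <libcryptsetup.h>

    /* Request Argon2i with explicit time/memory/thread costs. */
    static int set_argon2_costs(struct crypt_device *cd)
    {
        struct crypt_pbkdf_type pbkdf = {
            .type = CRYPT_KDF_ARGON2I,
            .hash = "sha256",
            .time_ms = 800,              /* time cost */
            .max_memory_kb = 512 * 1024, /* memory cost */
            .parallel_threads = 2,       /* CPU/thread cost */
        };

        return crypt_set_pbkdf_type(cd, &pbkdf);
    }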
Signed-off-by: Milan Broz <gmazyland@gmail.com>
Initially, cryptsetup expected an underlying device that was, by
definition, always aligned to the sector size (and whose length was
always a multiple of sectors).
For images in a file, we can now access the image directly.
The expectation that the image is always aligned to a whole block is
now false (the last block of a file image can be incomplete).
Moreover, we cannot easily detect the underlying block device sector
(block) size (the storage stack can be complex, with various RAID and
loop block sizes), so the code uses the system PAGE_SIZE in this
situation (this should be the safest way).
Unfortunately, PAGE_SIZE can be bigger (1MB) than the device sector
(4k), and the blockwise functions then fail because the image file is
not aligned to a PAGE_SIZE multiple.
Fix it by checking that the read/write of the last part of an image
returns exactly the requested size, not a full block.
(The problem occurs, for example, with an unaligned hidden TrueCrypt
header on PPC64LE systems, where the page size is 64k.)
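A sketch of the fix for the final, possibly incomplete block
(illustrative; the real blockwise helpers use an aligned bounce
buffer):

    #include <string.h>
    #include <unistd.h>

    /* Read 'count' bytes through an aligned bounce buffer of 'block'
     * bytes. For the last piece of a file image, accept a short read
     * as long as it still covers the caller's requested bytes. */
    static int read_last_block(int fd, void *buf, size_t count,
                               void *bounce, size_t block)
    {
        ssize_t r = read(fd, bounce, block);

        /* The old code required r == block, so an image file not
         * aligned to the (page-sized) block failed here. */
        if (r < 0 || (size_t)r < count)
            return -1;

        memcpy(buf, bounce, count);
        return 0;
    }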
On a native 4k-sector device, the old hidden header is not aligned
with the hardware sector size, and direct-io access with SEEK_END
fails.
Let's extend the blockwise functions to support a negative offset and
use the same logic as for normal unaligned writes.
Fixes problem mentioned in
https://gitlab.com/cryptsetup/cryptsetup/merge_requests/18
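A sketch of the negative-offset translation (illustrative; the device
size would come from the usual size query):

    #include <stdint.h>

    /* A negative offset is relative to the end of the device;
     * translate it to an absolute position, then continue with the
     * usual unaligned read/write logic. */
    static int resolve_offset(uint64_t dev_size, int64_t offset,
                              uint64_t *out)
    {
        uint64_t back;

        if (offset >= 0) {
            *out = (uint64_t)offset;
            return 0;
        }

        /* Unsigned negation is well-defined even for INT64_MIN. */
        back = -(uint64_t)offset;
        if (back > dev_size)
            return -1; /* offset points before the device start */

        *out = dev_size - back;
        return 0;
    }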
The lseek call in write_blockwise() could return a value greater
than an int can hold, so the result could overflow and fail the whole
write.
[comment added by mbroz]
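The fix, sketched: keep the lseek() result in off_t and compare it
with the requested offset instead of truncating it through an int:

    #include <sys/types.h>
    #include <unistd.h>

    /* Storing the result in an int truncated large offsets and made
     * the whole write fail. */
    static int blockwise_seek(int fd, off_t offset)
    {
        return (lseek(fd, offset, SEEK_SET) == offset) ? 0 : -1;
    }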