Compare commits

326 Commits

Author SHA1 Message Date
Milan Broz
d0dc59e792 Update po file. 2019-06-14 13:54:23 +02:00
Ondrej Kozina
0106c64369 Fix issues reported by valgrind.
The keyslot_cipher member leaked after an existing LUKS2 context reload.

crypt_keyslot_set_encryption may access freed memory if
crypt_keyslot_get_encryption was previously called with
the CRYPT_ANY_SLOT parameter.
2019-06-14 13:50:09 +02:00
Ondrej Kozina
69fdb41934 Add tests for LUKS2 reencryption with multiple active keyslots. 2019-06-14 09:10:28 +02:00
Ondrej Kozina
550b3ee1d3 Fix off-by-one error in reencryption keyslots count check. 2019-06-14 09:10:28 +02:00
Milan Broz
961cc6a6d3 Prepare version 2.2.0-rc1. 2019-06-14 08:20:04 +02:00
Ondrej Kozina
05091ab656 Improve reencryption when dealing with multiple keyslots.
It's possible to retain all keyslots (passphrases) when
performing LUKS2 reencryption, provided there's enough
space in the LUKS2 JSON metadata.

When a specific keyslot is selected, all other keyslots
bound to the old volume key get deleted after reencryption
is finished.

Existing tokens are assigned to new keyslots.
2019-06-13 17:04:34 +02:00
Ondrej Kozina
272505b99d If no hash is specified in pbkdf use default value for keyslot AF. 2019-06-13 17:04:21 +02:00
Ondrej Kozina
60a769955b Rename hash data parameter in reencrypt keyslot dump. 2019-06-12 12:36:51 +02:00
Ondrej Kozina
34bec53474 Drop excessive nested locking in LUKS2 keyslot store path.
Since commit 80a435f there is no need to call device_write_lock
in the luks2_encrypt_to_storage function. It is handled correctly on
the upper layer.
2019-06-12 12:36:51 +02:00
Ondrej Kozina
c77ae65a0d Wipe both keyslot data and metadata holding single write lock. 2019-06-12 12:36:51 +02:00
Ondrej Kozina
1ed0430b82 Move LUKS2 write lock upper when storing reencryption keyslot. 2019-06-12 12:36:51 +02:00
Ondrej Kozina
82f640e360 Open device in locked mode for wipe when necessary. 2019-06-12 12:36:51 +02:00
Ondrej Kozina
44aabc3ae4 Drop reload of metadata in reencryption initialization. 2019-06-12 12:36:50 +02:00
Ondrej Kozina
bbdf9b2745 Read and compare metadata sequence id after taking write lock. 2019-06-12 12:36:46 +02:00
Ondrej Kozina
96a87170f7 Return usage count from device locking functions. 2019-06-12 11:51:08 +02:00
Ondrej Kozina
281323db42 Fix condition for printing debug message. 2019-06-12 11:51:08 +02:00
Milan Broz
32258ee8ae Fix debugging messages callback.
The debug messages should contain an EOL character.

Also check string lengths in internal logging macros.
2019-06-11 15:26:53 +02:00
Milan Broz
df0faef9ca Add integritysetup bitmap mode test. 2019-06-04 20:05:13 +02:00
Ondrej Kozina
9c3a020ecf Remove useless debug message from keyslot dump. 2019-05-27 16:23:56 +02:00
Ondrej Kozina
4c4cc55bb7 Wipe backup segment data after reencryption is finished. 2019-05-27 16:05:21 +02:00
Ondrej Kozina
f4c2e7e629 Implement LUKS2 reencrypt keyslot dump. 2019-05-27 15:27:23 +02:00
Ondrej Kozina
eadef08fd5 Extend LUKS2 reencryption recovery tests.
- test the repair command for reencryption recovery.
- test the close command is able to tear down a leftover device stack after
  a crash.
- test that open performs recovery by default (to be able to open the root
  volume).
2019-05-24 17:29:56 +02:00
Ondrej Kozina
0c725a257d Compare moved segment specific size against real device size only. 2019-05-24 17:29:56 +02:00
Ondrej Kozina
6f35fb5f80 Silence query error messages for unsupported target types. 2019-05-24 17:29:56 +02:00
Ondrej Kozina
cd1fe75987 Close all device handlers after failed internal load. 2019-05-24 17:29:56 +02:00
Ondrej Kozina
e92e320956 Add explicit device_close routine. 2019-05-24 17:29:56 +02:00
Ondrej Kozina
0e4757e0fb Add LUKS2 reencryption recovery in repair command. 2019-05-24 17:29:56 +02:00
Ondrej Kozina
bd6af68bc5 Add support for explicit reencryption recovery in request. 2019-05-24 17:07:37 +02:00
Ondrej Kozina
13050f73c1 Properly finish reencryption after recovery. 2019-05-24 17:07:37 +02:00
Ondrej Kozina
5472fb0c56 Refactor reencryption recovery during activation. 2019-05-24 17:07:36 +02:00
Ondrej Kozina
73c2424b24 Refactor LUKS2 device activation (in reencryption). 2019-05-24 17:07:36 +02:00
Milan Broz
5117eda688 Switch to Xenial distro in Travis. 2019-05-24 08:33:20 +02:00
Ondrej Kozina
cfbef51d3d Add interactive dialog in case active device auto-detection fails. 2019-05-22 12:50:18 +02:00
Ondrej Kozina
09cb2d76ef Add dialog with default 'no' answer. 2019-05-22 12:50:17 +02:00
Ondrej Kozina
3f549ad0df Refactor yesDialog utility. 2019-05-22 12:50:17 +02:00
Ondrej Kozina
60d26be325 Load volume key in keyring when activated by token.
LUKS2 should use the keyring for dm-crypt volume keys by default
when possible. crypt_activate_by_token didn't load keys into the
keyring by default; it was a bug.
2019-05-21 18:08:00 +02:00
Ondrej Kozina
013d0d3753 Rename internal reencrypt enum to REENC_PROTECTION_NONE. 2019-05-21 18:08:00 +02:00
Ondrej Kozina
97da67c6a8 Add tests for reencryption with fixed device size. 2019-05-21 18:08:00 +02:00
Ondrej Kozina
f74072ba28 Silence active device detection message in batch mode. 2019-05-21 16:05:23 +02:00
Ondrej Kozina
19eac239b7 Add --device-size parameter for use in LUKS2 reencryption.
Currently it's used only in the LUKS2 reencryption code
for reencrypting the initial part of the data device.

It may be used to encrypt/reencrypt only the initial part
of the data device if the user is aware that the rest of the
device is empty.
2019-05-21 15:54:43 +02:00
Ondrej Kozina
31cd41bfe4 Add support for reencryption of initial device part.
It's useful to reencrypt only the initial part of a device.
For example, with golden image reencryption it may be useful
to reencrypt only the first X bytes of the device because we know
the rest of the device is empty.
2019-05-21 15:54:07 +02:00
Ondrej Kozina
af6c321395 Set default length for reencryption with resilience 'none' only. 2019-05-21 15:54:07 +02:00
Milan Broz
448fca1fdf Integritysetup: implement new bitmap mode. 2019-05-21 15:54:07 +02:00
Ondrej Kozina
1923928fdc Drop duplicate error message from reencrypt load. 2019-05-21 15:54:07 +02:00
Ondrej Kozina
bee5574656 Add --resume-only parameter to reencrypt command. 2019-05-21 15:54:07 +02:00
Ondrej Kozina
8c8a68d850 Add CRYPT_REENCRYPT_RESUME_ONLY flag. 2019-05-13 18:23:20 +02:00
Ondrej Kozina
9159b5b120 Add Coverity TOCTOU annotation in device_open_excl.
We can't avoid this race due to undefined behaviour when called with the
O_EXCL flag on a regular file.

Let's double-check that the fd opened with the O_EXCL flag is actually an open block device.
2019-05-13 18:23:20 +02:00
Ondrej Kozina
2d0079905e Adapt device_open_excl to reusing of fds. 2019-05-10 21:05:31 +02:00
Ondrej Kozina
83c227d53c Sync device using internal write enabled descriptor. 2019-05-10 21:05:31 +02:00
Ondrej Kozina
ee57b865b0 Reuse device file descriptors. 2019-05-10 21:05:31 +02:00
Milan Broz
ecbb9cfa90 Use upstream gnulib patch for Coverity warning fixed by previous patch. 2019-05-10 21:03:22 +02:00
Ondrej Kozina
8545e8496b Fix memleak in reencryption with moved segment. 2019-05-07 17:17:34 +02:00
Kamil Dudka
75b2610e85 Fix TAINTED_SCALAR false positives of Coverity
Coverity Analysis 2019.03 incorrectly marks the input argument
of base64_encode(), and consequently base64_encode_alloc(), as
tainted_data_sink because it sees byte-level operations on the input.
This one-line annotation makes Coverity suppress the following false
positives:

Error: TAINTED_SCALAR:
lib/luks2/luks2_digest_pbkdf2.c:117: tainted_data_argument: Calling function "crypt_random_get" taints argument "salt".
lib/luks2/luks2_digest_pbkdf2.c:157: tainted_data: Passing tainted variable "salt" to a tainted sink.

Error: TAINTED_SCALAR:
lib/luks2/luks2_keyslot_luks2.c:445: tainted_data_argument: Calling function "crypt_random_get" taints argument "salt".
lib/luks2/luks2_keyslot_luks2.c:448: tainted_data: Passing tainted variable "salt" to a tainted sink.
2019-05-07 15:35:55 +02:00
Milan Broz
237021ec15 Fix some warnings in static analysis. 2019-05-07 13:44:43 +02:00
Ondrej Kozina
4f5c25d0dd Add HAVE_DECL_DM_TASK_RETRY_REMOVE define in local tests. 2019-05-06 15:42:11 +02:00
Ondrej Kozina
4c33ab1997 Remove internal config file scratching (breaks local tests). 2019-05-06 15:41:37 +02:00
Ondrej Kozina
5bb65aca8f Remove all test dm devices with retry option if available. 2019-05-06 15:37:35 +02:00
Milan Broz
3fd7babacc Update Readme.md. 2019-05-03 15:50:39 +02:00
Ondrej Kozina
caea8a9588 Update rc release notes. 2019-05-03 15:16:12 +02:00
Ondrej Kozina
e1d6cba014 Add reencryption action man page. 2019-05-03 15:00:33 +02:00
Milan Broz
1f91fe7a2c Use JSON-debug wrappers. 2019-05-03 14:02:43 +02:00
Milan Broz
dc53261c3b Fix data leak in format and reencrypt command. 2019-05-03 13:06:58 +02:00
Milan Broz
b3e90a93b0 Add test release notes and increase ABI version. 2019-05-03 12:57:29 +02:00
Milan Broz
1f3e2b770c Fix offline reencryption tool name. 2019-05-02 21:05:22 +02:00
Ondrej Kozina
d310e896cb Add basic offline tests for LUKS2 reencryption. 2019-05-02 17:23:59 +02:00
Ondrej Kozina
a36245cef6 Add new reencrypt cryptsetup action.
The new reencryption code is enabled via cryptsetup cli
and works with LUKS2 devices only.
2019-05-02 16:45:43 +02:00
Ondrej Kozina
092ef90f29 Add autodetection code for active dm device. 2019-05-02 16:44:23 +02:00
Ondrej Kozina
64f59ff71e Add reencryption progress function. 2019-05-02 16:44:23 +02:00
Ondrej Kozina
a7f80a2770 Add resilient LUKS2 reencryption library code. 2019-05-02 16:44:23 +02:00
Ondrej Kozina
a5c5e3e876 Add dm_device_deps for querying dm device dependencies. 2019-05-02 15:23:29 +02:00
Ondrej Kozina
8e4fb993c0 Add error target support in dm_query_device. 2019-05-02 15:23:29 +02:00
Ondrej Kozina
846567275a Move dm_query_device body in static function. 2019-05-02 15:23:28 +02:00
Ondrej Kozina
741c972935 Remove unused minor number from dm_is_dm_device. 2019-05-02 15:23:28 +02:00
Ondrej Kozina
6c2760c9cd Report data sync errors from storage wrapper. 2019-04-29 16:48:20 +02:00
Ondrej Kozina
b35a5ee4a3 Replace table with error mapping even when in use. 2019-04-29 16:10:57 +02:00
Ondrej Kozina
345385376a Add missing validation check for area type specification. 2019-04-29 16:10:57 +02:00
Milan Broz
dbe9db26fc Never serialize memory-hard KDF for small amount of memory. 2019-04-29 16:10:57 +02:00
Milan Broz
91ba22b157 Do not try to remove device that was not successfully activated. 2019-04-29 16:10:57 +02:00
Ondrej Kozina
86b2736480 Drop unused type parameter from LUKS2_keyslot_find_empty() 2019-04-23 10:41:56 +02:00
Milan Broz
cfe2fb66ab Fix some untranslated error messages. 2019-04-23 10:41:06 +02:00
Milan Broz
428e61253c Fix dm_error_device() to properly use error device. 2019-04-10 15:06:07 +02:00
Milan Broz
95bcd0c9d5 Fix previous patch locking to return EBUSY. 2019-04-10 14:27:42 +02:00
Milan Broz
23bada3c5a Fix several issues found by Coverity scan. 2019-04-10 12:30:09 +02:00
Stig Otnes Kolstad
de0cf8433b Add pbkdf options to all key operations in manpage 2019-04-09 17:19:41 +02:00
Milan Broz
1b49ea4061 Add global serialization lock for memory hard PBKDF.
This is a very ugly workaround for the situation when multiple
devices are being activated in parallel (systemd crypttab)
and the system, instead of returning ENOMEM, uses the OOM killer
to randomly kill processes.

This flag is intended to be used only in very specific situations.
2019-03-29 11:58:12 +01:00
Ondrej Kozina
29b94d6ba3 Add arbitrary resource locking (named locks).
It's complementary to the current device locking. It'll be used
for mutual exclusion of two or more reencryption resume processes.
Ondrej Kozina
80a435f00b Write keyslot binary data and metadata holding single lock. 2019-03-25 11:37:32 +01:00
Ondrej Kozina
fdcd5806b1 Allow to change requirements flag in-memory only. 2019-03-25 11:37:32 +01:00
Ondrej Kozina
9ddcfce915 Refactor locking code. 2019-03-25 11:37:32 +01:00
Ondrej Kozina
6ba358533b Modify crypt lock handle internal structure.
Makes it ready for a future lock handle type.
2019-03-25 11:37:32 +01:00
TrueDoctor
73aa329d57 Fix grammar in manpage cryptsetup-reencrypt(8). 2019-03-22 23:20:13 +00:00
Ondrej Kozina
379016fd78 Add no flush internal suspend/resume flag. 2019-03-22 08:01:21 +01:00
Ondrej Kozina
ea4b586c77 Add tests for CRYPT_VOLUME_KEY_DIGEST_REUSE flag.
Tests commit 7569519530
2019-03-22 08:01:21 +01:00
Ondrej Kozina
6961f2caae Switch crypt_suspend() to DM_SUSPEND_WIPE_KEY flag. 2019-03-22 08:01:21 +01:00
Ondrej Kozina
4df2ce4409 Add wipe key flag for internal device suspend. 2019-03-22 08:01:21 +01:00
Ondrej Kozina
052a4f432c Add internal option to skip fs freeze in device suspend. 2019-03-22 08:01:21 +01:00
Ondrej Kozina
de86ff051e Introduce support for internal dm suspend/resume flags. 2019-03-22 08:01:21 +01:00
Ondrej Kozina
f5feeab48d Add experimental storage wrappers. 2019-03-22 08:01:21 +01:00
Milan Broz
1317af028e Use compatible switch for free command. 2019-03-21 15:32:22 +01:00
Milan Broz
cdcd4ddd35 Print free memory in tests. 2019-03-21 15:16:33 +01:00
Milan Broz
2960164cf8 Fix localtest if the last test is skipped. 2019-03-21 15:12:39 +01:00
Milan Broz
a98ef9787c Set devel version. 2019-03-20 21:58:27 +01:00
Milan Broz
b6d406fbc8 Add fixed Makefile that can run tests outside of compiled tree. 2019-03-20 21:58:07 +01:00
Ondrej Kozina
e3488292ba Fix typo in --disable-keyring description. 2019-03-13 15:24:45 +01:00
Ondrej Kozina
fea2e0be4f Add algorithm for searching largest gap in keyslots area. 2019-03-13 14:56:31 +01:00
Milan Broz
751f5dfda3 Move error message for a keyslot area search. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
d5f71e66f9 Allow digest segment (un)binding for all segments at once. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
03e810ec72 Split crypt_drop_keyring_key in two different routines.
The crypt_drop_keyring_key function allows dropping all keys in the keyring
associated with the passed volume key list.

crypt_drop_keyring_key_by_description is used to drop an independent key.
2019-03-13 14:56:31 +01:00
Ondrej Kozina
6c6f4bcd45 Add signed int64 json helpers. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
304942302b Introduce CRYPT_DEFAULT_SEGMENT abstraction.
Default segment is no longer constant segment with id 0.
2019-03-13 14:56:31 +01:00
Ondrej Kozina
8dc1a74df8 Adapt existing code to future reencryption changes. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
e295d01505 Adding new functions later used in reencryption. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
aa1b29ea0e Add volume key next helper. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
cef857fbbd Add routine for adding volume key in a list. 2019-03-13 14:56:31 +01:00
Ondrej Kozina
6bba8ce0dc Allow vk insert in linked list.
Also adds search function crypt_volume_key_by_id.
2019-03-13 14:56:31 +01:00
Ondrej Kozina
b0330d62e5 Add id member in volume_key structure.
Also adds set/get helper routines.
2019-03-13 14:56:31 +01:00
Frederik Nnaji
fc0c857cfe Update README.md 2019-03-13 13:52:40 +00:00
Milan Broz
238b18b8ac Upstream fixes to bundled Argon2 code.
Wait for already running threads if a thread creation failed.
Use explicit_bzero() on recent glibc versions.
(Without the fixed logic, we already have the macro definition through automake.)

Fixes #444.
2019-03-13 08:26:40 +01:00
Ondrej Kozina
6a2d023b7b Make keyring utilities ready for additional kernel key types. 2019-03-08 09:03:35 +01:00
Ondrej Kozina
4bb1fff15d Add new functions for kernel keyring handling. 2019-03-08 08:54:09 +01:00
Ondrej Kozina
37f5bda227 Add explicit key type name in keyring functions. 2019-03-08 08:53:33 +01:00
Ondrej Kozina
56b571fcaa Use const before vk in all digest verify functions. 2019-03-08 08:52:47 +01:00
Ondrej Kozina
46bf3c9e9c Add segment create helpers. 2019-03-08 08:44:51 +01:00
Ondrej Kozina
361fb22954 Remove helper get_first_data_offset completely. 2019-03-08 08:43:19 +01:00
Ondrej Kozina
203fe0f4bf Move get_first_data_offset to luks2_segment.c 2019-03-08 08:42:23 +01:00
Ondrej Kozina
36ac5fe735 Move LUKS2 segments handling in separate file. 2019-03-08 08:39:32 +01:00
Ondrej Kozina
7569519530 Allow unbound keyslots to be assigned to existing digest.
If the passed key matches any existing digest we will not create a
new digest but assign the keyslot to the already existing one.

This is because reencryption should be able to create more than one
keyslot assigned to the new key digest.

TODO: Tests for the new feature
2019-03-08 08:37:27 +01:00
Ondrej Kozina
a848179286 Add json_object_copy wrapper. 2019-03-08 08:27:18 +01:00
Milan Broz
456ab38caa Allow to set CRYPTSETUP_PATH in tests for system installed cryptsetup tools.
Run: make check CRYPTSETUP_PATH=/sbin
2019-03-08 08:16:45 +01:00
Milan Broz
c71b5c0426 Update po files. 2019-03-08 08:15:57 +01:00
Ondrej Kozina
868cc52415 Abort conversion to LUKS1 with incompatible sector size. 2019-03-05 17:08:05 +01:00
Ondrej Kozina
8c168cc337 Introduce file for luks2 segments handling. 2019-03-05 17:08:02 +01:00
Ondrej Kozina
f9fa4cc099 Add kernel only detection in crypt storage API. 2019-03-05 17:07:57 +01:00
Ondrej Kozina
a0540cafb3 Alter the crypt_storage interface.
rename sector_start -> iv_start (it's now an IV shift for subsequent
en/decrypt operations)

rename count -> length. We accept length in bytes now and perform sanity
checks at crypt_storage_init and crypt_storage_decrypt (or encrypt)
respectively.

rename sector -> offset. It's in bytes as well. Sanity checks are inside
the crypt_storage functions.
2019-03-05 17:07:45 +01:00
Ondrej Kozina
88b3924132 Update LUKS2 locks for atomic operations.
An atomic operation requires holding a lock for a longer period than a
single metadata I/O. Update the locks so that we can:

- lock a device more than once (lock ref counting)
- reacquire a read lock on an already held write lock (a write lock
  is stronger than a read lock)
Ondrej Kozina
3023f26911 Always allocate new header file of 4KiB.
All issues related to header wiping and smaller
files were resolved. It's no longer needed to allocate
files larger than 4KiB.
2019-03-05 16:55:17 +01:00
Milan Broz
c9347d3d7d Fix a gcc warning when accessing packed struct member. 2019-03-05 16:50:24 +01:00
Ondrej Kozina
d85c7d06af Do not fail tests if benchmarked >= 1000 iterations with -i1. 2019-03-01 21:43:35 +01:00
Ondrej Kozina
e229f79741 Open device in locked mode if needed. 2019-03-01 21:43:31 +01:00
Ondrej Kozina
a4d236eebe Add device_is_locked function. 2019-03-01 21:43:25 +01:00
Milan Broz
1192fd27c6 Add query for whether the cipher implementation is used through the kernel API. 2019-03-01 21:43:10 +01:00
Milan Broz
cd1cb40033 Use crypto library for ciphers if algorithms are available. 2019-03-01 21:34:22 +01:00
Milan Broz
14e085f70e Move cipher performance check to crypto backend. 2019-03-01 21:16:05 +01:00
Milan Broz
fc37d81144 Move crypt_cipher to per-lib implementation.
For now, it calls kernel fallback only.
2019-03-01 21:14:13 +01:00
Milan Broz
a859455aad Move block ciphers backend wrappers to per-library files.
For now it always fallbacks to kernel crypto API.
2019-03-01 21:10:50 +01:00
Milan Broz
93d596ace2 Introduce internal backend header.
And remove commented-out test vectors (moved to tests).
2019-03-01 20:39:33 +01:00
Ondrej Kozina
c03e3fe88a Fix getting default LUKS2 keyslot encryption parameters.
When information about the original keyslot size is missing (no active
keyslot assigned to the default segment) we have to fall back to the
default LUKS2 encryption parameters even though we know the default
segment cipher and mode.

Fixes: #442.
2019-03-01 20:39:06 +01:00
Ondrej Kozina
a90a5c9244 Avoid double free corruption after failed crypt_init_data_device. 2019-03-01 20:31:00 +01:00
Ondrej Kozina
26772f8184 Return NULL explicitly if keyslot is missing.
The json_object_object_get_ex return parameter is
undefined if the function returns false.
2019-03-01 20:30:21 +01:00
Ondrej Kozina
8f8ad83861 Validate metadata before writing binary keyslot area. 2019-03-01 20:29:49 +01:00
Ondrej Kozina
d111b42cf1 Fix keyslot area gap find algorithm.
get_max_offset must use a value calculated from the LUKS2 metadata
boundaries. The data offset does not have to match the end of the LUKS2
metadata area.
2019-03-01 20:29:40 +01:00
Ondrej Kozina
821c965b45 Drop commented code block. 2019-03-01 20:28:56 +01:00
Ondrej Kozina
4acac9a294 Properly handle DM_LINEAR type while checking version or dmflags. 2019-03-01 20:28:43 +01:00
Ondrej Kozina
4adb06ae91 Add missing direction flag in dm_crypt_target_set.
This bug may have caused memory corruption in dm_targets_free
later.
2019-03-01 20:27:53 +01:00
Milan Broz
dce7a1e2aa Fix gcc warning in tests. 2019-02-24 12:35:54 +01:00
Milan Broz
a354b72546 Add some symmetric block ciphers vector tests for crypto backend. 2019-02-24 12:35:50 +01:00
Milan Broz
ac8f41404b Simplify and reformat hash/HMAC test vectors test. 2019-02-24 12:35:45 +01:00
Milan Broz
fc7b257bab Silence dmsetup removal messages. 2019-02-13 13:34:39 +01:00
Milan Broz
787066c292 Report error if no LUKS keyslots are available.
Also fix the LUKS1 keyslot function to properly return the -ENOENT errno in this case.

This change means that the user can distinguish between a bad passphrase and
no keyslot available. (But this information was available with luksDump
even before the change.)
2019-02-13 13:19:48 +01:00
Milan Broz
71ab6cb818 Fix other tests to not fail if keyring support is missing in kernel. 2019-02-12 16:16:56 +01:00
Milan Broz
1158ba453e Use better test for a bad loop descriptor. 2019-02-12 14:54:56 +01:00
Milan Broz
2e3f764272 Fix api-test-2 to properly detect missing keyring in kernel.
Also properly cleanup after some failures.
2019-02-12 14:49:21 +01:00
Milan Broz
2172f1d2cd Print PBKDF debug log in a better format.
Fixes #439.
2019-02-11 12:37:33 +01:00
Milan Broz
6efc1eae9f Update Readme.md. 2019-02-08 15:37:17 +01:00
Milan Broz
6a740033de Add 2.1. release notes. 2019-02-08 15:08:04 +01:00
Ondrej Kozina
d754598143 Preserve LUKS2 mdata & keyslots sizes after reencryption. 2019-02-08 12:00:24 +01:00
Ondrej Kozina
47f632263e Add missing crypt_free() in api test. 2019-02-08 11:56:52 +01:00
Milan Broz
98af0b0c77 Increase API version. 2019-02-07 18:42:17 +01:00
Ondrej Kozina
b9c6a62437 Do not call fallocate on image file that is already large enough. 2019-02-07 18:41:06 +01:00
Ondrej Kozina
57670eeeb7 Detect LUKS2 default alignment in align tests. 2019-02-07 18:40:48 +01:00
Ondrej Kozina
f26ee11913 Assert reasonable LUKS2 default header size. 2019-02-07 18:40:39 +01:00
Milan Broz
2435d76a39 Use 16MB LUKS2 header size by default. 2019-02-07 18:40:14 +01:00
Milan Broz
348d460ab7 Workarounds for larger LUKS2 header for tests. 2019-02-07 18:39:50 +01:00
Milan Broz
2b8b43b3db Fix file descriptor leak in error path. 2019-02-07 17:37:16 +01:00
Milan Broz
91b74b6896 Fix some compiler warnings. 2019-02-07 17:14:47 +01:00
Milan Broz
319fd19b5e Add implementation of crypt_keyslot_pbkdf().
This function allows getting PBKDF parameters per keyslot.
2019-02-07 12:55:12 +01:00
Milan Broz
4edd796509 Fix typo. 2019-02-06 21:48:29 +01:00
Ondrej Kozina
b0ced1bd2c Make compat-test2 work with 16M data offset. 2019-02-06 21:43:36 +01:00
Ondrej Kozina
6ed3a7774f Calculate keyslots size based on requested metadata size. 2019-02-06 21:42:51 +01:00
Ondrej Kozina
1ce3feb893 Add format test for detached header using last keyslot. 2019-02-06 21:41:43 +01:00
Milan Broz
ebbc5eceb8 Fix crypt_wipe to allocate space and not silently fail.
This change will allocate space if the underlying device is a smaller file,
and fail if it is a block device.

Previously a smaller device was quietly ignored, leading to keyslot
access failure with older dm-crypt mapped keyslot encryption
(disabled kernel user crypto API).
2019-02-06 21:39:26 +01:00
Ondrej Kozina
0cac4a4e0c Make api test run with any default LUKS2 header size. 2019-02-06 11:48:47 +01:00
Milan Broz
1908403324 Prepare change for default LUKS2 keyslot area size. 2019-02-06 11:48:34 +01:00
Ondrej Kozina
faa07b71f9 Fix debug message when zeroing rest of data device.
The debug message printed a wrong expected value and
also remained silent if the expected value differed from
the real bytes written to the data device.
2019-02-06 11:48:24 +01:00
Ondrej Kozina
e9dcf6b8dd Simplify create_empty_header in cryptsetup-reencrypt.
In most cases we do not need to create large files for new headers.
crypt_format already allocates enough space for all keyslots in files
during internal header wipe.

Fixes #410.
2019-02-06 11:48:07 +01:00
Milan Broz
3ea60ea0ae Update po files. 2019-02-06 11:46:37 +01:00
Milan Broz
54171dfdd3 Fix api-test to detect kernel without needed crypto module for tcrypt test. 2019-01-31 16:32:11 +01:00
Milan Broz
dc8db34155 Run keyring test only for recent kernels. 2019-01-31 16:31:09 +01:00
Milan Broz
a68f3939cf Use min memory limit from PBKDF struct in Argon benchmark. 2019-01-31 10:53:51 +01:00
Milan Broz
ae90497762 Switch to default LUKS2 format in configure. 2019-01-31 09:30:04 +01:00
Rafael Fontenelle
2b55f6420a Fix misspellings 2019-01-28 08:40:20 -02:00
Milan Broz
6d3545624d Fix typo in API documentation. 2019-01-26 12:44:31 +01:00
Milan Broz
46dc5beee9 Increase LUKS keysize if XTS mode is used (two internal keys). 2019-01-25 13:56:21 +01:00
Milan Broz
943cc16020 Fix test to print exit line and use explicit key size. 2019-01-25 13:38:24 +01:00
Milan Broz
a6f5ce8c7b Update copyright year.
And unify name copyright format.
2019-01-25 09:45:57 +01:00
Milan Broz
bc3d0feb5c Switch default cryptographic backend to OpenSSL.
Cryptsetup/libcryptsetup currently supports several cryptographic
library backends.

The fully supported are libgcrypt, OpenSSL and kernel crypto API.

FIPS mode extensions are maintained only for libgcrypt and OpenSSL.

(Nettle and NSS are usable only for some subset of algorithms and
cannot provide full backward compatibility.)

For years, OpenSSL provided better performance for PBKDF.

Since this commit, cryptsetup uses OpenSSL as the default backend.

You can always switch to another backend by using a configure switch;
for libgcrypt (compatibility for older distributions) use:
--with-crypto_backend=gcrypt
2019-01-25 08:24:10 +01:00
Milan Broz
580f0f1a28 Add some FIPS mode workarounds.
We cannot (yet) use Argon2 in FIPS mode, so hack the scripts and library
to use PBKDF2 or skip tests, and fix tests to run in FIPS mode.
2019-01-24 17:04:13 +01:00
Milan Broz
715b0c9b6c Switch to fetching default PBKDF values from library. 2019-01-23 14:15:23 +01:00
Milan Broz
388afa07f4 Clean up devices before running mode-test. 2019-01-23 14:14:45 +01:00
Milan Broz
1def60cd2c Do not allow conversion to LUKS1 if hash algorithms differ (digest, AF). 2019-01-22 14:19:58 +01:00
Milan Broz
cdb4816fbb Allow setting of hash function in LUKS2 PBKDF2 digest.
For now, the hash was set to sha256 (except for a converted LUKS1 header).

This patch adds the same logic as in LUKS1 - the hash algorithm is
loaded from the PBKDF setting.

Fixes #396.
2019-01-22 12:45:01 +01:00
Milan Broz
be46588cf0 Allow LUKS2 keyslots area to increase if data offset allows it.
Also deprecate the align-payload option and add more debugging code
to understand the internal calculation of metadata and keyslots area sizes.

Fixes #436.
2019-01-22 09:23:49 +01:00
Milan Broz
6dc2f7231b Fix a possible NULL pointer in opt_type. 2019-01-21 14:07:33 +01:00
Milan Broz
3165b77ec9 Remove unneeded check for DM_SECURE_SUPPORTED. 2019-01-21 13:55:43 +01:00
Ondrej Kozina
ad0e2b86dc Do not issue flush when reading device status.
Fixes #417.
2019-01-21 11:20:02 +01:00
Milan Broz
5ee0b01118 Add test for specific legacy plain hash type. 2019-01-20 10:20:44 +01:00
Milan Broz
fbfd0c7353 Update Nettle crypto backend.
WARNING: this is just an experimental backend, use only for testing.
2019-01-16 21:13:00 +01:00
Milan Broz
ee8970c11e Fix strncpy gcc warning. 2019-01-15 15:34:00 +01:00
Milan Broz
82a1f33260 Silence new warning in tests if run on older kernel. 2019-01-15 15:15:25 +01:00
Milan Broz
9607b322d2 Add missing struct to Nettle backend. 2019-01-15 15:00:36 +01:00
Milan Broz
238c74643b Add some more hash algorithms test. 2019-01-15 14:06:51 +01:00
Milan Broz
712c1783b6 Warn user if sector size is not supported by the loaded dm-crypt module.
Fixes #423.
2019-01-15 10:31:06 +01:00
Milan Broz
081fb6ec78 Do not try to read LUKS header if there is a clear version mismatch (detached header).
Fixes #423.
2019-01-14 20:14:46 +01:00
Milan Broz
c04d332b7f Do not require gcrypt-devel for autoconfigure.
gcrypt does not use standard pkg-config detection and requires a
specific macro (part of the gcrypt development files) to be present
during autoconfigure.

With other crypto backends, like OpenSSL, this makes no sense,
so make this part of autoconfigure optional.
2019-01-14 13:20:02 +01:00
Milan Broz
32786acf19 Add kernel crypt backend option to Travis build. 2019-01-14 09:11:01 +01:00
Milan Broz
51dd2762a9 Add --debug-json switch and log level.
The JSON structures should not be printed by default to debug log.

This flag introduces new debug level that prints JSON structures
and keeps default debug output separate.
2019-01-10 14:52:49 +01:00
Milan Broz
cf31bdb65c Workaround for test failure with disabled keyring.
NOTE: this needs a proper fix; tests should not expect a device state
from a previous test.
2019-01-08 13:32:34 +01:00
Milan Broz
50cae84100 Print AF hash in luksDump. 2019-01-07 21:25:03 +01:00
Milan Broz
98feca280f Add crypt_get_default_type() API call. 2019-01-07 20:38:17 +01:00
Milan Broz
304c4e3d3b Add more common hash algorithms to kernel crypto backend.
Fixes #430.
2019-01-07 20:07:18 +01:00
Milan Broz
c5b55049b9 Fix AEAD modes check with kernel and Nettle backend.
These do not implement backend RNG yet, so use a fixed key for test.
2019-01-07 20:05:55 +01:00
Ondrej Kozina
c494eb94f4 Add LUKS2 refresh test.
Tests that a refresh doesn't affect the device vk.
2019-01-07 15:52:03 +01:00
Milan Broz
5f173e9357 Fix allocating of LUKS header on format.
Fixes #431.
2019-01-07 13:07:46 +01:00
Milan Broz
307a7ad077 Add keyslot encryption params.
This patch makes the LUKS2 per-keyslot encryption settings available to the user.

In LUKS2, a keyslot can use different encryption than the data.

We can use the new crypt_keyslot_get_encryption and crypt_keyslot_set_encryption
API calls to get/set this encryption.

For cryptsetup, new --keyslot-cipher and --keyslot-key-size options are added.

The default keyslot encryption algorithm (if it cannot be derived from the data encryption)
is now available as a configure option (default is aes-xts-plain64 with a 512-bit key).
NOTE: the default was increased from 256 bits.
2019-01-07 13:07:46 +01:00
Milan Broz
0039834bb9 Rename function to precisely describe the key size it obtains.
This should avoid confusion between the key size for the stored key and
the key size that actually encrypts the keyslot.
2019-01-07 13:07:45 +01:00
Milan Broz
d064c625f4 Fix reencryption test to use more context lines to parse parameters. 2019-01-07 13:07:45 +01:00
Ondrej Kozina
77a62b8594 Remove trailing newline in loopaes error message. 2019-01-07 13:07:45 +01:00
Ondrej Kozina
d4339661df Fix cipher spec leak in crypt_format on error. 2019-01-07 13:07:45 +01:00
Ondrej Kozina
39a014f601 dm backend with support for multi-segment devices.
Support for multi-segment devices is a requirement for online
reencryption to work. Introduce a modified dm backend that
splits the data structures describing an active device and an individual
dm target (or segment).
2019-01-07 13:07:45 +01:00
Ondrej Kozina
1e22160e74 Fix dm-integrity auto-recalculation flag handling.
Fail with proper error message rather than silently
dropping the flag if not supported in kernel.
2019-01-03 19:57:23 +01:00
Milan Broz
267bf01259 Add crypt_get_pbkdf_type_params() API.
This function allows getting the default (compiled-in) PBKDF parameters
for every algorithm.

Fixes #397.
2019-01-03 14:13:01 +01:00
Milan Broz
e23fa65ef2 Fix leak of json struct on crypt_format() error path. 2019-01-02 14:08:41 +01:00
Milan Broz
ee7ff024c1 Use json_object_object_add_ex if defined.
The json-c lib changed the json_object_object_add() prototype to return int,
which is backward incompatible.
2019-01-02 13:59:04 +01:00
Milan Broz
e8a92b67c3 Use snprintf. 2019-01-01 21:42:46 +01:00
Milan Broz
3ce7489531 Fix context init/exit pairing in libdevmapper.
And few small reformats.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
ffbb35fa01 Update git ignore file. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
de0b69691d Add json_object_object_del_by_uint helper routine. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
82aae20e9c Add json_object_object_add_by_uint helper routine. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
7362b14d41 Extend device-test with refresh actions. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
77d7babf92 Add new crypt_resize tests. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
545b347ca5 Add api test for CRYPT_ACTIVATE_REFRESH flag. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
df2111eb4f Drop DEV_SHARED definition.
There are no users.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
c7d3b7438c Replace DEV_SHARED with DEV_OK.
DEV_SHARED is never checked for in device_check.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
5c0ad86f19 Move device_block_adjust() check lower in code. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
675cf7ef59 Add dm_clear_device routine. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
d74e7fc084 Add dm_error_device routine. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
2cd85ddf11 Add stand alone dm_resume_device routine. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
3c1dc9cfaa Refactor LUKS2 activation with dm-integrity. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
8b2553b3f4 Split integrity activation between two functions. 2019-01-01 21:42:46 +01:00
Ondrej Kozina
b9373700a2 Switch crypt_resize to reload internally.
This ties up a few loose ends with regard to
target device parameter verification.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
bdce4b84d8 Add new internal crypt_get_cipher_spec.
Add function for getting cipher spec (cipher
and mode) in convenient single string format.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
2dd4609699 Implement cryptsetup refresh action (open --refresh alias).
It allows active device refresh with new activation
parameters. It's supported for LUKS1, LUKS2, crypt plain
and loop-AES devices.
2019-01-01 21:42:46 +01:00
Ondrej Kozina
5c67ca015b Add CRYPT_ACTIVATE_REFRESH flag to activation calls.
The new flag is supposed to refresh (reload) active dm-crypt
mapping with new set of activation flags. CRYPT_ACTIVATE_READONLY
can not be switched for already active device.

The flag is silently ignored for tcrypt, verity and integrity
devices. LUKS2 with authenticated encryption support is added in
later commit.
2019-01-01 21:42:46 +01:00
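The constraint above (most activation flags may change on refresh, but read-only state may not) can be sketched with a simple bitmask check. The flag values below are illustrative; the real CRYPT_ACTIVATE_* constants live in libcryptsetup.h:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values, not the real libcryptsetup constants */
#define ACTIVATE_READONLY        (1u << 0)
#define ACTIVATE_ALLOW_DISCARDS  (1u << 3)
#define ACTIVATE_REFRESH         (1u << 14)

/* A refresh (reload) request may change most activation flags of an
 * already-active dm-crypt mapping, but the read-only state must match
 * the active device. */
bool refresh_flags_allowed(uint32_t active_flags, uint32_t new_flags)
{
    if (!(new_flags & ACTIVATE_REFRESH))
        return false; /* not a refresh request at all */
    /* CRYPT_ACTIVATE_READONLY cannot be toggled for an active device */
    return (active_flags & ACTIVATE_READONLY) == (new_flags & ACTIVATE_READONLY);
}
```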
Ondrej Kozina
957b329e94 _dm_simple cleanup (wait is no longer needed) 2019-01-01 21:42:46 +01:00
Ondrej Kozina
120ebea917 Split low level code for creating dm devices.
The separate code for reloading device tables
will be used in later features.
2019-01-01 21:42:46 +01:00
Milan Broz
6e1e11f6cd Redirect lib API docs. 2018-12-19 11:56:36 +01:00
Milan Broz
dbc056f9ac Remove file committed by mistake... 2018-12-13 22:29:51 +01:00
Ondrej Kozina
7de815e957 Silence annoying shell checks for dracut module.
Also fixes one theoretical issue with 'local' keyword for
any (if any) POSIX-strictly shell.
2018-12-12 15:08:06 +01:00
Ondrej Kozina
1894d6e6ff Add devno comparison for bdevs in device_is_identical(). 2018-12-12 15:07:33 +01:00
Ondrej Kozina
1cc722d0cc Simplify device_is_identical.
If any argument is null return false (with higher
priority than trivial identity check).

Also device_path can't return null if the device struct gets
allocated successfully.
2018-12-12 15:06:59 +01:00
Milan Broz
ec07927b55 Add cryptsetup options for LUKS2 header size settings.
Also print these area sizes in dump command.

NOTE: from now on, the metadata area size in the dump command includes the
mandatory 4k binary section (to be aligned with the API definition).
2018-12-12 14:51:40 +01:00
Milan Broz
41c7e4fe87 Remove incorrect parameter in crypt_reload test. 2018-12-12 12:28:42 +01:00
Milan Broz
217cd0f5e9 Do not use dd for JSON metadata tests.
This should fix random testsuite failures.
2018-12-12 11:51:44 +01:00
Milan Broz
fd02dca60e Add crypt_set_metadata_size / crypt_get_metadata_size API. 2018-12-11 21:59:59 +01:00
Milan Broz
2a1d58ed22 Check data device offset if it fits data device size in luksFormat. 2018-12-11 21:59:49 +01:00
Milan Broz
7d8003da46 cryptsetup: add support for --offset option to luksFormat.
This option can replace --align-payload with absolute alignment value.
2018-12-06 14:22:18 +01:00
Milan Broz
03edcd2bfd Add crypt_set_data_offset API function.
The crypt_set_data_offset sets the data offset for LUKS and LUKS2 devices
to specified value in 512-byte sectors.

This value should replace alignment calculation in LUKS param structures.
2018-12-06 11:10:21 +01:00
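Since crypt_set_data_offset takes the value in 512-byte sectors, a caller working in bytes has to convert and keep the alignment in mind. A small helper sketch (hypothetical name, not part of the API):

```c
#include <stdint.h>

/* Convert a byte offset to the 512-byte-sector unit expected by
 * crypt_set_data_offset(); returns 0 for offsets that are not
 * sector-aligned and therefore not representable. */
uint64_t bytes_to_data_offset_sectors(uint64_t offset_bytes)
{
    if (offset_bytes % 512)
        return 0; /* must be a multiple of the 512-byte sector */
    return offset_bytes / 512;
}
```

For example, a 4 MiB data offset (the 2.0.x default header size) corresponds to 8192 sectors, matching the --offset 8192 value mentioned in the release notes below.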
Milan Broz
a9d3f48372 Fix metadata test log message. 2018-12-05 19:46:28 +01:00
Milan Broz
316ec5b398 integrity: support detached data device.
Since kernel 4.18 it is possible to specify an external
data device for dm-integrity that stores all integrity tags.

The new option --data-device in integritysetup uses this feature.
2018-12-05 19:42:31 +01:00
Milan Broz
d06defd885 Add automatic recalculation to dm-integrity.
Linux kernel since version 4.18 supports automatic background
recalculation of integrity tags for dm-integrity.

This patch adds a new integritysetup --integrity-recalculate option
that uses this feature.
2018-12-05 14:53:17 +01:00
Milan Broz
0fed68dd16 Introduce crypt_init_data_device and crypt_get_metadata_device_name.
For some formats we need to separate the metadata and data devices before
format is called.
2018-12-05 12:33:16 +01:00
Milan Broz
ce60fe04cb Update po files. 2018-12-04 16:28:25 +01:00
Milan Broz
4e1c62d7f1 Ignore false positive Coverity warning for string length. 2018-12-04 12:57:08 +01:00
Milan Broz
3ea8e01a9d Fix some cppcheck warnings.
Even though it is nonsense and cppcheck should understand the code better :-)
2018-12-04 12:30:14 +01:00
Milan Broz
9cbd36163c Fix various gcc compiler warnings in tests. 2018-12-03 13:47:43 +01:00
Milan Broz
0f5c3e107e Update README.md. 2018-12-03 10:35:50 +01:00
Milan Broz
1ae251ea5b Update LUKS2 docs. 2018-12-03 09:33:49 +01:00
Milan Broz
90742541c6 Add 2.0.6 release notes. 2018-12-03 09:30:48 +01:00
Milan Broz
84d8dfd46c Update po files. 2018-12-02 19:00:18 +01:00
Ondrej Kozina
3ed404e5bb Add validation tests for non-default metadata. 2018-12-02 18:56:59 +01:00
Ondrej Kozina
4b64ffc365 Update LUKS2 test images.
- update test images for validation fixes
  from previous commits

- erase leftover json data in between secondary
  header and keyslot areas.
2018-11-29 13:32:02 +01:00
Ondrej Kozina
e297cc4c2a Remove redundant check in keyslot areas validation.
Due to the previous fix it's no longer needed to add up
all keyslot area lengths and check that the resulting sum
is lower than keyslots_size.

(We already check the lower limit, upper limit and
overlapping areas.)
2018-11-29 13:31:59 +01:00
Ondrej Kozina
9ab63c58f2 Fix keyslot areas validation.
This commit fixes two problems:

a) Replace hardcoded 16KiB metadata variant as lower limit
   for keyslot area offset with current value set in config
   section (already validated).

b) Replace segment offset (if not zero) as upper limit for
   keyslot area offset + size with value calculated as
   2 * metadata size + keyslots_size as acquired from
   config section (also already validated)
2018-11-29 13:31:54 +01:00
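The validation rules described in the two commits above can be sketched as follows. This is an illustration of the checks, not the actual LUKS2 validation code; names are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

struct area { uint64_t offset, size; };

/* Each keyslot area must start at or after 2 * metadata_size (the two
 * binary headers plus json areas), end within the keyslots area
 * (2 * metadata_size + keyslots_size), and no two areas may overlap.
 * With these checks in place, a separate sum-of-lengths check against
 * keyslots_size is redundant. */
bool keyslot_areas_valid(const struct area *a, int n,
                         uint64_t metadata_size, uint64_t keyslots_size)
{
    uint64_t lo = 2 * metadata_size, hi = lo + keyslots_size;
    for (int i = 0; i < n; i++) {
        if (a[i].offset < lo || a[i].offset + a[i].size > hi)
            return false; /* out of the keyslots area */
        for (int j = i + 1; j < n; j++) /* pairwise overlap check */
            if (a[i].offset < a[j].offset + a[j].size &&
                a[j].offset < a[i].offset + a[i].size)
                return false;
    }
    return true;
}
```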
Ondrej Kozina
3c0aceb9f7 Reshuffle config and keyslots areas validation code.
Swap config and keyslot areas validation code order.

Also split the original keyslots_size validation code
between the config and keyslot areas routines for further
changes in the code later. This commit has no functional
impact.
2018-11-29 13:31:50 +01:00
Ondrej Kozina
d7bd3d2d69 Do not validate keyslot areas so frantically.
Keyslot areas were validated from each keyslot
validation routine and later one more time
in the general header validation routine. The call
from the header validation routine is good enough.
2018-11-29 13:31:46 +01:00
Ondrej Kozina
3136226134 Test cryptsetup can handle all LUKS2 metadata variants.
The following tests:

add keyslot
test passphrase
unlock device
store token in metadata
read token from metadata
2018-11-27 16:56:57 +01:00
Ondrej Kozina
5a7535c513 Add LUKS2 metadata test images.
Test archive contains images with all supported
LUKS2 metadata size configurations. There's
one active keyslot 0 in every image that can be
unlocked with the following passphrase (ignore
quotation marks): "Qx3qn46vq0v"
2018-11-27 16:54:51 +01:00
Milan Broz
991ab5de64 Fix more context propagation paths. 2018-11-27 16:09:45 +01:00
Milan Broz
b17e4fa3bf Use context in PBKDF benchmark log. 2018-11-27 15:04:03 +01:00
Milan Broz
35fa5b7dfc Propagate context in libdevmapper functions. 2018-11-27 14:47:50 +01:00
Milan Broz
7812214db6 Add context to device handling functions. 2018-11-27 14:19:57 +01:00
Milan Broz
a5a8467993 Use context in debug log messages.
To use per-context logging even for debug messages
we need to use the same macro as for error logging.
2018-11-27 13:37:20 +01:00
Ondrej Kozina
544ea7ccfc Drop needless size restriction on keyslots size. 2018-11-27 11:25:40 +01:00
Ondrej Kozina
024b5310fa Add validation tests for non-default json area size.
Run both primary and secondary header validation tests
with a non-default LUKS2 json area size.

Check validation rejects config.keyslots_size with zero value.

Check validation rejects mismatching values for metadata size
set in binary header and in config json section.
2018-11-26 16:28:07 +01:00
Ondrej Kozina
177cb8bbe1 Extend baseline LUKS2 validation image to 16 MiBs. 2018-11-26 16:28:01 +01:00
Ondrej Kozina
35f137df35 Move some validation tests in new section. 2018-11-26 16:27:52 +01:00
Milan Broz
c71ee7a3e6 Update POTFILES. 2018-11-25 16:02:59 +01:00
Milan Broz
9a2dbb26a5 Fix signed/unsigned comparison warning. 2018-11-25 15:11:44 +01:00
Milan Broz
3d2fd06035 Fix setting of integrity persistent flags (no-journal).
We have to query and set the flags also for the underlying dm-integrity
device, otherwise activation flags applied there are ignored.
2018-11-25 12:46:41 +01:00
Milan Broz
2f6d0c006c Check for algorithms string lengths in crypt_cipher_check().
The kernel check will fail anyway if the string is truncated, but this
makes some compilers happier.
2018-11-25 10:55:28 +01:00
Milan Broz
43088ee8ba Fix unsigned return value. 2018-11-25 10:55:08 +01:00
Milan Broz
c17b6e7be3 Fix LUKS2_hdr_validate function definition. 2018-11-25 10:28:34 +01:00
Milan Broz
71299633d5 Properly handle interrupt in cryptsetup-reencrypt and remove log.
Fixes #419.
2018-11-24 20:10:46 +01:00
Milan Broz
dfe61cbe9c Fix sector-size tests for older kernels. 2018-11-24 20:10:03 +01:00
Milan Broz
18c9210342 Check for device size and sector size misalignment.
The kernel prevents activation of a device that is not aligned
to the requested sector size.

Add an early check to the plain and LUKS2 formats to disallow
creation of such a device.
(Activation will fail in kernel later anyway.)

Fixes #390.
2018-11-24 18:53:46 +01:00
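The early check described above amounts to simple modular arithmetic. A self-contained sketch (hypothetical helper, sizes in 512-byte units as used throughout cryptsetup):

```c
#include <stdbool.h>
#include <stdint.h>

/* Reject at format time what the kernel would reject at activation time:
 * the device size must be a multiple of the requested encryption sector
 * size.  device_sectors is the device size in 512-byte sectors. */
bool device_size_aligned(uint64_t device_sectors, uint32_t sector_size)
{
    /* valid encryption sector sizes are powers of two, 512..4096 bytes */
    if (sector_size < 512 || sector_size > 4096 ||
        (sector_size & (sector_size - 1)))
        return false;
    return (device_sectors * 512) % sector_size == 0;
}
```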
Milan Broz
1167e6b86f Add support for Adiantum cipher mode. 2018-11-23 21:03:02 +01:00
Milan Broz
1684fa8c63 Do not run empty test set in main directory. 2018-11-22 16:30:33 +01:00
Milan Broz
b4dce61918 Try to check if AEAD cipher is available through kernel crypto API. 2018-11-22 16:02:33 +01:00
Milan Broz
d7ddcc0768 Reformat AF implementation, use secure allocation for buffer. 2018-11-22 16:02:00 +01:00
Milan Broz
36c26b6903 Properly propagate error from AF diffuse function. 2018-11-22 15:51:27 +01:00
Milan Broz
2300c692b8 Check hash value in pbkdf setting early. 2018-11-22 15:51:10 +01:00
Milan Broz
da6dbbd433 Fallback to default keyslot algorithm if backend does not know the cipher. 2018-11-22 15:49:56 +01:00
Ondrej Kozina
0a4bd8cb7d Remove unused crypt_dm_active_device member. 2018-11-22 15:49:21 +01:00
Ondrej Kozina
32d357e1a8 Secondary header offset must match header size. 2018-11-22 15:34:28 +01:00
Ondrej Kozina
21e259d1a4 Check json size matches value from binary LUKS2 header.
We have the max json area length parameter stored twice:
in the LUKS2 binary header and in the json metadata. Those two values
must match.
2018-11-22 15:34:18 +01:00
Ondrej Kozina
c3a54aa59a Change max json area length type to unsigned.
We use uint64_t for max json length everywhere else
including config.json_size field in LUKS2 metadata.

Also renames some misleading parameter names.
2018-11-22 15:34:00 +01:00
Ondrej Kozina
7713df9e41 Enable all supported metadata sizes in LUKS2 validation code.
The LUKS2 specification allows various sizes of LUKS2 metadata.
A single metadata instance is composed of the LUKS2 binary header
(4096 bytes) and the immediately following json area. The resulting
assembled metadata size has to be one of the following values,
all in KiB:

16, 32, 64, 128, 256, 512, 1024, 2048 or 4096
2018-11-22 15:32:31 +01:00
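The size whitelist above is easy to express directly. A sketch of that validation (hypothetical function name, not the actual cryptsetup code):

```c
#include <stdbool.h>
#include <stdint.h>

/* The assembled LUKS2 metadata (4096-byte binary header plus json area)
 * must have one of the sizes allowed by the specification. */
bool luks2_metadata_size_valid(uint64_t size_bytes)
{
    static const uint64_t valid_kib[] = {
        16, 32, 64, 128, 256, 512, 1024, 2048, 4096
    };
    for (unsigned i = 0; i < sizeof(valid_kib) / sizeof(valid_kib[0]); i++)
        if (size_bytes == valid_kib[i] * 1024)
            return true;
    return false;
}
```

Note that a 16 KiB metadata instance leaves only 12 KiB of json area after the fixed 4096-byte binary header.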
Milan Broz
49900b79a9 Add branch v2_0_x to Travis. 2018-11-19 13:25:37 +01:00
Milan Broz
4f075a1aef Remove python dev from Travis script. 2018-11-09 10:28:29 +01:00
Milan Broz
d4cd902e1c Update po file. 2018-11-09 09:59:27 +01:00
Milan Broz
ef4484ab27 Remove python bindings in favour of liblockdev. 2018-11-09 09:18:41 +01:00
Ondrej Kozina
9e7f9f3471 Parse compat values from LUKS2 default segment encryption.
We used to preset compat cipher and cipher_mode values during
crypt_format() or crypt_load(). Since we can change 'default segment'
dynamically during reencryption (encryption, decryption included) we
need to parse those values from the default segment's json encryption field
each time crypt_get_cipher() or crypt_get_cipher_mode() is called.
2018-11-07 10:18:41 +01:00
Milan Broz
493e8580d6 Log all debug messages through log callback.
This change allows redirecting all library output
to a log processor.
2018-11-07 10:17:51 +01:00
Milan Broz
bce567db46 Add workaround for benchmarking Adiantum cipher. 2018-11-07 10:17:33 +01:00
Milan Broz
38e2c8cb8a Set devel version. 2018-11-07 10:16:35 +01:00
Milan Broz
16309544ac Fix ext4 image to work without CONFIG_LBDAF. 2018-11-05 12:00:01 +01:00
171 changed files with 32196 additions and 12774 deletions

.gitignore

@@ -45,3 +45,12 @@ scripts/cryptsetup.conf
stamp-h1
veritysetup
tests/valglog.*
*/*.dirstamp
*-debug-luks2-backup*
tests/api-test
tests/api-test-2
tests/differ
tests/luks1-images
tests/tcrypt-images
tests/unit-utils-io
tests/vectors-test


@@ -36,7 +36,6 @@ function check_nonroot
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-python \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
@@ -54,7 +53,6 @@ function check_root
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-python \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
@@ -73,7 +71,6 @@ function check_nonroot_compile_only
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-python \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
@@ -87,7 +84,6 @@ function travis_install_script
# install some packages from Ubuntu's default sources
sudo apt-get -qq update
sudo apt-get install -qq >/dev/null \
python-dev \
sharutils \
libgcrypt20-dev \
libssl-dev \
@@ -140,6 +136,13 @@ function travis_script
openssl_compile)
check_nonroot_compile_only "--with-crypto_backend=openssl"
;;
kernel)
check_nonroot "--with-crypto_backend=kernel" && \
check_root "--with-crypto_backend=kernel"
;;
kernel_compile)
check_nonroot_compile_only "--with-crypto_backend=kernel"
;;
*)
echo "error, check environment (travis.yml)" >&2
false


@@ -1,7 +1,7 @@
language: c
sudo: required
dist: trusty
dist: xenial
compiler:
- gcc
@@ -9,6 +9,7 @@ compiler:
env:
- MAKE_CHECK="gcrypt"
- MAKE_CHECK="openssl"
- MAKE_CHECK="kernel"
branches:
only:

FAQ

@@ -976,7 +976,7 @@ A. Contributors
In order to find out whether a key-slot is damaged one has to look
for "non-random looking" data in it. There is a tool that
automatizes this in the cryptsetup distribution from version 1.6.0
automates this in the cryptsetup distribution from version 1.6.0
onwards. It is located in misc/keyslot_checker/. Instructions how
to use and how to interpret results are in the README file. Note
that this tool requires a libcryptsetup from cryptsetup 1.6.0 or


@@ -1,6 +1,5 @@
EXTRA_DIST = COPYING.LGPL FAQ docs misc
SUBDIRS = po tests
TESTS =
CLEANFILES =
DISTCLEAN_TARGETS =
@@ -25,8 +24,6 @@ tmpfilesd_DATA =
include man/Makemodule.am
include python/Makemodule.am
include scripts/Makemodule.am
if CRYPTO_INTERNAL_ARGON2
@@ -40,7 +37,6 @@ include src/Makemodule.am
ACLOCAL_AMFLAGS = -I m4
DISTCHECK_CONFIGURE_FLAGS = \
--enable-python \
--with-tmpfilesdir=$$dc_install_base/usr/lib/tmpfiles.d \
--enable-internal-argon2 --enable-internal-sse-argon2


@@ -2,13 +2,13 @@
What the ...?
=============
**Cryptsetup** is utility used to conveniently setup disk encryption based
on [DMCrypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) kernel module.
**Cryptsetup** is a utility used to conveniently set up disk encryption based
on the [DMCrypt](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt) kernel module.
These include **plain** **dm-crypt** volumes, **LUKS** volumes, **loop-AES**
and **TrueCrypt** (including **VeraCrypt** extension) format.
and **TrueCrypt** (including **VeraCrypt** extension) formats.
Project also includes **veritysetup** utility used to conveniently setup
The project also includes a **veritysetup** utility used to conveniently setup
[DMVerity](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMVerity) block integrity checking kernel module
and, since version 2.0, **integritysetup** to setup
[DMIntegrity](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMIntegrity) block integrity kernel module.
@@ -20,7 +20,10 @@ LUKS Design
only facilitate compatibility among distributions, but also provides secure management of multiple user passwords.
LUKS stores all necessary setup information in the partition header, enabling to transport or migrate data seamlessly.
Last version of the LUKS format specification is
Last version of the LUKS2 format specification is
[available here](https://gitlab.com/cryptsetup/LUKS2-docs).
Last version of the LUKS1 format specification is
[available here](https://www.kernel.org/pub/linux/utils/cryptsetup/LUKS_docs/on-disk-format.pdf).
Why LUKS?
@@ -41,13 +44,25 @@ Download
--------
All release tarballs and release notes are hosted on [kernel.org](https://www.kernel.org/pub/linux/utils/cryptsetup/).
**The latest cryptsetup version is 2.0.5**
* [cryptsetup-2.0.5.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.xz)
* Signature [cryptsetup-2.0.5.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.sign)
**The latest cryptsetup version is 2.1.0**
* [cryptsetup-2.1.0.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.1/cryptsetup-2.1.0.tar.xz)
* Signature [cryptsetup-2.1.0.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.1/cryptsetup-2.1.0.tar.sign)
_(You need to decompress file first to check signature.)_
* [Cryptsetup 2.0.5 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.5-ReleaseNotes).
* [Cryptsetup 2.1.0 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.1/v2.1.0-ReleaseNotes).
**The latest testing and experimental cryptsetup version is 2.2.0-rc0**
* [cryptsetup-2.2.0-rc0.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-2.2.0-rc0.tar.xz)
* Signature [cryptsetup-2.2.0-rc0.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/cryptsetup-2.2.0-rc0.tar.sign)
_(You need to decompress file first to check signature.)_
* [Cryptsetup 2.2.0-rc0 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.2/v2.2.0-rc0-ReleaseNotes).
Previous versions
* [Version 2.0.6](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.6-ReleaseNotes).
* [Version 2.0.5](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.5-ReleaseNotes).
* [Version 2.0.4](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.4-ReleaseNotes).
@@ -87,7 +102,7 @@ Source and API docs
For development version code, please refer to [source](https://gitlab.com/cryptsetup/cryptsetup/tree/master) page,
mirror on [kernel.org](https://git.kernel.org/cgit/utils/cryptsetup/cryptsetup.git/) or [GitHub](https://github.com/mbroz/cryptsetup).
For libcryptsetup documentation see [libcryptsetup API](https://gitlab.com/cryptsetup/cryptsetup/wikis/API/index.html) page.
For libcryptsetup documentation see [libcryptsetup API](https://mbroz.fedorapeople.org/libcryptsetup_API/) page.
The libcryptsetup API/ABI changes are tracked in [compatibility report](https://abi-laboratory.pro/tracker/timeline/cryptsetup/).


@@ -1,9 +1,9 @@
AC_PREREQ([2.67])
AC_INIT([cryptsetup],[2.0.6])
AC_INIT([cryptsetup],[2.2.0-rc1])
dnl library version from <major>.<minor>.<release>[-<suffix>]
LIBCRYPTSETUP_VERSION=$(echo $PACKAGE_VERSION | cut -f1 -d-)
LIBCRYPTSETUP_VERSION_INFO=15:0:3
LIBCRYPTSETUP_VERSION_INFO=17:0:5
AM_SILENT_RULES([yes])
AC_CONFIG_SRCDIR(src/cryptsetup.c)
@@ -185,8 +185,12 @@ AC_DEFUN([CONFIGURE_GCRYPT], [
else
GCRYPT_REQ_VERSION=1.1.42
fi
dnl Check if we can use gcrypt PBKDF2 (1.6.0 supports empty password)
dnl libgcrypt rejects to use pkgconfig, use AM_PATH_LIBGCRYPT from gcrypt-devel here.
dnl Do not require gcrypt-devel if other crypto backend is used.
m4_ifdef([AM_PATH_LIBGCRYPT],[
AC_ARG_ENABLE([gcrypt-pbkdf2],
dnl Check if we can use gcrypt PBKDF2 (1.6.0 supports empty password)
AS_HELP_STRING([--enable-gcrypt-pbkdf2], [force enable internal gcrypt PBKDF2]),
if test "x$enableval" = "xyes"; then
[use_internal_pbkdf2=0]
@@ -194,7 +198,8 @@ AC_DEFUN([CONFIGURE_GCRYPT], [
[use_internal_pbkdf2=1]
fi,
[AM_PATH_LIBGCRYPT([1.6.1], [use_internal_pbkdf2=0], [use_internal_pbkdf2=1])])
AM_PATH_LIBGCRYPT($GCRYPT_REQ_VERSION,,[AC_MSG_ERROR([You need the gcrypt library.])])
AM_PATH_LIBGCRYPT($GCRYPT_REQ_VERSION,,[AC_MSG_ERROR([You need the gcrypt library.])])],
AC_MSG_ERROR([Missing support for gcrypt: install gcrypt and regenerate configure.]))
AC_MSG_CHECKING([if internal cryptsetup PBKDF2 is compiled-in])
if test $use_internal_pbkdf2 = 0; then
@@ -204,6 +209,8 @@ AC_DEFUN([CONFIGURE_GCRYPT], [
NO_FIPS([])
fi
AC_CHECK_DECLS([GCRY_CIPHER_MODE_XTS], [], [], [#include <gcrypt.h>])
if test "x$enable_static_cryptsetup" = "xyes"; then
saved_LIBS=$LIBS
LIBS="$saved_LIBS $LIBGCRYPT_LIBS -static"
@@ -271,6 +278,7 @@ AC_DEFUN([CONFIGURE_KERNEL], [
AC_DEFUN([CONFIGURE_NETTLE], [
AC_CHECK_HEADERS(nettle/sha.h,,
[AC_MSG_ERROR([You need Nettle cryptographic library.])])
AC_CHECK_HEADERS(nettle/version.h)
saved_LIBS=$LIBS
AC_CHECK_LIB(nettle, nettle_pbkdf2_hmac_sha256,,
@@ -352,11 +360,13 @@ LIBS=$saved_LIBS
dnl Check for JSON-C used in LUKS2
PKG_CHECK_MODULES([JSON_C], [json-c])
AC_CHECK_DECLS([json_object_object_add_ex], [], [], [#include <json-c/json.h>])
AC_CHECK_DECLS([json_object_deep_copy], [], [], [#include <json-c/json.h>])
dnl Crypto backend configuration.
AC_ARG_WITH([crypto_backend],
AS_HELP_STRING([--with-crypto_backend=BACKEND], [crypto backend (gcrypt/openssl/nss/kernel/nettle) [gcrypt]]),
[], [with_crypto_backend=gcrypt])
AS_HELP_STRING([--with-crypto_backend=BACKEND], [crypto backend (gcrypt/openssl/nss/kernel/nettle) [openssl]]),
[], [with_crypto_backend=openssl])
dnl Kernel crypto API backend needed for benchmark and tcrypt
AC_ARG_ENABLE([kernel_crypto],
@@ -546,35 +556,6 @@ AC_DEFUN([CS_ABSPATH], [
esac
])
dnl ==========================================================================
dnl Python bindings
AC_ARG_ENABLE([python],
AS_HELP_STRING([--enable-python], [enable Python bindings]))
AC_ARG_WITH([python_version],
AS_HELP_STRING([--with-python_version=VERSION], [required Python version [2.6]]),
[PYTHON_VERSION=$withval], [PYTHON_VERSION=2.6])
if test "x$enable_python" = "xyes"; then
AM_PATH_PYTHON([$PYTHON_VERSION])
AC_PATH_PROGS([PYTHON_CONFIG], [python${PYTHON_VERSION}-config python-config], [no])
if test "${PYTHON_CONFIG}" = "no"; then
AC_MSG_ERROR([cannot find python${PYTHON_VERSION}-config or python-config in PATH])
fi
AC_MSG_CHECKING(for python headers using $PYTHON_CONFIG --includes)
PYTHON_INCLUDES=$($PYTHON_CONFIG --includes)
AC_MSG_RESULT($PYTHON_INCLUDES)
AC_SUBST(PYTHON_INCLUDES)
AC_MSG_CHECKING(for python libraries using $PYTHON_CONFIG --libs)
PYTHON_LIBS=$($PYTHON_CONFIG --libs)
AC_MSG_RESULT($PYTHON_LIBS)
AC_SUBST(PYTHON_LIBS)
fi
AM_CONDITIONAL([PYTHON_CRYPTSETUP], [test "x$enable_python" = "xyes"])
dnl ==========================================================================
CS_STR_WITH([plain-hash], [password hashing function for plain mode], [ripemd160])
CS_STR_WITH([plain-cipher], [cipher for plain mode], [aes])
@@ -586,12 +567,22 @@ CS_STR_WITH([luks1-cipher], [cipher for LUKS1], [aes])
CS_STR_WITH([luks1-mode], [cipher mode for LUKS1], [xts-plain64])
CS_NUM_WITH([luks1-keybits],[key length in bits for LUKS1], [256])
AC_ARG_ENABLE([luks_adjust_xts_keysize], AS_HELP_STRING([--disable-luks-adjust-xts-keysize],
[XTS mode requires two keys, double default LUKS keysize if needed]),
[], [enable_luks_adjust_xts_keysize=yes])
if test "x$enable_luks_adjust_xts_keysize" = "xyes"; then
AC_DEFINE(ENABLE_LUKS_ADJUST_XTS_KEYSIZE, 1, [XTS mode - double default LUKS keysize if needed])
fi
CS_STR_WITH([luks2-pbkdf], [Default PBKDF algorithm (pbkdf2 or argon2i/argon2id) for LUKS2], [argon2i])
CS_NUM_WITH([luks1-iter-time], [PBKDF2 iteration time for LUKS1 (in ms)], [2000])
CS_NUM_WITH([luks2-iter-time], [Argon2 PBKDF iteration time for LUKS2 (in ms)], [2000])
CS_NUM_WITH([luks2-memory-kb], [Argon2 PBKDF memory cost for LUKS2 (in kB)], [1048576])
CS_NUM_WITH([luks2-parallel-threads],[Argon2 PBKDF max parallel cost for LUKS2 (if CPUs available)], [4])
CS_STR_WITH([luks2-keyslot-cipher], [fallback cipher for LUKS2 keyslot (if data encryption is incompatible)], [aes-xts-plain64])
CS_NUM_WITH([luks2-keyslot-keybits],[fallback key size for LUKS2 keyslot (if data encryption is incompatible)], [512])
CS_STR_WITH([loopaes-cipher], [cipher for loop-AES mode], [aes])
CS_NUM_WITH([loopaes-keybits],[key length in bits for loop-AES mode], [256])
@@ -626,8 +617,8 @@ AC_SUBST(DEFAULT_LUKS2_LOCK_DIR_PERMS)
dnl Override default LUKS format version (for cryptsetup or cryptsetup-reencrypt format actions only).
AC_ARG_WITH([default_luks_format],
AS_HELP_STRING([--with-default-luks-format=FORMAT], [default LUKS format version (LUKS1/LUKS2) [LUKS1]]),
[], [with_default_luks_format=LUKS1])
AS_HELP_STRING([--with-default-luks-format=FORMAT], [default LUKS format version (LUKS1/LUKS2) [LUKS2]]),
[], [with_default_luks_format=LUKS2])
case $with_default_luks_format in
LUKS1) default_luks=CRYPT_LUKS1 ;;


@@ -195,7 +195,7 @@
2011-03-05 Milan Broz <mbroz@redhat.com>
* Add exception to COPYING for binary distribution linked with OpenSSL library.
* Set secure data flag (wipe all ioclt buffers) if devmapper library supports it.
* Set secure data flag (wipe all ioctl buffers) if devmapper library supports it.
2011-01-29 Milan Broz <mbroz@redhat.com>
* Fix mapping removal if device disappeared but node still exists.
@@ -636,7 +636,7 @@
2006-03-15 Clemens Fruhwirth <clemens@endorphin.org>
* configure.in: 1.0.3-rc3. Most unplease release ever.
* configure.in: 1.0.3-rc3. Most displease release ever.
* lib/setup.c (__crypt_create_device): More verbose error message.
2006-02-26 Clemens Fruhwirth <clemens@endorphin.org>


@@ -1,7 +1,7 @@
/*
* An example of using logging through libcryptsetup API
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,7 +1,7 @@
/*
* An example of using LUKS device through libcryptsetup API
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public

docs/v2.1.0-ReleaseNotes (new file)

@@ -0,0 +1,210 @@
Cryptsetup 2.1.0 Release Notes
==============================
Stable release with new features and bug fixes.
Cryptsetup version 2.1 uses the new on-disk LUKS2 format as the default
LUKS format and increases the default LUKS2 header size.
The legacy LUKS format (referenced as LUKS1) will be fully supported
forever as a traditional and fully backward compatible format.
When upgrading a stable distribution, please use configure option
--with-default-luks-format=LUKS1 to maintain backward compatibility.
This release also switches to OpenSSL as a default cryptographic
backend for LUKS header processing. Use --with-crypto_backend=gcrypt
configure option if you need to preserve legacy libgcrypt backend.
Please do not use LUKS2 without a properly configured backup, or
in production systems that need to be compatible with older systems.
Changes since version 2.0.6
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* The default for cryptsetup LUKS format action is now LUKS2.
You can use LUKS1 with cryptsetup option --type luks1.
* The default size of the LUKS2 header is increased to 16 MB.
It includes metadata and the area used for binary keyslots;
it means that a LUKS header backup is now 16 MB in size.
Note that the used keyslot area is much smaller, but this increase
of reserved space allows implementation of later extensions
(like online reencryption).
It is fully compatible with older cryptsetup 2.0.x versions.
If you need to create a LUKS2 header with the same size as
in the 2.0.x version, use the --offset 8192 option for luksFormat
(units are 512-byte sectors; see notes below).
* Cryptsetup now doubles LUKS default key size if XTS mode is used
(XTS mode uses two internal keys). This does not apply if key size
is explicitly specified on the command line and it does not apply
for the plain mode.
This fixes a confusion with AES and a 256-bit key in XTS mode, where
the code used AES-128 and not AES-256 as often expected.
Also, the default keyslot encryption algorithm (if it cannot be derived
from the data encryption algorithm) is now available as configure
options --with-luks2-keyslot-cipher and --with-luks2-keyslot-keybits.
The default is aes-xts-plain64 with a 2 * 256-bit key.
* The default cryptographic backend used for LUKS header processing is
now OpenSSL. For years, OpenSSL has provided better PBKDF performance.
NOTE: Cryptsetup/libcryptsetup supports several cryptographic
library backends. Fully supported are libgcrypt, OpenSSL and the
kernel crypto API. FIPS mode extensions are maintained only for
libgcrypt and OpenSSL. Nettle and NSS are usable only for some
subset of algorithms and cannot provide full backward compatibility.
You can always switch to other backends by using a configure switch,
for libgcrypt (compatibility for older distributions) use:
--with-crypto_backend=gcrypt
* The Python bindings are no longer supported and the code was removed
from cryptsetup distribution. Please use the libblockdev project
that already covers most of the libcryptsetup functionality
including LUKS2.
* Cryptsetup now allows using the --offset option also for luksFormat.
It means that the specified offset value is used for the data offset.
LUKS2 header areas are automatically adjusted according to this value.
(Note that units are in 512-byte sectors due to the previous definition
of this option in plain mode.)
This option can replace --align-payload with an absolute alignment value.
* Cryptsetup now supports the new refresh action (an alias for
  "open --refresh").
  It allows changing parameters of an active device (like a root
  device mapping); for example, it can enable or disable TRIM
  support on the fly.
  It is supported for LUKS1, LUKS2, plain and loop-AES devices.
* Integritysetup now supports a mode with a detached data device
  through the new --data-device option.
  Since kernel 4.18, it is possible to specify an external data
  device for dm-integrity that stores all integrity tags.
* Integritysetup now supports automatic integrity recalculation
  through the new --integrity-recalculate option.
  Linux kernel 4.18 and later supports automatic background
  recalculation of integrity tags for dm-integrity.
Other changes and fixes
~~~~~~~~~~~~~~~~~~~~~~~
* Fix the crypt_wipe call to allocate space if the header is backed
  by a file. This means that a detached header file will now always
  have the full size after luksFormat, even if only a few keyslots
  are used.
* Fixes for offline cryptsetup-reencrypt to preserve LUKS2 keyslot
  area sizes after reencryption, and fixes for some other issues
  when creating temporary reencryption headers.
* Added some FIPS mode workarounds. We cannot (yet) use Argon2 in
  FIPS mode, so libcryptsetup now falls back to PBKDF2 in FIPS mode.
* Reject conversion to LUKS1 if the PBKDF2 hash algorithms in
  keyslots differ.
* The hash setting on the command line now also applies to the LUKS2
  PBKDF2 digest. In previous versions, the LUKS2 key digest used
  PBKDF2-SHA256 (except for converted headers).
* Allow the LUKS2 keyslots area to increase if the data offset
  allows it. Cryptsetup can fine-tune LUKS2 metadata area sizes
  through --luks2-metadata-size=BYTES and --luks2-keyslots-size=BYTES.
  Please DO NOT use these low-level options unless you need them for
  some very specific additional feature.
  Also, the code now prints these LUKS2 header area sizes in the
  dump command.
* For LUKS2, a keyslot can use different encryption than the data,
  with the new options --keyslot-key-size=BITS and
  --keyslot-cipher=STRING in all commands that create a new LUKS
  keyslot.
  Please DO NOT use these low-level options unless you need them for
  some very specific additional feature.
* The code now avoids a data flush when reading device status
  through device-mapper.
* The Nettle crypto backend and the userspace kernel crypto API
  backend were enhanced to support more hash functions
  (like SHA3 variants).
* The upstream code no longer requires libgcrypt-devel for
  autoconfigure, because OpenSSL is now the default.
  libgcrypt does not use standard pkgconfig detection and requires a
  specific macro (part of the libgcrypt development files) to always
  be present during autoconfigure.
  With other crypto backends, like OpenSSL, this makes no sense,
  so this part of autoconfigure is now optional.
* Cryptsetup now understands the new --debug-json option that
  enables an additional dump of some JSON information. This is no
  longer part of the standard debug output because it could contain
  some specific LUKS header parameters.
* The luksDump output now contains the hash algorithm used in the
  Anti-Forensic function.
* All debug messages are now sent through the configured log
  callback functions, so an application can easily implement its own
  debug message handling. In previous versions, debug messages were
  printed directly to standard output.
Libcryptsetup API additions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
These new calls are now exported; for details see libcryptsetup.h:
* crypt_init_data_device
* crypt_get_metadata_device_name
  functions to initialize devices with separate metadata and data
  devices before a format function is called.
* crypt_set_data_offset
  sets the LUKS data offset to the specified value
  in 512-byte sectors.
  It should replace the alignment calculation in the LUKS param
  structures.
* crypt_get_metadata_size
* crypt_set_metadata_size
  allow setting/getting area sizes in the LUKS header
  (according to the specification).
* crypt_get_default_type
  gets the default compiled-in LUKS type (version).
* crypt_get_pbkdf_type_params
  allows getting compiled-in PBKDF parameters.
* crypt_keyslot_set_encryption
* crypt_keyslot_get_encryption
  allow setting/getting the per-keyslot encryption algorithm for
  LUKS2.
* crypt_keyslot_get_pbkdf
  allows getting PBKDF parameters per keyslot.
and these new defines:
* CRYPT_LOG_DEBUG_JSON (message type for JSON debug)
* CRYPT_DEBUG_JSON (log level for JSON debug)
* CRYPT_ACTIVATE_RECALCULATE (dm-integrity recalculate flag)
* CRYPT_ACTIVATE_REFRESH (new open with refresh flag)
All existing API calls should remain backward compatible.
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Optional authenticated encryption is still an experimental
  feature and can have performance problems for high-speed devices
  and devices with larger IO blocks (like RAID).
* Authenticated encryption does not use encryption for the
  dm-integrity journal. While this does not influence data
  confidentiality or integrity protection, an attacker can obtain
  some additional information from the data journal or cause the
  system to corrupt sectors after journal replay. (That corruption
  will be detected, though.)
* The LUKS2 metadata area increase is mainly needed for the new
  online reencryption, the major feature of the next release.

View File

@@ -0,0 +1,266 @@
Cryptsetup 2.2.0-rc1 Release Notes
==================================
Testing release with new experimental features and bug fixes.
Cryptsetup 2.2 introduces a new LUKS2 online reencryption extension
that allows reencryption of mounted LUKS2 devices (devices in use)
in the background.
This testing release is intended for more extensive testing of the
very complex online reencryption feature; it is expected to contain
bugs and performance issues, and some functions are limited in this
testing release.
Please do not use this testing version in production environments.
Also, use it only if you have a full data backup.
Changes since version 2.2.0-rc0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Add integritysetup support for the bitmap mode introduced in
  Linux kernel 5.2.
  Integritysetup now supports the --integrity-bitmap-mode option and
  the --bitmap-sectors-per-bit and --bitmap-flush-time command-line
  options.
  In the bitmap operation mode, if a bit in the bitmap is 1, the
  corresponding region's data and integrity tags are not
  synchronized - if the machine crashes, the unsynchronized regions
  will be recalculated.
  The bitmap mode is faster than the journal mode because the data
  is not written twice, but it is also less reliable: if data
  corruption happens when the machine crashes, it may not be
  detected.
  This mode can be used only for standalone devices, not with
  dm-crypt.
* libcryptsetup now keeps all file descriptors to the underlying
  device open during the whole lifetime of the crypt device context
  to avoid excessive scanning in udev (udev runs a scan on every
  descriptor close).
* The luksDump command now prints more info for the reencryption
  keyslot (when a device is in reencryption).
* The new --device-size parameter is supported for LUKS2
  reencryption.
  It may be used to encrypt/reencrypt only the initial part of the
  data device if the user is aware that the rest of the device is
  empty.
  Note: This change causes an API break since the last rc0 release
  (the crypt_params_reencrypt structure contains an additional field).
* The new --resume-only parameter is supported for LUKS2
  reencryption.
  This flag resumes an existing reencryption process (it does not
  start a new one).
* The repair command now tries LUKS2 reencryption recovery if
  needed.
* If the reencryption device is a file image, an interactive dialog
  now asks whether reencryption should be run safely in offline mode
  (if autodetection of active devices failed).
* Fix activation through a token where the dm-crypt volume key was
  not set through the keyring (but using the old device-mapper table
  parameter mode).
* Online reencryption can now retain all keyslots (if all
  passphrases are provided). Note that keyslot numbers will change
  in this case.
Changes since version 2.1.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~
LUKS2 online reencryption
~~~~~~~~~~~~~~~~~~~~~~~~~
Reencryption is intended to provide a reliable way to change the
volume key or the encryption algorithm while the encrypted device
is still in use.
It is based on a userspace-only approach (no kernel changes needed)
that uses the device-mapper subsystem to dynamically remap active
devices on the fly. The device is split into several segments
(encrypted by the old key, by the new key, and the so-called
hotzone, where reencryption is actively running).
The flexible LUKS2 metadata format is used to store intermediate
states (segment mappings) and both versions of keyslots (old and
new keys).
It also provides a binary area (in the unused keyslot area space)
for recovery metadata in the case of an unexpected failure during
reencryption. During reencryption, the LUKS2 header is marked with
the "online-reencryption" keyword. After reencryption is finished,
this keyword is removed, and the device is backward compatible with
all older cryptsetup tools (that support LUKS2).
The recovery supports three resilience modes:
- checksum: the default mode, where individual checksums of
  ciphertext hotzone sectors are stored, so the recovery process can
  detect which sectors were already reencrypted. It requires that
  device sector writes are atomic.
- journal: the hotzone is journaled in the binary area
  (so the data are written twice).
- none: performance mode; there is no protection
  (similar to the old offline reencryption).
These resilience modes are not available if reencryption uses a
data shift.
Note: until we have full documentation (of both the process and the
metadata), please refer to Ondrej's slides (some details are no
longer accurate):
https://okozina.fedorapeople.org/online-disk-reencryption-with-luks2-compact.pdf
The offline reencryption tool (cryptsetup-reencrypt) is still
supported for both the LUKS1 and LUKS2 formats.
Cryptsetup examples for reencryption
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The reencryption feature is integrated directly into the cryptsetup
utility as the new "reencrypt" action (command).
There are three basic modes: performing reencryption (changing an
already existing LUKS2 device), adding encryption to a plaintext
device, and removing encryption from a device (decryption).
In all cases, if the existing LUKS2 metadata contains information
about an ongoing reencryption process, a subsequent reencrypt
command continues that process until it is finished.
You can activate a device with ongoing reencryption as a standard
LUKS2 device, but the reencryption process will not continue until
the cryptsetup reencrypt command is issued.
1) Reencryption
~~~~~~~~~~~~~~~
This mode is intended to change any attribute of the data
encryption (the volume key, algorithm or sector size).
Note that authenticated encryption is not yet supported.
You can start the reencryption process by specifying a LUKS2 device
or with a detached LUKS2 header.
The code should automatically recognize whether the device is in
use (and whether it should use the online reencryption mode).
If you do not specify parameters, only the volume key is changed
(a new random key is generated).
# cryptsetup reencrypt <device> [--header <hdr>]
You can also start reencryption using active mapped device name:
# cryptsetup reencrypt --active-name <name>
You can also specify the resilience mode (none, checksum, journal)
with the --resilience=<mode> option and, for checksum mode, the
hash algorithm with --resilience-hash=<alg> (only hash algorithms
supported by the cryptographic backend are available).
The maximal size of the reencryption hotzone can be limited by the
--hotzone-size=<size> option, which applies to all reencryption
modes.
Note that for checksum and journal modes the hotzone size is also
limited by the available space in the binary keyslot area.
2) Encryption
~~~~~~~~~~~~~
This mode provides a way to encrypt a plaintext device into the
LUKS2 format. It requires either a reduction of the device size
(to make room for the LUKS2 header) or a new detached header.
# cryptsetup reencrypt <device> --encrypt --reduce-device-size <size>
Or with detached header:
# cryptsetup reencrypt <device> --encrypt --header <hdr>
3) Decryption
~~~~~~~~~~~~~
This mode removes existing LUKS2 encryption, leaving the device
with plaintext content only.
For now, only decryption with a detached header is supported.
# cryptsetup reencrypt <device> --decrypt --header <hdr>
For all three modes, you can split the process into metadata
initialization (preparing keyslots and segments without running
reencryption yet) and the data reencryption step by using the
--init-only option.
Prepare metadata:
# cryptsetup reencrypt --init-only <parameters>
Start the data processing:
# cryptsetup reencrypt <device>
Please note that due to a Linux kernel limitation, the encryption
or decryption process cannot be run entirely online - there must be
at least a short offline window where the operation adds/removes
the device-mapper crypt (LUKS2) layer.
This step should also include modification of the /etc/crypttab and
fstab UUIDs, but that is out of the scope of the cryptsetup tools.
Limitations
~~~~~~~~~~~
Most of these limitations will (hopefully) be fixed in future
versions.
* Only one active keyslot is supported (all old keyslots will be
  removed after reencryption).
* Only block devices are supported as parameters for now. As a
  workaround for images in a file, please explicitly map a loop
  device over the image and use the loop device as the parameter.
* Devices with authenticated encryption are not supported. (Later
  it will be limited by the fixed per-sector metadata; the
  per-sector metadata size cannot be changed without a new device
  format operation.)
* The reencryption uses the userspace crypto library, with a
  fallback to the kernel (if available). There can be some specific
  configurations where the fallback does not provide optimal
  performance.
* There are no translations of error messages until the final
  release (some messages can be rephrased as well).
* The repair command is not finished; recovery of an interrupted
  reencryption is performed automatically on the first device
  activation.
* Reencryption triggers too many udev scans on metadata updates (on
  closing write-enabled file descriptors). This has a negative
  performance impact on the whole reencryption and generates
  excessive I/O load on the system.
New libcryptsetup reencryption API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
libcryptsetup contains new API calls that are used to set up and
run reencryption.
Note that there can be some changes in the API implementation of
these functions and/or some new functions can be introduced in the
final cryptsetup 2.2 release.
New API symbols (see documentation in libcryptsetup.h):
* struct crypt_params_reencrypt - reencryption parameters
* crypt_reencrypt_init_by_passphrase
* crypt_reencrypt_init_by_keyring
  - functions to configure LUKS2 metadata for reencryption;
    if metadata already exists, they configure the context from
    this metadata
* crypt_reencrypt
  - runs the reencryption process (processing the data)
  - an optional callback function can be used to interrupt
    reencryption or report progress
* crypt_reencrypt_status
  - function to query LUKS2 metadata about the reencryption state
Other changes and fixes
~~~~~~~~~~~~~~~~~~~~~~~
* Add an optional global serialization lock for memory-hard PBKDF.
  (The --serialize-memory-hard-pbkdf option in cryptsetup and the
  CRYPT_ACTIVATE_SERIALIZE_MEMORY_HARD_PBKDF activation flag.)
  This is an "ugly" optional workaround for situations when multiple
  devices are being activated in parallel (like systemd crypttab
  activation).
  Instead of returning ENOMEM (no memory available), the system
  starts the out-of-memory (OOM) killer to kill processes randomly.
  Until we find a reliable way to work with memory-hard functions in
  these situations, cryptsetup provides a way to serialize
  memory-hard unlocking among parallel cryptsetup instances to work
  around this problem.
  This flag is intended only for very specific situations;
  never use it directly :-)
* Abort conversion to LUKS1 if the device uses a sector size that
  is not supported in LUKS1.
* Report an error (-ENOENT) if no LUKS keyslots are available.
  Users can now distinguish between a wrong passphrase and no
  available keyslot.
* Fix a possible segfault in detached header handling (double free).

View File

@@ -64,6 +64,8 @@ libcryptsetup_la_SOURCES = \
lib/utils_device_locking.c \
lib/utils_device_locking.h \
lib/utils_pbkdf.c \
lib/utils_storage_wrappers.c \
lib/utils_storage_wrappers.h \
lib/libdevmapper.c \
lib/utils_dm.h \
lib/volumekey.c \
@@ -97,6 +99,9 @@ libcryptsetup_la_SOURCES = \
lib/luks2/luks2_digest_pbkdf2.c \
lib/luks2/luks2_keyslot.c \
lib/luks2/luks2_keyslot_luks2.c \
lib/luks2/luks2_keyslot_reenc.c \
lib/luks2/luks2_reencrypt.c \
lib/luks2/luks2_segment.c \
lib/luks2/luks2_token_keyring.c \
lib/luks2/luks2_token.c \
lib/luks2/luks2_internal.h \

View File

@@ -1,5 +1,5 @@
/* base64.c -- Encode binary data using printable characters.
Copyright (C) 1999-2001, 2004-2006, 2009-2018 Free Software Foundation, Inc.
Copyright (C) 1999-2001, 2004-2006, 2009-2019 Free Software Foundation, Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -70,7 +70,7 @@ base64_encode_fast (const char *restrict in, size_t inlen, char *restrict out)
{
while (inlen)
{
*out++ = b64c[to_uchar (in[0]) >> 2];
*out++ = b64c[(to_uchar (in[0]) >> 2) & 0x3f];
*out++ = b64c[((to_uchar (in[0]) << 4) + (to_uchar (in[1]) >> 4)) & 0x3f];
*out++ = b64c[((to_uchar (in[1]) << 2) + (to_uchar (in[2]) >> 6)) & 0x3f];
*out++ = b64c[to_uchar (in[2]) & 0x3f];
@@ -103,7 +103,7 @@ base64_encode (const char *restrict in, size_t inlen,
while (inlen && outlen)
{
*out++ = b64c[to_uchar (in[0]) >> 2];
*out++ = b64c[(to_uchar (in[0]) >> 2) & 0x3f];
if (!--outlen)
break;
*out++ = b64c[((to_uchar (in[0]) << 4)

View File

@@ -1,5 +1,5 @@
/* base64.h -- Encode binary data using printable characters.
Copyright (C) 2004-2006, 2009-2018 Free Software Foundation, Inc.
Copyright (C) 2004-2006, 2009-2019 Free Software Foundation, Inc.
Written by Simon Josefsson.
This program is free software; you can redistribute it and/or modify

View File

@@ -1,9 +1,9 @@
/*
* cryptsetup plain device helper functions
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2010-2018 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -64,7 +64,7 @@ static int hash(const char *hash_name, size_t key_size, char *key,
#define PLAIN_HASH_LEN_MAX 256
int crypt_plain_hash(struct crypt_device *ctx __attribute__((unused)),
int crypt_plain_hash(struct crypt_device *cd,
const char *hash_name,
char *key, size_t key_size,
const char *passphrase, size_t passphrase_size)
@@ -73,7 +73,7 @@ int crypt_plain_hash(struct crypt_device *ctx __attribute__((unused)),
size_t hash_size, pad_size;
int r;
log_dbg("Plain: hashing passphrase using %s.", hash_name);
log_dbg(cd, "Plain: hashing passphrase using %s.", hash_name);
if (strlen(hash_name) >= PLAIN_HASH_LEN_MAX)
return -EINVAL;
@@ -85,11 +85,11 @@ int crypt_plain_hash(struct crypt_device *ctx __attribute__((unused)),
*s = '\0';
s++;
if (!*s || sscanf(s, "%zd", &hash_size) != 1) {
log_dbg("Hash length is not a number");
log_dbg(cd, "Hash length is not a number");
return -EINVAL;
}
if (hash_size > key_size) {
log_dbg("Hash length %zd > key length %zd",
log_dbg(cd, "Hash length %zd > key length %zd",
hash_size, key_size);
return -EINVAL;
}
@@ -102,7 +102,7 @@ int crypt_plain_hash(struct crypt_device *ctx __attribute__((unused)),
/* No hash, copy passphrase directly */
if (!strcmp(hash_name_buf, "plain")) {
if (passphrase_size < hash_size) {
log_dbg("Too short plain passphrase.");
log_dbg(cd, "Too short plain passphrase.");
return -EINVAL;
}
memcpy(key, passphrase, hash_size);

View File

@@ -4,12 +4,14 @@ libcrypto_backend_la_CFLAGS = $(AM_CFLAGS) @CRYPTO_CFLAGS@
libcrypto_backend_la_SOURCES = \
lib/crypto_backend/crypto_backend.h \
lib/crypto_backend/crypto_backend_internal.h \
lib/crypto_backend/crypto_cipher_kernel.c \
lib/crypto_backend/crypto_storage.c \
lib/crypto_backend/pbkdf_check.c \
lib/crypto_backend/crc32.c \
lib/crypto_backend/argon2_generic.c \
lib/crypto_backend/cipher_generic.c
lib/crypto_backend/cipher_generic.c \
lib/crypto_backend/cipher_check.c
if CRYPTO_BACKEND_GCRYPT
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_gcrypt.c

View File

@@ -274,6 +274,7 @@ int argon2_verify(const char *encoded, const void *pwd, const size_t pwdlen,
}
/* No field can be longer than the encoded length */
/* coverity[strlen_assign] */
max_field_len = (uint32_t)encoded_len;
ctx.saltlen = max_field_len;

View File

@@ -125,7 +125,7 @@ void NOT_OPTIMIZED secure_wipe_memory(void *v, size_t n) {
SecureZeroMemory(v, n);
#elif defined memset_s
memset_s(v, n, 0, n);
#elif defined(__OpenBSD__)
#elif defined(HAVE_EXPLICIT_BZERO)
explicit_bzero(v, n);
#else
static void *(*const volatile memset_sec)(void *, int, size_t) = &memset;
@@ -299,7 +299,7 @@ static int fill_memory_blocks_mt(argon2_instance_t *instance) {
for (r = 0; r < instance->passes; ++r) {
for (s = 0; s < ARGON2_SYNC_POINTS; ++s) {
uint32_t l;
uint32_t l, ll;
/* 2. Calling threads */
for (l = 0; l < instance->lanes; ++l) {
@@ -324,6 +324,9 @@ static int fill_memory_blocks_mt(argon2_instance_t *instance) {
sizeof(argon2_position_t));
if (argon2_thread_create(&thread[l], &fill_segment_thr,
(void *)&thr_data[l])) {
/* Wait for already running threads */
for (ll = 0; ll < l; ++ll)
argon2_thread_join(thread[ll]);
rc = ARGON2_THREAD_FAIL;
goto fail;
}

View File

@@ -1,8 +1,8 @@
/*
* Argon2 PBKDF2 library wrapper
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Milan Broz
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -20,7 +20,7 @@
*/
#include <errno.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
#if HAVE_ARGON2_H
#include <argon2.h>
#else
@@ -77,117 +77,3 @@ int argon2(const char *type, const char *password, size_t password_length,
return r;
#endif
}
#if 0
#include <stdio.h>
struct test_vector {
argon2_type type;
unsigned int memory;
unsigned int iterations;
unsigned int parallelism;
const char *password;
unsigned int password_length;
const char *salt;
unsigned int salt_length;
const char *key;
unsigned int key_length;
const char *ad;
unsigned int ad_length;
const char *output;
unsigned int output_length;
};
struct test_vector test_vectors[] = {
/* Argon2 RFC */
{
Argon2_i, 32, 3, 4,
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01", 32,
"\x02\x02\x02\x02\x02\x02\x02\x02"
"\x02\x02\x02\x02\x02\x02\x02\x02", 16,
"\x03\x03\x03\x03\x03\x03\x03\x03", 8,
"\x04\x04\x04\x04\x04\x04\x04\x04"
"\x04\x04\x04\x04", 12,
"\xc8\x14\xd9\xd1\xdc\x7f\x37\xaa"
"\x13\xf0\xd7\x7f\x24\x94\xbd\xa1"
"\xc8\xde\x6b\x01\x6d\xd3\x88\xd2"
"\x99\x52\xa4\xc4\x67\x2b\x6c\xe8", 32
},
{
Argon2_id, 32, 3, 4,
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01", 32,
"\x02\x02\x02\x02\x02\x02\x02\x02"
"\x02\x02\x02\x02\x02\x02\x02\x02", 16,
"\x03\x03\x03\x03\x03\x03\x03\x03", 8,
"\x04\x04\x04\x04\x04\x04\x04\x04"
"\x04\x04\x04\x04", 12,
"\x0d\x64\x0d\xf5\x8d\x78\x76\x6c"
"\x08\xc0\x37\xa3\x4a\x8b\x53\xc9"
"\xd0\x1e\xf0\x45\x2d\x75\xb6\x5e"
"\xb5\x25\x20\xe9\x6b\x01\xe6\x59", 32
}
};
static void printhex(const char *s, const char *buf, size_t len)
{
size_t i;
printf("%s: ", s);
for (i = 0; i < len; i++)
printf("\\x%02x", (unsigned char)buf[i]);
printf("\n");
fflush(stdout);
}
static int argon2_test_vectors(void)
{
char result[64];
int i, r;
struct test_vector *vec;
argon2_context context;
printf("Argon2 running test vectors\n");
for (i = 0; i < (sizeof(test_vectors) / sizeof(*test_vectors)); i++) {
vec = &test_vectors[i];
memset(result, 0, sizeof(result));
memset(&context, 0, sizeof(context));
context.flags = ARGON2_DEFAULT_FLAGS;
context.version = ARGON2_VERSION_NUMBER;
context.out = (uint8_t *)result;
context.outlen = (uint32_t)vec->output_length;
context.pwd = (uint8_t *)vec->password;
context.pwdlen = (uint32_t)vec->password_length;
context.salt = (uint8_t *)vec->salt;
context.saltlen = (uint32_t)vec->salt_length;
context.secret = (uint8_t *)vec->key;
context.secretlen = (uint32_t)vec->key_length;;
context.ad = (uint8_t *)vec->ad;
context.adlen = (uint32_t)vec->ad_length;
context.t_cost = vec->iterations;
context.m_cost = vec->memory;
context.lanes = vec->parallelism;
context.threads = vec->parallelism;
r = argon2_ctx(&context, vec->type);
if (r != ARGON2_OK) {
printf("Argon2 failed %i, vector %d\n", r, i);
return -EINVAL;
}
if (memcmp(result, vec->output, vec->output_length) != 0) {
printf("vector %u\n", i);
printhex(" got", result, vec->output_length);
printhex("want", vec->output, vec->output_length);
return -EINVAL;
}
}
return 0;
}
#endif

View File

@@ -0,0 +1,157 @@
/*
* Cipher performance check
*
* Copyright (C) 2018-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <errno.h>
#include <time.h>
#include "crypto_backend_internal.h"
/*
* This is not simulating storage, so using disk block causes extreme overhead.
* Let's use some fixed block size where results are more reliable...
*/
#define CIPHER_BLOCK_BYTES 65536
/*
* If the measured value is lower, encrypted buffer is probably too small
* and calculated values are not reliable.
*/
#define CIPHER_TIME_MIN_MS 0.001
/*
* The whole test depends on Linux kernel usermode crypto API for now.
* (The same implementations are used in dm-crypt though.)
*/
static int time_ms(struct timespec *start, struct timespec *end, double *ms)
{
double start_ms, end_ms;
start_ms = start->tv_sec * 1000.0 + start->tv_nsec / (1000.0 * 1000);
end_ms = end->tv_sec * 1000.0 + end->tv_nsec / (1000.0 * 1000);
*ms = end_ms - start_ms;
return 0;
}
static int cipher_perf_one(const char *name, const char *mode, char *buffer, size_t buffer_size,
const char *key, size_t key_size, const char *iv, size_t iv_size, int enc)
{
struct crypt_cipher_kernel cipher;
size_t done = 0, block = CIPHER_BLOCK_BYTES;
int r;
if (buffer_size < block)
block = buffer_size;
r = crypt_cipher_init_kernel(&cipher, name, mode, key, key_size);
if (r < 0)
return r;
while (done < buffer_size) {
if ((done + block) > buffer_size)
block = buffer_size - done;
if (enc)
r = crypt_cipher_encrypt_kernel(&cipher, &buffer[done], &buffer[done],
block, iv, iv_size);
else
r = crypt_cipher_decrypt_kernel(&cipher, &buffer[done], &buffer[done],
block, iv, iv_size);
if (r < 0)
break;
done += block;
}
crypt_cipher_destroy_kernel(&cipher);
return r;
}
static int cipher_measure(const char *name, const char *mode, char *buffer, size_t buffer_size,
const char *key, size_t key_size, const char *iv, size_t iv_size,
int encrypt, double *ms)
{
struct timespec start, end;
int r;
/*
* Using getrusage would be better here but the precision
* is not adequate, so better stick with CLOCK_MONOTONIC
*/
if (clock_gettime(CLOCK_MONOTONIC_RAW, &start) < 0)
return -EINVAL;
r = cipher_perf_one(name, mode, buffer, buffer_size, key, key_size, iv, iv_size, encrypt);
if (r < 0)
return r;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &end) < 0)
return -EINVAL;
r = time_ms(&start, &end, ms);
if (r < 0)
return r;
if (*ms < CIPHER_TIME_MIN_MS)
return -ERANGE;
return 0;
}
static double speed_mbs(unsigned long bytes, double ms)
{
double speed = bytes, s = ms / 1000.;
return speed / (1024 * 1024) / s;
}
int crypt_cipher_perf_kernel(const char *name, const char *mode, char *buffer, size_t buffer_size,
const char *key, size_t key_size, const char *iv, size_t iv_size,
double *encryption_mbs, double *decryption_mbs)
{
double ms_enc, ms_dec, ms;
int r, repeat_enc, repeat_dec;
ms_enc = 0.0;
repeat_enc = 1;
while (ms_enc < 1000.0) {
r = cipher_measure(name, mode, buffer, buffer_size, key, key_size, iv, iv_size, 1, &ms);
if (r < 0)
return r;
ms_enc += ms;
repeat_enc++;
}
ms_dec = 0.0;
repeat_dec = 1;
while (ms_dec < 1000.0) {
r = cipher_measure(name, mode, buffer, buffer_size, key, key_size, iv, iv_size, 0, &ms);
if (r < 0)
return r;
ms_dec += ms;
repeat_dec++;
}
*encryption_mbs = speed_mbs(buffer_size * repeat_enc, ms_enc);
*decryption_mbs = speed_mbs(buffer_size * repeat_dec, ms_dec);
return 0;
}

View File

@@ -1,8 +1,8 @@
/*
* Linux kernel cipher generic utilities
*
* Copyright (C) 2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2018, Milan Broz
* Copyright (C) 2018-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public

View File

@@ -42,7 +42,6 @@
#include "crypto_backend.h"
static const uint32_t crc32_tab[] = {
0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L,
0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L,
@@ -113,4 +112,3 @@ uint32_t crypt_crc32(uint32_t seed, const unsigned char *buf, size_t len)
return crc;
}

View File

@@ -1,8 +1,8 @@
/*
* crypto backend implementation
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -22,6 +22,7 @@
#define _CRYPTO_BACKEND_H
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
@@ -58,14 +59,15 @@ void crypt_hmac_destroy(struct crypt_hmac *ctx);
enum { CRYPT_RND_NORMAL = 0, CRYPT_RND_KEY = 1, CRYPT_RND_SALT = 2 };
int crypt_backend_rng(char *buffer, size_t length, int quality, int fips);
/* PBKDF*/
struct crypt_pbkdf_limits {
uint32_t min_iterations, max_iterations;
uint32_t min_memory, max_memory;
uint32_t min_parallel, max_parallel;
};
int crypt_pbkdf_get_limits(const char *kdf, struct crypt_pbkdf_limits *l);
/* PBKDF*/
int crypt_pbkdf_get_limits(const char *kdf, struct crypt_pbkdf_limits *l);
int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
@@ -79,26 +81,10 @@ int crypt_pbkdf_perf(const char *kdf, const char *hash,
uint32_t *iterations_out, uint32_t *memory_out,
int (*progress)(uint32_t time_ms, void *usrptr), void *usrptr);
#if USE_INTERNAL_PBKDF2
/* internal PBKDF2 implementation */
int pkcs5_pbkdf2(const char *hash,
const char *P, size_t Plen,
const char *S, size_t Slen,
unsigned int c,
unsigned int dkLen, char *DK,
unsigned int hash_block_size);
#endif
/* Argon2 implementation wrapper */
int argon2(const char *type, const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel);
/* CRC32 */
uint32_t crypt_crc32(uint32_t seed, const unsigned char *buf, size_t len);
/* ciphers */
/* Block ciphers */
int crypt_cipher_ivsize(const char *name, const char *mode);
int crypt_cipher_wrapped_key(const char *name, const char *mode);
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
@@ -110,20 +96,28 @@ int crypt_cipher_encrypt(struct crypt_cipher *ctx,
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx);
/* Check availability of a cipher */
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length);
/* Benchmark of kernel cipher performance */
int crypt_cipher_perf_kernel(const char *name, const char *mode, char *buffer, size_t buffer_size,
const char *key, size_t key_size, const char *iv, size_t iv_size,
double *encryption_mbs, double *decryption_mbs);
/* storage encryption wrappers */
int crypt_storage_init(struct crypt_storage **ctx, uint64_t sector_start,
/* Check availability of a cipher (in kernel only) */
int crypt_cipher_check_kernel(const char *name, const char *mode,
const char *integrity, size_t key_length);
/* Storage encryption wrappers */
int crypt_storage_init(struct crypt_storage **ctx, size_t sector_size,
const char *cipher, const char *cipher_mode,
const void *key, size_t key_length);
void crypt_storage_destroy(struct crypt_storage *ctx);
int crypt_storage_decrypt(struct crypt_storage *ctx, uint64_t sector,
size_t count, char *buffer);
int crypt_storage_encrypt(struct crypt_storage *ctx, uint64_t sector,
size_t count, char *buffer);
int crypt_storage_decrypt(struct crypt_storage *ctx, uint64_t iv_offset,
uint64_t length, char *buffer);
int crypt_storage_encrypt(struct crypt_storage *ctx, uint64_t iv_offset,
uint64_t length, char *buffer);
bool crypt_storage_kernel_only(struct crypt_storage *ctx);
/* Memzero helper (memset on stack can be optimized out) */
static inline void crypt_backend_memzero(void *s, size_t n)

View File

@@ -0,0 +1,59 @@
/*
* crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef _CRYPTO_BACKEND_INTERNAL_H
#define _CRYPTO_BACKEND_INTERNAL_H
#include "crypto_backend.h"
#if USE_INTERNAL_PBKDF2
/* internal PBKDF2 implementation */
int pkcs5_pbkdf2(const char *hash,
const char *P, size_t Plen,
const char *S, size_t Slen,
unsigned int c,
unsigned int dkLen, char *DK,
unsigned int hash_block_size);
#endif
/* Argon2 implementation wrapper */
int argon2(const char *type, const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel);
/* Block ciphers: fallback to kernel crypto API */
struct crypt_cipher_kernel {
int tfmfd;
int opfd;
};
int crypt_cipher_init_kernel(struct crypt_cipher_kernel *ctx, const char *name,
const char *mode, const void *key, size_t key_length);
int crypt_cipher_encrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
int crypt_cipher_decrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
void crypt_cipher_destroy_kernel(struct crypt_cipher_kernel *ctx);
#endif /* _CRYPTO_BACKEND_INTERNAL_H */

View File

@@ -1,8 +1,8 @@
/*
* Linux kernel userspace API crypto backend implementation (skcipher)
*
* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2018, Milan Broz
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -27,7 +27,7 @@
#include <unistd.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
#ifdef ENABLE_AF_ALG
@@ -40,11 +40,6 @@
#define SOL_ALG 279
#endif
struct crypt_cipher {
int tfmfd;
int opfd;
};
/*
* ciphers
*
@@ -52,45 +47,41 @@ struct crypt_cipher {
* ENOTSUP - AF_ALG family not available
* (but cannot check specifically for skcipher API)
*/
static int _crypt_cipher_init(struct crypt_cipher **ctx,
static int _crypt_cipher_init(struct crypt_cipher_kernel *ctx,
const void *key, size_t key_length,
struct sockaddr_alg *sa)
{
struct crypt_cipher *h;
if (!ctx)
return -EINVAL;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
h->opfd = -1;
h->tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
if (h->tfmfd < 0) {
crypt_cipher_destroy(h);
ctx->opfd = -1;
ctx->tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
if (ctx->tfmfd < 0) {
crypt_cipher_destroy_kernel(ctx);
return -ENOTSUP;
}
if (bind(h->tfmfd, (struct sockaddr *)sa, sizeof(*sa)) < 0) {
crypt_cipher_destroy(h);
if (bind(ctx->tfmfd, (struct sockaddr *)sa, sizeof(*sa)) < 0) {
crypt_cipher_destroy_kernel(ctx);
return -ENOENT;
}
if (setsockopt(h->tfmfd, SOL_ALG, ALG_SET_KEY, key, key_length) < 0) {
crypt_cipher_destroy(h);
if (setsockopt(ctx->tfmfd, SOL_ALG, ALG_SET_KEY, key, key_length) < 0) {
crypt_cipher_destroy_kernel(ctx);
return -EINVAL;
}
h->opfd = accept(h->tfmfd, NULL, 0);
if (h->opfd < 0) {
crypt_cipher_destroy(h);
ctx->opfd = accept(ctx->tfmfd, NULL, 0);
if (ctx->opfd < 0) {
crypt_cipher_destroy_kernel(ctx);
return -EINVAL;
}
*ctx = h;
return 0;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
int crypt_cipher_init_kernel(struct crypt_cipher_kernel *ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
@@ -106,10 +97,10 @@ int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
}
/* The in/out should be aligned to page boundary */
static int crypt_cipher_crypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length,
uint32_t direction)
static int _crypt_cipher_crypt(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length,
uint32_t direction)
{
int r = 0;
ssize_t len;
@@ -173,36 +164,37 @@ bad:
return r;
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
int crypt_cipher_encrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_crypt(ctx, in, out, length,
iv, iv_length, ALG_OP_ENCRYPT);
return _crypt_cipher_crypt(ctx, in, out, length,
iv, iv_length, ALG_OP_ENCRYPT);
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
int crypt_cipher_decrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_crypt(ctx, in, out, length,
iv, iv_length, ALG_OP_DECRYPT);
return _crypt_cipher_crypt(ctx, in, out, length,
iv, iv_length, ALG_OP_DECRYPT);
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
void crypt_cipher_destroy_kernel(struct crypt_cipher_kernel *ctx)
{
if (ctx->tfmfd >= 0)
close(ctx->tfmfd);
if (ctx->opfd >= 0)
close(ctx->opfd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
ctx->tfmfd = -1;
ctx->opfd = -1;
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
int crypt_cipher_check_kernel(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
struct crypt_cipher *c = NULL;
struct crypt_cipher_kernel c;
char mode_name[64], tmp_salg_name[180], *real_mode = NULL, *cipher_iv = NULL, *key;
const char *salg_type;
bool aead;
@@ -247,47 +239,45 @@ int crypt_cipher_check(const char *name, const char *mode,
if (!key)
return -ENOMEM;
r = crypt_backend_rng(key, key_length, CRYPT_RND_NORMAL, 0);
if (r < 0) {
free (key);
return r;
}
/* We cannot use RNG yet, any key works here, tweak the first part if it is split key (XTS). */
memset(key, 0xab, key_length);
*key = 0xef;
r = _crypt_cipher_init(&c, key, key_length, &sa);
if (c)
crypt_cipher_destroy(c);
crypt_cipher_destroy_kernel(&c);
free(key);
return r;
}
#else /* ENABLE_AF_ALG */
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *buffer, size_t length)
int crypt_cipher_init_kernel(struct crypt_cipher_kernel *ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
return -ENOTSUP;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
void crypt_cipher_destroy_kernel(struct crypt_cipher_kernel *ctx)
{
return;
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
int crypt_cipher_encrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return -EINVAL;
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
int crypt_cipher_decrypt_kernel(struct crypt_cipher_kernel *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return -EINVAL;
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
int crypt_cipher_check_kernel(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
/* Cannot check, expect success. */
return 0;
}
#endif

View File

@@ -1,8 +1,8 @@
/*
* GCRYPT crypto backend implementation
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -24,7 +24,7 @@
#include <errno.h>
#include <assert.h>
#include <gcrypt.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
static int crypto_backend_initialised = 0;
static int crypto_backend_secmem = 1;
@@ -43,6 +43,14 @@ struct crypt_hmac {
int hash_len;
};
struct crypt_cipher {
bool use_kernel;
union {
struct crypt_cipher_kernel kernel;
gcry_cipher_hd_t hd;
} u;
};
/*
* Test for wrong Whirlpool variant,
* Ref: http://lists.gnupg.org/pipermail/gcrypt-devel/2014-January/002889.html
@@ -366,3 +374,108 @@ int crypt_pbkdf(const char *kdf, const char *hash,
key, key_length, iterations, memory, parallel);
return -EINVAL;
}
/* Block ciphers */
static int _cipher_init(gcry_cipher_hd_t *hd, const char *name,
const char *mode, const void *buffer, size_t length)
{
int cipher_id, mode_id;
cipher_id = gcry_cipher_map_name(name);
if (cipher_id == GCRY_CIPHER_MODE_NONE)
return -ENOENT;
if (!strcmp(mode, "ecb"))
mode_id = GCRY_CIPHER_MODE_ECB;
else if (!strcmp(mode, "cbc"))
mode_id = GCRY_CIPHER_MODE_CBC;
#if HAVE_DECL_GCRY_CIPHER_MODE_XTS
else if (!strcmp(mode, "xts"))
mode_id = GCRY_CIPHER_MODE_XTS;
#endif
else
return -ENOENT;
if (gcry_cipher_open(hd, cipher_id, mode_id, 0))
return -EINVAL;
if (gcry_cipher_setkey(*hd, buffer, length)) {
gcry_cipher_close(*hd);
return -EINVAL;
}
return 0;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct crypt_cipher *h;
int r;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
if (!_cipher_init(&h->u.hd, name, mode, key, key_length)) {
h->use_kernel = false;
*ctx = h;
return 0;
}
r = crypt_cipher_init_kernel(&h->u.kernel, name, mode, key, key_length);
if (r < 0) {
free(h);
return r;
}
h->use_kernel = true;
*ctx = h;
return 0;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
{
if (ctx->use_kernel)
crypt_cipher_destroy_kernel(&ctx->u.kernel);
else
gcry_cipher_close(ctx->u.hd);
free(ctx);
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
if (ctx->use_kernel)
return crypt_cipher_encrypt_kernel(&ctx->u.kernel, in, out, length, iv, iv_length);
if (iv && gcry_cipher_setiv(ctx->u.hd, iv, iv_length))
return -EINVAL;
if (gcry_cipher_encrypt(ctx->u.hd, out, length, in, length))
return -EINVAL;
return 0;
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
if (ctx->use_kernel)
return crypt_cipher_decrypt_kernel(&ctx->u.kernel, in, out, length, iv, iv_length);
if (iv && gcry_cipher_setiv(ctx->u.hd, iv, iv_length))
return -EINVAL;
if (gcry_cipher_decrypt(ctx->u.hd, out, length, in, length))
return -EINVAL;
return 0;
}
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx)
{
return ctx->use_kernel;
}

View File

@@ -1,8 +1,8 @@
/*
* Linux kernel userspace API crypto backend implementation
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -27,7 +27,7 @@
#include <sys/socket.h>
#include <sys/utsname.h>
#include <linux/if_alg.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
/* FIXME: remove later */
#ifndef AF_ALG
@@ -48,12 +48,21 @@ struct hash_alg {
};
static struct hash_alg hash_algs[] = {
{ "sha1", "sha1", 20, 64 },
{ "sha256", "sha256", 32, 64 },
{ "sha512", "sha512", 64, 128 },
{ "ripemd160", "rmd160", 20, 64 },
{ "whirlpool", "wp512", 64, 64 },
{ NULL, NULL, 0, 0 }
{ "sha1", "sha1", 20, 64 },
{ "sha224", "sha224", 28, 64 },
{ "sha256", "sha256", 32, 64 },
{ "sha384", "sha384", 48, 128 },
{ "sha512", "sha512", 64, 128 },
{ "ripemd160", "rmd160", 20, 64 },
{ "whirlpool", "wp512", 64, 64 },
{ "sha3-224", "sha3-224", 28, 144 },
{ "sha3-256", "sha3-256", 32, 136 },
{ "sha3-384", "sha3-384", 48, 104 },
{ "sha3-512", "sha3-512", 64, 72 },
{ "stribog256","streebog256", 32, 64 },
{ "stribog512","streebog512", 64, 64 },
{ "sm3", "sm3", 32, 64 },
{ NULL, NULL, 0, 0 }
};
struct crypt_hash {
@@ -68,6 +77,10 @@ struct crypt_hmac {
int hash_len;
};
struct crypt_cipher {
struct crypt_cipher_kernel ck;
};
static int crypt_kernel_socket_init(struct sockaddr_alg *sa, int *tfmfd, int *opfd,
const void *key, size_t key_length)
{
@@ -181,7 +194,7 @@ int crypt_hash_init(struct crypt_hash **ctx, const char *name)
}
h->hash_len = ha->length;
strncpy((char *)sa.salg_name, ha->kernel_name, sizeof(sa.salg_name));
strncpy((char *)sa.salg_name, ha->kernel_name, sizeof(sa.salg_name)-1);
if (crypt_kernel_socket_init(&sa, &h->tfmfd, &h->opfd, NULL, 0) < 0) {
free(h);
@@ -333,3 +346,49 @@ int crypt_pbkdf(const char *kdf, const char *hash,
return -EINVAL;
}
/* Block ciphers */
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct crypt_cipher *h;
int r;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
r = crypt_cipher_init_kernel(&h->ck, name, mode, key, key_length);
if (r < 0) {
free(h);
return r;
}
*ctx = h;
return 0;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
{
crypt_cipher_destroy_kernel(&ctx->ck);
free(ctx);
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_encrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_decrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx)
{
return true;
}

View File

@@ -1,8 +1,8 @@
/*
* Nettle crypto backend implementation
*
* Copyright (C) 2011-2018 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2018, Milan Broz
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -23,11 +23,19 @@
#include <string.h>
#include <errno.h>
#include <nettle/sha.h>
#include <nettle/sha3.h>
#include <nettle/hmac.h>
#include <nettle/pbkdf2.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
static char *version = "Nettle";
#if HAVE_NETTLE_VERSION_H
#include <nettle/version.h>
#define VSTR(s) STR(s)
#define STR(s) #s
static const char *version = "Nettle "VSTR(NETTLE_VERSION_MAJOR)"."VSTR(NETTLE_VERSION_MINOR);
#else
static const char *version = "Nettle";
#endif
typedef void (*init_func) (void *);
typedef void (*update_func) (void *, size_t, const uint8_t *);
@@ -45,6 +53,24 @@ struct hash_alg {
set_key_func hmac_set_key;
};
/* Missing HMAC wrappers in Nettle */
#define HMAC_FCE(xxx) \
struct xhmac_##xxx##_ctx HMAC_CTX(struct xxx##_ctx); \
static void xhmac_##xxx##_set_key(struct xhmac_##xxx##_ctx *ctx, \
size_t key_length, const uint8_t *key) \
{HMAC_SET_KEY(ctx, &nettle_##xxx, key_length, key);} \
static void xhmac_##xxx##_update(struct xhmac_##xxx##_ctx *ctx, \
size_t length, const uint8_t *data) \
{xxx##_update(&ctx->state, length, data);} \
static void xhmac_##xxx##_digest(struct xhmac_##xxx##_ctx *ctx, \
size_t length, uint8_t *digest) \
{HMAC_DIGEST(ctx, &nettle_##xxx, length, digest);}
HMAC_FCE(sha3_224);
HMAC_FCE(sha3_256);
HMAC_FCE(sha3_384);
HMAC_FCE(sha3_512);
static struct hash_alg hash_algs[] = {
{ "sha1", SHA1_DIGEST_SIZE,
(init_func) sha1_init,
@@ -94,6 +120,41 @@ static struct hash_alg hash_algs[] = {
(digest_func) hmac_ripemd160_digest,
(set_key_func) hmac_ripemd160_set_key,
},
/* Nettle prior to version 3.2 has incompatible SHA3 implementation */
#if NETTLE_SHA3_FIPS202
{ "sha3-224", SHA3_224_DIGEST_SIZE,
(init_func) sha3_224_init,
(update_func) sha3_224_update,
(digest_func) sha3_224_digest,
(update_func) xhmac_sha3_224_update,
(digest_func) xhmac_sha3_224_digest,
(set_key_func) xhmac_sha3_224_set_key,
},
{ "sha3-256", SHA3_256_DIGEST_SIZE,
(init_func) sha3_256_init,
(update_func) sha3_256_update,
(digest_func) sha3_256_digest,
(update_func) xhmac_sha3_256_update,
(digest_func) xhmac_sha3_256_digest,
(set_key_func) xhmac_sha3_256_set_key,
},
{ "sha3-384", SHA3_384_DIGEST_SIZE,
(init_func) sha3_384_init,
(update_func) sha3_384_update,
(digest_func) sha3_384_digest,
(update_func) xhmac_sha3_384_update,
(digest_func) xhmac_sha3_384_digest,
(set_key_func) xhmac_sha3_384_set_key,
},
{ "sha3-512", SHA3_512_DIGEST_SIZE,
(init_func) sha3_512_init,
(update_func) sha3_512_update,
(digest_func) sha3_512_digest,
(update_func) xhmac_sha3_512_update,
(digest_func) xhmac_sha3_512_digest,
(set_key_func) xhmac_sha3_512_set_key,
},
#endif
{ NULL, 0, NULL, NULL, NULL, NULL, NULL, NULL, }
};
@@ -105,6 +166,11 @@ struct crypt_hash {
struct sha256_ctx sha256;
struct sha384_ctx sha384;
struct sha512_ctx sha512;
struct ripemd160_ctx ripemd160;
struct sha3_224_ctx sha3_224;
struct sha3_256_ctx sha3_256;
struct sha3_384_ctx sha3_384;
struct sha3_512_ctx sha3_512;
} nettle_ctx;
};
@@ -116,11 +182,20 @@ struct crypt_hmac {
struct hmac_sha256_ctx sha256;
struct hmac_sha384_ctx sha384;
struct hmac_sha512_ctx sha512;
struct hmac_ripemd160_ctx ripemd160;
struct xhmac_sha3_224_ctx sha3_224;
struct xhmac_sha3_256_ctx sha3_256;
struct xhmac_sha3_384_ctx sha3_384;
struct xhmac_sha3_512_ctx sha3_512;
} nettle_ctx;
size_t key_length;
uint8_t *key;
};
struct crypt_cipher {
struct crypt_cipher_kernel ck;
};
uint32_t crypt_backend_flags(void)
{
return 0;
@@ -299,8 +374,8 @@ int crypt_pbkdf(const char *kdf, const char *hash,
if (r < 0)
return r;
nettle_pbkdf2(&h->nettle_ctx, h->hash->nettle_hmac_update,
h->hash->nettle_hmac_digest, h->hash->length, iterations,
nettle_pbkdf2(&h->nettle_ctx, h->hash->hmac_update,
h->hash->hmac_digest, h->hash->length, iterations,
salt_length, (const uint8_t *)salt, key_length,
(uint8_t *)key);
crypt_hmac_destroy(h);
@@ -312,3 +387,49 @@ int crypt_pbkdf(const char *kdf, const char *hash,
return -EINVAL;
}
/* Block ciphers */
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct crypt_cipher *h;
int r;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
r = crypt_cipher_init_kernel(&h->ck, name, mode, key, key_length);
if (r < 0) {
free(h);
return r;
}
*ctx = h;
return 0;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
{
crypt_cipher_destroy_kernel(&ctx->ck);
free(ctx);
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_encrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_decrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx)
{
return true;
}

View File

@@ -1,8 +1,8 @@
/*
* NSS crypto backend implementation
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -23,7 +23,7 @@
#include <errno.h>
#include <nss.h>
#include <pk11pub.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
#define CONST_CAST(x) (x)(uintptr_t)
@@ -59,6 +59,10 @@ struct crypt_hmac {
const struct hash_alg *hash;
};
struct crypt_cipher {
struct crypt_cipher_kernel ck;
};
static struct hash_alg *_get_alg(const char *name)
{
int i = 0;
@@ -331,3 +335,49 @@ int crypt_pbkdf(const char *kdf, const char *hash,
return -EINVAL;
}
/* Block ciphers */
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct crypt_cipher *h;
int r;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
r = crypt_cipher_init_kernel(&h->ck, name, mode, key, key_length);
if (r < 0) {
free(h);
return r;
}
*ctx = h;
return 0;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
{
crypt_cipher_destroy_kernel(&ctx->ck);
free(ctx);
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_encrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
return crypt_cipher_decrypt_kernel(&ctx->ck, in, out, length, iv, iv_length);
}
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx)
{
return true;
}

View File

@@ -1,8 +1,8 @@
/*
* OPENSSL crypto backend implementation
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2018, Milan Broz
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -33,7 +33,7 @@
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
static int crypto_backend_initialised = 0;
@@ -49,6 +49,18 @@ struct crypt_hmac {
int hash_len;
};
struct crypt_cipher {
bool use_kernel;
union {
struct crypt_cipher_kernel kernel;
struct {
EVP_CIPHER_CTX *hd_enc;
EVP_CIPHER_CTX *hd_dec;
size_t iv_length;
} lib;
} u;
};
/*
* Compatible wrappers for OpenSSL < 1.1.0 and LibreSSL < 2.7.0
*/
@@ -324,7 +336,7 @@ int crypt_pbkdf(const char *kdf, const char *hash,
return -EINVAL;
if (!PKCS5_PBKDF2_HMAC(password, (int)password_length,
(unsigned char *)salt, (int)salt_length,
(const unsigned char *)salt, (int)salt_length,
(int)iterations, hash_id, (int)key_length, (unsigned char *)key))
return -EINVAL;
return 0;
@@ -335,3 +347,161 @@ int crypt_pbkdf(const char *kdf, const char *hash,
return -EINVAL;
}
/* Block ciphers */
static void _cipher_destroy(EVP_CIPHER_CTX **hd_enc, EVP_CIPHER_CTX **hd_dec)
{
EVP_CIPHER_CTX_free(*hd_enc);
*hd_enc = NULL;
EVP_CIPHER_CTX_free(*hd_dec);
*hd_dec = NULL;
}
static int _cipher_init(EVP_CIPHER_CTX **hd_enc, EVP_CIPHER_CTX **hd_dec, const char *name,
const char *mode, const void *key, size_t key_length, size_t *iv_length)
{
char cipher_name[256];
const EVP_CIPHER *type;
int r, key_bits;
key_bits = key_length * 8;
if (!strcmp(mode, "xts"))
key_bits /= 2;
r = snprintf(cipher_name, sizeof(cipher_name), "%s-%d-%s", name, key_bits, mode);
if (r < 0 || r >= (int)sizeof(cipher_name))
return -EINVAL;
type = EVP_get_cipherbyname(cipher_name);
if (!type)
return -ENOENT;
if (EVP_CIPHER_key_length(type) != (int)key_length)
return -EINVAL;
*hd_enc = EVP_CIPHER_CTX_new();
*hd_dec = EVP_CIPHER_CTX_new();
*iv_length = EVP_CIPHER_iv_length(type);
if (!*hd_enc || !*hd_dec)
return -EINVAL;
if (EVP_EncryptInit_ex(*hd_enc, type, NULL, key, NULL) != 1 ||
EVP_DecryptInit_ex(*hd_dec, type, NULL, key, NULL) != 1) {
_cipher_destroy(hd_enc, hd_dec);
return -EINVAL;
}
if (EVP_CIPHER_CTX_set_padding(*hd_enc, 0) != 1 ||
EVP_CIPHER_CTX_set_padding(*hd_dec, 0) != 1) {
_cipher_destroy(hd_enc, hd_dec);
return -EINVAL;
}
return 0;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct crypt_cipher *h;
int r;
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
if (!_cipher_init(&h->u.lib.hd_enc, &h->u.lib.hd_dec, name, mode, key,
key_length, &h->u.lib.iv_length)) {
h->use_kernel = false;
*ctx = h;
return 0;
}
r = crypt_cipher_init_kernel(&h->u.kernel, name, mode, key, key_length);
if (r < 0) {
free(h);
return r;
}
h->use_kernel = true;
*ctx = h;
return 0;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
{
if (ctx->use_kernel)
crypt_cipher_destroy_kernel(&ctx->u.kernel);
else
_cipher_destroy(&ctx->u.lib.hd_enc, &ctx->u.lib.hd_dec);
free(ctx);
}
static int _cipher_encrypt(struct crypt_cipher *ctx, const unsigned char *in, unsigned char *out,
int length, const unsigned char *iv, size_t iv_length)
{
int len;
if (ctx->u.lib.iv_length != iv_length)
return -EINVAL;
if (EVP_EncryptInit_ex(ctx->u.lib.hd_enc, NULL, NULL, NULL, iv) != 1)
return -EINVAL;
if (EVP_EncryptUpdate(ctx->u.lib.hd_enc, out, &len, in, length) != 1)
return -EINVAL;
if (EVP_EncryptFinal(ctx->u.lib.hd_enc, out + len, &len) != 1)
return -EINVAL;
return 0;
}
static int _cipher_decrypt(struct crypt_cipher *ctx, const unsigned char *in, unsigned char *out,
int length, const unsigned char *iv, size_t iv_length)
{
int len;
if (ctx->u.lib.iv_length != iv_length)
return -EINVAL;
if (EVP_DecryptInit_ex(ctx->u.lib.hd_dec, NULL, NULL, NULL, iv) != 1)
return -EINVAL;
if (EVP_DecryptUpdate(ctx->u.lib.hd_dec, out, &len, in, length) != 1)
return -EINVAL;
if (EVP_DecryptFinal(ctx->u.lib.hd_dec, out + len, &len) != 1)
return -EINVAL;
return 0;
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
if (ctx->use_kernel)
return crypt_cipher_encrypt_kernel(&ctx->u.kernel, in, out, length, iv, iv_length);
return _cipher_encrypt(ctx, (const unsigned char*)in,
(unsigned char *)out, length, (const unsigned char*)iv, iv_length);
}
int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length)
{
if (ctx->use_kernel)
return crypt_cipher_decrypt_kernel(&ctx->u.kernel, in, out, length, iv, iv_length);
return _cipher_decrypt(ctx, (const unsigned char*)in,
(unsigned char *)out, length, (const unsigned char*)iv, iv_length);
}
bool crypt_cipher_kernel_only(struct crypt_cipher *ctx)
{
return ctx->use_kernel;
}

View File

@@ -2,7 +2,7 @@
* Generic wrapper for storage encryption modes and Initial Vectors
* (reimplementation of some functions from Linux dm-crypt kernel)
*
* Copyright (C) 2014-2018, Milan Broz
* Copyright (C) 2014-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -25,7 +25,6 @@
#include "crypto_backend.h"
#define SECTOR_SHIFT 9
#define SECTOR_SIZE (1 << SECTOR_SHIFT)
/*
* Internal IV helper
@@ -41,7 +40,8 @@ struct crypt_sector_iv {
/* Block encryption storage context */
struct crypt_storage {
uint64_t sector_start;
unsigned sector_shift;
unsigned iv_shift;
struct crypt_cipher *cipher;
struct crypt_sector_iv cipher_iv;
};
@@ -194,7 +194,7 @@ static void crypt_sector_iv_destroy(struct crypt_sector_iv *ctx)
/* Block encryption storage wrappers */
int crypt_storage_init(struct crypt_storage **ctx,
uint64_t sector_start,
size_t sector_size,
const char *cipher,
const char *cipher_mode,
const void *key, size_t key_length)
@@ -204,6 +204,11 @@ int crypt_storage_init(struct crypt_storage **ctx,
char *cipher_iv = NULL;
int r = -EIO;
if (sector_size < (1 << SECTOR_SHIFT) ||
sector_size > (1 << (SECTOR_SHIFT + 3)) ||
sector_size & (sector_size - 1))
return -EINVAL;
s = malloc(sizeof(*s));
if (!s)
return -ENOMEM;
@@ -230,27 +235,33 @@ int crypt_storage_init(struct crypt_storage **ctx,
return r;
}
s->sector_start = sector_start;
s->sector_shift = int_log2(sector_size);
s->iv_shift = s->sector_shift - SECTOR_SHIFT;
*ctx = s;
return 0;
}
int crypt_storage_decrypt(struct crypt_storage *ctx,
uint64_t sector, size_t count,
char *buffer)
uint64_t iv_offset,
uint64_t length, char *buffer)
{
unsigned int i;
uint64_t i;
int r = 0;
for (i = 0; i < count; i++) {
r = crypt_sector_iv_generate(&ctx->cipher_iv, sector + i);
if (length & ((1 << ctx->sector_shift) - 1))
return -EINVAL;
length >>= ctx->sector_shift;
for (i = 0; i < length; i++) {
r = crypt_sector_iv_generate(&ctx->cipher_iv, iv_offset + (uint64_t)(i << ctx->iv_shift));
if (r)
break;
r = crypt_cipher_decrypt(ctx->cipher,
&buffer[i * SECTOR_SIZE],
&buffer[i * SECTOR_SIZE],
SECTOR_SIZE,
&buffer[i << ctx->sector_shift],
&buffer[i << ctx->sector_shift],
1 << ctx->sector_shift,
ctx->cipher_iv.iv,
ctx->cipher_iv.iv_size);
if (r)
@@ -261,20 +272,25 @@ int crypt_storage_decrypt(struct crypt_storage *ctx,
}
int crypt_storage_encrypt(struct crypt_storage *ctx,
uint64_t sector, size_t count,
char *buffer)
uint64_t iv_offset,
uint64_t length, char *buffer)
{
unsigned int i;
uint64_t i;
int r = 0;
for (i = 0; i < count; i++) {
r = crypt_sector_iv_generate(&ctx->cipher_iv, sector + i);
if (length & ((1 << ctx->sector_shift) - 1))
return -EINVAL;
length >>= ctx->sector_shift;
for (i = 0; i < length; i++) {
r = crypt_sector_iv_generate(&ctx->cipher_iv, iv_offset + (i << ctx->iv_shift));
if (r)
break;
r = crypt_cipher_encrypt(ctx->cipher,
&buffer[i * SECTOR_SIZE],
&buffer[i * SECTOR_SIZE],
SECTOR_SIZE,
&buffer[i << ctx->sector_shift],
&buffer[i << ctx->sector_shift],
1 << ctx->sector_shift,
ctx->cipher_iv.iv,
ctx->cipher_iv.iv_size);
if (r)
@@ -297,3 +313,8 @@ void crypt_storage_destroy(struct crypt_storage *ctx)
memset(ctx, 0, sizeof(*ctx));
free(ctx);
}
bool crypt_storage_kernel_only(struct crypt_storage *ctx)
{
return crypt_cipher_kernel_only(ctx->cipher);
}
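The decrypt/encrypt hunks above replace the hard-coded 512-byte sector with a configurable power-of-two sector size: `sector_shift = log2(sector_size)` sizes each cipher call, and `iv_shift = sector_shift - SECTOR_SHIFT` maps an encryption-block index back to 512-byte IV units. The arithmetic can be sketched standalone (a minimal illustration; `iv_for_block` and `int_log2` are local helpers, only the shift names mirror the patch):

```c
#include <stdint.h>

#define SECTOR_SHIFT 9 /* 512-byte units */

static int int_log2(unsigned v) { int r = -1; while (v) { v >>= 1; r++; } return r; }

/* IV sector number for encryption block i of a request starting at
 * 512-byte offset iv_offset; UINT64_MAX signals a misaligned length. */
uint64_t iv_for_block(uint64_t iv_offset, uint64_t length,
		      unsigned sector_size, uint64_t i)
{
	int sector_shift = int_log2(sector_size);
	int iv_shift = sector_shift - SECTOR_SHIFT;

	if (length & ((1 << sector_shift) - 1)) /* must be a multiple of sector_size */
		return UINT64_MAX;

	return iv_offset + (i << iv_shift);
}
```

With 4096-byte sectors each encryption block spans eight 512-byte IV units, so block 1 uses IV sector 8; with 512-byte sectors the mapping is the identity.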


@@ -4,8 +4,8 @@
* Copyright (C) 2004 Free Software Foundation
*
* cryptsetup related changes
-* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
-* Copyright (C) 2012-2018, Milan Broz
+* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
+* Copyright (C) 2012-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -25,7 +25,7 @@
#include <errno.h>
#include <alloca.h>
#include "crypto_backend.h"
#include "crypto_backend_internal.h"
static int hash_buf(const char *src, size_t src_len,
char *dst, size_t dst_len,
@@ -230,197 +230,3 @@ out:
return rc;
}
#if 0
#include <stdio.h>
struct test_vector {
const char *hash;
unsigned int hash_block_length;
unsigned int iterations;
const char *password;
unsigned int password_length;
const char *salt;
unsigned int salt_length;
const char *output;
unsigned int output_length;
};
struct test_vector test_vectors[] = {
/* RFC 3962 */
{
"sha1", 64, 1,
"password", 8,
"ATHENA.MIT.EDUraeburn", 21,
"\xcd\xed\xb5\x28\x1b\xb2\xf8\x01"
"\x56\x5a\x11\x22\xb2\x56\x35\x15"
"\x0a\xd1\xf7\xa0\x4b\xb9\xf3\xa3"
"\x33\xec\xc0\xe2\xe1\xf7\x08\x37", 32
}, {
"sha1", 64, 2,
"password", 8,
"ATHENA.MIT.EDUraeburn", 21,
"\x01\xdb\xee\x7f\x4a\x9e\x24\x3e"
"\x98\x8b\x62\xc7\x3c\xda\x93\x5d"
"\xa0\x53\x78\xb9\x32\x44\xec\x8f"
"\x48\xa9\x9e\x61\xad\x79\x9d\x86", 32
}, {
"sha1", 64, 1200,
"password", 8,
"ATHENA.MIT.EDUraeburn", 21,
"\x5c\x08\xeb\x61\xfd\xf7\x1e\x4e"
"\x4e\xc3\xcf\x6b\xa1\xf5\x51\x2b"
"\xa7\xe5\x2d\xdb\xc5\xe5\x14\x2f"
"\x70\x8a\x31\xe2\xe6\x2b\x1e\x13", 32
}, {
"sha1", 64, 5,
"password", 8,
"\0224VxxV4\022", 8, // 0x1234567878563412
"\xd1\xda\xa7\x86\x15\xf2\x87\xe6"
"\xa1\xc8\xb1\x20\xd7\x06\x2a\x49"
"\x3f\x98\xd2\x03\xe6\xbe\x49\xa6"
"\xad\xf4\xfa\x57\x4b\x6e\x64\xee", 32
}, {
"sha1", 64, 1200,
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", 64,
"pass phrase equals block size", 29,
"\x13\x9c\x30\xc0\x96\x6b\xc3\x2b"
"\xa5\x5f\xdb\xf2\x12\x53\x0a\xc9"
"\xc5\xec\x59\xf1\xa4\x52\xf5\xcc"
"\x9a\xd9\x40\xfe\xa0\x59\x8e\xd1", 32
}, {
"sha1", 64, 1200,
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", 65,
"pass phrase exceeds block size", 30,
"\x9c\xca\xd6\xd4\x68\x77\x0c\xd5"
"\x1b\x10\xe6\xa6\x87\x21\xbe\x61"
"\x1a\x8b\x4d\x28\x26\x01\xdb\x3b"
"\x36\xbe\x92\x46\x91\x5e\xc8\x2a", 32
}, {
"sha1", 64, 50,
"\360\235\204\236", 4, // g-clef (0xf09d849e)
"EXAMPLE.COMpianist", 18,
"\x6b\x9c\xf2\x6d\x45\x45\x5a\x43"
"\xa5\xb8\xbb\x27\x6a\x40\x3b\x39"
"\xe7\xfe\x37\xa0\xc4\x1e\x02\xc2"
"\x81\xff\x30\x69\xe1\xe9\x4f\x52", 32
}, {
/* RFC-6070 */
"sha1", 64, 1,
"password", 8,
"salt", 4,
"\x0c\x60\xc8\x0f\x96\x1f\x0e\x71\xf3\xa9"
"\xb5\x24\xaf\x60\x12\x06\x2f\xe0\x37\xa6", 20
}, {
"sha1", 64, 2,
"password", 8,
"salt", 4,
"\xea\x6c\x01\x4d\xc7\x2d\x6f\x8c\xcd\x1e"
"\xd9\x2a\xce\x1d\x41\xf0\xd8\xde\x89\x57", 20
}, {
"sha1", 64, 4096,
"password", 8,
"salt", 4,
"\x4b\x00\x79\x01\xb7\x65\x48\x9a\xbe\xad"
"\x49\xd9\x26\xf7\x21\xd0\x65\xa4\x29\xc1", 20
}, {
"sha1", 64, 16777216,
"password", 8,
"salt", 4,
"\xee\xfe\x3d\x61\xcd\x4d\xa4\xe4\xe9\x94"
"\x5b\x3d\x6b\xa2\x15\x8c\x26\x34\xe9\x84", 20
}, {
"sha1", 64, 4096,
"passwordPASSWORDpassword", 24,
"saltSALTsaltSALTsaltSALTsaltSALTsalt", 36,
"\x3d\x2e\xec\x4f\xe4\x1c\x84\x9b\x80\xc8"
"\xd8\x36\x62\xc0\xe4\x4a\x8b\x29\x1a\x96"
"\x4c\xf2\xf0\x70\x38", 25
}, {
"sha1", 64, 4096,
"pass\0word", 9,
"sa\0lt", 5,
"\x56\xfa\x6a\xa7\x55\x48\x09\x9d\xcc\x37"
"\xd7\xf0\x34\x25\xe0\xc3", 16
}, {
/* empty password test */
"sha1", 64, 2,
"", 0,
"salt", 4,
"\x13\x3a\x4c\xe8\x37\xb4\xd2\x52\x1e\xe2"
"\xbf\x03\xe1\x1c\x71\xca\x79\x4e\x07\x97", 20
}, {
/* Password exceeds block size test */
"sha256", 64, 1200,
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", 65,
"pass phrase exceeds block size", 30,
"\x22\x34\x4b\xc4\xb6\xe3\x26\x75"
"\xa8\x09\x0f\x3e\xa8\x0b\xe0\x1d"
"\x5f\x95\x12\x6a\x2c\xdd\xc3\xfa"
"\xcc\x4a\x5e\x6d\xca\x04\xec\x58", 32
}, {
"sha512", 128, 1200,
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", 129,
"pass phrase exceeds block size", 30,
"\x0f\xb2\xed\x2c\x0e\x6e\xfb\x7d"
"\x7d\x8e\xdd\x58\x01\xb4\x59\x72"
"\x99\x92\x16\x30\x5e\xa4\x36\x8d"
"\x76\x14\x80\xf3\xe3\x7a\x22\xb9", 32
}, {
"whirlpool", 64, 1200,
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", 65,
"pass phrase exceeds block size", 30,
"\x9c\x1c\x74\xf5\x88\x26\xe7\x6a"
"\x53\x58\xf4\x0c\x39\xe7\x80\x89"
"\x07\xc0\x31\x19\x9a\x50\xa2\x48"
"\xf1\xd9\xfe\x78\x64\xe5\x84\x50", 32
}
};
static void printhex(const char *s, const char *buf, size_t len)
{
size_t i;
printf("%s: ", s);
for (i = 0; i < len; i++)
printf("\\x%02x", (unsigned char)buf[i]);
printf("\n");
fflush(stdout);
}
static int pkcs5_pbkdf2_test_vectors(void)
{
char result[64];
unsigned int i, j;
struct test_vector *vec;
for (i = 0; i < (sizeof(test_vectors) / sizeof(*test_vectors)); i++) {
vec = &test_vectors[i];
for (j = 1; j <= vec->output_length; j++) {
if (pkcs5_pbkdf2(vec->hash,
vec->password, vec->password_length,
vec->salt, vec->salt_length,
vec->iterations,
j, result, vec->hash_block_length)) {
printf("pbkdf2 failed, vector %d\n", i);
return -EINVAL;
}
if (memcmp(result, vec->output, j) != 0) {
printf("vector %u\n", i);
printhex(" got", result, j);
printhex("want", vec->output, j);
return -EINVAL;
}
memset(result, 0, sizeof(result));
}
}
return 0;
}
#endif


@@ -1,8 +1,8 @@
/*
* PBKDF performance check
-* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
-* Copyright (C) 2012-2018, Milan Broz
-* Copyright (C) 2016-2018, Ondrej Mosnacek
+* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
+* Copyright (C) 2012-2019 Milan Broz
+* Copyright (C) 2016-2019 Ondrej Mosnacek
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -202,7 +202,7 @@ static int next_argon2_params(uint32_t *t_cost, uint32_t *m_cost,
static int crypt_argon2_check(const char *kdf, const char *password,
size_t password_length, const char *salt,
size_t salt_length, size_t key_length,
-uint32_t min_t_cost, uint32_t max_m_cost,
+uint32_t min_t_cost, uint32_t min_m_cost, uint32_t max_m_cost,
uint32_t parallel, uint32_t target_ms,
uint32_t *out_t_cost, uint32_t *out_m_cost,
int (*progress)(uint32_t time_ms, void *usrptr),
@@ -210,7 +210,7 @@ static int crypt_argon2_check(const char *kdf, const char *password,
{
int r = 0;
char *key = NULL;
-uint32_t t_cost, m_cost, min_m_cost = 8 * parallel;
+uint32_t t_cost, m_cost;
long ms;
long ms_atleast = (long)target_ms * BENCH_PERCENT_ATLEAST / 100;
long ms_atmost = (long)target_ms * BENCH_PERCENT_ATMOST / 100;
@@ -218,6 +218,9 @@ static int crypt_argon2_check(const char *kdf, const char *password,
if (key_length <= 0 || target_ms <= 0)
return -EINVAL;
if (min_m_cost < (parallel * 8))
min_m_cost = parallel * 8;
if (max_m_cost < min_m_cost)
return -EINVAL;
@@ -403,6 +406,7 @@ int crypt_pbkdf_perf(const char *kdf, const char *hash,
if (!kdf || !iterations_out || !memory_out)
return -EINVAL;
/* FIXME: whole limits propagation should be more clear here */
r = crypt_pbkdf_get_limits(kdf, &pbkdf_limits);
if (r < 0)
return r;
@@ -418,7 +422,9 @@ int crypt_pbkdf_perf(const char *kdf, const char *hash,
else if (!strncmp(kdf, "argon2", 6))
r = crypt_argon2_check(kdf, password, password_size,
salt, salt_size, volume_key_size,
-pbkdf_limits.min_iterations, max_memory_kb,
+pbkdf_limits.min_iterations,
+pbkdf_limits.min_memory,
+max_memory_kb,
parallel_threads, time_ms, iterations_out,
memory_out, progress, usrptr);
return r;
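The Argon2 hunks above move the memory floor out of `crypt_argon2_check`'s locals into an explicit clamp against the limits the caller now passes in. A minimal sketch of that clamp (an illustrative helper, not the library function):

```c
#include <errno.h>
#include <stdint.h>

/* Argon2 requires at least 8 KiB of memory per lane, so the benchmark's
 * lower bound is raised before it is checked against the allowed maximum. */
int clamp_argon2_m_cost(uint32_t *min_m_cost, uint32_t max_m_cost, uint32_t parallel)
{
	if (*min_m_cost < parallel * 8)
		*min_m_cost = parallel * 8;

	if (max_m_cost < *min_m_cost)
		return -EINVAL; /* no feasible m_cost in the allowed range */

	return 0;
}
```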


@@ -1,7 +1,7 @@
/*
* Integrity volume handling
*
-* Copyright (C) 2016-2018, Milan Broz
+* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -34,15 +34,15 @@ static int INTEGRITY_read_superblock(struct crypt_device *cd,
{
int devfd, r;
-devfd = device_open(device, O_RDONLY);
-if(devfd < 0) {
+devfd = device_open(cd, device, O_RDONLY);
+if(devfd < 0)
return -EINVAL;
-}
-if (read_lseek_blockwise(devfd, device_block_size(device),
+if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), sb, sizeof(*sb), offset) != sizeof(*sb) ||
memcmp(sb->magic, SB_MAGIC, sizeof(sb->magic)) ||
-(sb->version != SB_VERSION_1 && sb->version != SB_VERSION_2)) {
+(sb->version != SB_VERSION_1 && sb->version != SB_VERSION_2 &&
+sb->version != SB_VERSION_3)) {
log_std(cd, "No integrity superblock detected on %s.\n",
device_path(device));
r = -EINVAL;
@@ -55,7 +55,6 @@ static int INTEGRITY_read_superblock(struct crypt_device *cd,
r = 0;
}
close(devfd);
return r;
}
@@ -64,7 +63,7 @@ int INTEGRITY_read_sb(struct crypt_device *cd, struct crypt_params_integrity *pa
struct superblock sb;
int r;
-r = INTEGRITY_read_superblock(cd, crypt_data_device(cd), 0, &sb);
+r = INTEGRITY_read_superblock(cd, crypt_metadata_device(cd), 0, &sb);
if (r)
return r;
@@ -92,9 +91,11 @@ int INTEGRITY_dump(struct crypt_device *cd, struct device *device, uint64_t offs
log_std(cd, "sector_size %u\n", SECTOR_SIZE << sb.log2_sectors_per_block);
if (sb.version == SB_VERSION_2 && (sb.flags & SB_FLAG_RECALCULATING))
log_std(cd, "recalc_sector %" PRIu64 "\n", sb.recalc_sector);
-log_std(cd, "flags %s%s\n",
+log_std(cd, "log2_blocks_per_bitmap %u\n", sb.log2_blocks_per_bitmap_bit);
+log_std(cd, "flags %s%s%s\n",
sb.flags & SB_FLAG_HAVE_JOURNAL_MAC ? "have_journal_mac " : "",
-sb.flags & SB_FLAG_RECALCULATING ? "recalculating " : "");
+sb.flags & SB_FLAG_RECALCULATING ? "recalculating " : "",
+sb.flags & SB_FLAG_DIRTY_BITMAP ? "dirty_bitmap " : "");
return 0;
}
@@ -180,6 +181,69 @@ int INTEGRITY_tag_size(struct crypt_device *cd,
return iv_tag_size + auth_tag_size;
}
int INTEGRITY_create_dmd_device(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
struct crypt_dm_active_device *dmd,
uint32_t flags)
{
int r;
if (!dmd)
return -EINVAL;
*dmd = (struct crypt_dm_active_device) {
.flags = flags,
};
r = INTEGRITY_data_sectors(cd, crypt_metadata_device(cd),
crypt_get_data_offset(cd) * SECTOR_SIZE, &dmd->size);
if (r < 0)
return r;
return dm_integrity_target_set(&dmd->segment, 0, dmd->size,
crypt_metadata_device(cd), crypt_data_device(cd),
crypt_get_integrity_tag_size(cd), crypt_get_data_offset(cd),
crypt_get_sector_size(cd), vk, journal_crypt_key,
journal_mac_key, params);
}
int INTEGRITY_activate_dmd_device(struct crypt_device *cd,
const char *name,
struct crypt_dm_active_device *dmd)
{
int r;
uint32_t dmi_flags;
struct dm_target *tgt = &dmd->segment;
if (!single_segment(dmd) || tgt->type != DM_INTEGRITY)
return -EINVAL;
log_dbg(cd, "Trying to activate INTEGRITY device on top of %s, using name %s, tag size %d, provided sectors %" PRIu64".",
device_path(tgt->data_device), name, tgt->u.integrity.tag_size, dmd->size);
r = device_block_adjust(cd, tgt->data_device, DEV_EXCL,
tgt->u.integrity.offset, NULL, &dmd->flags);
if (r)
return r;
if (tgt->u.integrity.meta_device) {
r = device_block_adjust(cd, tgt->u.integrity.meta_device, DEV_EXCL, 0, NULL, NULL);
if (r)
return r;
}
r = dm_create_device(cd, name, "INTEGRITY", dmd);
if (r < 0 && (dm_flags(cd, DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
return -ENOTSUP;
}
return r;
}
int INTEGRITY_activate(struct crypt_device *cd,
const char *name,
const struct crypt_params_integrity *params,
@@ -188,52 +252,14 @@ int INTEGRITY_activate(struct crypt_device *cd,
struct volume_key *journal_mac_key,
uint32_t flags)
{
-uint32_t dmi_flags;
-struct crypt_dm_active_device dmdi = {
-.target = DM_INTEGRITY,
-.data_device = crypt_data_device(cd),
-.flags = flags,
-.u.integrity = {
-.offset = crypt_get_data_offset(cd),
-.tag_size = crypt_get_integrity_tag_size(cd),
-.sector_size = crypt_get_sector_size(cd),
-.vk = vk,
-.journal_crypt_key = journal_crypt_key,
-.journal_integrity_key = journal_mac_key,
-}
-};
-int r;
+struct crypt_dm_active_device dmd = {};
+int r = INTEGRITY_create_dmd_device(cd, params, vk, journal_crypt_key, journal_mac_key, &dmd, flags);
-r = INTEGRITY_data_sectors(cd, dmdi.data_device,
-dmdi.u.integrity.offset * SECTOR_SIZE, &dmdi.size);
if (r < 0)
return r;
-if (params) {
-dmdi.u.integrity.journal_size = params->journal_size;
-dmdi.u.integrity.journal_watermark = params->journal_watermark;
-dmdi.u.integrity.journal_commit_time = params->journal_commit_time;
-dmdi.u.integrity.interleave_sectors = params->interleave_sectors;
-dmdi.u.integrity.buffer_sectors = params->buffer_sectors;
-dmdi.u.integrity.integrity = params->integrity;
-dmdi.u.integrity.journal_integrity = params->journal_integrity;
-dmdi.u.integrity.journal_crypt = params->journal_crypt;
-}
-log_dbg("Trying to activate INTEGRITY device on top of %s, using name %s, tag size %d, provided sectors %" PRIu64".",
-device_path(dmdi.data_device), name, dmdi.u.integrity.tag_size, dmdi.size);
-r = device_block_adjust(cd, dmdi.data_device, DEV_EXCL,
-dmdi.u.integrity.offset, NULL, &dmdi.flags);
-if (r)
-return r;
-r = dm_create_device(cd, name, "INTEGRITY", &dmdi, 0);
-if (r < 0 && (dm_flags(DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
-log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
-return -ENOTSUP;
-}
+r = INTEGRITY_activate_dmd_device(cd, name, &dmd);
+dm_targets_free(cd, &dmd);
return r;
}
@@ -245,55 +271,56 @@ int INTEGRITY_format(struct crypt_device *cd,
uint32_t dmi_flags;
char tmp_name[64], tmp_uuid[40];
struct crypt_dm_active_device dmdi = {
-.target = DM_INTEGRITY,
-.data_device = crypt_data_device(cd),
.size = 8,
.flags = CRYPT_ACTIVATE_PRIVATE, /* We always create journal but it can be unused later */
-.u.integrity = {
-.offset = crypt_get_data_offset(cd),
-.tag_size = crypt_get_integrity_tag_size(cd),
-.sector_size = crypt_get_sector_size(cd),
-.journal_crypt_key = journal_crypt_key,
-.journal_integrity_key = journal_mac_key,
-}
};
+struct dm_target *tgt = &dmdi.segment;
int r;
uuid_t tmp_uuid_bin;
-if (params) {
-dmdi.u.integrity.journal_size = params->journal_size;
-dmdi.u.integrity.journal_watermark = params->journal_watermark;
-dmdi.u.integrity.journal_commit_time = params->journal_commit_time;
-dmdi.u.integrity.interleave_sectors = params->interleave_sectors;
-dmdi.u.integrity.buffer_sectors = params->buffer_sectors;
-dmdi.u.integrity.journal_integrity = params->journal_integrity;
-dmdi.u.integrity.journal_crypt = params->journal_crypt;
-dmdi.u.integrity.integrity = params->integrity;
-}
+struct volume_key *vk = NULL;
uuid_generate(tmp_uuid_bin);
uuid_unparse(tmp_uuid_bin, tmp_uuid);
snprintf(tmp_name, sizeof(tmp_name), "temporary-cryptsetup-%s", tmp_uuid);
-log_dbg("Trying to format INTEGRITY device on top of %s, tmp name %s, tag size %d.",
-device_path(dmdi.data_device), tmp_name, dmdi.u.integrity.tag_size);
-r = device_block_adjust(cd, dmdi.data_device, DEV_EXCL, dmdi.u.integrity.offset, NULL, NULL);
-if (r < 0 && (dm_flags(DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
-log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
-return -ENOTSUP;
-}
-if (r)
-return r;
/* There is no data area, we can actually use fake zeroed key */
if (params && params->integrity_key_size)
-dmdi.u.integrity.vk = crypt_alloc_volume_key(params->integrity_key_size, NULL);
+vk = crypt_alloc_volume_key(params->integrity_key_size, NULL);
-r = dm_create_device(cd, tmp_name, "INTEGRITY", &dmdi, 0);
+r = dm_integrity_target_set(tgt, 0, dmdi.size, crypt_metadata_device(cd),
+crypt_data_device(cd), crypt_get_integrity_tag_size(cd),
+crypt_get_data_offset(cd), crypt_get_sector_size(cd), vk,
+journal_crypt_key, journal_mac_key, params);
+if (r < 0) {
+crypt_free_volume_key(vk);
+return r;
+}
-crypt_free_volume_key(dmdi.u.integrity.vk);
+log_dbg(cd, "Trying to format INTEGRITY device on top of %s, tmp name %s, tag size %d.",
+device_path(tgt->data_device), tmp_name, tgt->u.integrity.tag_size);
+r = device_block_adjust(cd, tgt->data_device, DEV_EXCL, tgt->u.integrity.offset, NULL, NULL);
+if (r < 0 && (dm_flags(cd, DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
+log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
+r = -ENOTSUP;
+}
+if (r) {
+dm_targets_free(cd, &dmdi);
+return r;
+}
+if (tgt->u.integrity.meta_device) {
+r = device_block_adjust(cd, tgt->u.integrity.meta_device, DEV_EXCL, 0, NULL, NULL);
+if (r) {
+dm_targets_free(cd, &dmdi);
+return r;
+}
+}
+r = dm_create_device(cd, tmp_name, "INTEGRITY", &dmdi);
+crypt_free_volume_key(vk);
+dm_targets_free(cd, &dmdi);
if (r)
return r;


@@ -1,7 +1,7 @@
/*
* Integrity header definition
*
-* Copyright (C) 2016-2018, Milan Broz
+* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -27,14 +27,17 @@ struct crypt_device;
struct device;
struct crypt_params_integrity;
struct volume_key;
struct crypt_dm_active_device;
/* dm-integrity helper */
#define SB_MAGIC "integrt"
#define SB_VERSION_1 1
#define SB_VERSION_2 2
#define SB_VERSION_3 3
#define SB_FLAG_HAVE_JOURNAL_MAC (1 << 0)
#define SB_FLAG_RECALCULATING (1 << 1) /* V2 only */
#define SB_FLAG_DIRTY_BITMAP (1 << 2) /* V3 only */
struct superblock {
uint8_t magic[8];
@@ -45,8 +48,9 @@ struct superblock {
uint64_t provided_data_sectors;
uint32_t flags;
uint8_t log2_sectors_per_block;
-uint8_t pad[3];
-uint64_t recalc_sector; /* V2 only */
+uint8_t log2_blocks_per_bitmap_bit; /* V3 only */
+uint8_t pad[2];
+uint64_t recalc_sector; /* V2 only */
} __attribute__ ((packed));
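The superblock gains a V3 version, a bitmap field, and a dirty-bitmap flag. A small validity sketch (constants copied from the diff; the check that the dirty-bitmap flag only appears on V3 superblocks is an added illustration, not upstream code):

```c
#include <stdint.h>

#define SB_VERSION_1 1
#define SB_VERSION_2 2
#define SB_VERSION_3 3
#define SB_FLAG_DIRTY_BITMAP (1 << 2) /* V3 only */

/* Returns 1 when the version is one this code understands and the
 * dirty-bitmap flag only appears on a V3 superblock. */
int sb_version_supported(uint8_t version, uint32_t flags)
{
	if (version != SB_VERSION_1 && version != SB_VERSION_2 &&
	    version != SB_VERSION_3)
		return 0;
	if ((flags & SB_FLAG_DIRTY_BITMAP) && version != SB_VERSION_3)
		return 0;
	return 1;
}
```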
int INTEGRITY_read_sb(struct crypt_device *cd, struct crypt_params_integrity *params);
@@ -75,4 +79,16 @@ int INTEGRITY_activate(struct crypt_device *cd,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
uint32_t flags);
int INTEGRITY_create_dmd_device(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
struct crypt_dm_active_device *dmd,
uint32_t flags);
int INTEGRITY_activate_dmd_device(struct crypt_device *cd,
const char *name,
struct crypt_dm_active_device *dmd);
#endif


@@ -1,10 +1,10 @@
/*
* libcryptsetup - cryptsetup library internal
*
-* Copyright (C) 2004, Jana Saout <jana@saout.de>
-* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
-* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
-* Copyright (C) 2009-2018, Milan Broz
+* Copyright (C) 2004 Jana Saout <jana@saout.de>
+* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
+* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
+* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -40,6 +40,7 @@
#include "utils_keyring.h"
#include "utils_io.h"
#include "crypto_backend.h"
#include "utils_storage_wrappers.h"
#include "libcryptsetup.h"
@@ -65,11 +66,21 @@
# define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#endif
#define MOVE_REF(x, y) \
do { \
typeof (x) *_px = &(x), *_py = &(y); \
*_px = *_py; \
*_py = NULL; \
} while (0)
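`MOVE_REF` above is a single-owner move: the destination takes the pointer and the source is NULLed so only one cleanup path frees it. A usage sketch (relies on the GNU `typeof` extension, as the macro itself does; `take` is an illustrative helper):

```c
#include <stddef.h>

#define MOVE_REF(x, y) \
	do { \
		__typeof__(x) *_px = &(x), *_py = &(y); \
		*_px = *_py; \
		*_py = NULL; \
	} while (0)

/* Move ownership of a buffer pointer from *src to *dst. */
static char *take(char **dst, char **src)
{
	MOVE_REF(*dst, *src); /* *src is NULL afterwards */
	return *dst;
}
```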
struct crypt_device;
struct luks2_reenc_context;
struct volume_key {
int id;
size_t keylength;
const char *key_description;
struct volume_key *next;
char key[];
};
@@ -77,6 +88,11 @@ struct volume_key *crypt_alloc_volume_key(size_t keylength, const char *key);
struct volume_key *crypt_generate_volume_key(struct crypt_device *cd, size_t keylength);
void crypt_free_volume_key(struct volume_key *vk);
int crypt_volume_key_set_description(struct volume_key *key, const char *key_description);
void crypt_volume_key_set_id(struct volume_key *vk, int id);
int crypt_volume_key_get_id(const struct volume_key *vk);
void crypt_volume_key_add_next(struct volume_key **vks, struct volume_key *vk);
struct volume_key *crypt_volume_key_next(struct volume_key *vk);
struct volume_key *crypt_volume_key_by_id(struct volume_key *vk, int id);
struct crypt_pbkdf_type *crypt_get_pbkdf(struct crypt_device *cd);
int init_pbkdf_type(struct crypt_device *cd,
@@ -87,38 +103,47 @@ int verify_pbkdf_params(struct crypt_device *cd,
int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
struct crypt_pbkdf_type *pbkdf,
size_t volume_key_size);
const char *crypt_get_cipher_spec(struct crypt_device *cd);
/* Device backend */
struct device;
-int device_alloc(struct device **device, const char *path);
+int device_alloc(struct crypt_device *cd, struct device **device, const char *path);
int device_alloc_no_check(struct device **device, const char *path);
-void device_free(struct device *device);
+void device_close(struct crypt_device *cd, struct device *device);
+void device_free(struct crypt_device *cd, struct device *device);
const char *device_path(const struct device *device);
const char *device_dm_name(const struct device *device);
const char *device_block_path(const struct device *device);
-void device_topology_alignment(struct device *device,
-unsigned long *required_alignment, /* bytes */
-unsigned long *alignment_offset, /* bytes */
-unsigned long default_alignment);
-size_t device_block_size(struct device *device);
+void device_topology_alignment(struct crypt_device *cd,
+struct device *device,
+unsigned long *required_alignment, /* bytes */
+unsigned long *alignment_offset, /* bytes */
+unsigned long default_alignment);
+size_t device_block_size(struct crypt_device *cd, struct device *device);
int device_read_ahead(struct device *device, uint32_t *read_ahead);
int device_size(struct device *device, uint64_t *size);
-int device_open(struct device *device, int flags);
+int device_open(struct crypt_device *cd, struct device *device, int flags);
int device_open_excl(struct crypt_device *cd, struct device *device, int flags);
void device_release_excl(struct crypt_device *cd, struct device *device);
void device_disable_direct_io(struct device *device);
int device_is_identical(struct device *device1, struct device *device2);
int device_is_rotational(struct device *device);
size_t device_alignment(struct device *device);
int device_direct_io(const struct device *device);
int device_fallocate(struct device *device, uint64_t size);
-void device_sync(struct device *device, int devfd);
+void device_sync(struct crypt_device *cd, struct device *device);
int device_check_size(struct crypt_device *cd,
struct device *device,
uint64_t req_offset, int falloc);
-int device_open_locked(struct device *device, int flags);
+int device_open_locked(struct crypt_device *cd, struct device *device, int flags);
int device_read_lock(struct crypt_device *cd, struct device *device);
int device_write_lock(struct crypt_device *cd, struct device *device);
-void device_read_unlock(struct device *device);
-void device_write_unlock(struct device *device);
+void device_read_unlock(struct crypt_device *cd, struct device *device);
+void device_write_unlock(struct crypt_device *cd, struct device *device);
bool device_is_locked(struct device *device);
-enum devcheck { DEV_OK = 0, DEV_EXCL = 1, DEV_SHARED = 2 };
+enum devcheck { DEV_OK = 0, DEV_EXCL = 1 };
int device_check_access(struct crypt_device *cd,
struct device *device,
enum devcheck device_check);
@@ -130,6 +155,13 @@ int device_block_adjust(struct crypt_device *cd,
uint32_t *flags);
size_t size_round_up(size_t size, size_t block);
int create_or_reload_device(struct crypt_device *cd, const char *name,
const char *type, struct crypt_dm_active_device *dmd);
int create_or_reload_device_with_integrity(struct crypt_device *cd, const char *name,
const char *type, struct crypt_dm_active_device *dmd,
struct crypt_dm_active_device *dmdi);
/* Receive backend devices from context helpers */
struct device *crypt_metadata_device(struct crypt_device *cd);
struct device *crypt_data_device(struct crypt_device *cd);
@@ -144,6 +176,7 @@ char *crypt_get_base_device(const char *dev_path);
uint64_t crypt_dev_partition_offset(const char *dev_path);
int lookup_by_disk_id(const char *dm_uuid);
int lookup_by_sysfs_uuid_field(const char *dm_uuid, size_t max_len);
int crypt_uuid_cmp(const char *dm_uuid, const char *hdr_uuid);
size_t crypt_getpagesize(void);
unsigned crypt_cpusonline(void);
@@ -152,7 +185,7 @@ uint64_t crypt_getphysmemory_kb(void);
int init_crypto(struct crypt_device *ctx);
void logger(struct crypt_device *cd, int level, const char *file, int line, const char *format, ...) __attribute__ ((format (printf, 5, 6)));
-#define log_dbg(x...) logger(NULL, CRYPT_LOG_DEBUG, __FILE__, __LINE__, x)
+#define log_dbg(c, x...) logger(c, CRYPT_LOG_DEBUG, __FILE__, __LINE__, x)
#define log_std(c, x...) logger(c, CRYPT_LOG_NORMAL, __FILE__, __LINE__, x)
#define log_verbose(c, x...) logger(c, CRYPT_LOG_VERBOSE, __FILE__, __LINE__, x)
#define log_err(c, x...) logger(c, CRYPT_LOG_ERROR, __FILE__, __LINE__, x)
@@ -169,7 +202,7 @@ int crypt_random_get(struct crypt_device *ctx, char *buf, size_t len, int qualit
void crypt_random_exit(void);
int crypt_random_default_key_rng(void);
-int crypt_plain_hash(struct crypt_device *ctx,
+int crypt_plain_hash(struct crypt_device *cd,
const char *hash_name,
char *key, size_t key_size,
const char *passphrase, size_t passphrase_size);
@@ -180,6 +213,11 @@ int PLAIN_activate(struct crypt_device *cd,
uint32_t flags);
void *crypt_get_hdr(struct crypt_device *cd, const char *type);
void crypt_set_reenc_context(struct crypt_device *cd, struct luks2_reenc_context *rh);
struct luks2_reenc_context *crypt_get_reenc_context(struct crypt_device *cd);
int onlyLUKS2(struct crypt_device *cd);
int onlyLUKS2mask(struct crypt_device *cd, uint32_t mask);
int crypt_wipe_device(struct crypt_device *cd,
struct device *device,
@@ -198,8 +236,9 @@ int crypt_get_integrity_tag_size(struct crypt_device *cd);
int crypt_key_in_keyring(struct crypt_device *cd);
void crypt_set_key_in_keyring(struct crypt_device *cd, unsigned key_in_keyring);
int crypt_volume_key_load_in_keyring(struct crypt_device *cd, struct volume_key *vk);
-int crypt_use_keyring_for_vk(const struct crypt_device *cd);
-void crypt_drop_keyring_key(struct crypt_device *cd, const char *key_description);
+int crypt_use_keyring_for_vk(struct crypt_device *cd);
+void crypt_drop_keyring_key_by_description(struct crypt_device *cd, const char *key_description, key_type_t ktype);
+void crypt_drop_keyring_key(struct crypt_device *cd, struct volume_key *vks);
static inline uint64_t version(uint16_t major, uint16_t minor, uint16_t patch, uint16_t release)
{
@@ -208,4 +247,7 @@ static inline uint64_t version(uint16_t major, uint16_t minor, uint16_t patch, u
int kernel_version(uint64_t *kversion);
int crypt_serialize_lock(struct crypt_device *cd);
void crypt_serialize_unlock(struct crypt_device *cd);
#endif /* INTERNAL_H */


@@ -1,10 +1,10 @@
/*
* libcryptsetup - cryptsetup library
*
-* Copyright (C) 2004, Jana Saout <jana@saout.de>
-* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
-* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
-* Copyright (C) 2009-2018, Milan Broz
+* Copyright (C) 2004 Jana Saout <jana@saout.de>
+* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
+* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
+* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -64,6 +64,23 @@ struct crypt_device; /* crypt device handle */
*/
int crypt_init(struct crypt_device **cd, const char *device);
/**
* Initialize crypt device handle with optional data device and check
* if devices exist.
*
* @param cd Returns pointer to crypt device handle
* @param device Path to the backing device or detached header.
* @param data_device Path to the data device or @e NULL.
*
* @return @e 0 on success or negative errno value otherwise.
*
* @note Note that logging is not initialized here, possible messages use
* default log function.
*/
int crypt_init_data_device(struct crypt_device **cd,
const char *device,
const char *data_device);
/**
* Initialize crypt device handle from provided active device name,
* and, optionally, from separate metadata (header) device
@@ -131,8 +148,29 @@ void crypt_set_confirm_callback(struct crypt_device *cd,
* @param cd crypt device handle
* @param device path to device
*
* @returns 0 on success or negative errno value otherwise.
*/
int crypt_set_data_device(struct crypt_device *cd, const char *device);
/**
* Set data device offset in 512-byte sectors.
* Used for LUKS.
This function is a replacement for the data alignment fields in the LUKS param struct.
If set to 0 (the default), the old behaviour is preserved.
* This value is reset on @link crypt_load @endlink.
*
* @param cd crypt device handle
@param data_offset data offset in 512-byte sectors
*
* @returns 0 on success or negative errno value otherwise.
*
* @note Data offset must be aligned to multiple of 8 (alignment to 4096-byte sectors)
* and must be big enough to accommodate the whole LUKS header with all keyslots.
* @note Data offset is enforced by this function, device topology
* information is no longer used after calling this function.
*/
int crypt_set_data_offset(struct crypt_device *cd, uint64_t data_offset);
/** @} */
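The note on crypt_set_data_offset pins the offset to multiples of 8 sectors (4096-byte alignment). That rule reduces to a single modulus check (an illustrative helper, not part of the library API):

```c
#include <stdint.h>

/* data_offset is given in 512-byte sectors; 8 sectors = 4096 bytes. */
int data_offset_aligned(uint64_t offset_sectors)
{
	return (offset_sectors % 8) == 0;
}
```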
/**
@@ -151,6 +189,8 @@ int crypt_set_data_device(struct crypt_device *cd, const char *device);
#define CRYPT_LOG_VERBOSE 2
/** debug log level - always on stdout */
#define CRYPT_LOG_DEBUG -1
/** debug log level - additional JSON output (for LUKS2) */
#define CRYPT_LOG_DEBUG_JSON -2
/**
* Set log function.
@@ -246,6 +286,16 @@ struct crypt_pbkdf_type {
int crypt_set_pbkdf_type(struct crypt_device *cd,
const struct crypt_pbkdf_type *pbkdf);
/**
* Get PBKDF (Password-Based Key Derivation Algorithm) parameters.
*
* @param pbkdf_type type of PBKDF
*
* @return struct on success or NULL value otherwise.
*
*/
const struct crypt_pbkdf_type *crypt_get_pbkdf_type_params(const char *pbkdf_type);
/**
* Get default PBKDF (Password-Based Key Derivation Algorithm) settings for keyslots.
* Works only with LUKS device handles (both versions).
@@ -307,6 +357,39 @@ int crypt_memory_lock(struct crypt_device *cd, int lock);
* In current version locking can be only switched off and cannot be switched on later.
*/
int crypt_metadata_locking(struct crypt_device *cd, int enable);
/**
* Set metadata header area sizes. This applies only to LUKS2.
These values limit the amount of metadata and the number of supported keyslots.
*
* @param cd crypt device handle, can be @e NULL
* @param metadata_size size in bytes of JSON area + 4k binary header
* @param keyslots_size size in bytes of binary keyslots area
*
* @returns @e 0 on success or negative errno value otherwise.
*
* @note The metadata area is stored twice and both copies contain 4k binary header.
Only 16, 32, 64, 128, 256, 512, 1024, 2048 and 4096 kB values are allowed (see LUKS2 specification).
@note The keyslots area size must be a multiple of 4k, with a maximum of 128 MB.
*/
int crypt_set_metadata_size(struct crypt_device *cd,
uint64_t metadata_size,
uint64_t keyslots_size);
/**
* Get metadata header area sizes. This applies only to LUKS2.
These values limit the amount of metadata and the number of supported keyslots.
*
* @param cd crypt device handle
* @param metadata_size size in bytes of JSON area + 4k binary header
* @param keyslots_size size in bytes of binary keyslots area
*
* @returns @e 0 on success or negative errno value otherwise.
*/
int crypt_get_metadata_size(struct crypt_device *cd,
uint64_t *metadata_size,
uint64_t *keyslots_size);
/** @} */
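The size constraints documented in the two notes above can be condensed into one check (a sketch of the documented limits, not the library's internal validator):

```c
#include <stdint.h>

/* metadata_size: 4 KiB binary header + JSON area, one of the allowed
 * power-of-two totals between 16 KiB and 4 MiB; keyslots_size: a
 * multiple of 4 KiB, at most 128 MiB. */
int luks2_metadata_sizes_valid(uint64_t metadata_size, uint64_t keyslots_size)
{
	if (metadata_size < 16 * 1024 || metadata_size > 4096 * 1024)
		return 0;
	if (metadata_size & (metadata_size - 1)) /* must be a power of two */
		return 0;
	if (!keyslots_size || (keyslots_size % 4096) ||
	    keyslots_size > 128 * 1024 * 1024)
		return 0;
	return 1;
}
```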
/**
@@ -343,6 +426,13 @@ int crypt_metadata_locking(struct crypt_device *cd, int enable);
*/
const char *crypt_get_type(struct crypt_device *cd);
/**
* Get device default LUKS type
*
* @return string according to device type (CRYPT_LUKS1 or CRYPT_LUKS2).
*/
const char *crypt_get_default_type(void);
/**
*
* Structure used as parameter for PLAIN device type.
@@ -456,11 +546,15 @@ struct crypt_params_tcrypt {
*
* @see crypt_format, crypt_load
*
* @note In bitmap tracking mode, the journal is implicitly disabled.
* As an ugly workaround for compatibility, journal_watermark is overloaded
* to mean sectors-per-bit (in 512-byte sectors) and journal_commit_time means bitmap flush time.
* All other journal parameters are not applied in the bitmap mode.
*/
struct crypt_params_integrity {
uint64_t journal_size; /**< size of journal in bytes */
unsigned int journal_watermark; /**< journal flush watermark in percents */
unsigned int journal_commit_time; /**< journal commit time in ms */
unsigned int journal_watermark; /**< journal flush watermark in percents; in bitmap mode sectors-per-bit */
unsigned int journal_commit_time; /**< journal commit time (or bitmap flush time) in ms */
uint32_t interleave_sectors; /**< number of interleave sectors (power of two) */
uint32_t tag_size; /**< tag size per-sector in bytes */
uint32_t sector_size; /**< sector size in bytes */
@@ -867,6 +961,9 @@ int crypt_keyslot_add_by_volume_key(struct crypt_device *cd,
/** create keyslot with new volume key and assign it to current dm-crypt segment */
#define CRYPT_VOLUME_KEY_SET (1 << 1)
/** Assign key to first matching digest before creating new digest */
#define CRYPT_VOLUME_KEY_DIGEST_REUSE (1 << 2)
/**
* Add key slot using provided key.
*
@@ -960,6 +1057,14 @@ int crypt_keyslot_destroy(struct crypt_device *cd, int keyslot);
#define CRYPT_ACTIVATE_CHECK_AT_MOST_ONCE (1 << 15)
/** allow activation check including unbound keyslots (keyslots without segments) */
#define CRYPT_ACTIVATE_ALLOW_UNBOUND_KEY (1 << 16)
/** dm-integrity: activate automatic recalculation */
#define CRYPT_ACTIVATE_RECALCULATE (1 << 17)
/** reactivate existing and update flags, input only */
#define CRYPT_ACTIVATE_REFRESH (1 << 18)
/** Use global lock to serialize memory hard KDF on activation (OOM workaround) */
#define CRYPT_ACTIVATE_SERIALIZE_MEMORY_HARD_PBKDF (1 << 19)
/** dm-integrity: direct writes, use bitmap to track dirty sectors */
#define CRYPT_ACTIVATE_NO_JOURNAL_BITMAP (1 << 20)
/**
* Active device runtime attributes
@@ -1009,6 +1114,8 @@ uint64_t crypt_get_active_integrity_failures(struct crypt_device *cd,
*/
/** Unfinished offline reencryption */
#define CRYPT_REQUIREMENT_OFFLINE_REENCRYPT (1 << 0)
/** Online reencryption in-progress */
#define CRYPT_REQUIREMENT_ONLINE_REENCRYPT (1 << 1)
/** unknown requirement in header (output only) */
#define CRYPT_REQUIREMENT_UNKNOWN (1 << 31)
@@ -1317,6 +1424,16 @@ const char *crypt_get_uuid(struct crypt_device *cd);
*/
const char *crypt_get_device_name(struct crypt_device *cd);
/**
* Get path to detached metadata device or @e NULL if it is not detached.
*
* @param cd crypt device handle
*
* @return path to underlying device name
*
*/
const char *crypt_get_metadata_device_name(struct crypt_device *cd);
/**
* Get device offset in 512-bytes sectors where real data starts (on underlying device).
*
@@ -1528,7 +1645,7 @@ int crypt_keyslot_area(struct crypt_device *cd,
uint64_t *length);
/**
* Get size (in bytes) of key for particular keyslot.
* Get size (in bytes) of stored key in particular keyslot.
* Use for LUKS2 unbound keyslots, for other keyslots it is the same as @ref crypt_get_volume_key_size
*
* @param cd crypt device handle
@@ -1539,6 +1656,50 @@ int crypt_keyslot_area(struct crypt_device *cd,
*/
int crypt_keyslot_get_key_size(struct crypt_device *cd, int keyslot);
/**
* Get cipher and key size for keyslot encryption.
* Use for LUKS2 keyslots, which can use a different encryption type than the data encryption.
* The returned parameters will be used for next keyslot operations.
*
* @param cd crypt device handle
* @param keyslot keyslot number or CRYPT_ANY_SLOT for default
* @param key_size encryption key size (in bytes)
*
* @return cipher specification on success or @e NULL.
*
* @note This is the encryption of the keyslot itself, not the data encryption algorithm!
*/
const char *crypt_keyslot_get_encryption(struct crypt_device *cd, int keyslot, size_t *key_size);
/**
* Get PBKDF parameters for keyslot.
*
* @param cd crypt device handle
* @param keyslot keyslot number
* @param pbkdf struct with returned PBKDF parameters
*
* @return @e 0 on success or negative errno value otherwise.
*/
int crypt_keyslot_get_pbkdf(struct crypt_device *cd, int keyslot, struct crypt_pbkdf_type *pbkdf);
/**
* Set encryption for keyslot.
* Use for LUKS2 keyslot to set different encryption type than for data encryption.
* Parameters will be used for next keyslot operations that create or change a keyslot.
*
* @param cd crypt device handle
* @param cipher (e.g. "aes-xts-plain64")
* @param key_size encryption key size (in bytes)
*
* @return @e 0 on success or negative errno value otherwise.
*
* @note To reset to default keyslot encryption (the same as for data)
* set cipher to NULL and key size to 0.
*/
int crypt_keyslot_set_encryption(struct crypt_device *cd,
const char *cipher,
size_t key_size);
/**
* Get directory where mapped crypt devices are created
*
@@ -1591,6 +1752,8 @@ int crypt_header_restore(struct crypt_device *cd,
/** Debug all */
#define CRYPT_DEBUG_ALL -1
/** Debug all with additional JSON dump (for LUKS2) */
#define CRYPT_DEBUG_JSON -2
/** Debug none */
#define CRYPT_DEBUG_NONE 0
@@ -1948,6 +2111,139 @@ int crypt_activate_by_token(struct crypt_device *cd,
uint32_t flags);
/** @} */
/**
* @defgroup crypt-reencryption LUKS2 volume reencryption support
*
* Set of functions for handling LUKS2 volume reencryption
*
* @addtogroup crypt-reencryption
* @{
*/
/** Initialize reencryption metadata but do not run reencryption yet. */
#define CRYPT_REENCRYPT_INITIALIZE_ONLY (1 << 0)
/** Move the first segment; used only with data shift. */
#define CRYPT_REENCRYPT_MOVE_FIRST_SEGMENT (1 << 1)
/** Resume already initialized reencryption only. */
#define CRYPT_REENCRYPT_RESUME_ONLY (1 << 2)
/** Run reencryption recovery only. */
#define CRYPT_REENCRYPT_RECOVERY (1 << 3)
/**
* Reencryption direction
*/
typedef enum {
CRYPT_REENCRYPT_FORWARD = 0, /**< forward direction */
CRYPT_REENCRYPT_BACKWARD /**< backward direction */
} crypt_reencrypt_direction_info;
/**
* LUKS2 reencryption options.
*/
struct crypt_params_reencrypt {
const char *mode; /**< Mode as "encrypt" / "reencrypt" / "decrypt", immutable after first init. */
crypt_reencrypt_direction_info direction; /**< Reencryption direction, immutable after first init. */
const char *resilience; /**< Resilience mode: "none", "checksum", "journal" or "shift" (only "shift" is immutable after init) */
const char *hash; /**< Used hash for "checksum" resilience type, ignored otherwise. */
uint64_t data_shift; /**< Used in "shift" mode, must be non-zero, immutable after first init. */
uint64_t max_hotzone_size; /**< Hotzone size for "none" mode; maximum hotzone size for "checksum" mode. */
uint64_t device_size; /**< Reencrypt only initial part of the data device. */
const struct crypt_params_luks2 *luks2; /**< LUKS2 parameters for the final reencryption volume.*/
uint32_t flags; /**< Reencryption flags. */
};
/**
* Initialize reencryption metadata using passphrase.
*
* This function initializes on-disk metadata to include all reencryption segments,
* according to the provided options.
* If metadata already contains ongoing reencryption metadata, it loads these parameters
* (in this situation all parameters except @e name and @e passphrase can be omitted).
*
* @param cd crypt device handle
* @param name name of active device or @e NULL for offline reencryption
* @param passphrase passphrase used to unlock volume key
* @param passphrase_size size of @e passphrase (binary data)
* @param keyslot_old keyslot to unlock existing device or CRYPT_ANY_SLOT
* @param keyslot_new existing (unbound) reencryption keyslot; must be set except for decryption
* @param cipher cipher specification (e.g. "aes")
* @param cipher_mode cipher mode and IV (e.g. "xts-plain64")
* @param params reencryption parameters @link crypt_params_reencrypt @endlink.
*
* @return reencryption key slot number or negative errno otherwise.
*/
int crypt_reencrypt_init_by_passphrase(struct crypt_device *cd,
const char *name,
const char *passphrase,
size_t passphrase_size,
int keyslot_old,
int keyslot_new,
const char *cipher,
const char *cipher_mode,
const struct crypt_params_reencrypt *params);
/**
* Initialize reencryption metadata using passphrase in keyring.
*
* This function initializes on-disk metadata to include all reencryption segments,
* according to the provided options.
* If metadata already contains ongoing reencryption metadata, it loads these parameters
* (in this situation all parameters except @e name and @e key_description can be omitted).
*
* @param cd crypt device handle
* @param name name of active device or @e NULL for offline reencryption
* @param key_description passphrase (key) identification in keyring
* @param keyslot_old keyslot to unlock existing device or CRYPT_ANY_SLOT
* @param keyslot_new existing (unbound) reencryption keyslot; must be set except for decryption
* @param cipher cipher specification (e.g. "aes")
* @param cipher_mode cipher mode and IV (e.g. "xts-plain64")
* @param params reencryption parameters @link crypt_params_reencrypt @endlink.
*
* @return reencryption key slot number or negative errno otherwise.
*/
int crypt_reencrypt_init_by_keyring(struct crypt_device *cd,
const char *name,
const char *key_description,
int keyslot_old,
int keyslot_new,
const char *cipher,
const char *cipher_mode,
const struct crypt_params_reencrypt *params);
/**
* Run data reencryption.
*
* @param cd crypt device handle
* @param progress is a callback function reporting device \b size,
* current \b offset of reencryption and provided \b usrptr identification
*
* @return @e 0 on success or negative errno value otherwise.
*/
int crypt_reencrypt(struct crypt_device *cd,
int (*progress)(uint64_t size, uint64_t offset, void *usrptr));
/**
* Reencryption status info
*/
typedef enum {
CRYPT_REENCRYPT_NONE = 0, /**< No reencryption in progress */
CRYPT_REENCRYPT_CLEAN, /**< Ongoing reencryption in a clean state. */
CRYPT_REENCRYPT_CRASH, /**< Aborted reencryption that needs internal recovery. */
CRYPT_REENCRYPT_INVALID /**< Invalid state. */
} crypt_reencrypt_info;
/**
* LUKS2 reencryption status.
*
* @param cd crypt device handle
* @param params reencryption parameters
*
* @return reencryption status info and parameters.
*/
crypt_reencrypt_info crypt_reencrypt_status(struct crypt_device *cd,
struct crypt_params_reencrypt *params);
/** @} */
#ifdef __cplusplus
}
#endif


@@ -1,6 +1,7 @@
CRYPTSETUP_2.0 {
global:
crypt_init;
crypt_init_data_device;
crypt_init_by_name;
crypt_init_by_name_and_header;
@@ -68,14 +69,19 @@ CRYPTSETUP_2.0 {
crypt_get_cipher_mode;
crypt_get_integrity_info;
crypt_get_uuid;
crypt_set_data_offset;
crypt_get_data_offset;
crypt_get_iv_offset;
crypt_get_volume_key_size;
crypt_get_device_name;
crypt_get_metadata_device_name;
crypt_get_metadata_size;
crypt_set_metadata_size;
crypt_get_verity_info;
crypt_get_sector_size;
crypt_get_type;
crypt_get_default_type;
crypt_get_active_device;
crypt_get_active_integrity_failures;
crypt_persistent_flags_set;
@@ -85,12 +91,17 @@ CRYPTSETUP_2.0 {
crypt_get_rng_type;
crypt_set_pbkdf_type;
crypt_get_pbkdf_type;
crypt_get_pbkdf_type_params;
crypt_get_pbkdf_default;
crypt_keyslot_max;
crypt_keyslot_area;
crypt_keyslot_status;
crypt_keyslot_get_key_size;
crypt_keyslot_set_encryption;
crypt_keyslot_get_encryption;
crypt_keyslot_get_pbkdf;
crypt_get_dir;
crypt_set_debug_level;
crypt_log;
@@ -102,6 +113,11 @@ CRYPTSETUP_2.0 {
crypt_keyfile_device_read;
crypt_wipe;
crypt_reencrypt_init_by_passphrase;
crypt_reencrypt_init_by_keyring;
crypt_reencrypt;
crypt_reencrypt_status;
local:
*;
};



@@ -1,8 +1,8 @@
/*
* loop-AES compatible volume handling
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2018, Milan Broz
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -137,13 +137,13 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
unsigned int key_lengths[LOOPAES_KEYS_MAX];
unsigned int i, key_index, key_len, offset;
log_dbg("Parsing loop-AES keyfile of size %zu.", buffer_len);
log_dbg(cd, "Parsing loop-AES keyfile of size %zu.", buffer_len);
if (!buffer_len)
return -EINVAL;
if (keyfile_is_gpg(buffer, buffer_len)) {
log_err(cd, _("Detected not yet supported GPG encrypted keyfile.\n"));
log_err(cd, _("Detected not yet supported GPG encrypted keyfile."));
log_std(cd, _("Please use gpg --decrypt <KEYFILE> | cryptsetup --keyfile=- ...\n"));
return -EINVAL;
}
@@ -164,7 +164,7 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
key_lengths[key_index]++;
}
if (offset == buffer_len) {
log_dbg("Unterminated key #%d in keyfile.", key_index);
log_dbg(cd, "Unterminated key #%d in keyfile.", key_index);
log_err(cd, _("Incompatible loop-AES keyfile detected."));
return -EINVAL;
}
@@ -177,7 +177,7 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
key_len = key_lengths[0];
for (i = 0; i < key_index; i++)
if (!key_lengths[i] || (key_lengths[i] != key_len)) {
log_dbg("Unexpected length %d of key #%d (should be %d).",
log_dbg(cd, "Unexpected length %d of key #%d (should be %d).",
key_lengths[i], i, key_len);
key_len = 0;
break;
@@ -189,7 +189,7 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
return -EINVAL;
}
log_dbg("Keyfile: %d keys of length %d.", key_index, key_len);
log_dbg(cd, "Keyfile: %d keys of length %d.", key_index, key_len);
*keys_count = key_index;
return hash_keys(cd, vk, hash, keys, key_index,
@@ -203,25 +203,15 @@ int LOOPAES_activate(struct crypt_device *cd,
struct volume_key *vk,
uint32_t flags)
{
char *cipher = NULL;
uint32_t req_flags, dmc_flags;
int r;
uint32_t req_flags, dmc_flags;
char *cipher = NULL;
struct crypt_dm_active_device dmd = {
.target = DM_CRYPT,
.size = 0,
.flags = flags,
.data_device = crypt_data_device(cd),
.u.crypt = {
.cipher = NULL,
.vk = vk,
.offset = crypt_get_data_offset(cd),
.iv_offset = crypt_get_iv_offset(cd),
.sector_size = crypt_get_sector_size(cd),
}
.flags = flags,
};
r = device_block_adjust(cd, dmd.data_device, DEV_EXCL,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
r = device_block_adjust(cd, crypt_data_device(cd), DEV_EXCL,
crypt_get_data_offset(cd), &dmd.size, &dmd.flags);
if (r)
return r;
@@ -235,18 +225,29 @@ int LOOPAES_activate(struct crypt_device *cd,
if (r < 0)
return -ENOMEM;
dmd.u.crypt.cipher = cipher;
log_dbg("Trying to activate loop-AES device %s using cipher %s.",
name, dmd.u.crypt.cipher);
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size, crypt_data_device(cd),
vk, cipher, crypt_get_iv_offset(cd),
crypt_get_data_offset(cd), crypt_get_integrity(cd),
crypt_get_integrity_tag_size(cd), crypt_get_sector_size(cd));
r = dm_create_device(cd, name, CRYPT_LOOPAES, &dmd, 0);
if (r) {
free(cipher);
return r;
}
if (r < 0 && !dm_flags(DM_CRYPT, &dmc_flags) &&
log_dbg(cd, "Trying to activate loop-AES device %s using cipher %s.",
name, cipher);
r = dm_create_device(cd, name, CRYPT_LOOPAES, &dmd);
if (r < 0 && !dm_flags(cd, DM_CRYPT, &dmc_flags) &&
(dmc_flags & req_flags) != req_flags) {
log_err(cd, _("Kernel doesn't support loop-AES compatible mapping."));
r = -ENOTSUP;
}
dm_targets_free(cd, &dmd);
free(cipher);
return r;
}


@@ -1,8 +1,8 @@
/*
* loop-AES compatible volume handling
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2018, Milan Broz
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,8 +1,8 @@
/*
* AFsplitter - Anti forensic information splitter
*
* Copyright (C) 2004, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
*
* AFsplitter diffuses information over a large stripe of data,
* therefore supporting secure data destruction.
@@ -25,7 +25,6 @@
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <netinet/in.h>
#include <errno.h>
#include "internal.h"
#include "af.h"
@@ -34,7 +33,7 @@ static void XORblock(const char *src1, const char *src2, char *dst, size_t n)
{
size_t j;
for(j = 0; j < n; ++j)
for (j = 0; j < n; j++)
dst[j] = src1[j] ^ src2[j];
}
@@ -45,7 +44,7 @@ static int hash_buf(const char *src, char *dst, uint32_t iv,
char *iv_char = (char *)&iv;
int r;
iv = htonl(iv);
iv = be32_to_cpu(iv);
if (crypt_hash_init(&hd, hash_name))
return -EINVAL;
@@ -61,7 +60,8 @@ out:
return r;
}
/* diffuse: Information spreading over the whole dataset with
/*
* diffuse: Information spreading over the whole dataset with
* the help of hash function.
*/
static int diffuse(char *src, char *dst, size_t size, const char *hash_name)
@@ -101,48 +101,49 @@ static int diffuse(char *src, char *dst, size_t size, const char *hash_name)
* blocknumbers. The same blocksize and blocknumbers values
* must be supplied to AF_merge to recover information.
*/
int AF_split(const char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash)
int AF_split(struct crypt_device *ctx, const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash)
{
unsigned int i;
char *bufblock;
int r;
if((bufblock = calloc(blocksize, 1)) == NULL) return -ENOMEM;
bufblock = crypt_safe_alloc(blocksize);
if (!bufblock)
return -ENOMEM;
/* process everything except the last block */
for(i=0; i<blocknumbers-1; i++) {
r = crypt_random_get(NULL, dst+(blocksize*i), blocksize, CRYPT_RND_NORMAL);
for (i = 0; i < blocknumbers - 1; i++) {
r = crypt_random_get(ctx, dst + blocksize * i, blocksize, CRYPT_RND_NORMAL);
if (r < 0)
goto out;
XORblock(dst+(blocksize*i),bufblock,bufblock,blocksize);
XORblock(dst + blocksize * i, bufblock, bufblock, blocksize);
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
goto out;
}
/* the last block is computed */
XORblock(src,bufblock,dst+(i*blocksize),blocksize);
XORblock(src, bufblock, dst + blocksize * i, blocksize);
r = 0;
out:
free(bufblock);
crypt_safe_free(bufblock);
return r;
}
int AF_merge(const char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash)
int AF_merge(struct crypt_device *ctx __attribute__((unused)), const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash)
{
unsigned int i;
char *bufblock;
int r;
if((bufblock = calloc(blocksize, 1)) == NULL)
bufblock = crypt_safe_alloc(blocksize);
if (!bufblock)
return -ENOMEM;
memset(bufblock,0,blocksize);
for(i=0; i<blocknumbers-1; i++) {
XORblock(src+(blocksize*i),bufblock,bufblock,blocksize);
for(i = 0; i < blocknumbers - 1; i++) {
XORblock(src + blocksize * i, bufblock, bufblock, blocksize);
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
goto out;
@@ -150,7 +151,7 @@ int AF_merge(const char *src, char *dst, size_t blocksize,
XORblock(src + blocksize * i, bufblock, dst, blocksize);
r = 0;
out:
free(bufblock);
crypt_safe_free(bufblock);
return r;
}


@@ -1,8 +1,8 @@
/*
* AFsplitter - Anti forensic information splitter
*
* Copyright (C) 2004, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
*
* AFsplitter diffuses information over a large stripe of data,
* therefore supporting secure data destruction.
@@ -39,8 +39,10 @@
* On error, both functions return -1, 0 otherwise.
*/
int AF_split(const char *src, char *dst, size_t blocksize, unsigned int blocknumbers, const char *hash);
int AF_merge(const char *src, char *dst, size_t blocksize, unsigned int blocknumbers, const char *hash);
int AF_split(struct crypt_device *ctx, const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash);
int AF_merge(struct crypt_device *ctx, const char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash);
size_t AF_split_sectors(size_t blocksize, unsigned int blocknumbers);
int LUKS_encrypt_to_storage(


@@ -1,9 +1,9 @@
/*
* LUKS - Linux Unified Key Setup
*
* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2018, Milan Broz
* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -58,25 +58,15 @@ static int LUKS_endec_template(char *src, size_t srcLength,
char name[PATH_MAX], path[PATH_MAX];
char cipher_spec[MAX_CIPHER_LEN * 3];
struct crypt_dm_active_device dmd = {
.target = DM_CRYPT,
.uuid = NULL,
.flags = CRYPT_ACTIVATE_PRIVATE,
.data_device = crypt_metadata_device(ctx),
.u.crypt = {
.cipher = cipher_spec,
.vk = vk,
.offset = sector,
.iv_offset = 0,
.sector_size = SECTOR_SIZE,
}
.flags = CRYPT_ACTIVATE_PRIVATE,
};
int r, devfd = -1;
int r, devfd = -1, remove_dev = 0;
size_t bsize, keyslot_alignment, alignment;
log_dbg("Using dmcrypt to access keyslot area.");
log_dbg(ctx, "Using dmcrypt to access keyslot area.");
bsize = device_block_size(dmd.data_device);
alignment = device_alignment(dmd.data_device);
bsize = device_block_size(ctx, crypt_metadata_device(ctx));
alignment = device_alignment(crypt_metadata_device(ctx));
if (!bsize || !alignment)
return -EINVAL;
@@ -96,27 +86,35 @@ static int LUKS_endec_template(char *src, size_t srcLength,
if (snprintf(cipher_spec, sizeof(cipher_spec), "%s-%s", cipher, cipher_mode) < 0)
return -ENOMEM;
r = device_block_adjust(ctx, dmd.data_device, DEV_OK,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
r = device_block_adjust(ctx, crypt_metadata_device(ctx), DEV_OK,
sector, &dmd.size, &dmd.flags);
if (r < 0) {
log_err(ctx, _("Device %s doesn't exist or access denied."),
device_path(dmd.data_device));
device_path(crypt_metadata_device(ctx)));
return -EIO;
}
if (mode != O_RDONLY && dmd.flags & CRYPT_ACTIVATE_READONLY) {
log_err(ctx, _("Cannot write to device %s, permission denied."),
device_path(dmd.data_device));
device_path(crypt_metadata_device(ctx)));
return -EACCES;
}
r = dm_create_device(ctx, name, "TEMP", &dmd, 0);
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size,
crypt_metadata_device(ctx), vk, cipher_spec, 0, sector,
NULL, 0, SECTOR_SIZE);
if (r)
goto out;
r = dm_create_device(ctx, name, "TEMP", &dmd);
if (r < 0) {
if (r != -EACCES && r != -ENOTSUP)
_error_hint(ctx, device_path(dmd.data_device),
_error_hint(ctx, device_path(crypt_metadata_device(ctx)),
cipher, cipher_mode, vk->keylength * 8);
return -EIO;
r = -EIO;
goto out;
}
remove_dev = 1;
devfd = open(path, mode | O_DIRECT | O_SYNC);
if (devfd == -1) {
@@ -132,9 +130,11 @@ static int LUKS_endec_template(char *src, size_t srcLength,
} else
r = 0;
out:
dm_targets_free(ctx, &dmd);
if (devfd != -1)
close(devfd);
dm_remove_device(ctx, name, CRYPT_DEACTIVATE_FORCE);
if (remove_dev)
dm_remove_device(ctx, name, CRYPT_DEACTIVATE_FORCE);
return r;
}
@@ -145,20 +145,19 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
unsigned int sector,
struct crypt_device *ctx)
{
struct device *device = crypt_metadata_device(ctx);
struct crypt_storage *s;
int devfd = -1, r = 0;
int devfd, r = 0;
/* Only whole sector writes supported */
if (MISALIGNED_512(srcLength))
return -EINVAL;
/* Encrypt buffer */
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
r = crypt_storage_init(&s, SECTOR_SIZE, cipher, cipher_mode, vk->key, vk->keylength);
if (r)
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg(ctx, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
/* Fallback to old temporary dmcrypt device */
@@ -172,9 +171,9 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
return r;
}
log_dbg("Using userspace crypto wrapper to access keyslot area.");
log_dbg(ctx, "Using userspace crypto wrapper to access keyslot area.");
r = crypt_storage_encrypt(s, 0, srcLength / SECTOR_SIZE, src);
r = crypt_storage_encrypt(s, 0, srcLength, src);
crypt_storage_destroy(s);
if (r)
@@ -183,21 +182,21 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
r = -EIO;
/* Write buffer to device */
devfd = device_open(device, O_RDWR);
if (device_is_locked(device))
devfd = device_open_locked(ctx, device, O_RDWR);
else
devfd = device_open(ctx, device, O_RDWR);
if (devfd < 0)
goto out;
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(ctx, device),
device_alignment(device), src, srcLength,
sector * SECTOR_SIZE) < 0)
goto out;
r = 0;
out:
if (devfd >= 0) {
device_sync(device, devfd);
close(devfd);
}
device_sync(ctx, device);
if (r)
log_err(ctx, _("IO error while encrypting keyslot."));
@@ -214,16 +213,16 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
struct device *device = crypt_metadata_device(ctx);
struct crypt_storage *s;
struct stat st;
int devfd = -1, r = 0;
int devfd, r = 0;
/* Only whole sector reads supported */
if (MISALIGNED_512(dstLength))
return -EINVAL;
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
r = crypt_storage_init(&s, SECTOR_SIZE, cipher, cipher_mode, vk->key, vk->keylength);
if (r)
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg(ctx, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
/* Fallback to old temporary dmcrypt device */
@@ -237,17 +236,20 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
return r;
}
log_dbg("Using userspace crypto wrapper to access keyslot area.");
log_dbg(ctx, "Using userspace crypto wrapper to access keyslot area.");
/* Read buffer from device */
devfd = device_open(device, O_RDONLY);
if (device_is_locked(device))
devfd = device_open_locked(ctx, device, O_RDONLY);
else
devfd = device_open(ctx, device, O_RDONLY);
if (devfd < 0) {
log_err(ctx, _("Cannot open device %s."), device_path(device));
crypt_storage_destroy(s);
return -EIO;
}
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(ctx, device),
device_alignment(device), dst, dstLength,
sector * SECTOR_SIZE) < 0) {
if (!fstat(devfd, &st) && (st.st_size < (off_t)dstLength))
@@ -255,15 +257,12 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
else
log_err(ctx, _("IO error while decrypting keyslot."));
close(devfd);
crypt_storage_destroy(s);
return -EIO;
}
close(devfd);
/* Decrypt buffer */
r = crypt_storage_decrypt(s, 0, dstLength / SECTOR_SIZE, dst);
r = crypt_storage_decrypt(s, 0, dstLength, dst);
crypt_storage_destroy(s);
return r;


@@ -1,9 +1,9 @@
/*
* LUKS - Linux Unified Key Setup
*
* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2013-2018, Milan Broz
* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2013-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -37,23 +37,6 @@
#include "af.h"
#include "internal.h"
/* Get size of struct luks_phdr with all keyslots material space */
static size_t LUKS_calculate_device_sectors(size_t keyLen)
{
size_t keyslot_sectors, sector;
int i;
keyslot_sectors = AF_split_sectors(keyLen, LUKS_STRIPES);
sector = LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE;
for (i = 0; i < LUKS_NUMKEYS; i++) {
sector = size_round_up(sector, LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE);
sector += keyslot_sectors;
}
return sector;
}
int LUKS_keyslot_area(const struct luks_phdr *hdr,
int keyslot,
uint64_t *offset,
@@ -111,13 +94,13 @@ static int LUKS_check_device_size(struct crypt_device *ctx, const struct luks_ph
return -EINVAL;
if (device_size(device, &dev_sectors)) {
log_dbg("Cannot get device size for device %s.", device_path(device));
log_dbg(ctx, "Cannot get device size for device %s.", device_path(device));
return -EIO;
}
dev_sectors >>= SECTOR_SHIFT;
hdr_sectors = LUKS_device_sectors(hdr);
log_dbg("Key length %u, device size %" PRIu64 " sectors, header size %"
log_dbg(ctx, "Key length %u, device size %" PRIu64 " sectors, header size %"
PRIu64 " sectors.", hdr->keyBytes, dev_sectors, hdr_sectors);
if (hdr_sectors > dev_sectors) {
@@ -144,7 +127,7 @@ static int LUKS_check_keyslots(struct crypt_device *ctx, const struct luks_phdr
for (i = 0; i < LUKS_NUMKEYS; i++) {
/* enforce stripes == 4000 */
if (phdr->keyblock[i].stripes != LUKS_STRIPES) {
log_dbg("Invalid stripes count %u in keyslot %u.",
log_dbg(ctx, "Invalid stripes count %u in keyslot %u.",
phdr->keyblock[i].stripes, i);
log_err(ctx, _("LUKS keyslot %u is invalid."), i);
return -1;
@@ -152,7 +135,7 @@ static int LUKS_check_keyslots(struct crypt_device *ctx, const struct luks_phdr
/* First sectors is the header itself */
if (phdr->keyblock[i].keyMaterialOffset * SECTOR_SIZE < sizeof(*phdr)) {
log_dbg("Invalid offset %u in keyslot %u.",
log_dbg(ctx, "Invalid offset %u in keyslot %u.",
phdr->keyblock[i].keyMaterialOffset, i);
log_err(ctx, _("LUKS keyslot %u is invalid."), i);
return -1;
@@ -163,7 +146,7 @@ static int LUKS_check_keyslots(struct crypt_device *ctx, const struct luks_phdr
continue;
if (phdr->payloadOffset <= phdr->keyblock[i].keyMaterialOffset) {
log_dbg("Invalid offset %u in keyslot %u (beyond data area offset %u).",
log_dbg(ctx, "Invalid offset %u in keyslot %u (beyond data area offset %u).",
phdr->keyblock[i].keyMaterialOffset, i,
phdr->payloadOffset);
log_err(ctx, _("LUKS keyslot %u is invalid."), i);
@@ -171,7 +154,7 @@ static int LUKS_check_keyslots(struct crypt_device *ctx, const struct luks_phdr
}
if (phdr->payloadOffset < (phdr->keyblock[i].keyMaterialOffset + secs_per_stripes)) {
log_dbg("Invalid keyslot size %u (offset %u, stripes %u) in "
log_dbg(ctx, "Invalid keyslot size %u (offset %u, stripes %u) in "
"keyslot %u (beyond data area offset %u).",
secs_per_stripes,
phdr->keyblock[i].keyMaterialOffset,
@@ -188,7 +171,7 @@ static int LUKS_check_keyslots(struct crypt_device *ctx, const struct luks_phdr
next = sorted_areas[i];
if (phdr->keyblock[next].keyMaterialOffset <
(phdr->keyblock[prev].keyMaterialOffset + secs_per_stripes)) {
log_dbg("Not enough space in LUKS keyslot %d.", prev);
log_dbg(ctx, "Not enough space in LUKS keyslot %d.", prev);
log_err(ctx, _("LUKS keyslot %u is invalid."), prev);
return -1;
}
@@ -217,9 +200,10 @@ int LUKS_hdr_backup(const char *backup_file, struct crypt_device *ctx)
{
struct device *device = crypt_metadata_device(ctx);
struct luks_phdr hdr;
int r = 0, devfd = -1;
int fd, devfd, r = 0;
size_t hdr_size;
size_t buffer_size;
ssize_t ret;
char *buffer = NULL;
r = LUKS_read_phdr(&hdr, 1, 0, ctx);
@@ -235,31 +219,30 @@ int LUKS_hdr_backup(const char *backup_file, struct crypt_device *ctx)
goto out;
}
log_dbg("Storing backup of header (%zu bytes) and keyslot area (%zu bytes).",
log_dbg(ctx, "Storing backup of header (%zu bytes) and keyslot area (%zu bytes).",
sizeof(hdr), hdr_size - LUKS_ALIGN_KEYSLOTS);
log_dbg("Output backup file size: %zu bytes.", buffer_size);
log_dbg(ctx, "Output backup file size: %zu bytes.", buffer_size);
devfd = device_open(device, O_RDONLY);
devfd = device_open(ctx, device, O_RDONLY);
if (devfd < 0) {
log_err(ctx, _("Device %s is not a valid LUKS device."), device_path(device));
r = -EINVAL;
goto out;
}
if (read_blockwise(devfd, device_block_size(device), device_alignment(device),
buffer, hdr_size) < (ssize_t)hdr_size) {
if (read_lseek_blockwise(devfd, device_block_size(ctx, device), device_alignment(device),
buffer, hdr_size, 0) < (ssize_t)hdr_size) {
r = -EIO;
goto out;
}
close(devfd);
/* Wipe unused area, so backup cannot contain old signatures */
if (hdr.keyblock[0].keyMaterialOffset * SECTOR_SIZE == LUKS_ALIGN_KEYSLOTS)
memset(buffer + sizeof(hdr), 0, LUKS_ALIGN_KEYSLOTS - sizeof(hdr));
devfd = open(backup_file, O_CREAT|O_EXCL|O_WRONLY, S_IRUSR);
if (devfd == -1) {
fd = open(backup_file, O_CREAT|O_EXCL|O_WRONLY, S_IRUSR);
if (fd == -1) {
if (errno == EEXIST)
log_err(ctx, _("Requested header backup file %s already exists."), backup_file);
else
@@ -267,7 +250,9 @@ int LUKS_hdr_backup(const char *backup_file, struct crypt_device *ctx)
r = -EINVAL;
goto out;
}
if (write_buffer(devfd, buffer, buffer_size) < (ssize_t)buffer_size) {
ret = write_buffer(fd, buffer, buffer_size);
close(fd);
if (ret < (ssize_t)buffer_size) {
log_err(ctx, _("Cannot write header backup file %s."), backup_file);
r = -EIO;
goto out;
@@ -275,8 +260,6 @@ int LUKS_hdr_backup(const char *backup_file, struct crypt_device *ctx)
r = 0;
out:
if (devfd >= 0)
close(devfd);
crypt_memzero(&hdr, sizeof(hdr));
crypt_safe_free(buffer);
return r;
@@ -288,8 +271,8 @@ int LUKS_hdr_restore(
struct crypt_device *ctx)
{
struct device *device = crypt_metadata_device(ctx);
int r = 0, devfd = -1, diff_uuid = 0;
ssize_t buffer_size = 0;
int fd, r = 0, devfd = -1, diff_uuid = 0;
ssize_t ret, buffer_size = 0;
char *buffer = NULL, msg[200];
struct luks_phdr hdr_file;
@@ -312,24 +295,24 @@ int LUKS_hdr_restore(
goto out;
}
devfd = open(backup_file, O_RDONLY);
if (devfd == -1) {
fd = open(backup_file, O_RDONLY);
if (fd == -1) {
log_err(ctx, _("Cannot open header backup file %s."), backup_file);
r = -EINVAL;
goto out;
}
if (read_buffer(devfd, buffer, buffer_size) < buffer_size) {
ret = read_buffer(fd, buffer, buffer_size);
close(fd);
if (ret < buffer_size) {
log_err(ctx, _("Cannot read header backup file %s."), backup_file);
r = -EIO;
goto out;
}
close(devfd);
devfd = -1;
r = LUKS_read_phdr(hdr, 0, 0, ctx);
if (r == 0) {
log_dbg("Device %s already contains LUKS header, checking UUID and offset.", device_path(device));
log_dbg(ctx, "Device %s already contains LUKS header, checking UUID and offset.", device_path(device));
if(hdr->payloadOffset != hdr_file.payloadOffset ||
hdr->keyBytes != hdr_file.keyBytes) {
log_err(ctx, _("Data offset or key size differs on device and backup, restore failed."));
@@ -353,10 +336,10 @@ int LUKS_hdr_restore(
goto out;
}
log_dbg("Storing backup of header (%zu bytes) and keyslot area (%zu bytes) to device %s.",
log_dbg(ctx, "Storing backup of header (%zu bytes) and keyslot area (%zu bytes) to device %s.",
sizeof(*hdr), buffer_size - LUKS_ALIGN_KEYSLOTS, device_path(device));
devfd = device_open(device, O_RDWR);
devfd = device_open(ctx, device, O_RDWR);
if (devfd < 0) {
if (errno == EACCES)
log_err(ctx, _("Cannot write to device %s, permission denied."),
@@ -367,21 +350,16 @@ int LUKS_hdr_restore(
goto out;
}
if (write_blockwise(devfd, device_block_size(device), device_alignment(device),
buffer, buffer_size) < buffer_size) {
if (write_lseek_blockwise(devfd, device_block_size(ctx, device), device_alignment(device),
buffer, buffer_size, 0) < buffer_size) {
r = -EIO;
goto out;
}
close(devfd);
devfd = -1;
/* Be sure to reload new data */
r = LUKS_read_phdr(hdr, 1, 0, ctx);
out:
if (devfd >= 0) {
device_sync(device, devfd);
close(devfd);
}
device_sync(ctx, device);
crypt_safe_free(buffer);
return r;
}
@@ -412,19 +390,18 @@ static int _keyslot_repair(struct luks_phdr *phdr, struct crypt_device *ctx)
log_verbose(ctx, _("Repairing keyslots."));
log_dbg("Generating second header with the same parameters for check.");
log_dbg(ctx, "Generating second header with the same parameters for check.");
/* cipherName, cipherMode, hashSpec, uuid are already null terminated */
/* payloadOffset - cannot check */
r = LUKS_generate_phdr(&temp_phdr, vk, phdr->cipherName, phdr->cipherMode,
phdr->hashSpec,phdr->uuid, LUKS_STRIPES,
phdr->payloadOffset, 0,
1, ctx);
phdr->hashSpec, phdr->uuid,
phdr->payloadOffset * SECTOR_SIZE, 0, 0, ctx);
if (r < 0)
goto out;
for(i = 0; i < LUKS_NUMKEYS; ++i) {
if (phdr->keyblock[i].active == LUKS_KEY_ENABLED) {
log_dbg("Skipping repair for active keyslot %i.", i);
log_dbg(ctx, "Skipping repair for active keyslot %i.", i);
continue;
}
@@ -491,7 +468,7 @@ static int _check_and_convert_hdr(const char *device,
char luksMagic[] = LUKS_MAGIC;
if(memcmp(hdr->magic, luksMagic, LUKS_MAGIC_L)) { /* Check magic */
log_dbg("LUKS header not detected.");
log_dbg(ctx, "LUKS header not detected.");
if (require_luks_device)
log_err(ctx, _("Device %s is not a valid LUKS device."), device);
return -EINVAL;
@@ -565,7 +542,7 @@ int LUKS_read_phdr_backup(const char *backup_file,
ssize_t hdr_size = sizeof(struct luks_phdr);
int devfd = 0, r = 0;
log_dbg("Reading LUKS header of size %d from backup file %s",
log_dbg(ctx, "Reading LUKS header of size %d from backup file %s",
(int)hdr_size, backup_file);
devfd = open(backup_file, O_RDONLY);
@@ -591,9 +568,9 @@ int LUKS_read_phdr(struct luks_phdr *hdr,
int repair,
struct crypt_device *ctx)
{
int devfd, r = 0;
struct device *device = crypt_metadata_device(ctx);
ssize_t hdr_size = sizeof(struct luks_phdr);
int devfd = 0, r = 0;
/* LUKS header starts at offset 0, first keyslot on LUKS_ALIGN_KEYSLOTS */
assert(sizeof(struct luks_phdr) <= LUKS_ALIGN_KEYSLOTS);
@@ -604,17 +581,17 @@ int LUKS_read_phdr(struct luks_phdr *hdr,
if (repair && !require_luks_device)
return -EINVAL;
log_dbg("Reading LUKS header of size %zu from device %s",
log_dbg(ctx, "Reading LUKS header of size %zu from device %s",
hdr_size, device_path(device));
devfd = device_open(device, O_RDONLY);
devfd = device_open(ctx, device, O_RDONLY);
if (devfd < 0) {
log_err(ctx, _("Cannot open device %s."), device_path(device));
return -EINVAL;
}
if (read_blockwise(devfd, device_block_size(device), device_alignment(device),
hdr, hdr_size) < hdr_size)
if (read_lseek_blockwise(devfd, device_block_size(ctx, device), device_alignment(device),
hdr, hdr_size, 0) < hdr_size)
r = -EIO;
else
r = _check_and_convert_hdr(device_path(device), hdr, require_luks_device,
@@ -629,11 +606,10 @@ int LUKS_read_phdr(struct luks_phdr *hdr,
* has bigger sector size.
*/
if (!r && hdr->keyblock[0].keyMaterialOffset * SECTOR_SIZE < LUKS_ALIGN_KEYSLOTS) {
log_dbg("Old unaligned LUKS keyslot detected, disabling direct-io.");
log_dbg(ctx, "Old unaligned LUKS keyslot detected, disabling direct-io.");
device_disable_direct_io(device);
}
close(devfd);
return r;
}
@@ -647,14 +623,14 @@ int LUKS_write_phdr(struct luks_phdr *hdr,
struct luks_phdr convHdr;
int r;
log_dbg("Updating LUKS header of size %zu on device %s",
log_dbg(ctx, "Updating LUKS header of size %zu on device %s",
sizeof(struct luks_phdr), device_path(device));
r = LUKS_check_device_size(ctx, hdr, 1);
if (r)
return r;
devfd = device_open(device, O_RDWR);
devfd = device_open(ctx, device, O_RDWR);
if (devfd < 0) {
if (errno == EACCES)
log_err(ctx, _("Cannot write to device %s, permission denied."),
@@ -679,13 +655,12 @@ int LUKS_write_phdr(struct luks_phdr *hdr,
convHdr.keyblock[i].stripes = htonl(hdr->keyblock[i].stripes);
}
r = write_blockwise(devfd, device_block_size(device), device_alignment(device),
&convHdr, hdr_size) < hdr_size ? -EIO : 0;
r = write_lseek_blockwise(devfd, device_block_size(ctx, device), device_alignment(device),
&convHdr, hdr_size, 0) < hdr_size ? -EIO : 0;
if (r)
log_err(ctx, _("Error during update of LUKS header on device %s."), device_path(device));
device_sync(device, devfd);
close(devfd);
device_sync(ctx, device);
/* Re-read header from disk to be sure that in-memory and on-disk data are the same. */
if (!r) {
@@ -705,7 +680,7 @@ int LUKS_check_cipher(struct crypt_device *ctx, size_t keylength, const char *ci
struct volume_key *empty_key;
char buf[SECTOR_SIZE];
log_dbg("Checking if cipher %s-%s is usable.", cipher, cipher_mode);
log_dbg(ctx, "Checking if cipher %s-%s is usable.", cipher, cipher_mode);
empty_key = crypt_alloc_volume_key(keylength, NULL);
if (!empty_key)
@@ -722,30 +697,53 @@ int LUKS_check_cipher(struct crypt_device *ctx, size_t keylength, const char *ci
}
int LUKS_generate_phdr(struct luks_phdr *header,
const struct volume_key *vk,
const char *cipherName, const char *cipherMode, const char *hashSpec,
const char *uuid, unsigned int stripes,
unsigned int alignPayload,
unsigned int alignOffset,
int detached_metadata_device,
struct crypt_device *ctx)
const struct volume_key *vk,
const char *cipherName,
const char *cipherMode,
const char *hashSpec,
const char *uuid,
uint64_t data_offset, /* in bytes */
uint64_t align_offset, /* in bytes */
uint64_t required_alignment, /* in bytes */
struct crypt_device *ctx)
{
unsigned int i = 0, hdr_sectors = LUKS_calculate_device_sectors(vk->keylength);
size_t blocksPerStripeSet, currentSector;
int r;
int i, r;
size_t keyslot_sectors, header_sectors;
uuid_t partitionUuid;
struct crypt_pbkdf_type *pbkdf;
double PBKDF2_temp;
char luksMagic[] = LUKS_MAGIC;
/* For separate metadata device allow zero alignment */
if (alignPayload == 0 && !detached_metadata_device)
alignPayload = DEFAULT_DISK_ALIGNMENT / SECTOR_SIZE;
if (data_offset % SECTOR_SIZE || align_offset % SECTOR_SIZE ||
required_alignment % SECTOR_SIZE)
return -EINVAL;
if (alignPayload && detached_metadata_device && alignPayload < hdr_sectors) {
log_err(ctx, _("Data offset for detached LUKS header must be "
"either 0 or higher than header size (%d sectors)."),
hdr_sectors);
memset(header, 0, sizeof(struct luks_phdr));
keyslot_sectors = AF_split_sectors(vk->keylength, LUKS_STRIPES);
header_sectors = LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE;
for (i = 0; i < LUKS_NUMKEYS; i++) {
header->keyblock[i].active = LUKS_KEY_DISABLED;
header->keyblock[i].keyMaterialOffset = header_sectors;
header->keyblock[i].stripes = LUKS_STRIPES;
header_sectors = size_round_up(header_sectors + keyslot_sectors,
LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE);
}
/* In sector is now size of all keyslot material space */
/* Data offset has priority */
if (data_offset)
header->payloadOffset = data_offset / SECTOR_SIZE;
else if (required_alignment) {
header->payloadOffset = size_round_up(header_sectors, (required_alignment / SECTOR_SIZE));
header->payloadOffset += (align_offset / SECTOR_SIZE);
} else
header->payloadOffset = 0;
if (header->payloadOffset && header->payloadOffset < header_sectors) {
log_err(ctx, _("Data offset for LUKS header must be "
"either 0 or higher than header size."));
return -EINVAL;
}
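The new offset arithmetic above — keyslots laid out sequentially, each rounded up to LUKS_ALIGN_KEYSLOTS, then the payload rounded up to required_alignment — can be sketched as a standalone calculation. The constants mirror the LUKS1 defaults (512-byte sectors, 8 keyslots, 4000 AF stripes, 4096-byte keyslot alignment), but the helper names here are illustrative stand-ins, not the library's internal API.

```c
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SIZE          512
#define LUKS_ALIGN_KEYSLOTS  4096   /* bytes */
#define LUKS_NUMKEYS         8
#define LUKS_STRIPES         4000

/* Round size up to the nearest multiple of 'multiple' (illustrative helper). */
static uint64_t size_round_up(uint64_t size, uint64_t multiple)
{
	return ((size + multiple - 1) / multiple) * multiple;
}

/* Sectors needed for one anti-forensic-split keyslot (key_bytes * stripes). */
static uint64_t af_split_sectors(size_t key_bytes, unsigned stripes)
{
	return ((uint64_t)key_bytes * stripes + SECTOR_SIZE - 1) / SECTOR_SIZE;
}

/* Payload offset in sectors for a given key size and alignment (in bytes),
 * following the same two rounding steps as the hunk above. */
static uint64_t luks1_payload_offset(size_t key_bytes, uint64_t required_alignment)
{
	uint64_t keyslot_sectors = af_split_sectors(key_bytes, LUKS_STRIPES);
	uint64_t header_sectors = LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE;
	int i;

	for (i = 0; i < LUKS_NUMKEYS; i++)
		header_sectors = size_round_up(header_sectors + keyslot_sectors,
					       LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE);

	return size_round_up(header_sectors, required_alignment / SECTOR_SIZE);
}
```

With these defaults, a 64-byte (512-bit) volume key and 1 MiB alignment give the familiar LUKS1 payload offset of 4096 sectors; a 16-byte key gives 2048.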
@@ -761,8 +759,6 @@ int LUKS_generate_phdr(struct luks_phdr *header,
if (!uuid)
uuid_generate(partitionUuid);
memset(header,0,sizeof(struct luks_phdr));
/* Set Magic */
memcpy(header->magic,luksMagic,LUKS_MAGIC_L);
header->version=1;
@@ -774,7 +770,7 @@ int LUKS_generate_phdr(struct luks_phdr *header,
LUKS_fix_header_compatible(header);
log_dbg("Generating LUKS header version %d using hash %s, %s, %s, MK %d bytes",
log_dbg(ctx, "Generating LUKS header version %d using hash %s, %s, %s, MK %d bytes",
header->version, header->hashSpec ,header->cipherName, header->cipherMode,
header->keyBytes);
@@ -800,34 +796,15 @@ int LUKS_generate_phdr(struct luks_phdr *header,
header->mkDigestSalt, LUKS_SALTSIZE,
header->mkDigest,LUKS_DIGESTSIZE,
header->mkDigestIterations, 0, 0);
if(r < 0) {
if (r < 0) {
log_err(ctx, _("Cannot create LUKS header: header digest failed (using hash %s)."),
header->hashSpec);
return r;
}
currentSector = LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE;
blocksPerStripeSet = AF_split_sectors(vk->keylength, stripes);
for(i = 0; i < LUKS_NUMKEYS; ++i) {
header->keyblock[i].active = LUKS_KEY_DISABLED;
header->keyblock[i].keyMaterialOffset = currentSector;
header->keyblock[i].stripes = stripes;
currentSector = size_round_up(currentSector + blocksPerStripeSet,
LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE);
}
if (detached_metadata_device) {
/* for separate metadata device use alignPayload directly */
header->payloadOffset = alignPayload;
} else {
/* alignOffset - offset from natural device alignment provided by topology info */
currentSector = size_round_up(currentSector, alignPayload);
header->payloadOffset = currentSector + alignOffset;
}
uuid_unparse(partitionUuid, header->uuid);
log_dbg("Data offset %d, UUID %s, digest iterations %" PRIu32,
log_dbg(ctx, "Data offset %d, UUID %s, digest iterations %" PRIu32,
header->payloadOffset, header->uuid, header->mkDigestIterations);
return 0;
@@ -875,7 +852,7 @@ int LUKS_set_key(unsigned int keyIndex,
return -EINVAL;
}
log_dbg("Calculating data for key slot %d", keyIndex);
log_dbg(ctx, "Calculating data for key slot %d", keyIndex);
pbkdf = crypt_get_pbkdf(ctx);
r = crypt_benchmark_pbkdf_internal(ctx, pbkdf, vk->keylength);
if (r < 0)
@@ -887,7 +864,7 @@ int LUKS_set_key(unsigned int keyIndex,
*/
hdr->keyblock[keyIndex].passwordIterations =
at_least(pbkdf->iterations, LUKS_SLOT_ITERATIONS_MIN);
log_dbg("Key slot %d use %" PRIu32 " password iterations.", keyIndex,
log_dbg(ctx, "Key slot %d use %" PRIu32 " password iterations.", keyIndex,
hdr->keyblock[keyIndex].passwordIterations);
derived_key = crypt_alloc_volume_key(hdr->keyBytes, NULL);
@@ -917,13 +894,13 @@ int LUKS_set_key(unsigned int keyIndex,
goto out;
}
log_dbg("Using hash %s for AF in key slot %d, %d stripes",
log_dbg(ctx, "Using hash %s for AF in key slot %d, %d stripes",
hdr->hashSpec, keyIndex, hdr->keyblock[keyIndex].stripes);
r = AF_split(vk->key,AfKey,vk->keylength,hdr->keyblock[keyIndex].stripes,hdr->hashSpec);
r = AF_split(ctx, vk->key, AfKey, vk->keylength, hdr->keyblock[keyIndex].stripes, hdr->hashSpec);
if (r < 0)
goto out;
log_dbg("Updating key slot %d [0x%04x] area.", keyIndex,
log_dbg(ctx, "Updating key slot %d [0x%04x] area.", keyIndex,
hdr->keyblock[keyIndex].keyMaterialOffset << 9);
/* Encryption via dm */
r = LUKS_encrypt_to_storage(AfKey,
@@ -936,7 +913,7 @@ int LUKS_set_key(unsigned int keyIndex,
goto out;
/* Mark the key as active in phdr */
r = LUKS_keyslot_set(hdr, (int)keyIndex, 1);
r = LUKS_keyslot_set(hdr, (int)keyIndex, 1, ctx);
if (r < 0)
goto out;
@@ -983,7 +960,7 @@ static int LUKS_open_key(unsigned int keyIndex,
size_t AFEKSize;
int r;
log_dbg("Trying to open key slot %d [%s].", keyIndex,
log_dbg(ctx, "Trying to open key slot %d [%s].", keyIndex,
dbg_slot_state(ki));
if (ki < CRYPT_SLOT_ACTIVE)
@@ -1008,7 +985,7 @@ static int LUKS_open_key(unsigned int keyIndex,
if (r < 0)
goto out;
log_dbg("Reading key slot %d area.", keyIndex);
log_dbg(ctx, "Reading key slot %d area.", keyIndex);
r = LUKS_decrypt_from_storage(AfKey,
AFEKSize,
hdr->cipherName, hdr->cipherMode,
@@ -1018,7 +995,7 @@ static int LUKS_open_key(unsigned int keyIndex,
if (r < 0)
goto out;
r = AF_merge(AfKey,vk->key,vk->keylength,hdr->keyblock[keyIndex].stripes,hdr->hashSpec);
r = AF_merge(ctx, AfKey, vk->key, vk->keylength, hdr->keyblock[keyIndex].stripes, hdr->hashSpec);
if (r < 0)
goto out;
@@ -1040,7 +1017,7 @@ int LUKS_open_key_with_hdr(int keyIndex,
struct volume_key **vk,
struct crypt_device *ctx)
{
unsigned int i;
unsigned int i, tried = 0;
int r;
*vk = crypt_alloc_volume_key(hdr->keyBytes, NULL);
@@ -1050,7 +1027,7 @@ int LUKS_open_key_with_hdr(int keyIndex,
return (r < 0) ? r : keyIndex;
}
for(i = 0; i < LUKS_NUMKEYS; i++) {
for (i = 0; i < LUKS_NUMKEYS; i++) {
r = LUKS_open_key(i, password, passwordLen, hdr, *vk, ctx);
if(r == 0)
return i;
@@ -1059,9 +1036,11 @@ int LUKS_open_key_with_hdr(int keyIndex,
former meaning password wrong, latter key slot inactive */
if ((r != -EPERM) && (r != -ENOENT))
return r;
if (r == -EPERM)
tried++;
}
/* Warning, early returns above */
return -EPERM;
return tried ? -EPERM : -ENOENT;
}
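The hunk above changes the fall-through return value: the loop now counts how many active keyslots actually rejected the passphrase (-EPERM), so a device with no active keyslots at all reports -ENOENT instead of a misleading "wrong passphrase" error. A minimal sketch of that control flow, with a hypothetical per-slot result table standing in for LUKS_open_key:

```c
#include <errno.h>

#define NUM_SLOTS 8

/* Hypothetical per-slot outcome: 0 = passphrase matched, -EPERM = active
 * slot rejected the passphrase, -ENOENT = slot inactive. Mirrors the
 * error handling of the loop in the hunk above. */
static int open_any_slot(const int slot_result[NUM_SLOTS])
{
	int i, r, tried = 0;

	for (i = 0; i < NUM_SLOTS; i++) {
		r = slot_result[i];
		if (r == 0)
			return i;      /* passphrase unlocked slot i */
		if (r != -EPERM && r != -ENOENT)
			return r;      /* fatal error, stop early */
		if (r == -EPERM)
			tried++;       /* an active slot was actually tried */
	}

	/* No match: -EPERM only if something was tried, else -ENOENT. */
	return tried ? -EPERM : -ENOENT;
}
```

So all-inactive slots yield -ENOENT, while a single active slot rejecting the passphrase is enough to yield -EPERM.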
int LUKS_del_key(unsigned int keyIndex,
@@ -1076,7 +1055,7 @@ int LUKS_del_key(unsigned int keyIndex,
if (r)
return r;
r = LUKS_keyslot_set(hdr, keyIndex, 0);
r = LUKS_keyslot_set(hdr, keyIndex, 0, ctx);
if (r) {
log_err(ctx, _("Key slot %d is invalid, please select keyslot between 0 and %d."),
keyIndex, LUKS_NUMKEYS - 1);
@@ -1155,7 +1134,7 @@ int LUKS_keyslot_active_count(struct luks_phdr *hdr)
return num;
}
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable)
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable, struct crypt_device *ctx)
{
crypt_keyslot_info ki = LUKS_keyslot_info(hdr, keyslot);
@@ -1163,7 +1142,7 @@ int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable)
return -EINVAL;
hdr->keyblock[keyslot].active = enable ? LUKS_KEY_ENABLED : LUKS_KEY_DISABLED;
log_dbg("Key slot %d was %s in LUKS header.", keyslot, enable ? "enabled" : "disabled");
log_dbg(ctx, "Key slot %d was %s in LUKS header.", keyslot, enable ? "enabled" : "disabled");
return 0;
}
@@ -1173,41 +1152,20 @@ int LUKS1_activate(struct crypt_device *cd,
uint32_t flags)
{
int r;
char *dm_cipher = NULL;
enum devcheck device_check;
struct crypt_dm_active_device dmd = {
.target = DM_CRYPT,
.uuid = crypt_get_uuid(cd),
.flags = flags,
.size = 0,
.data_device = crypt_data_device(cd),
.u.crypt = {
.cipher = NULL,
.vk = vk,
.offset = crypt_get_data_offset(cd),
.iv_offset = 0,
.sector_size = crypt_get_sector_size(cd),
}
.flags = flags,
.uuid = crypt_get_uuid(cd),
};
if (dmd.flags & CRYPT_ACTIVATE_SHARED)
device_check = DEV_SHARED;
else
device_check = DEV_EXCL;
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size, crypt_data_device(cd),
vk, crypt_get_cipher_spec(cd), crypt_get_iv_offset(cd),
crypt_get_data_offset(cd), crypt_get_integrity(cd),
crypt_get_integrity_tag_size(cd), crypt_get_sector_size(cd));
if (!r)
r = create_or_reload_device(cd, name, CRYPT_LUKS1, &dmd);
r = device_block_adjust(cd, dmd.data_device, device_check,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
if (r)
return r;
dm_targets_free(cd, &dmd);
r = asprintf(&dm_cipher, "%s-%s", crypt_get_cipher(cd), crypt_get_cipher_mode(cd));
if (r < 0)
return -ENOMEM;
dmd.u.crypt.cipher = dm_cipher;
r = dm_create_device(cd, name, CRYPT_LUKS1, &dmd, 0);
free(dm_cipher);
return r;
}
@@ -1229,7 +1187,7 @@ int LUKS_wipe_header_areas(struct luks_phdr *hdr,
wipe_block = 4096;
}
log_dbg("Wiping LUKS areas (0x%06" PRIx64 " - 0x%06" PRIx64") with zeroes.",
log_dbg(ctx, "Wiping LUKS areas (0x%06" PRIx64 " - 0x%06" PRIx64") with zeroes.",
offset, length + offset);
r = crypt_wipe_device(ctx, crypt_metadata_device(ctx), CRYPT_WIPE_ZERO,
@@ -1252,7 +1210,7 @@ int LUKS_wipe_header_areas(struct luks_phdr *hdr,
if (length == 0 || offset < 4096)
return -EINVAL;
log_dbg("Wiping keyslot %i area (0x%06" PRIx64 " - 0x%06" PRIx64") with random data.",
log_dbg(ctx, "Wiping keyslot %i area (0x%06" PRIx64 " - 0x%06" PRIx64") with random data.",
i, offset, length + offset);
r = crypt_wipe_device(ctx, crypt_metadata_device(ctx), CRYPT_WIPE_RANDOM,
@@ -1263,3 +1221,18 @@ int LUKS_wipe_header_areas(struct luks_phdr *hdr,
return r;
}
int LUKS_keyslot_pbkdf(struct luks_phdr *hdr, int keyslot, struct crypt_pbkdf_type *pbkdf)
{
if (keyslot >= LUKS_NUMKEYS || keyslot < 0)
return -EINVAL;
pbkdf->type = CRYPT_KDF_PBKDF2;
pbkdf->hash = hdr->hashSpec;
pbkdf->iterations = hdr->keyblock[keyslot].passwordIterations;
pbkdf->max_memory_kb = 0;
pbkdf->parallel_threads = 0;
pbkdf->time_ms = 0;
pbkdf->flags = 0;
return 0;
}


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup
*
* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -107,17 +107,15 @@ int LUKS_check_cipher(struct crypt_device *ctx,
const char *cipher,
const char *cipher_mode);
int LUKS_generate_phdr(
struct luks_phdr *header,
int LUKS_generate_phdr(struct luks_phdr *header,
const struct volume_key *vk,
const char *cipherName,
const char *cipherMode,
const char *hashSpec,
const char *uuid,
unsigned int stripes,
unsigned int alignPayload,
unsigned int alignOffset,
int detached_metadata_device,
uint64_t data_offset,
uint64_t align_offset,
uint64_t required_alignment,
struct crypt_device *ctx);
int LUKS_read_phdr(
@@ -177,13 +175,16 @@ int LUKS_wipe_header_areas(struct luks_phdr *hdr,
crypt_keyslot_info LUKS_keyslot_info(struct luks_phdr *hdr, int keyslot);
int LUKS_keyslot_find_empty(struct luks_phdr *hdr);
int LUKS_keyslot_active_count(struct luks_phdr *hdr);
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable);
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable,
struct crypt_device *ctx);
int LUKS_keyslot_area(const struct luks_phdr *hdr,
int keyslot,
uint64_t *offset,
uint64_t *length);
size_t LUKS_device_sectors(const struct luks_phdr *hdr);
size_t LUKS_keyslots_offset(const struct luks_phdr *hdr);
int LUKS_keyslot_pbkdf(struct luks_phdr *hdr, int keyslot,
struct crypt_pbkdf_type *pbkdf);
int LUKS1_activate(struct crypt_device *cd,
const char *name,


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -22,6 +22,8 @@
#ifndef _CRYPTSETUP_LUKS2_ONDISK_H
#define _CRYPTSETUP_LUKS2_ONDISK_H
#include <stdbool.h>
#include "libcryptsetup.h"
#define LUKS2_MAGIC_1ST "LUKS\xba\xbe"
@@ -45,11 +47,16 @@
#define LUKS2_DIGEST_MAX 8
#define CRYPT_ANY_SEGMENT -1
#define CRYPT_DEFAULT_SEGMENT 0
#define CRYPT_DEFAULT_SEGMENT_STR "0"
#define CRYPT_DEFAULT_SEGMENT -2
#define CRYPT_ONE_SEGMENT -3
#define CRYPT_ANY_DIGEST -1
/* 20 MiBs */
#define LUKS2_DEFAULT_NONE_REENCRYPTION_LENGTH 0x1400000
struct device;
/*
* LUKS2 header on-disk.
*
@@ -117,6 +124,77 @@ struct luks2_keyslot_params {
} area;
};
struct reenc_protection {
enum { REENC_PROTECTION_NONE = 0, /* none should be 0 always */
REENC_PROTECTION_CHECKSUM,
REENC_PROTECTION_JOURNAL,
REENC_PROTECTION_DATASHIFT } type;
union {
struct {
} none;
struct {
char hash[LUKS2_CHECKSUM_ALG_L]; // or include luks.h
struct crypt_hash *ch;
size_t hash_size;
/* buffer for checksums */
void *checksums;
size_t checksums_len;
} csum;
struct {
} ds;
} p;
};
struct luks2_reenc_context {
/* reencryption window attributes */
uint64_t offset;
uint64_t progress;
uint64_t length;
uint64_t data_shift;
size_t alignment;
uint64_t device_size;
bool online;
bool fixed_length;
crypt_reencrypt_direction_info direction;
enum { REENCRYPT = 0, ENCRYPT, DECRYPT } type;
char *device_name;
char *hotzone_name;
char *overlay_name;
/* reencryption window persistence attributes */
struct reenc_protection rp;
int reenc_keyslot;
/* already running reencryption */
json_object *jobj_segs_pre;
json_object *jobj_segs_after;
/* backup segments */
json_object *jobj_segment_new;
int digest_new;
json_object *jobj_segment_old;
int digest_old;
json_object *jobj_segment_moved;
struct volume_key *vks;
void *reenc_buffer;
ssize_t read;
struct crypt_storage_wrapper *cw1;
struct crypt_storage_wrapper *cw2;
uint32_t wflags1;
uint32_t wflags2;
struct crypt_lock_handle *reenc_lock;
};
crypt_reencrypt_info LUKS2_reenc_status(struct luks2_hdr *hdr);
/*
* Supportable header sizes (hdr_disk + JSON area)
* Also used as offset for the 2nd header.
@@ -125,19 +203,26 @@ struct luks2_keyslot_params {
#define LUKS2_HDR_BIN_LEN sizeof(struct luks2_hdr_disk)
#define LUKS2_HDR_DEFAULT_LEN 0x400000 /* 4 MiB */
//#define LUKS2_DEFAULT_HDR_SIZE 0x400000 /* 4 MiB */
#define LUKS2_DEFAULT_HDR_SIZE 0x1000000 /* 16 MiB */
#define LUKS2_MAX_KEYSLOTS_SIZE 0x8000000 /* 128 MiB */
#define LUKS2_HDR_OFFSET_MAX 0x400000 /* 4 MiB */
/* Offsets for secondary header (for scan if primary header is corrupted). */
#define LUKS2_HDR2_OFFSETS { 0x04000, 0x008000, 0x010000, 0x020000, \
0x40000, 0x080000, 0x100000, 0x200000, 0x400000 }
0x40000, 0x080000, 0x100000, 0x200000, LUKS2_HDR_OFFSET_MAX }
int LUKS2_hdr_version_unlocked(struct crypt_device *cd,
const char *backup_file);
int LUKS2_device_write_lock(struct crypt_device *cd,
struct luks2_hdr *hdr, struct device *device);
int LUKS2_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr, int repair);
int LUKS2_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_write_force(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_dump(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_uuid(struct crypt_device *cd,
@@ -150,7 +235,7 @@ int LUKS2_hdr_labels(struct crypt_device *cd,
const char *subsystem,
int commit);
void LUKS2_hdr_free(struct luks2_hdr *hdr);
void LUKS2_hdr_free(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_backup(struct crypt_device *cd,
struct luks2_hdr *hdr,
@@ -161,8 +246,9 @@ int LUKS2_hdr_restore(struct crypt_device *cd,
uint64_t LUKS2_hdr_and_areas_size(json_object *jobj);
uint64_t LUKS2_keyslots_size(json_object *jobj);
uint64_t LUKS2_metadata_size(json_object *jobj);
int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd);
int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd, const char *cipher_spec);
/*
* Generic LUKS2 keyslot
@@ -174,6 +260,13 @@ int LUKS2_keyslot_open(struct crypt_device *cd,
size_t password_len,
struct volume_key **vk);
int LUKS2_keyslot_open_all_segments(struct crypt_device *cd,
int keyslot_old,
int keyslot_new,
const char *password,
size_t password_len,
struct volume_key **vks);
int LUKS2_keyslot_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
@@ -182,6 +275,20 @@ int LUKS2_keyslot_store(struct crypt_device *cd,
const struct volume_key *vk,
const struct luks2_keyslot_params *params);
int LUKS2_keyslot_reencrypt_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const void *buffer,
size_t buffer_length);
int LUKS2_keyslot_reencrypt_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const struct crypt_params_reencrypt *params);
int reenc_keyslot_update(struct crypt_device *cd,
const struct luks2_reenc_context *rh);
int LUKS2_keyslot_wipe(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
@@ -258,12 +365,85 @@ int LUKS2_token_open_and_activate_any(struct crypt_device *cd,
int LUKS2_tokens_count(struct luks2_hdr *hdr);
/*
* Generic LUKS2 segment
*/
json_object *json_get_segments_jobj(json_object *hdr_jobj);
uint64_t json_segment_get_offset(json_object *jobj_segment, unsigned blockwise);
const char *json_segment_type(json_object *jobj_segment);
uint64_t json_segment_get_iv_offset(json_object *jobj_segment);
uint64_t json_segment_get_size(json_object *jobj_segment, unsigned blockwise);
const char *json_segment_get_cipher(json_object *jobj_segment);
int json_segment_get_sector_size(json_object *jobj_segment);
json_object *json_segment_get_flags(json_object *jobj_segment);
bool json_segment_is_backup(json_object *jobj_segment);
bool json_segment_is_reencrypt(json_object *jobj_segment);
json_object *json_segments_get_segment(json_object *jobj_segments, int segment);
int json_segments_count(json_object *jobj_segments);
json_object *json_segments_get_segment_by_flag(json_object *jobj_segments, const char *flag);
void json_segment_remove_flag(json_object *jobj_segment, const char *flag);
uint64_t json_segments_get_minimal_offset(json_object *jobj_segments, unsigned blockwise);
json_object *json_segment_create_linear(uint64_t offset, const uint64_t *length, unsigned reencryption);
json_object *json_segment_create_crypt(uint64_t offset, uint64_t iv_offset, const uint64_t *length, const char *cipher, uint32_t sector_size, unsigned reencryption);
int json_segments_segment_in_reencrypt(json_object *jobj_segments);
int LUKS2_segments_count(struct luks2_hdr *hdr);
int LUKS2_segment_first_unused_id(struct luks2_hdr *hdr);
int LUKS2_segment_set_flag(json_object *jobj_segment, const char *flag);
json_object *LUKS2_get_segment_by_flag(struct luks2_hdr *hdr, const char *flag);
int LUKS2_get_segment_id_by_flag(struct luks2_hdr *hdr, const char *flag);
json_object *LUKS2_get_ignored_segments(struct luks2_hdr *hdr);
int LUKS2_segments_set(struct crypt_device *cd,
struct luks2_hdr *hdr,
json_object *jobj_segments,
int commit);
uint64_t LUKS2_segment_offset(struct luks2_hdr *hdr,
int segment,
unsigned blockwise);
uint64_t LUKS2_segment_size(struct luks2_hdr *hdr,
int segment,
unsigned blockwise);
int LUKS2_segment_is_type(struct luks2_hdr *hdr,
int segment,
const char *type);
int LUKS2_segment_by_type(struct luks2_hdr *hdr,
const char *type);
int LUKS2_last_segment_by_type(struct luks2_hdr *hdr,
const char *type);
int LUKS2_get_default_segment(struct luks2_hdr *hdr);
int LUKS2_reencrypt_digest_new(struct luks2_hdr *hdr);
int LUKS2_reencrypt_digest_old(struct luks2_hdr *hdr);
const char *LUKS2_reencrypt_protection_type(struct luks2_hdr *hdr);
const char *LUKS2_reencrypt_protection_hash(struct luks2_hdr *hdr);
uint64_t LUKS2_reencrypt_data_shift(struct luks2_hdr *hdr);
const char *LUKS2_reencrypt_mode(struct luks2_hdr *hdr);
/*
* Generic LUKS2 digest
*/
int LUKS2_digest_by_segment(struct crypt_device *cd,
int LUKS2_digest_any_matching(struct crypt_device *cd,
struct luks2_hdr *hdr,
const struct volume_key *vk);
int LUKS2_digest_by_segment(struct luks2_hdr *hdr, int segment);
int LUKS2_digest_verify_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment);
int digest,
const struct volume_key *vk);
int LUKS2_digest_verify_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr,
@@ -275,7 +455,7 @@ void LUKS2_digests_erase_unused(struct crypt_device *cd,
int LUKS2_digest_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
struct volume_key *vk,
const struct volume_key *vk,
int keyslot);
int LUKS2_digest_dump(struct crypt_device *cd,
@@ -295,9 +475,7 @@ int LUKS2_digest_segment_assign(struct crypt_device *cd,
int assign,
int commit);
int LUKS2_digest_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot);
int LUKS2_digest_by_keyslot(struct luks2_hdr *hdr, int keyslot);
int LUKS2_digest_create(struct crypt_device *cd,
const char *type,
@@ -312,6 +490,25 @@ int LUKS2_activate(struct crypt_device *cd,
struct volume_key *vk,
uint32_t flags);
int LUKS2_activate_multi(struct crypt_device *cd,
const char *name,
struct volume_key *vks,
uint32_t flags);
struct crypt_dm_active_device;
int LUKS2_deactivate(struct crypt_device *cd,
const char *name,
struct luks2_hdr *hdr,
struct crypt_dm_active_device *dmd,
uint32_t flags);
int LUKS2_reload(struct crypt_device *cd,
const char *name,
struct volume_key *vks,
uint64_t device_size,
uint32_t flags);
int LUKS2_keyslot_luks2_format(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
@@ -327,9 +524,11 @@ int LUKS2_generate_hdr(
const char *integrity,
const char *uuid,
unsigned int sector_size,
unsigned int alignPayload,
unsigned int alignOffset,
int detached_metadata_device);
uint64_t data_offset,
uint64_t align_offset,
uint64_t required_alignment,
uint64_t metadata_size,
uint64_t keyslots_size);
int LUKS2_check_metadata_area_size(uint64_t metadata_size);
int LUKS2_check_keyslots_area_size(uint64_t keyslots_size);
@@ -338,23 +537,30 @@ int LUKS2_wipe_header_areas(struct crypt_device *cd,
struct luks2_hdr *hdr);
uint64_t LUKS2_get_data_offset(struct luks2_hdr *hdr);
int LUKS2_get_data_size(struct luks2_hdr *hdr, uint64_t *size, bool *dynamic);
int LUKS2_get_sector_size(struct luks2_hdr *hdr);
const char *LUKS2_get_cipher(struct luks2_hdr *hdr, int segment);
const char *LUKS2_get_integrity(struct luks2_hdr *hdr, int segment);
int LUKS2_keyslot_params_default(struct crypt_device *cd, struct luks2_hdr *hdr,
size_t key_size, struct luks2_keyslot_params *params);
int LUKS2_get_keyslot_params(struct luks2_hdr *hdr, int keyslot,
struct luks2_keyslot_params *params);
struct luks2_keyslot_params *params);
int LUKS2_get_volume_key_size(struct luks2_hdr *hdr, int segment);
int LUKS2_get_keyslot_key_size(struct luks2_hdr *hdr, int keyslot);
int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr, const char *type);
int LUKS2_get_keyslot_stored_key_size(struct luks2_hdr *hdr, int keyslot);
const char *LUKS2_get_keyslot_cipher(struct luks2_hdr *hdr, int keyslot, size_t *key_size);
int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr);
int LUKS2_keyslot_active_count(struct luks2_hdr *hdr, int segment);
int LUKS2_keyslot_for_segment(struct luks2_hdr *hdr, int keyslot, int segment);
int LUKS2_find_keyslot(struct luks2_hdr *hdr, const char *type);
int LUKS2_find_keyslot_for_segment(struct luks2_hdr *hdr, int segment, const char *type);
crypt_keyslot_info LUKS2_keyslot_info(struct luks2_hdr *hdr, int keyslot);
int LUKS2_keyslot_area(struct luks2_hdr *hdr,
int keyslot,
uint64_t *offset,
uint64_t *length);
int LUKS2_keyslot_pbkdf(struct luks2_hdr *hdr, int keyslot, struct crypt_pbkdf_type *pbkdf);
int LUKS2_set_keyslots_size(struct crypt_device *cd,
struct luks2_hdr *hdr,
uint64_t data_offset);
/*
* Permanent activation flags stored in header
*/
@@ -365,14 +571,17 @@ int LUKS2_config_set_flags(struct crypt_device *cd, struct luks2_hdr *hdr, uint3
* Requirements for device activation or header modification
*/
int LUKS2_config_get_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t *reqs);
int LUKS2_config_set_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t reqs);
int LUKS2_config_set_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t reqs, bool commit);
int LUKS2_unmet_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t reqs_mask, int quiet);
char *LUKS2_key_description_by_digest(struct crypt_device *cd, int digest);
int LUKS2_key_description_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int segment);
int LUKS2_volume_key_load_in_keyring_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int keyslot);
int LUKS2_volume_key_load_in_keyring_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int digest);
struct luks_phdr;
int LUKS2_luks1_to_luks2(struct crypt_device *cd,
@@ -382,4 +591,32 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd,
struct luks2_hdr *hdr2,
struct luks_phdr *hdr1);
/*
* LUKS2 reencryption
*/
int LUKS2_verify_and_upload_keys(struct crypt_device *cd,
struct luks2_hdr *hdr,
int digest_old,
int digest_new,
struct volume_key *vks);
int LUKS2_reenc_update_segments(struct crypt_device *cd,
struct luks2_hdr *hdr,
struct luks2_reenc_context *rh);
int LUKS2_reencrypt_locked_recovery_by_passphrase(struct crypt_device *cd,
int keyslot_old,
int keyslot_new,
const char *passphrase,
size_t passphrase_size,
uint32_t flags,
struct volume_key **vks);
void LUKS2_reenc_context_free(struct crypt_device *cd, struct luks2_reenc_context *rh);
int crypt_reencrypt_lock(struct crypt_device *cd, const char *uuid, struct crypt_lock_handle **reencrypt_lock);
void crypt_reencrypt_unlock(struct crypt_device *cd, struct crypt_lock_handle *reencrypt_lock);
int luks2_check_device_size(struct crypt_device *cd, struct luks2_hdr *hdr, uint64_t check_size, uint64_t *device_size, bool activation);
#endif


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, digest handling
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -86,14 +86,12 @@ int LUKS2_digest_create(struct crypt_device *cd,
if (digest < 0)
return -EINVAL;
log_dbg("Creating new digest %d (%s).", digest, type);
log_dbg(cd, "Creating new digest %d (%s).", digest, type);
return dh->store(cd, digest, vk->key, vk->keylength) ?: digest;
}
int LUKS2_digest_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot)
int LUKS2_digest_by_keyslot(struct luks2_hdr *hdr, int keyslot)
{
char keyslot_name[16];
json_object *jobj_digests, *jobj_digest_keyslots;
@@ -112,32 +110,43 @@ int LUKS2_digest_by_keyslot(struct crypt_device *cd,
return -ENOENT;
}
int LUKS2_digest_verify(struct crypt_device *cd,
int LUKS2_digest_verify_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr,
struct volume_key *vk,
int keyslot)
int digest,
const struct volume_key *vk)
{
const digest_handler *h;
int digest, r;
int r;
digest = LUKS2_digest_by_keyslot(cd, hdr, keyslot);
if (digest < 0)
return digest;
log_dbg("Verifying key from keyslot %d, digest %d.", keyslot, digest);
h = LUKS2_digest_handler(cd, digest);
if (!h)
return -EINVAL;
r = h->verify(cd, digest, vk->key, vk->keylength);
if (r < 0) {
log_dbg("Digest %d (%s) verify failed with %d.", digest, h->name, r);
log_dbg(cd, "Digest %d (%s) verify failed with %d.", digest, h->name, r);
return r;
}
return digest;
}
int LUKS2_digest_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
const struct volume_key *vk,
int keyslot)
{
int digest;
digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (digest < 0)
return digest;
log_dbg(cd, "Verifying key from keyslot %d, digest %d.", keyslot, digest);
return LUKS2_digest_verify_by_digest(cd, hdr, digest, vk);
}
int LUKS2_digest_dump(struct crypt_device *cd, int digest)
{
const digest_handler *h;
@@ -148,41 +157,36 @@ int LUKS2_digest_dump(struct crypt_device *cd, int digest)
return h->dump(cd, digest);
}
int LUKS2_digest_any_matching(struct crypt_device *cd,
struct luks2_hdr *hdr,
const struct volume_key *vk)
{
int digest;
for (digest = 0; digest < LUKS2_DIGEST_MAX; digest++)
if (LUKS2_digest_verify_by_digest(cd, hdr, digest, vk) == digest)
return digest;
return -ENOENT;
}
int LUKS2_digest_verify_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment,
const struct volume_key *vk)
{
const digest_handler *h;
int digest, r;
digest = LUKS2_digest_by_segment(cd, hdr, segment);
if (digest < 0)
return digest;
log_dbg("Verifying key digest %d.", digest);
h = LUKS2_digest_handler(cd, digest);
if (!h)
return -EINVAL;
r = h->verify(cd, digest, vk->key, vk->keylength);
if (r < 0) {
log_dbg("Digest %d (%s) verify failed with %d.", digest, h->name, r);
return r;
}
return digest;
return LUKS2_digest_verify_by_digest(cd, hdr, LUKS2_digest_by_segment(hdr, segment), vk);
}
/* FIXME: segment can have more digests */
int LUKS2_digest_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment)
int LUKS2_digest_by_segment(struct luks2_hdr *hdr, int segment)
{
char segment_name[16];
json_object *jobj_digests, *jobj_digest_segments;
if (segment == CRYPT_DEFAULT_SEGMENT)
segment = LUKS2_get_default_segment(hdr);
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
if (snprintf(segment_name, sizeof(segment_name), "%u", segment) < 1)
@@ -205,7 +209,7 @@ static int assign_one_digest(struct crypt_device *cd, struct luks2_hdr *hdr,
json_object *jobj1, *jobj_digest, *jobj_digest_keyslots;
char num[16];
log_dbg("Keyslot %i %s digest %i.", keyslot, assign ? "assigned to" : "unassigned from", digest);
log_dbg(cd, "Keyslot %i %s digest %i.", keyslot, assign ? "assigned to" : "unassigned from", digest);
jobj_digest = LUKS2_get_digest_jobj(hdr, digest);
if (!jobj_digest)
@@ -254,13 +258,43 @@ int LUKS2_digest_assign(struct crypt_device *cd, struct luks2_hdr *hdr,
return commit ? LUKS2_hdr_write(cd, hdr) : 0;
}
static int assign_all_segments(struct crypt_device *cd, struct luks2_hdr *hdr,
int digest, int assign)
{
json_object *jobj1, *jobj_digest, *jobj_digest_segments;
jobj_digest = LUKS2_get_digest_jobj(hdr, digest);
if (!jobj_digest)
return -EINVAL;
json_object_object_get_ex(jobj_digest, "segments", &jobj_digest_segments);
if (!jobj_digest_segments)
return -EINVAL;
if (assign) {
json_object_object_foreach(LUKS2_get_segments_jobj(hdr), key, value) {
UNUSED(value);
jobj1 = LUKS2_array_jobj(jobj_digest_segments, key);
if (!jobj1)
json_object_array_add(jobj_digest_segments, json_object_new_string(key));
}
} else {
jobj1 = json_object_new_array();
if (!jobj1)
return -ENOMEM;
json_object_object_add(jobj_digest, "segments", jobj1);
}
return 0;
}
static int assign_one_segment(struct crypt_device *cd, struct luks2_hdr *hdr,
int segment, int digest, int assign)
{
json_object *jobj1, *jobj_digest, *jobj_digest_segments;
char num[16];
log_dbg("Segment %i %s digest %i.", segment, assign ? "assigned to" : "unassigned from", digest);
log_dbg(cd, "Segment %i %s digest %i.", segment, assign ? "assigned to" : "unassigned from", digest);
jobj_digest = LUKS2_get_digest_jobj(hdr, digest);
if (!jobj_digest)
@@ -290,17 +324,27 @@ int LUKS2_digest_segment_assign(struct crypt_device *cd, struct luks2_hdr *hdr,
json_object *jobj_digests;
int r = 0;
if (segment == CRYPT_DEFAULT_SEGMENT)
segment = LUKS2_get_default_segment(hdr);
if (digest == CRYPT_ANY_DIGEST) {
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
json_object_object_foreach(jobj_digests, key, val) {
UNUSED(val);
r = assign_one_segment(cd, hdr, segment, atoi(key), assign);
if (segment == CRYPT_ANY_SEGMENT)
r = assign_all_segments(cd, hdr, atoi(key), assign);
else
r = assign_one_segment(cd, hdr, segment, atoi(key), assign);
if (r < 0)
break;
}
} else
r = assign_one_segment(cd, hdr, segment, digest, assign);
} else {
if (segment == CRYPT_ANY_SEGMENT)
r = assign_all_segments(cd, hdr, digest, assign);
else
r = assign_one_segment(cd, hdr, segment, digest, assign);
}
if (r < 0)
return r;
@@ -335,7 +379,7 @@ void LUKS2_digests_erase_unused(struct crypt_device *cd,
json_object_object_foreach(jobj_digests, key, val) {
if (digest_unused(val)) {
log_dbg("Erasing unused digest %d.", atoi(key));
log_dbg(cd, "Erasing unused digest %d.", atoi(key));
json_object_object_del(jobj_digests, key);
}
}
@@ -371,10 +415,15 @@ static char *get_key_description_by_digest(struct crypt_device *cd, int digest)
return desc;
}
char *LUKS2_key_description_by_digest(struct crypt_device *cd, int digest)
{
return get_key_description_by_digest(cd, digest);
}
int LUKS2_key_description_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int segment)
{
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_segment(cd, hdr, segment));
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_segment(hdr, segment));
int r;
r = crypt_volume_key_set_description(vk, desc);
@@ -385,7 +434,21 @@ int LUKS2_key_description_by_segment(struct crypt_device *cd,
int LUKS2_volume_key_load_in_keyring_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int keyslot)
{
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_keyslot(cd, hdr, keyslot));
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_keyslot(hdr, keyslot));
int r;
r = crypt_volume_key_set_description(vk, desc);
if (!r)
r = crypt_volume_key_load_in_keyring(cd, vk);
free(desc);
return r;
}
int LUKS2_volume_key_load_in_keyring_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int digest)
{
char *desc = get_key_description_by_digest(cd, digest);
int r;
r = crypt_volume_key_set_description(vk, desc);


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, PBKDF2 digest handler (LUKS1 compatible)
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -94,18 +94,25 @@ static int PBKDF2_digest_store(struct crypt_device *cd,
size_t volume_key_len)
{
json_object *jobj_digest, *jobj_digests;
char salt[LUKS_SALTSIZE], digest_raw[128], num[16];
char salt[LUKS_SALTSIZE], digest_raw[128];
int hmac_size, r;
char *base64_str;
struct luks2_hdr *hdr;
struct crypt_pbkdf_limits pbkdf_limits;
const struct crypt_pbkdf_type *pbkdf_cd;
struct crypt_pbkdf_type pbkdf = {
.type = CRYPT_KDF_PBKDF2,
.hash = "sha256",
.time_ms = LUKS_MKD_ITERATIONS_MS,
};
log_dbg("Setting PBKDF2 type key digest %d.", digest);
/* Inherit hash from PBKDF setting */
pbkdf_cd = crypt_get_pbkdf_type(cd);
if (pbkdf_cd)
pbkdf.hash = pbkdf_cd->hash;
if (!pbkdf.hash)
pbkdf.hash = DEFAULT_LUKS1_HASH;
log_dbg(cd, "Setting PBKDF2 type key digest %d.", digest);
r = crypt_random_get(cd, salt, LUKS_SALTSIZE, CRYPT_RND_SALT);
if (r < 0)
@@ -124,8 +131,8 @@ static int PBKDF2_digest_store(struct crypt_device *cd,
}
hmac_size = crypt_hmac_size(pbkdf.hash);
if (hmac_size < 0)
return hmac_size;
if (hmac_size < 0 || hmac_size > (int)sizeof(digest_raw))
return -EINVAL;
r = crypt_pbkdf(CRYPT_KDF_PBKDF2, pbkdf.hash, volume_key, volume_key_len,
salt, LUKS_SALTSIZE, digest_raw, hmac_size,
@@ -163,12 +170,10 @@ static int PBKDF2_digest_store(struct crypt_device *cd,
json_object_object_add(jobj_digest, "digest", json_object_new_string(base64_str));
free(base64_str);
if (jobj_digests) {
snprintf(num, sizeof(num), "%d", digest);
json_object_object_add(jobj_digests, num, jobj_digest);
}
if (jobj_digests)
json_object_object_add_by_uint(jobj_digests, digest, jobj_digest);
JSON_DBG(jobj_digest, "Digest JSON");
JSON_DBG(cd, jobj_digest, "Digest JSON:");
return 0;
}


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -26,7 +26,8 @@
/*
* Helper functions
*/
json_object *parse_json_len(const char *json_area, uint64_t max_length, int *json_len)
json_object *parse_json_len(struct crypt_device *cd, const char *json_area,
uint64_t max_length, int *json_len)
{
json_object *jobj;
struct json_tokener *jtok;
@@ -37,13 +38,13 @@ json_object *parse_json_len(const char *json_area, uint64_t max_length, int *jso
jtok = json_tokener_new();
if (!jtok) {
log_dbg("ERROR: Failed to init json tokener");
log_dbg(cd, "ERROR: Failed to init json tokener");
return NULL;
}
jobj = json_tokener_parse_ex(jtok, json_area, max_length);
if (!jobj)
log_dbg("ERROR: Failed to parse json data (%d): %s",
log_dbg(cd, "ERROR: Failed to parse json data (%d): %s",
json_tokener_get_error(jtok),
json_tokener_error_desc(json_tokener_get_error(jtok)));
else
@@ -54,7 +55,8 @@ json_object *parse_json_len(const char *json_area, uint64_t max_length, int *jso
return jobj;
}
static void log_dbg_checksum(const uint8_t *csum, const char *csum_alg, const char *info)
static void log_dbg_checksum(struct crypt_device *cd,
const uint8_t *csum, const char *csum_alg, const char *info)
{
char csum_txt[2*LUKS2_CHECKSUM_L+1];
int i;
@@ -63,7 +65,7 @@ static void log_dbg_checksum(const uint8_t *csum, const char *csum_alg, const ch
snprintf(&csum_txt[i*2], 3, "%02hhx", (const char)csum[i]);
csum_txt[i*2+1] = '\0'; /* Just to be safe, sprintf should write \0 there. */
log_dbg("Checksum:%s (%s)", &csum_txt[0], info);
log_dbg(cd, "Checksum:%s (%s)", &csum_txt[0], info);
}
/*
@@ -98,7 +100,8 @@ static int hdr_checksum_calculate(const char *alg, struct luks2_hdr_disk *hdr_di
/*
* Compare hash (checksum) of on-disk and in-memory header.
*/
static int hdr_checksum_check(const char *alg, struct luks2_hdr_disk *hdr_disk,
static int hdr_checksum_check(struct crypt_device *cd,
const char *alg, struct luks2_hdr_disk *hdr_disk,
const char *json_area, size_t json_len)
{
struct luks2_hdr_disk hdr_tmp;
@@ -116,8 +119,8 @@ static int hdr_checksum_check(const char *alg, struct luks2_hdr_disk *hdr_disk,
if (r < 0)
return r;
log_dbg_checksum(hdr_disk->csum, alg, "on-disk");
log_dbg_checksum(hdr_tmp.csum, alg, "in-memory");
log_dbg_checksum(cd, hdr_disk->csum, alg, "on-disk");
log_dbg_checksum(cd, hdr_tmp.csum, alg, "in-memory");
if (memcmp(hdr_tmp.csum, hdr_disk->csum, (size_t)hash_size))
return -EINVAL;
@@ -187,7 +190,8 @@ static void hdr_to_disk(struct luks2_hdr *hdr,
/*
* Sanity checks before checksum is validated
*/
static int hdr_disk_sanity_check_pre(struct luks2_hdr_disk *hdr,
static int hdr_disk_sanity_check_pre(struct crypt_device *cd,
struct luks2_hdr_disk *hdr,
size_t *hdr_json_size, int secondary,
uint64_t offset)
{
@@ -195,25 +199,25 @@ static int hdr_disk_sanity_check_pre(struct luks2_hdr_disk *hdr,
return -EINVAL;
if (be16_to_cpu(hdr->version) != 2) {
log_dbg("Unsupported LUKS2 header version %u.", be16_to_cpu(hdr->version));
log_dbg(cd, "Unsupported LUKS2 header version %u.", be16_to_cpu(hdr->version));
return -EINVAL;
}
if (offset != be64_to_cpu(hdr->hdr_offset)) {
log_dbg("LUKS2 offset 0x%04x on device differs to expected offset 0x%04x.",
log_dbg(cd, "LUKS2 offset 0x%04x on device differs to expected offset 0x%04x.",
(unsigned)be64_to_cpu(hdr->hdr_offset), (unsigned)offset);
return -EINVAL;
}
if (secondary && (offset != be64_to_cpu(hdr->hdr_size))) {
log_dbg("LUKS2 offset 0x%04x in secondary header doesn't match size 0x%04x.",
log_dbg(cd, "LUKS2 offset 0x%04x in secondary header doesn't match size 0x%04x.",
(unsigned)offset, (unsigned)be64_to_cpu(hdr->hdr_size));
return -EINVAL;
}
/* FIXME: sanity check checksum alg. */
log_dbg("LUKS2 header version %u of size %u bytes, checksum %s.",
log_dbg(cd, "LUKS2 header version %u of size %u bytes, checksum %s.",
(unsigned)be16_to_cpu(hdr->version), (unsigned)be64_to_cpu(hdr->hdr_size),
hdr->checksum_alg);
@@ -224,16 +228,17 @@ static int hdr_disk_sanity_check_pre(struct luks2_hdr_disk *hdr,
/*
* Read LUKS2 header from disk at specific offset.
*/
static int hdr_read_disk(struct device *device, struct luks2_hdr_disk *hdr_disk,
static int hdr_read_disk(struct crypt_device *cd,
struct device *device, struct luks2_hdr_disk *hdr_disk,
char **json_area, uint64_t offset, int secondary)
{
size_t hdr_json_size = 0;
int devfd = -1, r;
int devfd, r;
log_dbg("Trying to read %s LUKS2 header at offset 0x%" PRIx64 ".",
log_dbg(cd, "Trying to read %s LUKS2 header at offset 0x%" PRIx64 ".",
secondary ? "secondary" : "primary", offset);
devfd = device_open_locked(device, O_RDONLY);
devfd = device_open_locked(cd, device, O_RDONLY);
if (devfd < 0)
return devfd == -1 ? -EIO : devfd;
@@ -241,16 +246,14 @@ static int hdr_read_disk(struct device *device, struct luks2_hdr_disk *hdr_disk,
* Read binary header and run sanity check before reading
* JSON area and validating checksum.
*/
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr_disk,
LUKS2_HDR_BIN_LEN, offset) != LUKS2_HDR_BIN_LEN) {
close(devfd);
return -EIO;
}
r = hdr_disk_sanity_check_pre(hdr_disk, &hdr_json_size, secondary, offset);
r = hdr_disk_sanity_check_pre(cd, hdr_disk, &hdr_json_size, secondary, offset);
if (r < 0) {
close(devfd);
return r;
}
@@ -259,27 +262,23 @@ static int hdr_read_disk(struct device *device, struct luks2_hdr_disk *hdr_disk,
*/
*json_area = malloc(hdr_json_size);
if (!*json_area) {
close(devfd);
return -ENOMEM;
}
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), *json_area, hdr_json_size,
offset + LUKS2_HDR_BIN_LEN) != (ssize_t)hdr_json_size) {
close(devfd);
free(*json_area);
*json_area = NULL;
return -EIO;
}
close(devfd);
/*
* Calculate and validate checksum and zero it afterwards.
*/
if (hdr_checksum_check(hdr_disk->checksum_alg, hdr_disk,
if (hdr_checksum_check(cd, hdr_disk->checksum_alg, hdr_disk,
*json_area, hdr_json_size)) {
log_dbg("LUKS2 header checksum error (offset %" PRIu64 ").", offset);
log_dbg(cd, "LUKS2 header checksum error (offset %" PRIu64 ").", offset);
r = -EINVAL;
}
memset(hdr_disk->csum, 0, LUKS2_CHECKSUM_L);
@@ -290,20 +289,21 @@ static int hdr_read_disk(struct device *device, struct luks2_hdr_disk *hdr_disk,
/*
* Write LUKS2 header to disk at specific offset.
*/
static int hdr_write_disk(struct device *device, struct luks2_hdr *hdr,
const char *json_area, int secondary)
static int hdr_write_disk(struct crypt_device *cd,
struct device *device, struct luks2_hdr *hdr,
const char *json_area, int secondary)
{
struct luks2_hdr_disk hdr_disk;
uint64_t offset = secondary ? hdr->hdr_size : 0;
size_t hdr_json_len;
int devfd = -1, r;
int devfd, r;
log_dbg("Trying to write LUKS2 header (%zu bytes) at offset %" PRIu64 ".",
log_dbg(cd, "Trying to write LUKS2 header (%zu bytes) at offset %" PRIu64 ".",
hdr->hdr_size, offset);
/* FIXME: read-only device silent fail? */
devfd = device_open_locked(device, O_RDWR);
devfd = device_open_locked(cd, device, O_RDWR);
if (devfd < 0)
return devfd == -1 ? -EINVAL : devfd;
@@ -314,21 +314,19 @@ static int hdr_write_disk(struct device *device, struct luks2_hdr *hdr,
/*
* Write header without checksum but with proper seqid.
*/
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), (char *)&hdr_disk,
LUKS2_HDR_BIN_LEN, offset) < (ssize_t)LUKS2_HDR_BIN_LEN) {
close(devfd);
return -EIO;
}
/*
* Write json area.
*/
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device),
CONST_CAST(char*)json_area, hdr_json_len,
LUKS2_HDR_BIN_LEN + offset) < (ssize_t)hdr_json_len) {
close(devfd);
return -EIO;
}
@@ -338,42 +336,62 @@ static int hdr_write_disk(struct device *device, struct luks2_hdr *hdr,
r = hdr_checksum_calculate(hdr_disk.checksum_alg, &hdr_disk,
json_area, hdr_json_len);
if (r < 0) {
close(devfd);
return r;
}
log_dbg_checksum(hdr_disk.csum, hdr_disk.checksum_alg, "in-memory");
log_dbg_checksum(cd, hdr_disk.csum, hdr_disk.checksum_alg, "in-memory");
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), (char *)&hdr_disk,
LUKS2_HDR_BIN_LEN, offset) < (ssize_t)LUKS2_HDR_BIN_LEN)
r = -EIO;
device_sync(device, devfd);
close(devfd);
device_sync(cd, device);
return r;
}
static int LUKS2_check_device_size(struct crypt_device *cd, struct device *device,
uint64_t hdr_size, int falloc)
static int LUKS2_check_sequence_id(struct crypt_device *cd, struct luks2_hdr *hdr, struct device *device)
{
uint64_t dev_size;
int devfd;
struct luks2_hdr_disk dhdr;
if (device_size(device, &dev_size)) {
log_dbg("Cannot get device size for device %s.", device_path(device));
if (!hdr)
return -EINVAL;
devfd = device_open_locked(cd, device, O_RDONLY);
if (devfd < 0)
return devfd == -1 ? -EINVAL : devfd;
/* we need only first 512 bytes, see luks2_hdr_disk structure */
if ((read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), &dhdr, 512, 0) != 512))
return -EIO;
/* there's nothing to check if there's no LUKS2 header */
if ((be16_to_cpu(dhdr.version) != 2) ||
memcmp(dhdr.magic, LUKS2_MAGIC_1ST, LUKS2_MAGIC_L) ||
strcmp(dhdr.uuid, hdr->uuid))
return 0;
return hdr->seqid != be64_to_cpu(dhdr.seqid);
}
int LUKS2_device_write_lock(struct crypt_device *cd, struct luks2_hdr *hdr, struct device *device)
{
int r = device_write_lock(cd, device);
if (r < 0) {
log_err(cd, _("Failed to acquire write lock on device %s."), device_path(device));
return r;
}
log_dbg("Device size %" PRIu64 ", header size %"
PRIu64 ".", dev_size, hdr_size);
if (hdr_size > dev_size) {
/* If it is header file, increase its size */
if (falloc && !device_fallocate(device, hdr_size))
return 0;
log_err(cd, _("Device %s is too small. (LUKS2 requires at least %" PRIu64 " bytes.)"),
device_path(device), hdr_size);
return -EINVAL;
/* run sequence id check only on first write lock (r == 1) and w/o LUKS2 reencryption in-progress */
if (r == 1 && !crypt_get_reenc_context(cd)) {
log_dbg(cd, "Checking context sequence id matches value stored on disk.");
if (LUKS2_check_sequence_id(cd, hdr, device)) {
device_write_unlock(cd, device);
log_err(cd, _("Detected attempt for concurrent LUKS2 metadata update. Aborting operation."));
return -EINVAL;
}
}
return 0;
@@ -383,7 +401,7 @@ static int LUKS2_check_device_size(struct crypt_device *cd, struct device *devic
* Convert in-memory LUKS2 header and write it to disk.
* This will increase sequence id, write both header copies and calculate checksum.
*/
int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct device *device)
int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct device *device, bool seqid_check)
{
char *json_area;
const char *json_text;
@@ -391,11 +409,11 @@ int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct
int r;
if (hdr->version != 2) {
log_dbg("Unsupported LUKS2 header version (%u).", hdr->version);
log_dbg(cd, "Unsupported LUKS2 header version (%u).", hdr->version);
return -EINVAL;
}
r = LUKS2_check_device_size(cd, crypt_metadata_device(cd), LUKS2_hdr_and_areas_size(hdr->jobj), 1);
r = device_check_size(cd, crypt_metadata_device(cd), LUKS2_hdr_and_areas_size(hdr->jobj), 1);
if (r)
return r;
@@ -414,55 +432,55 @@ int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct
json_text = json_object_to_json_string_ext(hdr->jobj,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE);
if (!json_text || !*json_text) {
log_dbg("Cannot parse JSON object to text representation.");
log_dbg(cd, "Cannot parse JSON object to text representation.");
free(json_area);
return -ENOMEM;
}
if (strlen(json_text) > (json_area_len - 1)) {
log_dbg("JSON is too large (%zu > %zu).", strlen(json_text), json_area_len);
log_dbg(cd, "JSON is too large (%zu > %zu).", strlen(json_text), json_area_len);
free(json_area);
return -EINVAL;
}
strncpy(json_area, json_text, json_area_len);
/* Increase sequence id before writing it to disk. */
hdr->seqid++;
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write device lock."));
if (seqid_check)
r = LUKS2_device_write_lock(cd, hdr, device);
else
r = device_write_lock(cd, device);
if (r < 0) {
free(json_area);
return r;
}
/* Increase sequence id before writing it to disk. */
hdr->seqid++;
/* Write primary and secondary header */
r = hdr_write_disk(device, hdr, json_area, 0);
r = hdr_write_disk(cd, device, hdr, json_area, 0);
if (!r)
r = hdr_write_disk(device, hdr, json_area, 1);
r = hdr_write_disk(cd, device, hdr, json_area, 1);
if (r)
log_dbg("LUKS2 header write failed (%d).", r);
log_dbg(cd, "LUKS2 header write failed (%d).", r);
device_write_unlock(device);
/* FIXME: try recovery here? */
device_write_unlock(cd, device);
free(json_area);
return r;
}
static int validate_json_area(const char *json_area, uint64_t json_len, uint64_t max_length)
static int validate_json_area(struct crypt_device *cd, const char *json_area,
uint64_t json_len, uint64_t max_length)
{
char c;
/* Enforce there are no needless opening bytes */
if (*json_area != '{') {
log_dbg("ERROR: Opening character must be left curly bracket: '{'.");
log_dbg(cd, "ERROR: Opening character must be left curly bracket: '{'.");
return -EINVAL;
}
if (json_len >= max_length) {
log_dbg("ERROR: Missing trailing null byte beyond parsed json data string.");
log_dbg(cd, "ERROR: Missing trailing null byte beyond parsed json data string.");
return -EINVAL;
}
@@ -475,7 +493,7 @@ static int validate_json_area(const char *json_area, uint64_t json_len, uint64_t
do {
c = *(json_area + json_len);
if (c != '\0') {
log_dbg("ERROR: Forbidden ascii code 0x%02hhx found beyond json data string at offset %" PRIu64,
log_dbg(cd, "ERROR: Forbidden ascii code 0x%02hhx found beyond json data string at offset %" PRIu64,
c, json_len);
return -EINVAL;
}
@@ -484,37 +502,38 @@ static int validate_json_area(const char *json_area, uint64_t json_len, uint64_t
return 0;
}
static int validate_luks2_json_object(json_object *jobj_hdr, uint64_t length)
static int validate_luks2_json_object(struct crypt_device *cd, json_object *jobj_hdr, uint64_t length)
{
int r;
/* we require top level object to be of json_type_object */
r = !json_object_is_type(jobj_hdr, json_type_object);
if (r) {
log_dbg("ERROR: Resulting object is not a json object type");
log_dbg(cd, "ERROR: Resulting object is not a json object type");
return r;
}
r = LUKS2_hdr_validate(jobj_hdr, length);
r = LUKS2_hdr_validate(cd, jobj_hdr, length);
if (r) {
log_dbg("Repairing JSON metadata.");
log_dbg(cd, "Repairing JSON metadata.");
/* try to correct known glitches */
LUKS2_hdr_repair(jobj_hdr);
LUKS2_hdr_repair(cd, jobj_hdr);
/* run validation again */
r = LUKS2_hdr_validate(jobj_hdr, length);
r = LUKS2_hdr_validate(cd, jobj_hdr, length);
}
if (r)
log_dbg("ERROR: LUKS2 validation failed");
log_dbg(cd, "ERROR: LUKS2 validation failed");
return r;
}
static json_object *parse_and_validate_json(const char *json_area, uint64_t max_length)
static json_object *parse_and_validate_json(struct crypt_device *cd,
const char *json_area, uint64_t max_length)
{
int json_len, r;
json_object *jobj = parse_json_len(json_area, max_length, &json_len);
json_object *jobj = parse_json_len(cd, json_area, max_length, &json_len);
if (!jobj)
return NULL;
@@ -522,9 +541,9 @@ static json_object *parse_and_validate_json(const char *json_area, uint64_t max_
/* successful parse_json_len must not return offset <= 0 */
assert(json_len > 0);
r = validate_json_area(json_area, json_len, max_length);
r = validate_json_area(cd, json_area, json_len, max_length);
if (!r)
r = validate_luks2_json_object(jobj, max_length);
r = validate_luks2_json_object(cd, jobj, max_length);
if (r) {
json_object_put(jobj);
@@ -534,19 +553,19 @@ static json_object *parse_and_validate_json(const char *json_area, uint64_t max_
return jobj;
}
static int detect_device_signatures(const char *path)
static int detect_device_signatures(struct crypt_device *cd, const char *path)
{
blk_probe_status prb_state;
int r;
struct blkid_handle *h;
if (!blk_supported()) {
log_dbg("Blkid probing of device signatures disabled.");
log_dbg(cd, "Blkid probing of device signatures disabled.");
return 0;
}
if ((r = blk_init_by_path(&h, path))) {
log_dbg("Failed to initialize blkid_handle by path.");
log_dbg(cd, "Failed to initialize blkid_handle by path.");
return -EINVAL;
}
@@ -560,22 +579,22 @@ static int detect_device_signatures(const char *path)
switch (prb_state) {
case PRB_AMBIGUOUS:
log_dbg("Blkid probe couldn't decide device type unambiguously.");
log_dbg(cd, "Blkid probe couldn't decide device type unambiguously.");
/* fall through */
case PRB_FAIL:
log_dbg("Blkid probe failed.");
log_dbg(cd, "Blkid probe failed.");
r = -EINVAL;
break;
case PRB_OK: /* crypto_LUKS type is filtered out */
r = -EINVAL;
if (blk_is_partition(h))
log_dbg("Blkid probe detected partition type '%s'", blk_get_partition_type(h));
log_dbg(cd, "Blkid probe detected partition type '%s'", blk_get_partition_type(h));
else if (blk_is_superblock(h))
log_dbg("blkid probe detected superblock type '%s'", blk_get_superblock_type(h));
log_dbg(cd, "blkid probe detected superblock type '%s'", blk_get_superblock_type(h));
break;
case PRB_EMPTY:
log_dbg("Blkid probe detected no foreign device signature.");
log_dbg(cd, "Blkid probe detected no foreign device signature.");
}
blk_free(h);
return r;
@@ -600,16 +619,16 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
/* Skip auto-recovery if locks are disabled and we're not doing LUKS2 explicit repair */
if (do_recovery && do_blkprobe && !crypt_metadata_locking_enabled()) {
do_recovery = 0;
log_dbg("Disabling header auto-recovery due to locking being disabled.");
log_dbg(cd, "Disabling header auto-recovery due to locking being disabled.");
}
/*
* Read primary LUKS2 header (offset 0).
*/
state_hdr1 = HDR_FAIL;
r = hdr_read_disk(device, &hdr_disk1, &json_area1, 0, 0);
r = hdr_read_disk(cd, device, &hdr_disk1, &json_area1, 0, 0);
if (r == 0) {
jobj_hdr1 = parse_and_validate_json(json_area1, be64_to_cpu(hdr_disk1.hdr_size) - LUKS2_HDR_BIN_LEN);
jobj_hdr1 = parse_and_validate_json(cd, json_area1, be64_to_cpu(hdr_disk1.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr1 = jobj_hdr1 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr1 = HDR_FAIL_IO;
@@ -619,9 +638,9 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
*/
state_hdr2 = HDR_FAIL;
if (state_hdr1 != HDR_FAIL && state_hdr1 != HDR_FAIL_IO) {
-r = hdr_read_disk(device, &hdr_disk2, &json_area2, be64_to_cpu(hdr_disk1.hdr_size), 1);
+r = hdr_read_disk(cd, device, &hdr_disk2, &json_area2, be64_to_cpu(hdr_disk1.hdr_size), 1);
if (r == 0) {
-jobj_hdr2 = parse_and_validate_json(json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
+jobj_hdr2 = parse_and_validate_json(cd, json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr2 = jobj_hdr2 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr2 = HDR_FAIL_IO;
@@ -630,10 +649,10 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
* No header size, check all known offsets.
*/
for (r = -EINVAL,i = 0; r < 0 && i < ARRAY_SIZE(hdr2_offsets); i++)
-r = hdr_read_disk(device, &hdr_disk2, &json_area2, hdr2_offsets[i], 1);
+r = hdr_read_disk(cd, device, &hdr_disk2, &json_area2, hdr2_offsets[i], 1);
if (r == 0) {
-jobj_hdr2 = parse_and_validate_json(json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
+jobj_hdr2 = parse_and_validate_json(cd, json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr2 = jobj_hdr2 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr2 = HDR_FAIL_IO;
@@ -659,7 +678,7 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
goto err;
}
-r = LUKS2_check_device_size(cd, device, hdr_size, 0);
+r = device_check_size(cd, device, hdr_size, 0);
if (r)
goto err;
@@ -667,9 +686,9 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
* Try to rewrite (recover) bad header. Always regenerate salt for bad header.
*/
if (state_hdr1 == HDR_OK && state_hdr2 != HDR_OK) {
-log_dbg("Secondary LUKS2 header requires recovery.");
+log_dbg(cd, "Secondary LUKS2 header requires recovery.");
-if (do_blkprobe && (r = detect_device_signatures(device_path(device)))) {
+if (do_blkprobe && (r = detect_device_signatures(cd, device_path(device)))) {
log_err(cd, _("Device contains ambiguous signatures, cannot auto-recover LUKS2.\n"
"Please run \"cryptsetup repair\" for recovery."));
goto err;
@@ -677,20 +696,20 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
if (do_recovery) {
memcpy(&hdr_disk2, &hdr_disk1, LUKS2_HDR_BIN_LEN);
-r = crypt_random_get(NULL, (char*)hdr_disk2.salt, sizeof(hdr_disk2.salt), CRYPT_RND_SALT);
+r = crypt_random_get(cd, (char*)hdr_disk2.salt, sizeof(hdr_disk2.salt), CRYPT_RND_SALT);
if (r)
-log_dbg("Cannot generate master salt.");
+log_dbg(cd, "Cannot generate master salt.");
else {
hdr_from_disk(&hdr_disk1, &hdr_disk2, hdr, 0);
-r = hdr_write_disk(device, hdr, json_area1, 1);
+r = hdr_write_disk(cd, device, hdr, json_area1, 1);
}
if (r)
-log_dbg("Secondary LUKS2 header recovery failed.");
+log_dbg(cd, "Secondary LUKS2 header recovery failed.");
}
} else if (state_hdr1 != HDR_OK && state_hdr2 == HDR_OK) {
-log_dbg("Primary LUKS2 header requires recovery.");
+log_dbg(cd, "Primary LUKS2 header requires recovery.");
-if (do_blkprobe && (r = detect_device_signatures(device_path(device)))) {
+if (do_blkprobe && (r = detect_device_signatures(cd, device_path(device)))) {
log_err(cd, _("Device contains ambiguous signatures, cannot auto-recover LUKS2.\n"
"Please run \"cryptsetup repair\" for recovery."));
goto err;
@@ -698,15 +717,15 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
if (do_recovery) {
memcpy(&hdr_disk1, &hdr_disk2, LUKS2_HDR_BIN_LEN);
-r = crypt_random_get(NULL, (char*)hdr_disk1.salt, sizeof(hdr_disk1.salt), CRYPT_RND_SALT);
+r = crypt_random_get(cd, (char*)hdr_disk1.salt, sizeof(hdr_disk1.salt), CRYPT_RND_SALT);
if (r)
-log_dbg("Cannot generate master salt.");
+log_dbg(cd, "Cannot generate master salt.");
else {
hdr_from_disk(&hdr_disk2, &hdr_disk1, hdr, 1);
-r = hdr_write_disk(device, hdr, json_area2, 0);
+r = hdr_write_disk(cd, device, hdr, json_area2, 0);
}
if (r)
-log_dbg("Primary LUKS2 header recovery failed.");
+log_dbg(cd, "Primary LUKS2 header recovery failed.");
}
}
@@ -738,7 +757,7 @@ int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
*/
return 0;
err:
-log_dbg("LUKS2 header read failed (%d).", r);
+log_dbg(cd, "LUKS2 header read failed (%d).", r);
free(json_area1);
free(json_area2);
@@ -759,7 +778,7 @@ int LUKS2_hdr_version_unlocked(struct crypt_device *cd, const char *backup_file)
if (!backup_file)
device = crypt_metadata_device(cd);
-else if (device_alloc(&device, backup_file) < 0)
+else if (device_alloc(cd, &device, backup_file) < 0)
return 0;
if (!device)
@@ -773,7 +792,7 @@ int LUKS2_hdr_version_unlocked(struct crypt_device *cd, const char *backup_file)
if (devfd < 0)
goto err;
-if ((read_lseek_blockwise(devfd, device_block_size(device),
+if ((read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), &hdr, sizeof(hdr), 0) == sizeof(hdr)) &&
!memcmp(hdr.magic, LUKS2_MAGIC_1ST, LUKS2_MAGIC_L))
r = (int)be16_to_cpu(hdr.version);
@@ -782,7 +801,7 @@ err:
close(devfd);
if (backup_file)
-device_free(device);
+device_free(cd, device);
return r;
}


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2
*
- * Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
- * Copyright (C) 2015-2018, Milan Broz. All rights reserved.
+ * Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -44,7 +44,7 @@
int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
struct device *device, int do_recovery, int do_blkprobe);
int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr,
-struct device *device);
+struct device *device, bool seqid_check);
/*
* JSON struct access helpers
@@ -54,36 +54,45 @@ json_object *LUKS2_get_token_jobj(struct luks2_hdr *hdr, int token);
json_object *LUKS2_get_digest_jobj(struct luks2_hdr *hdr, int digest);
json_object *LUKS2_get_segment_jobj(struct luks2_hdr *hdr, int segment);
json_object *LUKS2_get_tokens_jobj(struct luks2_hdr *hdr);
json_object *LUKS2_get_segments_jobj(struct luks2_hdr *hdr);
void hexprint_base64(struct crypt_device *cd, json_object *jobj,
const char *sep, const char *line_sep);
-json_object *parse_json_len(const char *json_area, uint64_t max_length, int *json_len);
+json_object *parse_json_len(struct crypt_device *cd, const char *json_area,
+uint64_t max_length, int *json_len);
uint64_t json_object_get_uint64(json_object *jobj);
uint32_t json_object_get_uint32(json_object *jobj);
json_object *json_object_new_uint64(uint64_t value);
-void JSON_DBG(json_object *jobj, const char *desc);
+int json_object_object_add_by_uint(json_object *jobj, unsigned key, json_object *jobj_val);
+void json_object_object_del_by_uint(json_object *jobj, unsigned key);
+int json_object_copy(json_object *jobj_src, json_object **jobj_dst);
+void JSON_DBG(struct crypt_device *cd, json_object *jobj, const char *desc);
/*
* LUKS2 JSON validation
*/
/* validation helper */
json_object *json_contains(json_object *jobj, const char *name, const char *section,
const char *key, json_type type);
json_bool validate_json_uint32(json_object *jobj);
json_object *json_contains(struct crypt_device *cd, json_object *jobj, const char *name,
const char *section, const char *key, json_type type);
-int LUKS2_hdr_validate(json_object *hdr_jobj, uint64_t json_size);
-int LUKS2_keyslot_validate(json_object *hdr_jobj, json_object *hdr_keyslot, const char *key);
-int LUKS2_check_json_size(const struct luks2_hdr *hdr);
-int LUKS2_token_validate(json_object *hdr_jobj, json_object *jobj_token, const char *key);
+int LUKS2_hdr_validate(struct crypt_device *cd, json_object *hdr_jobj, uint64_t json_size);
+int LUKS2_keyslot_validate(struct crypt_device *cd, json_object *hdr_jobj,
+json_object *hdr_keyslot, const char *key);
+int LUKS2_check_json_size(struct crypt_device *cd, const struct luks2_hdr *hdr);
+int LUKS2_token_validate(struct crypt_device *cd, json_object *hdr_jobj,
+json_object *jobj_token, const char *key);
void LUKS2_token_dump(struct crypt_device *cd, int token);
/*
* LUKS2 JSON repair for known glitches
*/
-void LUKS2_hdr_repair(json_object *jobj_hdr);
-void LUKS2_keyslots_repair(json_object *jobj_hdr);
+void LUKS2_hdr_repair(struct crypt_device *cd, json_object *jobj_hdr);
+void LUKS2_keyslots_repair(struct crypt_device *cd, json_object *jobj_hdr);
/*
* JSON array helpers
@@ -122,7 +131,7 @@ int placeholder_keyslot_alloc(struct crypt_device *cd,
size_t volume_key_len);
/* validate all keyslot implementations in hdr json */
-int LUKS2_keyslots_validate(json_object *hdr_jobj);
+int LUKS2_keyslots_validate(struct crypt_device *cd, json_object *hdr_jobj);
typedef struct {
const char *name;
@@ -136,6 +145,12 @@ typedef struct {
keyslot_repair_func repair;
} keyslot_handler;
/* cannot fit prototype alloc function */
int reenc_keyslot_alloc(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const struct crypt_params_reencrypt *params);
/**
* LUKS2 digest handlers (EXPERIMENTAL)
*/
@@ -173,5 +188,11 @@ int token_keyring_get(json_object *, void *);
int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
size_t keylength, uint64_t *area_offset, uint64_t *area_length);
int LUKS2_find_area_max_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
uint64_t *area_offset, uint64_t *area_length);
int LUKS2_check_cipher(struct crypt_device *cd,
size_t keylength,
const char *cipher,
const char *cipher_mode);
#endif


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS2 header format code
*
- * Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
- * Copyright (C) 2015-2018, Milan Broz. All rights reserved.
+ * Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -21,6 +21,7 @@
#include "luks2_internal.h"
#include <uuid/uuid.h>
#include <assert.h>
struct area {
uint64_t offset;
@@ -38,9 +39,83 @@ static size_t get_min_offset(struct luks2_hdr *hdr)
return 2 * hdr->hdr_size;
}
-static size_t get_max_offset(struct crypt_device *cd)
+static size_t get_max_offset(struct luks2_hdr *hdr)
{
-return crypt_get_data_offset(cd) * SECTOR_SIZE;
+return LUKS2_hdr_and_areas_size(hdr->jobj);
}
int LUKS2_find_area_max_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
uint64_t *area_offset, uint64_t *area_length)
{
struct area areas[LUKS2_KEYSLOTS_MAX], sorted_areas[LUKS2_KEYSLOTS_MAX+1] = {};
int i, j, k, area_i;
size_t valid_offset, offset, length;
/* fill area offset + length table */
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
if (!LUKS2_keyslot_area(hdr, i, &areas[i].offset, &areas[i].length))
continue;
areas[i].length = 0;
areas[i].offset = 0;
}
/* sort table */
k = 0; /* index in sorted table */
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
offset = get_max_offset(hdr) ?: UINT64_MAX;
area_i = -1;
/* search for the smallest offset in table */
for (j = 0; j < LUKS2_KEYSLOTS_MAX; j++)
if (areas[j].offset && areas[j].offset <= offset) {
area_i = j;
offset = areas[j].offset;
}
if (area_i >= 0) {
sorted_areas[k].length = areas[area_i].length;
sorted_areas[k].offset = areas[area_i].offset;
areas[area_i].length = 0;
areas[area_i].offset = 0;
k++;
}
}
sorted_areas[LUKS2_KEYSLOTS_MAX].offset = get_max_offset(hdr);
sorted_areas[LUKS2_KEYSLOTS_MAX].length = 1;
/* search for the gap we can use */
length = valid_offset = 0;
offset = get_min_offset(hdr);
for (i = 0; i < LUKS2_KEYSLOTS_MAX+1; i++) {
/* skip empty */
if (sorted_areas[i].offset == 0 || sorted_areas[i].length == 0)
continue;
/* found bigger gap than the last one */
if ((offset < sorted_areas[i].offset) && (sorted_areas[i].offset - offset) > length) {
length = sorted_areas[i].offset - offset;
valid_offset = offset;
}
/* move beyond allocated area */
offset = sorted_areas[i].offset + sorted_areas[i].length;
}
/* this search 'algorithm' does not work with unaligned areas */
assert(length == size_round_up(length, 4096));
assert(valid_offset == size_round_up(valid_offset, 4096));
if (!length) {
log_dbg(cd, "Not enough space in header keyslot area.");
return -EINVAL;
}
log_dbg(cd, "Found largest free area %zu -> %zu", valid_offset, length + valid_offset);
*area_offset = valid_offset;
*area_length = length;
return 0;
}
int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
@@ -61,7 +136,7 @@ int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
/* sort table */
k = 0; /* index in sorted table */
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
-offset = get_max_offset(cd) ?: UINT64_MAX;
+offset = get_max_offset(hdr) ?: UINT64_MAX;
area_i = -1;
/* search for the smallest offset in table */
for (j = 0; j < LUKS2_KEYSLOTS_MAX; j++)
@@ -95,20 +170,13 @@ int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
offset = sorted_areas[i].offset + sorted_areas[i].length;
}
-if (get_max_offset(cd) && (offset + length) > get_max_offset(cd)) {
-log_err(cd, _("No space for new keyslot."));
+if ((offset + length) > get_max_offset(hdr)) {
+log_dbg(cd, "Not enough space in header keyslot area.");
return -EINVAL;
}
-log_dbg("Found area %zu -> %zu", offset, length + offset);
-/*
-log_dbg("Area offset min: %zu, max %zu, slots max %u",
-get_min_offset(hdr), get_max_offset(cd), LUKS2_KEYSLOTS_MAX);
-for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++)
-log_dbg("SLOT[%02i]: %-8" PRIu64 " -> %-8" PRIu64, i,
-sorted_areas[i].offset,
-sorted_areas[i].length + sorted_areas[i].offset);
-*/
+log_dbg(cd, "Found area %zu -> %zu", offset, length + offset);
*area_offset = offset;
*area_length = length;
return 0;
@@ -139,23 +207,71 @@ int LUKS2_generate_hdr(
const char *integrity,
const char *uuid,
unsigned int sector_size, /* in bytes */
-unsigned int alignPayload, /* in bytes */
-unsigned int alignOffset, /* in bytes */
-int detached_metadata_device)
+uint64_t data_offset, /* in bytes */
+uint64_t align_offset, /* in bytes */
+uint64_t required_alignment,
+uint64_t metadata_size,
+uint64_t keyslots_size)
{
struct json_object *jobj_segment, *jobj_integrity, *jobj_keyslots, *jobj_segments, *jobj_config;
-char num[24], cipher[128];
-uint64_t offset, json_size, keyslots_size;
+char cipher[128];
uuid_t partitionUuid;
int digest;
hdr->hdr_size = LUKS2_HDR_16K_LEN;
if (!metadata_size)
metadata_size = LUKS2_HDR_16K_LEN;
hdr->hdr_size = metadata_size;
if (data_offset && data_offset < get_min_offset(hdr)) {
log_err(cd, _("Requested data offset is too small."));
return -EINVAL;
}
/* Increase keyslot size according to data offset */
if (!keyslots_size && data_offset)
keyslots_size = data_offset - get_min_offset(hdr);
/* keyslots size has to be 4 KiB aligned */
keyslots_size -= (keyslots_size % 4096);
if (keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE)
keyslots_size = LUKS2_MAX_KEYSLOTS_SIZE;
if (!keyslots_size) {
assert(LUKS2_DEFAULT_HDR_SIZE > 2 * LUKS2_HDR_OFFSET_MAX);
keyslots_size = LUKS2_DEFAULT_HDR_SIZE - get_min_offset(hdr);
}
/* Decrease keyslots_size if we have smaller data_offset */
if (data_offset && (keyslots_size + get_min_offset(hdr)) > data_offset) {
keyslots_size = data_offset - get_min_offset(hdr);
log_dbg(cd, "Decreasing keyslot area size to %" PRIu64
" bytes due to the requested data offset %"
PRIu64 " bytes.", keyslots_size, data_offset);
}
/* Data offset has priority */
if (!data_offset && required_alignment) {
data_offset = size_round_up(get_min_offset(hdr) + keyslots_size,
(size_t)required_alignment);
data_offset += align_offset;
}
log_dbg(cd, "Formatting LUKS2 with JSON metadata area %" PRIu64
" bytes and keyslots area %" PRIu64 " bytes.",
metadata_size - LUKS2_HDR_BIN_LEN, keyslots_size);
if (keyslots_size < (LUKS2_HDR_OFFSET_MAX - 2*LUKS2_HDR_16K_LEN))
log_std(cd, _("WARNING: keyslots area (%" PRIu64 " bytes) is very small,"
" available LUKS2 keyslot count is very limited.\n"),
keyslots_size);
hdr->seqid = 1;
hdr->version = 2;
memset(hdr->label, 0, LUKS2_LABEL_L);
strcpy(hdr->checksum_alg, "sha256");
-crypt_random_get(NULL, (char*)hdr->salt1, LUKS2_SALT_L, CRYPT_RND_SALT);
-crypt_random_get(NULL, (char*)hdr->salt2, LUKS2_SALT_L, CRYPT_RND_SALT);
+crypt_random_get(cd, (char*)hdr->salt1, LUKS2_SALT_L, CRYPT_RND_SALT);
+crypt_random_get(cd, (char*)hdr->salt2, LUKS2_SALT_L, CRYPT_RND_SALT);
if (uuid && uuid_parse(uuid, partitionUuid) == -1) {
log_err(cd, _("Wrong LUKS UUID format provided."));
@@ -183,34 +299,15 @@ int LUKS2_generate_hdr(
json_object_object_add(hdr->jobj, "config", jobj_config);
digest = LUKS2_digest_create(cd, "pbkdf2", hdr, vk);
-if (digest < 0) {
-json_object_put(hdr->jobj);
-hdr->jobj = NULL;
-return -EINVAL;
-}
+if (digest < 0)
+goto err;
-if (LUKS2_digest_segment_assign(cd, hdr, CRYPT_DEFAULT_SEGMENT, digest, 1, 0) < 0) {
-json_object_put(hdr->jobj);
-hdr->jobj = NULL;
-return -EINVAL;
-}
+if (LUKS2_digest_segment_assign(cd, hdr, 0, digest, 1, 0) < 0)
+goto err;
-jobj_segment = json_object_new_object();
-json_object_object_add(jobj_segment, "type", json_object_new_string("crypt"));
-if (detached_metadata_device)
-offset = (uint64_t)alignPayload;
-else {
-//FIXME
-//offset = size_round_up(areas[7].offset + areas[7].length, alignPayload * SECTOR_SIZE);
-offset = size_round_up(LUKS2_HDR_DEFAULT_LEN, (size_t)alignPayload);
-offset += alignOffset;
-}
-json_object_object_add(jobj_segment, "offset", json_object_new_uint64(offset));
-json_object_object_add(jobj_segment, "iv_tweak", json_object_new_string("0"));
-json_object_object_add(jobj_segment, "size", json_object_new_string("dynamic"));
-json_object_object_add(jobj_segment, "encryption", json_object_new_string(cipher));
-json_object_object_add(jobj_segment, "sector_size", json_object_new_int(sector_size));
+jobj_segment = json_segment_create_crypt(data_offset, 0, NULL, cipher, sector_size, 0);
+if (!jobj_segment)
+goto err;
if (integrity) {
jobj_integrity = json_object_new_object();
@@ -220,30 +317,17 @@ int LUKS2_generate_hdr(
json_object_object_add(jobj_segment, "integrity", jobj_integrity);
}
-snprintf(num, sizeof(num), "%u", CRYPT_DEFAULT_SEGMENT);
-json_object_object_add(jobj_segments, num, jobj_segment);
-json_size = hdr->hdr_size - LUKS2_HDR_BIN_LEN;
-json_object_object_add(jobj_config, "json_size", json_object_new_uint64(json_size));
-/* for detached metadata device compute reasonable keyslot areas size */
-// FIXME: this is coupled with FIXME above
-if (detached_metadata_device && !offset)
-keyslots_size = LUKS2_HDR_DEFAULT_LEN - get_min_offset(hdr);
-else
-keyslots_size = offset - get_min_offset(hdr);
-/* keep keyslots_size reasonable for custom data alignments */
-if (keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE)
-keyslots_size = LUKS2_MAX_KEYSLOTS_SIZE;
-/* keyslots size has to be 4 KiB aligned */
-keyslots_size -= (keyslots_size % 4096);
+json_object_object_add_by_uint(jobj_segments, 0, jobj_segment);
+json_object_object_add(jobj_config, "json_size", json_object_new_uint64(metadata_size - LUKS2_HDR_BIN_LEN));
json_object_object_add(jobj_config, "keyslots_size", json_object_new_uint64(keyslots_size));
-JSON_DBG(hdr->jobj, "Header JSON");
+JSON_DBG(cd, hdr->jobj, "Header JSON:");
return 0;
err:
json_object_put(hdr->jobj);
hdr->jobj = NULL;
return -EINVAL;
}
int LUKS2_wipe_header_areas(struct crypt_device *cd,
@@ -258,7 +342,7 @@ int LUKS2_wipe_header_areas(struct crypt_device *cd,
length = LUKS2_get_data_offset(hdr) * SECTOR_SIZE;
wipe_block = 1024 * 1024;
-if (LUKS2_hdr_validate(hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
+if (LUKS2_hdr_validate(cd, hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
/* On detached header wipe at least the first 4k */
@@ -267,7 +351,7 @@ int LUKS2_wipe_header_areas(struct crypt_device *cd,
wipe_block = 4096;
}
-log_dbg("Wiping LUKS areas (0x%06" PRIx64 " - 0x%06" PRIx64") with zeroes.",
+log_dbg(cd, "Wiping LUKS areas (0x%06" PRIx64 " - 0x%06" PRIx64") with zeroes.",
offset, length + offset);
r = crypt_wipe_device(cd, crypt_metadata_device(cd), CRYPT_WIPE_ZERO,
@@ -280,9 +364,36 @@ int LUKS2_wipe_header_areas(struct crypt_device *cd,
offset = get_min_offset(hdr);
length = LUKS2_keyslots_size(hdr->jobj);
-log_dbg("Wiping keyslots area (0x%06" PRIx64 " - 0x%06" PRIx64") with random data.",
+log_dbg(cd, "Wiping keyslots area (0x%06" PRIx64 " - 0x%06" PRIx64") with random data.",
offset, length + offset);
return crypt_wipe_device(cd, crypt_metadata_device(cd), CRYPT_WIPE_RANDOM,
offset, length, wipe_block, NULL, NULL);
}
/* FIXME: what if user wanted to keep original keyslots size? */
int LUKS2_set_keyslots_size(struct crypt_device *cd,
struct luks2_hdr *hdr,
uint64_t data_offset)
{
json_object *jobj_config;
uint64_t keyslots_size;
if (data_offset < get_min_offset(hdr))
return 1;
keyslots_size = data_offset - get_min_offset(hdr);
/* keep keyslots_size reasonable for custom data alignments */
if (keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE)
keyslots_size = LUKS2_MAX_KEYSLOTS_SIZE;
/* keyslots size has to be 4 KiB aligned */
keyslots_size -= (keyslots_size % 4096);
if (!json_object_object_get_ex(hdr->jobj, "config", &jobj_config))
return 1;
json_object_object_add(jobj_config, "keyslots_size", json_object_new_uint64(keyslots_size));
return 0;
}

File diff suppressed because it is too large

@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, keyslot handling
*
- * Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
- * Copyright (C) 2015-2018, Milan Broz. All rights reserved.
+ * Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -23,9 +23,11 @@
/* Internal implementations */
extern const keyslot_handler luks2_keyslot;
extern const keyslot_handler reenc_keyslot;
static const keyslot_handler *keyslot_handlers[LUKS2_KEYSLOTS_MAX] = {
&luks2_keyslot,
&reenc_keyslot,
NULL
};
@@ -63,7 +65,7 @@ static const keyslot_handler
return LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj2));
}
-int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr, const char *type)
+int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr)
{
int i;
@@ -75,23 +77,55 @@ int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr, const char *type)
}
/* Check if a keyslot is assigned to a specific segment */
static int _keyslot_for_segment(struct luks2_hdr *hdr, int keyslot, int segment)
{
int keyslot_digest, segment_digest, s, count = 0;
keyslot_digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (keyslot_digest < 0)
return keyslot_digest;
if (segment >= 0) {
segment_digest = LUKS2_digest_by_segment(hdr, segment);
return segment_digest == keyslot_digest;
}
for (s = 0; s < 3; s++) {
segment_digest = LUKS2_digest_by_segment(hdr, s);
if (segment_digest == keyslot_digest)
count++;
}
return count;
}
static int _keyslot_for_digest(struct luks2_hdr *hdr, int keyslot, int digest)
{
int r = -EINVAL;
r = LUKS2_digest_by_keyslot(hdr, keyslot);
if (r < 0)
return r;
return r == digest ? 0 : -ENOENT;
}
int LUKS2_keyslot_for_segment(struct luks2_hdr *hdr, int keyslot, int segment)
{
-int keyslot_digest, segment_digest;
+int r = -EINVAL;
/* no need to check anything */
if (segment == CRYPT_ANY_SEGMENT)
-return 0;
+return 0; /* ok */
if (segment == CRYPT_DEFAULT_SEGMENT) {
segment = LUKS2_get_default_segment(hdr);
if (segment < 0)
return segment;
}
-keyslot_digest = LUKS2_digest_by_keyslot(NULL, hdr, keyslot);
-if (keyslot_digest < 0)
-return -EINVAL;
+r = _keyslot_for_segment(hdr, keyslot, segment);
+if (r < 0)
+return r;
-segment_digest = LUKS2_digest_by_segment(NULL, hdr, segment);
-if (segment_digest < 0)
-return segment_digest;
-return segment_digest == keyslot_digest ? 0 : -ENOENT;
+return r >= 1 ? 0 : -ENOENT;
}
/* Number of keyslots assigned to a segment or all keyslots for CRYPT_ANY_SEGMENT */
@@ -111,13 +145,18 @@ int LUKS2_keyslot_active_count(struct luks2_hdr *hdr, int segment)
return num;
}
-int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd)
+int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd, const char *cipher_spec)
{
-const char *cipher = crypt_get_cipher(cd);
-const char *cipher_mode = crypt_get_cipher_mode(cd);
+char cipher[MAX_CIPHER_LEN], cipher_mode[MAX_CIPHER_LEN];
+if (!cipher_spec || !strcmp(cipher_spec, "null") || !strcmp(cipher_spec, "cipher_null"))
+return 1;
+if (crypt_parse_name_and_mode(cipher_spec, cipher, NULL, cipher_mode) < 0)
+return 1;
/* Keyslot is already authenticated; we cannot use integrity tags here */
-if (crypt_get_integrity_tag_size(cd) || !cipher)
+if (crypt_get_integrity_tag_size(cd))
return 1;
/* Wrapped key schemes cannot be used for keyslot encryption */
@@ -132,45 +171,75 @@ int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd)
}
int LUKS2_keyslot_params_default(struct crypt_device *cd, struct luks2_hdr *hdr,
-size_t key_size, struct luks2_keyslot_params *params)
+struct luks2_keyslot_params *params)
{
int r, integrity_key_size = crypt_get_integrity_key_size(cd);
const struct crypt_pbkdf_type *pbkdf = crypt_get_pbkdf_type(cd);
const char *cipher_spec;
size_t key_size;
int r;
if (!hdr || !pbkdf || !params)
return -EINVAL;
params->af_type = LUKS2_KEYSLOT_AF_LUKS1;
/*
* set keyslot area encryption parameters
*/
params->area_type = LUKS2_KEYSLOT_AREA_RAW;
/* set keyslot AF parameters */
/* currently we use hash for AF from pbkdf settings */
r = snprintf(params->af.luks1.hash, sizeof(params->af.luks1.hash),
"%s", pbkdf->hash);
if (r < 0 || (size_t)r >= sizeof(params->af.luks1.hash))
cipher_spec = crypt_keyslot_get_encryption(cd, CRYPT_ANY_SLOT, &key_size);
if (!cipher_spec || !key_size)
return -EINVAL;
params->af.luks1.stripes = 4000;
/* set keyslot area encryption parameters */
/* short circuit authenticated encryption hardcoded defaults */
if (LUKS2_keyslot_cipher_incompatible(cd) || key_size == 0) {
// FIXME: fixed cipher and key size can be wrong
snprintf(params->area.raw.encryption, sizeof(params->area.raw.encryption),
"aes-xts-plain64");
params->area.raw.key_size = 32;
return 0;
}
r = snprintf(params->area.raw.encryption, sizeof(params->area.raw.encryption),
"%s", LUKS2_get_cipher(hdr, CRYPT_DEFAULT_SEGMENT));
params->area.raw.key_size = key_size;
r = snprintf(params->area.raw.encryption, sizeof(params->area.raw.encryption), "%s", cipher_spec);
if (r < 0 || (size_t)r >= sizeof(params->area.raw.encryption))
return -EINVAL;
/* Slot encryption tries to use the same key size as for the main algorithm */
if ((size_t)integrity_key_size > key_size)
/*
* set keyslot AF parameters
*/
params->af_type = LUKS2_KEYSLOT_AF_LUKS1;
/* currently we use hash for AF from pbkdf settings */
r = snprintf(params->af.luks1.hash, sizeof(params->af.luks1.hash), "%s", pbkdf->hash ?: DEFAULT_LUKS1_HASH);
if (r < 0 || (size_t)r >= sizeof(params->af.luks1.hash))
return -EINVAL;
params->area.raw.key_size = key_size - integrity_key_size;
params->af.luks1.stripes = 4000;
return 0;
}
int LUKS2_keyslot_pbkdf(struct luks2_hdr *hdr, int keyslot, struct crypt_pbkdf_type *pbkdf)
{
json_object *jobj_keyslot, *jobj_kdf, *jobj;
if (!hdr || !pbkdf)
return -EINVAL;
if (LUKS2_keyslot_info(hdr, keyslot) == CRYPT_SLOT_INVALID)
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -ENOENT;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf))
return -EINVAL;
if (!json_object_object_get_ex(jobj_kdf, "type", &jobj))
return -EINVAL;
memset(pbkdf, 0, sizeof(*pbkdf));
pbkdf->type = json_object_get_string(jobj);
if (json_object_object_get_ex(jobj_kdf, "hash", &jobj))
pbkdf->hash = json_object_get_string(jobj);
if (json_object_object_get_ex(jobj_kdf, "iterations", &jobj))
pbkdf->iterations = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "time", &jobj))
pbkdf->iterations = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "memory", &jobj))
pbkdf->max_memory_kb = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "cpus", &jobj))
pbkdf->parallel_threads = json_object_get_int(jobj);
return 0;
}
@@ -178,7 +247,7 @@ int LUKS2_keyslot_params_default(struct crypt_device *cd, struct luks2_hdr *hdr,
static int LUKS2_keyslot_unbound(struct luks2_hdr *hdr, int keyslot)
{
json_object *jobj_digest, *jobj_segments;
-int digest = LUKS2_digest_by_keyslot(NULL, hdr, keyslot);
+int digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (digest < 0)
return 0;
@@ -231,15 +300,58 @@ int LUKS2_keyslot_area(struct luks2_hdr *hdr,
if (!json_object_object_get_ex(jobj_area, "offset", &jobj))
return -EINVAL;
-*offset = json_object_get_int64(jobj);
+*offset = json_object_get_uint64(jobj);
if (!json_object_object_get_ex(jobj_area, "size", &jobj))
return -EINVAL;
-*length = json_object_get_int64(jobj);
+*length = json_object_get_uint64(jobj);
return 0;
}
static int LUKS2_open_and_verify_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int digest,
const char *password,
size_t password_len,
struct volume_key **vk)
{
const keyslot_handler *h;
int key_size, r;
if (!(h = LUKS2_keyslot_handler(cd, keyslot)))
return -ENOENT;
r = _keyslot_for_digest(hdr, keyslot, digest);
if (r) {
if (r == -ENOENT)
log_dbg(cd, "Keyslot %d unusable for digest %d.", keyslot, digest);
return r;
}
key_size = LUKS2_get_keyslot_stored_key_size(hdr, keyslot);
if (key_size < 0)
return -EINVAL;
*vk = crypt_alloc_volume_key(key_size, NULL);
if (!*vk)
return -ENOMEM;
r = h->open(cd, keyslot, password, password_len, (*vk)->key, (*vk)->keylength);
if (r < 0)
log_dbg(cd, "Keyslot %d (%s) open failed with %d.", keyslot, h->name, r);
else
r = LUKS2_digest_verify(cd, hdr, *vk, keyslot);
if (r < 0) {
crypt_free_volume_key(*vk);
*vk = NULL;
}
return r < 0 ? r : keyslot;
}
static int LUKS2_open_and_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
@@ -256,20 +368,20 @@ static int LUKS2_open_and_verify(struct crypt_device *cd,
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
-log_dbg("Keyslot %d validation failed.", keyslot);
+log_dbg(cd, "Keyslot %d validation failed.", keyslot);
return r;
}
r = LUKS2_keyslot_for_segment(hdr, keyslot, segment);
if (r) {
if (r == -ENOENT)
-log_dbg("Keyslot %d unusable for segment %d.", keyslot, segment);
+log_dbg(cd, "Keyslot %d unusable for segment %d.", keyslot, segment);
return r;
}
key_size = LUKS2_get_volume_key_size(hdr, segment);
if (key_size < 0)
-key_size = LUKS2_get_keyslot_key_size(hdr, keyslot);
+key_size = LUKS2_get_keyslot_stored_key_size(hdr, keyslot);
if (key_size < 0)
return -EINVAL;
@@ -279,18 +391,57 @@ static int LUKS2_open_and_verify(struct crypt_device *cd,
r = h->open(cd, keyslot, password, password_len, (*vk)->key, (*vk)->keylength);
if (r < 0)
-log_dbg("Keyslot %d (%s) open failed with %d.", keyslot, h->name, r);
+log_dbg(cd, "Keyslot %d (%s) open failed with %d.", keyslot, h->name, r);
else
r = LUKS2_digest_verify(cd, hdr, *vk, keyslot);
if (r < 0) {
crypt_free_volume_key(*vk);
*vk = NULL;
}
} else
crypt_volume_key_set_id(*vk, r);
return r < 0 ? r : keyslot;
}
static int LUKS2_keyslot_open_priority_digest(struct crypt_device *cd,
struct luks2_hdr *hdr,
crypt_keyslot_priority priority,
const char *password,
size_t password_len,
int digest,
struct volume_key **vk)
{
json_object *jobj_keyslots, *jobj;
crypt_keyslot_priority slot_priority;
int keyslot, r = -ENOENT;
json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots);
json_object_object_foreach(jobj_keyslots, slot, val) {
if (!json_object_object_get_ex(val, "priority", &jobj))
slot_priority = CRYPT_SLOT_PRIORITY_NORMAL;
else
slot_priority = json_object_get_int(jobj);
keyslot = atoi(slot);
if (slot_priority != priority) {
log_dbg(cd, "Keyslot %d priority %d != %d (required), skipped.",
keyslot, slot_priority, priority);
continue;
}
r = LUKS2_open_and_verify_by_digest(cd, hdr, keyslot, digest, password, password_len, vk);
/* Do not retry for errors other than -EPERM or -ENOENT;
the former means a wrong passphrase, the latter a keyslot unusable for the digest */
if ((r != -EPERM) && (r != -ENOENT))
break;
}
return r;
}
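The loop above retries the next slot only on -EPERM (wrong passphrase) or -ENOENT (slot unusable for this digest); any other result, success or hard error, stops the scan. A minimal standalone sketch of that retry policy, with a hypothetical `try_slot()` standing in for `LUKS2_open_and_verify_by_digest`:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical per-slot open: returns the slot number on success,
 * -EPERM for a wrong passphrase, -ENOENT for an unusable slot. */
static int try_slot(int slot)
{
	/* For the sketch: slot 2 opens, all other slots reject the passphrase. */
	return (slot == 2) ? slot : -EPERM;
}

/* Scan slots 0..max-1, stopping early on any result that is not
 * -EPERM or -ENOENT (the same break condition as the loop above). */
static int scan_slots(int max)
{
	int slot, r = -ENOENT;

	for (slot = 0; slot < max; slot++) {
		r = try_slot(slot);
		if (r != -EPERM && r != -ENOENT)
			break; /* success, or a hard error worth reporting */
	}
	return r;
}
```

With four slots the scan stops at slot 2; with only two slots the last soft error (-EPERM) is reported to the caller.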
static int LUKS2_keyslot_open_priority(struct crypt_device *cd,
struct luks2_hdr *hdr,
crypt_keyslot_priority priority,
@@ -313,7 +464,7 @@ static int LUKS2_keyslot_open_priority(struct crypt_device *cd,
keyslot = atoi(slot);
if (slot_priority != priority) {
log_dbg("Keyslot %d priority %d != %d (required), skipped.",
log_dbg(cd, "Keyslot %d priority %d != %d (required), skipped.",
keyslot, slot_priority, priority);
continue;
}
@@ -329,6 +480,76 @@ static int LUKS2_keyslot_open_priority(struct crypt_device *cd,
return r;
}
static int LUKS2_keyslot_open_by_digest(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int digest,
const char *password,
size_t password_len,
struct volume_key **vk)
{
int r_prio, r = -EINVAL;
if (digest < 0)
return r;
if (keyslot == CRYPT_ANY_SLOT) {
r_prio = LUKS2_keyslot_open_priority_digest(cd, hdr, CRYPT_SLOT_PRIORITY_PREFER,
password, password_len, digest, vk);
if (r_prio >= 0)
r = r_prio;
else if (r_prio != -EPERM && r_prio != -ENOENT)
r = r_prio;
else
r = LUKS2_keyslot_open_priority_digest(cd, hdr, CRYPT_SLOT_PRIORITY_NORMAL,
password, password_len, digest, vk);
/* Prefer password wrong to no entry from priority slot */
if (r_prio == -EPERM && r == -ENOENT)
r = r_prio;
} else
r = LUKS2_open_and_verify_by_digest(cd, hdr, keyslot, digest, password, password_len, vk);
return r;
}
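`LUKS2_keyslot_open_by_digest` first tries PREFER-priority slots, then falls back to NORMAL ones, with one twist: a "wrong passphrase" (-EPERM) from the preferred pass beats a "no usable slot" (-ENOENT) from the fallback, so the user gets the more informative error. A sketch of just that result-merging rule (`merge_priority_results` is an illustrative helper, not part of the library):

```c
#include <assert.h>
#include <errno.h>

/* Merge results of the preferred and normal priority passes:
 * a definitive answer (success or hard error) from the preferred
 * pass wins; otherwise the fallback result is used, except that
 * -EPERM from the preferred pass beats -ENOENT from the fallback. */
static int merge_priority_results(int r_prio, int r_normal)
{
	if (r_prio >= 0)
		return r_prio;              /* preferred slot opened */
	if (r_prio != -EPERM && r_prio != -ENOENT)
		return r_prio;              /* hard error, do not retry */
	if (r_prio == -EPERM && r_normal == -ENOENT)
		return r_prio;              /* prefer "wrong password" */
	return r_normal;
}
```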
int LUKS2_keyslot_open_all_segments(struct crypt_device *cd,
int keyslot_old,
int keyslot_new,
const char *password,
size_t password_len,
struct volume_key **vks)
{
struct volume_key *vk;
int digest_old, digest_new, r = -EINVAL;
struct luks2_hdr *hdr = crypt_get_hdr(cd, CRYPT_LUKS2);
digest_old = LUKS2_reencrypt_digest_old(hdr);
if (digest_old >= 0) {
log_dbg(cd, "Trying to unlock volume key (digest: %d) using keyslot %d.", digest_old, keyslot_old);
r = LUKS2_keyslot_open_by_digest(cd, hdr, keyslot_old, digest_old, password, password_len, &vk);
if (r < 0)
goto out;
crypt_volume_key_set_id(vk, digest_old);
crypt_volume_key_add_next(vks, vk);
}
digest_new = LUKS2_reencrypt_digest_new(hdr);
if (digest_new >= 0 && digest_old != digest_new) {
log_dbg(cd, "Trying to unlock volume key (digest: %d) using keyslot %d.", digest_new, keyslot_new);
r = LUKS2_keyslot_open_by_digest(cd, hdr, keyslot_new, digest_new, password, password_len, &vk);
if (r < 0)
goto out;
crypt_volume_key_set_id(vk, digest_new);
crypt_volume_key_add_next(vks, vk);
}
out:
if (r < 0) {
crypt_free_volume_key(*vks);
*vks = NULL;
}
return r;
}
int LUKS2_keyslot_open(struct crypt_device *cd,
int keyslot,
int segment,
@@ -360,6 +581,64 @@ int LUKS2_keyslot_open(struct crypt_device *cd,
return r;
}
int LUKS2_keyslot_reencrypt_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const struct crypt_params_reencrypt *params)
{
const keyslot_handler *h;
int r;
if (keyslot == CRYPT_ANY_SLOT)
return -EINVAL;
/* FIXME: find keyslot by type */
h = LUKS2_keyslot_handler_type(cd, "reencrypt");
if (!h)
return -EINVAL;
r = reenc_keyslot_alloc(cd, hdr, keyslot, params);
if (r < 0)
return r;
r = LUKS2_keyslot_priority_set(cd, hdr, keyslot, CRYPT_SLOT_PRIORITY_IGNORE, 0);
if (r < 0)
return r;
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
log_dbg(cd, "Keyslot validation failed.");
return r;
}
if (LUKS2_hdr_validate(cd, hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
return 0;
}
int LUKS2_keyslot_reencrypt_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const void *buffer,
size_t buffer_length)
{
const keyslot_handler *h;
int r;
if (!(h = LUKS2_keyslot_handler(cd, keyslot)) || strcmp(h->name, "reencrypt"))
return -EINVAL;
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
log_dbg(cd, "Keyslot validation failed.");
return r;
}
return h->store(cd, keyslot, NULL, 0,
buffer, buffer_length);
}
int LUKS2_keyslot_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
@@ -389,17 +668,20 @@ int LUKS2_keyslot_store(struct crypt_device *cd,
r = h->update(cd, keyslot, params);
if (r) {
log_dbg("Failed to update keyslot %d json.", keyslot);
log_dbg(cd, "Failed to update keyslot %d json.", keyslot);
return r;
}
}
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
log_dbg("Keyslot validation failed.");
log_dbg(cd, "Keyslot validation failed.");
return r;
}
if (LUKS2_hdr_validate(cd, hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
return h->store(cd, keyslot, password, password_len,
vk->key, vk->keylength);
}
@@ -411,7 +693,6 @@ int LUKS2_keyslot_wipe(struct crypt_device *cd,
{
struct device *device = crypt_metadata_device(cd);
uint64_t area_offset, area_length;
char num[16];
int r;
json_object *jobj_keyslot, *jobj_keyslots;
const keyslot_handler *h;
@@ -426,23 +707,17 @@ int LUKS2_keyslot_wipe(struct crypt_device *cd,
return -ENOENT;
if (wipe_area_only)
log_dbg("Wiping keyslot %d area only.", keyslot);
log_dbg(cd, "Wiping keyslot %d area only.", keyslot);
/* Just check that nobody uses the metadata now */
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."),
device_path(device));
r = LUKS2_device_write_lock(cd, hdr, device);
if (r)
return r;
}
device_write_unlock(device);
/* secure deletion of possible key material in keyslot area */
r = crypt_keyslot_area(cd, keyslot, &area_offset, &area_length);
if (r && r != -ENOENT)
return r;
goto out;
/* We can destroy the binary keyslot area now without lock */
if (!r) {
r = crypt_wipe_device(cd, device, CRYPT_WIPE_SPECIAL, area_offset,
area_length, area_length, NULL, NULL);
@@ -453,25 +728,27 @@ int LUKS2_keyslot_wipe(struct crypt_device *cd,
r = -EINVAL;
} else
log_err(cd, _("Cannot wipe device %s."), device_path(device));
return r;
goto out;
}
}
if (wipe_area_only)
return r;
goto out;
/* Slot specific wipe */
if (h) {
r = h->wipe(cd, keyslot);
if (r < 0)
return r;
goto out;
} else
log_dbg("Wiping keyslot %d without specific-slot handler loaded.", keyslot);
log_dbg(cd, "Wiping keyslot %d without specific-slot handler loaded.", keyslot);
snprintf(num, sizeof(num), "%d", keyslot);
json_object_object_del(jobj_keyslots, num);
json_object_object_del_by_uint(jobj_keyslots, keyslot);
return LUKS2_hdr_write(cd, hdr);
r = LUKS2_hdr_write(cd, hdr);
out:
device_write_unlock(cd, crypt_metadata_device(cd));
return r;
}
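The rewritten wipe path takes one write lock up front and funnels every exit through a single `out:` label, so the binary area wipe and the JSON metadata update happen under the same lock instead of the previous lock/unlock churn. The classic goto-cleanup shape it adopts, reduced to a skeleton with stub lock functions:

```c
#include <assert.h>
#include <errno.h>

static int locked;
static int take_lock(void) { locked = 1; return 0; }
static void drop_lock(void) { locked = 0; }

/* Every path after a successful lock exits through "out", so the
 * unlock happens exactly once; fail_step simulates failures. */
static int wipe_under_lock(int fail_step)
{
	int r = take_lock();
	if (r)
		return r;
	/* step 1: wipe binary keyslot area */
	if (fail_step == 1) { r = -EIO; goto out; }
	/* step 2: update JSON metadata */
	if (fail_step == 2) { r = -EINVAL; goto out; }
	r = 0;
out:
	drop_lock(); /* single unlock on every path */
	return r;
}
```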
int LUKS2_keyslot_dump(struct crypt_device *cd, int keyslot)
@@ -523,10 +800,9 @@ int placeholder_keyslot_alloc(struct crypt_device *cd,
size_t volume_key_len)
{
struct luks2_hdr *hdr;
char num[16];
json_object *jobj_keyslots, *jobj_keyslot, *jobj_area;
log_dbg("Allocating placeholder keyslot %d for LUKS1 down conversion.", keyslot);
log_dbg(cd, "Allocating placeholder keyslot %d for LUKS1 down conversion.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
@@ -555,9 +831,7 @@ int placeholder_keyslot_alloc(struct crypt_device *cd,
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_length));
json_object_object_add(jobj_keyslot, "area", jobj_area);
snprintf(num, sizeof(num), "%d", keyslot);
json_object_object_add(jobj_keyslots, num, jobj_keyslot);
json_object_object_add_by_uint(jobj_keyslots, keyslot, jobj_keyslot);
return 0;
}
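Throughout this patch the `snprintf(num, ...)` / `json_object_object_add(jobj_keyslots, num, ...)` idiom is replaced by `json_object_object_add_by_uint` and `json_object_object_del_by_uint`: LUKS2 keyslots live in a JSON map keyed by the decimal slot number, and the helpers centralize the key formatting. A sketch of just that formatting step (the helper name and exact behavior in cryptsetup may differ):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a keyslot number as its decimal JSON map key, e.g. 7 -> "7".
 * Returns 0 on success, -1 if the buffer is too small. */
static int keyslot_json_key(int keyslot, char *buf, size_t len)
{
	int n = snprintf(buf, len, "%d", keyslot);

	return (n < 1 || (size_t)n >= len) ? -1 : 0;
}
```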
@@ -585,7 +859,7 @@ static unsigned LUKS2_get_keyslot_digests_count(json_object *hdr_jobj, int keysl
}
/* run only on header that passed basic format validation */
int LUKS2_keyslots_validate(json_object *hdr_jobj)
int LUKS2_keyslots_validate(struct crypt_device *cd, json_object *hdr_jobj)
{
const keyslot_handler *h;
int keyslot;
@@ -597,16 +871,16 @@ int LUKS2_keyslots_validate(json_object *hdr_jobj)
json_object_object_foreach(jobj_keyslots, slot, val) {
keyslot = atoi(slot);
json_object_object_get_ex(val, "type", &jobj_type);
h = LUKS2_keyslot_handler_type(NULL, json_object_get_string(jobj_type));
h = LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj_type));
if (!h)
continue;
if (h->validate && h->validate(NULL, val)) {
log_dbg("Keyslot type %s validation failed on keyslot %d.", h->name, keyslot);
if (h->validate && h->validate(cd, val)) {
log_dbg(cd, "Keyslot type %s validation failed on keyslot %d.", h->name, keyslot);
return -EINVAL;
}
if (!strcmp(h->name, "luks2") && LUKS2_get_keyslot_digests_count(hdr_jobj, keyslot) != 1) {
log_dbg("Keyslot %d is not assigned to exactly 1 digest.", keyslot);
log_dbg(cd, "Keyslot %d is not assigned to exactly 1 digest.", keyslot);
return -EINVAL;
}
}
@@ -614,7 +888,7 @@ int LUKS2_keyslots_validate(json_object *hdr_jobj)
return 0;
}
void LUKS2_keyslots_repair(json_object *jobj_keyslots)
void LUKS2_keyslots_repair(struct crypt_device *cd, json_object *jobj_keyslots)
{
const keyslot_handler *h;
json_object *jobj_type;
@@ -626,8 +900,51 @@ void LUKS2_keyslots_repair(json_object *jobj_keyslots)
!json_object_is_type(jobj_type, json_type_string))
continue;
h = LUKS2_keyslot_handler_type(NULL, json_object_get_string(jobj_type));
h = LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj_type));
if (h && h->repair)
h->repair(NULL, val);
h->repair(cd, val);
}
}
/* assumes valid header */
int LUKS2_find_keyslot(struct luks2_hdr *hdr, const char *type)
{
int i;
json_object *jobj_keyslot, *jobj_type;
if (!type)
return -EINVAL;
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, i);
if (!jobj_keyslot)
continue;
json_object_object_get_ex(jobj_keyslot, "type", &jobj_type);
if (!strcmp(json_object_get_string(jobj_type), type))
return i;
}
return -ENOENT;
}
int LUKS2_find_keyslot_for_segment(struct luks2_hdr *hdr, int segment, const char *type)
{
int i;
json_object *jobj_keyslot, *jobj_type;
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, i);
if (!jobj_keyslot)
continue;
json_object_object_get_ex(jobj_keyslot, "type", &jobj_type);
if (strcmp(json_object_get_string(jobj_type), type))
continue;
if (!LUKS2_keyslot_for_segment(hdr, i, segment))
return i;
}
return -EINVAL;
}


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS2 type keyslot handler
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -28,65 +28,51 @@
#define LUKS_SLOT_ITERATIONS_MIN 1000
#define LUKS_STRIPES 4000
/* Serialize memory-hard keyslot access: optional workaround for parallel processing */
#define MIN_MEMORY_FOR_SERIALIZE_LOCK_KB 32*1024 /* 32MB */
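The 32 MiB threshold above gates an optional serialization lock used later in `luks2_keyslot_get_key`: unlocking several memory-hard (Argon2) keyslots in parallel can exhaust memory, so only expensive KDF runs are serialized while cheap PBKDF2 slots stay parallel. The gating decision in isolation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MIN_MEMORY_FOR_SERIALIZE_LOCK_KB (32 * 1024) /* 32 MiB */

/* Serialize only when the KDF memory cost is large enough that
 * parallel unlocks could exhaust memory; PBKDF2 reports
 * max_memory_kb == 0 and is never serialized. */
static bool needs_serialize_lock(uint32_t max_memory_kb)
{
	return max_memory_kb > MIN_MEMORY_FOR_SERIALIZE_LOCK_KB;
}
```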
static int luks2_encrypt_to_storage(char *src, size_t srcLength,
const char *cipher, const char *cipher_mode,
struct volume_key *vk, unsigned int sector,
struct crypt_device *cd)
{
struct device *device = crypt_metadata_device(cd);
#ifndef ENABLE_AF_ALG /* Support for old kernel without Crypto API */
int r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."), device_path(device));
return r;
}
r = LUKS_encrypt_to_storage(src, srcLength, cipher, cipher_mode, vk, sector, cd);
device_write_unlock(crypt_metadata_device(cd));
return r;
return LUKS_encrypt_to_storage(src, srcLength, cipher, cipher_mode, vk, sector, cd);
#else
struct crypt_storage *s;
int devfd = -1, r;
int devfd, r;
struct device *device = crypt_metadata_device(cd);
/* Only whole sector writes supported */
if (MISALIGNED_512(srcLength))
return -EINVAL;
/* Encrypt buffer */
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
r = crypt_storage_init(&s, SECTOR_SIZE, cipher, cipher_mode, vk->key, vk->keylength);
if (r) {
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg(cd, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
return r;
}
r = crypt_storage_encrypt(s, 0, srcLength / SECTOR_SIZE, src);
r = crypt_storage_encrypt(s, 0, srcLength, src);
crypt_storage_destroy(s);
if (r)
return r;
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."),
device_path(device));
return r;
}
devfd = device_open_locked(device, O_RDWR);
devfd = device_open_locked(cd, device, O_RDWR);
if (devfd >= 0) {
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), src,
srcLength, sector * SECTOR_SIZE) < 0)
r = -EIO;
else
r = 0;
device_sync(device, devfd);
close(devfd);
device_sync(cd, device);
} else
r = -EIO;
device_write_unlock(device);
if (r)
log_err(cd, _("IO error while encrypting keyslot."));
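Note the unit change in this hunk: `crypt_storage_encrypt` now takes the byte length `srcLength` instead of the sector count `srcLength / SECTOR_SIZE`, matching the new explicit sector size passed to `crypt_storage_init`. The `MISALIGNED_512` guard that makes either convention safe, shown standalone (macro reproduced from the usual cryptsetup definition):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SECTOR_SIZE 512
#define MISALIGNED_512(a) ((a) & 0x1ff)

/* Keyslot material is ciphered in whole 512-byte sectors only;
 * reject empty or partial-sector buffers up front. */
static bool whole_sectors(size_t len)
{
	return len != 0 && !MISALIGNED_512(len);
}
```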
@@ -106,19 +92,19 @@ static int luks2_decrypt_from_storage(char *dst, size_t dstLength,
return r;
}
r = LUKS_decrypt_from_storage(dst, dstLength, cipher, cipher_mode, vk, sector, cd);
device_read_unlock(crypt_metadata_device(cd));
device_read_unlock(cd, crypt_metadata_device(cd));
return r;
#else
struct crypt_storage *s;
int devfd = -1, r;
int devfd, r;
/* Only whole sector writes supported */
if (MISALIGNED_512(dstLength))
return -EINVAL;
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
r = crypt_storage_init(&s, SECTOR_SIZE, cipher, cipher_mode, vk->key, vk->keylength);
if (r) {
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg(cd, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
return r;
}
@@ -131,23 +117,22 @@ static int luks2_decrypt_from_storage(char *dst, size_t dstLength,
return r;
}
devfd = device_open_locked(device, O_RDONLY);
devfd = device_open_locked(cd, device, O_RDONLY);
if (devfd >= 0) {
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), dst,
dstLength, sector * SECTOR_SIZE) < 0)
r = -EIO;
else
r = 0;
close(devfd);
} else
r = -EIO;
device_read_unlock(device);
device_read_unlock(cd, device);
/* Decrypt buffer */
if (!r)
r = crypt_storage_decrypt(s, 0, dstLength / SECTOR_SIZE, dst);
r = crypt_storage_decrypt(s, 0, dstLength, dst);
else
log_err(cd, _("IO error while decrypting keyslot."));
@@ -281,10 +266,10 @@ static int luks2_keyslot_set_key(struct crypt_device *cd,
return -ENOMEM;
}
r = AF_split(volume_key, AfKey, volume_key_len, LUKS_STRIPES, af_hash);
r = AF_split(cd, volume_key, AfKey, volume_key_len, LUKS_STRIPES, af_hash);
if (r == 0) {
log_dbg("Updating keyslot area [0x%04x].", (unsigned)area_offset);
log_dbg(cd, "Updating keyslot area [0x%04x].", (unsigned)area_offset);
/* FIXME: sector_offset should be size_t, fix LUKS_encrypt... accordingly */
r = luks2_encrypt_to_storage(AfKey, AFEKSize, cipher, cipher_mode,
derived_key, (unsigned)(area_offset / SECTOR_SIZE), cd);
@@ -312,6 +297,7 @@ static int luks2_keyslot_get_key(struct crypt_device *cd,
json_object *jobj2, *jobj_af, *jobj_area;
uint64_t area_offset;
size_t keyslot_key_len;
bool try_serialize_lock = false;
int r;
if (!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
@@ -339,6 +325,13 @@ static int luks2_keyslot_get_key(struct crypt_device *cd,
return -EINVAL;
keyslot_key_len = json_object_get_int(jobj2);
/*
* If requested, serialize unlocking for memory-hard KDF. Usually NOOP.
*/
if (pbkdf.max_memory_kb > MIN_MEMORY_FOR_SERIALIZE_LOCK_KB)
try_serialize_lock = true;
if (try_serialize_lock && crypt_serialize_lock(cd))
return -EINVAL;
/*
* Allocate derived key storage space.
*/
@@ -361,15 +354,18 @@ static int luks2_keyslot_get_key(struct crypt_device *cd,
pbkdf.iterations, pbkdf.max_memory_kb,
pbkdf.parallel_threads);
if (try_serialize_lock)
crypt_serialize_unlock(cd);
if (r == 0) {
log_dbg("Reading keyslot area [0x%04x].", (unsigned)area_offset);
log_dbg(cd, "Reading keyslot area [0x%04x].", (unsigned)area_offset);
/* FIXME: sector_offset should be size_t, fix LUKS_decrypt... accordingly */
r = luks2_decrypt_from_storage(AfKey, AFEKSize, cipher, cipher_mode,
derived_key, (unsigned)(area_offset / SECTOR_SIZE), cd);
}
if (r == 0)
r = AF_merge(AfKey, volume_key, volume_key_len, LUKS_STRIPES, af_hash);
r = AF_merge(cd, AfKey, volume_key, volume_key_len, LUKS_STRIPES, af_hash);
crypt_free_volume_key(derived_key);
crypt_safe_free(AfKey);
@@ -388,27 +384,25 @@ static int luks2_keyslot_update_json(struct crypt_device *cd,
const struct luks2_keyslot_params *params)
{
const struct crypt_pbkdf_type *pbkdf;
json_object *jobj_af, *jobj_area, *jobj_kdf, *jobj1;
json_object *jobj_af, *jobj_area, *jobj_kdf;
char salt[LUKS_SALTSIZE], *salt_base64 = NULL;
int r, keyslot_key_len;
int r;
/* jobj_keyslot is not yet validated */
if (!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area) ||
!json_object_object_get_ex(jobj_area, "key_size", &jobj1))
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
/* we do not allow any 'area' object modifications yet */
keyslot_key_len = json_object_get_int(jobj1);
if (keyslot_key_len < 0)
return -EINVAL;
/* update area encryption parameters */
json_object_object_add(jobj_area, "encryption", json_object_new_string(params->area.raw.encryption));
json_object_object_add(jobj_area, "key_size", json_object_new_int(params->area.raw.key_size));
pbkdf = crypt_get_pbkdf_type(cd);
if (!pbkdf)
return -EINVAL;
r = crypt_benchmark_pbkdf_internal(cd, CONST_CAST(struct crypt_pbkdf_type *)pbkdf, keyslot_key_len);
r = crypt_benchmark_pbkdf_internal(cd, CONST_CAST(struct crypt_pbkdf_type *)pbkdf, params->area.raw.key_size);
if (r < 0)
return r;
@@ -442,7 +436,7 @@ static int luks2_keyslot_update_json(struct crypt_device *cd,
/* update 'af' hash */
json_object_object_add(jobj_af, "hash", json_object_new_string(params->af.luks1.hash));
JSON_DBG(jobj_keyslot, "Keyslot JSON");
JSON_DBG(cd, jobj_keyslot, "Keyslot JSON:");
return 0;
}
@@ -452,16 +446,15 @@ static int luks2_keyslot_alloc(struct crypt_device *cd,
const struct luks2_keyslot_params *params)
{
struct luks2_hdr *hdr;
char num[16];
uint64_t area_offset, area_length;
json_object *jobj_keyslots, *jobj_keyslot, *jobj_af, *jobj_area;
int r;
log_dbg("Trying to allocate LUKS2 keyslot %d.", keyslot);
log_dbg(cd, "Trying to allocate LUKS2 keyslot %d.", keyslot);
if (!params || params->area_type != LUKS2_KEYSLOT_AREA_RAW ||
params->af_type != LUKS2_KEYSLOT_AF_LUKS1) {
log_dbg("Invalid LUKS2 keyslot parameters.");
log_dbg(cd, "Invalid LUKS2 keyslot parameters.");
return -EINVAL;
}
@@ -469,13 +462,13 @@ static int luks2_keyslot_alloc(struct crypt_device *cd,
return -EINVAL;
if (keyslot == CRYPT_ANY_SLOT)
keyslot = LUKS2_keyslot_find_empty(hdr, "luks2");
keyslot = LUKS2_keyslot_find_empty(hdr);
if (keyslot < 0 || keyslot >= LUKS2_KEYSLOTS_MAX)
return -ENOMEM;
if (LUKS2_get_keyslot_jobj(hdr, keyslot)) {
log_dbg("Cannot modify already active keyslot %d.", keyslot);
log_dbg(cd, "Cannot modify already active keyslot %d.", keyslot);
return -EINVAL;
}
@@ -483,8 +476,10 @@ static int luks2_keyslot_alloc(struct crypt_device *cd,
return -EINVAL;
r = LUKS2_find_area_gap(cd, hdr, volume_key_len, &area_offset, &area_length);
if (r < 0)
if (r < 0) {
log_err(cd, _("No space for new keyslot."));
return r;
}
jobj_keyslot = json_object_new_object();
json_object_object_add(jobj_keyslot, "type", json_object_new_string("luks2"));
@@ -499,25 +494,21 @@ static int luks2_keyslot_alloc(struct crypt_device *cd,
/* Area object */
jobj_area = json_object_new_object();
json_object_object_add(jobj_area, "type", json_object_new_string("raw"));
json_object_object_add(jobj_area, "encryption", json_object_new_string(params->area.raw.encryption));
json_object_object_add(jobj_area, "key_size", json_object_new_int(params->area.raw.key_size));
json_object_object_add(jobj_area, "offset", json_object_new_uint64(area_offset));
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_length));
json_object_object_add(jobj_keyslot, "area", jobj_area);
snprintf(num, sizeof(num), "%d", keyslot);
json_object_object_add(jobj_keyslots, num, jobj_keyslot);
json_object_object_add_by_uint(jobj_keyslots, keyslot, jobj_keyslot);
r = luks2_keyslot_update_json(cd, jobj_keyslot, params);
if (!r && LUKS2_check_json_size(hdr)) {
log_dbg("Not enough space in header json area for new keyslot.");
if (!r && LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new keyslot.");
r = -ENOSPC;
}
if (r)
json_object_object_del(jobj_keyslots, num);
json_object_object_del_by_uint(jobj_keyslots, keyslot);
return r;
}
@@ -532,7 +523,7 @@ static int luks2_keyslot_open(struct crypt_device *cd,
struct luks2_hdr *hdr;
json_object *jobj_keyslot;
log_dbg("Trying to open LUKS2 keyslot %d.", keyslot);
log_dbg(cd, "Trying to open LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
@@ -561,7 +552,7 @@ static int luks2_keyslot_store(struct crypt_device *cd,
json_object *jobj_keyslot;
int r;
log_dbg("Calculating attributes for LUKS2 keyslot %d.", keyslot);
log_dbg(cd, "Calculating attributes for LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
@@ -570,17 +561,19 @@ static int luks2_keyslot_store(struct crypt_device *cd,
if (!jobj_keyslot)
return -EINVAL;
r = LUKS2_device_write_lock(cd, hdr, crypt_metadata_device(cd));
if(r)
return r;
r = luks2_keyslot_set_key(cd, jobj_keyslot,
password, password_len,
volume_key, volume_key_len);
if (r < 0)
return r;
if (!r)
r = LUKS2_hdr_write(cd, hdr);
r = LUKS2_hdr_write(cd, hdr);
if (r < 0)
return r;
device_write_unlock(cd, crypt_metadata_device(cd));
return keyslot;
return r < 0 ? r : keyslot;
}
static int luks2_keyslot_wipe(struct crypt_device *cd, int keyslot)
@@ -613,6 +606,9 @@ static int luks2_keyslot_dump(struct crypt_device *cd, int keyslot)
json_object_object_get_ex(jobj_area, "encryption", &jobj1);
log_std(cd, "\tCipher: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_area, "key_size", &jobj1);
log_std(cd, "\tCipher key: %u bits\n", json_object_get_uint32(jobj1) * 8);
json_object_object_get_ex(jobj_kdf, "type", &jobj1);
log_std(cd, "\tPBKDF: %s\n", json_object_get_string(jobj1));
@@ -640,6 +636,9 @@ static int luks2_keyslot_dump(struct crypt_device *cd, int keyslot)
json_object_object_get_ex(jobj_af, "stripes", &jobj1);
log_std(cd, "\tAF stripes: %u\n", json_object_get_int(jobj1));
json_object_object_get_ex(jobj_af, "hash", &jobj1);
log_std(cd, "\tAF hash: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_area, "offset", &jobj1);
log_std(cd, "\tArea offset:%" PRIu64 " [bytes]\n", json_object_get_uint64(jobj1));
@@ -665,31 +664,31 @@ static int luks2_keyslot_validate(struct crypt_device *cd, json_object *jobj_key
count = json_object_object_length(jobj_kdf);
jobj1 = json_contains(jobj_kdf, "", "kdf section", "type", json_type_string);
jobj1 = json_contains(cd, jobj_kdf, "", "kdf section", "type", json_type_string);
if (!jobj1)
return -EINVAL;
type = json_object_get_string(jobj1);
if (!strcmp(type, CRYPT_KDF_PBKDF2)) {
if (count != 4 || /* type, salt, hash, iterations only */
!json_contains(jobj_kdf, "kdf type", type, "hash", json_type_string) ||
!json_contains(jobj_kdf, "kdf type", type, "iterations", json_type_int) ||
!json_contains(jobj_kdf, "kdf type", type, "salt", json_type_string))
!json_contains(cd, jobj_kdf, "kdf type", type, "hash", json_type_string) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "iterations", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "salt", json_type_string))
return -EINVAL;
} else if (!strcmp(type, CRYPT_KDF_ARGON2I) || !strcmp(type, CRYPT_KDF_ARGON2ID)) {
if (count != 5 || /* type, salt, time, memory, cpus only */
!json_contains(jobj_kdf, "kdf type", type, "time", json_type_int) ||
!json_contains(jobj_kdf, "kdf type", type, "memory", json_type_int) ||
!json_contains(jobj_kdf, "kdf type", type, "cpus", json_type_int) ||
!json_contains(jobj_kdf, "kdf type", type, "salt", json_type_string))
!json_contains(cd, jobj_kdf, "kdf type", type, "time", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "memory", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "cpus", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "salt", json_type_string))
return -EINVAL;
}
if (!json_object_object_get_ex(jobj_af, "type", &jobj1))
return -EINVAL;
if (!strcmp(json_object_get_string(jobj1), "luks1")) {
if (!json_contains(jobj_af, "", "luks1 af", "hash", json_type_string) ||
!json_contains(jobj_af, "", "luks1 af", "stripes", json_type_int))
if (!json_contains(cd, jobj_af, "", "luks1 af", "hash", json_type_string) ||
!json_contains(cd, jobj_af, "", "luks1 af", "stripes", json_type_int))
return -EINVAL;
} else
return -EINVAL;
@@ -698,10 +697,10 @@ static int luks2_keyslot_validate(struct crypt_device *cd, json_object *jobj_key
if (!json_object_object_get_ex(jobj_area, "type", &jobj1))
return -EINVAL;
if (!strcmp(json_object_get_string(jobj1), "raw")) {
if (!json_contains(jobj_area, "area", "raw type", "encryption", json_type_string) ||
!json_contains(jobj_area, "area", "raw type", "key_size", json_type_int) ||
!json_contains(jobj_area, "area", "raw type", "offset", json_type_string) ||
!json_contains(jobj_area, "area", "raw type", "size", json_type_string))
if (!json_contains(cd, jobj_area, "area", "raw type", "encryption", json_type_string) ||
!json_contains(cd, jobj_area, "area", "raw type", "key_size", json_type_int) ||
!json_contains(cd, jobj_area, "area", "raw type", "offset", json_type_string) ||
!json_contains(cd, jobj_area, "area", "raw type", "size", json_type_string))
return -EINVAL;
} else
return -EINVAL;
@@ -717,7 +716,7 @@ static int luks2_keyslot_update(struct crypt_device *cd,
json_object *jobj_keyslot;
int r;
log_dbg("Updating LUKS2 keyslot %d.", keyslot);
log_dbg(cd, "Updating LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
@@ -728,8 +727,8 @@ static int luks2_keyslot_update(struct crypt_device *cd,
r = luks2_keyslot_update_json(cd, jobj_keyslot, params);
if (!r && LUKS2_check_json_size(hdr)) {
log_dbg("Not enough space in header json area for updated keyslot %d.", keyslot);
if (!r && LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for updated keyslot %d.", keyslot);
r = -ENOSPC;
}


@@ -0,0 +1,329 @@
/*
* LUKS - Linux Unified Key Setup v2, reencryption keyslot handler
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
static int reenc_keyslot_open(struct crypt_device *cd,
int keyslot,
const char *password,
size_t password_len,
char *volume_key,
size_t volume_key_len)
{
return -ENOENT;
}
int reenc_keyslot_alloc(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const struct crypt_params_reencrypt *params)
{
int r;
json_object *jobj_keyslots, *jobj_keyslot, *jobj_area;
uint64_t area_offset, area_length;
log_dbg(cd, "Allocating reencrypt keyslot %d.", keyslot);
if (keyslot < 0 || keyslot >= LUKS2_KEYSLOTS_MAX)
return -ENOMEM;
if (!json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots))
return -EINVAL;
/* encryption doesn't require area (we shift data and backup will be available) */
if (!params->data_shift) {
r = LUKS2_find_area_max_gap(cd, hdr, &area_offset, &area_length);
if (r < 0)
return r;
} else { /* we can't have keyslot w/o area...bug? */
r = LUKS2_find_area_gap(cd, hdr, 1, &area_offset, &area_length);
if (r < 0)
return r;
}
jobj_keyslot = json_object_new_object();
if (!jobj_keyslot)
return -ENOMEM;
jobj_area = json_object_new_object();
if (params->data_shift) {
json_object_object_add(jobj_area, "type", json_object_new_string("datashift"));
json_object_object_add(jobj_area, "shift_size", json_object_new_uint64(params->data_shift << SECTOR_SHIFT));
} else
/* except data shift protection, initial setting is irrelevant. Type can be changed during reencryption */
json_object_object_add(jobj_area, "type", json_object_new_string("none"));
json_object_object_add(jobj_area, "offset", json_object_new_uint64(area_offset));
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_length));
json_object_object_add(jobj_keyslot, "type", json_object_new_string("reencrypt"));
json_object_object_add(jobj_keyslot, "key_size", json_object_new_int(1)); /* useless but mandatory */
json_object_object_add(jobj_keyslot, "mode", json_object_new_string(params->mode));
if (params->direction == CRYPT_REENCRYPT_FORWARD)
json_object_object_add(jobj_keyslot, "direction", json_object_new_string("forward"));
else if (params->direction == CRYPT_REENCRYPT_BACKWARD)
json_object_object_add(jobj_keyslot, "direction", json_object_new_string("backward"));
else
return -EINVAL;
json_object_object_add(jobj_keyslot, "area", jobj_area);
json_object_object_add_by_uint(jobj_keyslots, keyslot, jobj_keyslot);
if (LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "New keyslot too large to fit in free metadata space.");
json_object_object_del_by_uint(jobj_keyslots, keyslot);
return -ENOSPC;
}
JSON_DBG(cd, hdr->jobj, "JSON:");
return 0;
}
static int reenc_keyslot_store_data(struct crypt_device *cd,
json_object *jobj_keyslot,
const void *buffer, size_t buffer_len)
{
int devfd, r;
json_object *jobj_area, *jobj_offset, *jobj_length;
uint64_t area_offset, area_length;
struct device *device = crypt_metadata_device(cd);
if (!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area) ||
!json_object_object_get_ex(jobj_area, "offset", &jobj_offset) ||
!json_object_object_get_ex(jobj_area, "size", &jobj_length))
return -EINVAL;
area_offset = json_object_get_uint64(jobj_offset);
area_length = json_object_get_uint64(jobj_length);
if (!area_offset || !area_length || ((uint64_t)buffer_len > area_length))
return -EINVAL;
devfd = device_open_locked(cd, device, O_RDWR);
if (devfd >= 0) {
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), CONST_CAST(void *)buffer,
buffer_len, area_offset) < 0)
r = -EIO;
else
r = 0;
} else
r = -EINVAL;
if (r)
log_err(cd, _("IO error while encrypting keyslot."));
return r;
}
static int reenc_keyslot_store(struct crypt_device *cd,
int keyslot,
const char *password __attribute__((unused)),
size_t password_len __attribute__((unused)),
const char *buffer,
size_t buffer_len)
{
struct luks2_hdr *hdr;
json_object *jobj_keyslot;
int r = 0;
if (!cd || !buffer || !buffer_len)
return -EINVAL;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
log_dbg(cd, "Reencrypt keyslot %d store.", keyslot);
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -EINVAL;
r = LUKS2_device_write_lock(cd, hdr, crypt_metadata_device(cd));
if (r)
return r;
r = reenc_keyslot_store_data(cd, jobj_keyslot, buffer, buffer_len);
if (r < 0) {
device_write_unlock(cd, crypt_metadata_device(cd));
return r;
}
r = LUKS2_hdr_write(cd, hdr);
device_write_unlock(cd, crypt_metadata_device(cd));
return r < 0 ? r : keyslot;
}
int reenc_keyslot_update(struct crypt_device *cd,
const struct luks2_reenc_context *rh)
{
json_object *jobj_keyslot, *jobj_area, *jobj_area_type;
struct luks2_hdr *hdr;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, rh->reenc_keyslot);
if (!jobj_keyslot)
return -EINVAL;
json_object_object_get_ex(jobj_keyslot, "area", &jobj_area);
json_object_object_get_ex(jobj_area, "type", &jobj_area_type);
if (rh->rp.type == REENC_PROTECTION_CHECKSUM) {
log_dbg(cd, "Updating reencrypt keyslot for checksum protection.");
json_object_object_add(jobj_area, "type", json_object_new_string("checksum"));
json_object_object_add(jobj_area, "hash", json_object_new_string(rh->rp.p.csum.hash));
json_object_object_add(jobj_area, "sector_size", json_object_new_int64(rh->alignment));
} else if (rh->rp.type == REENC_PROTECTION_NONE) {
log_dbg(cd, "Updating reencrypt keyslot for none protection.");
json_object_object_add(jobj_area, "type", json_object_new_string("none"));
json_object_object_del(jobj_area, "hash");
} else if (rh->rp.type == REENC_PROTECTION_JOURNAL) {
log_dbg(cd, "Updating reencrypt keyslot for journal protection.");
json_object_object_add(jobj_area, "type", json_object_new_string("journal"));
json_object_object_del(jobj_area, "hash");
} else
log_dbg(cd, "No update of reencrypt keyslot needed.");
return 0;
}
static int reenc_keyslot_wipe(struct crypt_device *cd, int keyslot)
{
return 0;
}
static int reenc_keyslot_dump(struct crypt_device *cd, int keyslot)
{
json_object *jobj_keyslot, *jobj_area, *jobj_direction, *jobj_mode, *jobj_resilience,
*jobj1;
jobj_keyslot = LUKS2_get_keyslot_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), keyslot);
if (!jobj_keyslot)
return -EINVAL;
if (!json_object_object_get_ex(jobj_keyslot, "direction", &jobj_direction) ||
!json_object_object_get_ex(jobj_keyslot, "mode", &jobj_mode) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area) ||
!json_object_object_get_ex(jobj_area, "type", &jobj_resilience))
return -EINVAL;
log_std(cd, "\t%-12s%s\n", "Mode:", json_object_get_string(jobj_mode));
log_std(cd, "\t%-12s%s\n", "Direction:", json_object_get_string(jobj_direction));
log_std(cd, "\t%-12s%s\n", "Resilience:", json_object_get_string(jobj_resilience));
if (!strcmp(json_object_get_string(jobj_resilience), "checksum")) {
json_object_object_get_ex(jobj_area, "hash", &jobj1);
log_std(cd, "\t%-12s%s\n", "Hash:", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_area, "sector_size", &jobj1);
log_std(cd, "\t%-12s%d [bytes]\n", "Hash data:", json_object_get_int(jobj1));
} else if (!strcmp(json_object_get_string(jobj_resilience), "datashift")) {
json_object_object_get_ex(jobj_area, "shift_size", &jobj1);
log_std(cd, "\t%-12s%" PRIu64 " [bytes]\n", "Shift size:", json_object_get_uint64(jobj1));
}
json_object_object_get_ex(jobj_area, "offset", &jobj1);
log_std(cd, "\t%-12s%" PRIu64 " [bytes]\n", "Area offset:", json_object_get_uint64(jobj1));
json_object_object_get_ex(jobj_area, "size", &jobj1);
log_std(cd, "\t%-12s%" PRIu64 " [bytes]\n", "Area length:", json_object_get_uint64(jobj1));
return 0;
}
static int reenc_keyslot_validate(struct crypt_device *cd, json_object *jobj_keyslot)
{
json_object *jobj_mode, *jobj_area, *jobj_type, *jobj_shift_size, *jobj_hash, *jobj_sector_size;
const char *mode, *type;
uint32_t sector_size;
uint64_t shift_size;
/* mode (string: encrypt,reencrypt,decrypt)
* direction (string: forward,backward)
* area {
* type: (string: datashift, journal, checksum, none)
* hash: (string: checksum only)
* sector_size (uint32: checksum only)
* shift_size (uint64: datashift only)
* }
*/
/* area and area type are validated in general validation code */
if (!jobj_keyslot || !json_object_object_get_ex(jobj_keyslot, "area", &jobj_area) ||
!json_object_object_get_ex(jobj_area, "type", &jobj_type))
return -EINVAL;
jobj_mode = json_contains(cd, jobj_keyslot, "", "reencrypt keyslot", "mode", json_type_string);
if (!jobj_mode || !json_contains(cd, jobj_keyslot, "", "reencrypt keyslot", "direction", json_type_string))
return -EINVAL;
mode = json_object_get_string(jobj_mode);
type = json_object_get_string(jobj_type);
if (strcmp(mode, "reencrypt") && strcmp(mode, "encrypt") &&
strcmp(mode, "decrypt")) {
log_dbg(cd, "Illegal reencrypt mode %s.", mode);
return -EINVAL;
}
if (!strcmp(type, "checksum")) {
jobj_hash = json_contains(cd, jobj_area, "type:checksum", "Keyslot area", "hash", json_type_string);
jobj_sector_size = json_contains(cd, jobj_area, "type:checksum", "Keyslot area", "sector_size", json_type_int);
if (!jobj_hash || !jobj_sector_size)
return -EINVAL;
if (!validate_json_uint32(jobj_sector_size))
return -EINVAL;
sector_size = json_object_get_uint32(jobj_sector_size);
if (sector_size < SECTOR_SIZE || NOTPOW2(sector_size)) {
log_dbg(cd, "Invalid sector_size (%" PRIu32 ") for checksum resilience mode.", sector_size);
return -EINVAL;
}
} else if (!strcmp(type, "datashift")) {
if (!(jobj_shift_size = json_contains(cd, jobj_area, "type:datashift", "Keyslot area", "shift_size", json_type_string)))
return -EINVAL;
shift_size = json_object_get_uint64(jobj_shift_size);
if (!shift_size)
return -EINVAL;
if (MISALIGNED_512(shift_size)) {
log_dbg(cd, "Shift size field has to be aligned to sector size: %" PRIu32, SECTOR_SIZE);
return -EINVAL;
}
}
return 0;
}
const keyslot_handler reenc_keyslot = {
.name = "reencrypt",
.open = reenc_keyslot_open,
.store = reenc_keyslot_store, /* initialization only or also per every chunk write */
.wipe = reenc_keyslot_wipe,
.dump = reenc_keyslot_dump,
.validate = reenc_keyslot_validate
};
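The resilience-mode rules enforced by `reenc_keyslot_validate()` reduce to two arithmetic checks on the area fields: a checksum-mode `sector_size` must be a power of two no smaller than 512, and a datashift-mode `shift_size` must be non-zero and 512-aligned. A minimal standalone sketch of those checks — the helper names here are illustrative stand-ins for cryptsetup's `NOTPOW2` and `MISALIGNED_512` macros, not cryptsetup API:

```c
#include <stdint.h>

#define SECTOR_SIZE 512

/* Stand-in for NOTPOW2: true when x is 0 or not a power of two. */
static int notpow2(uint64_t x) { return x == 0 || (x & (x - 1)) != 0; }

/* Stand-in for MISALIGNED_512: true when x is not a multiple of 512. */
static int misaligned_512(uint64_t x) { return (x & (SECTOR_SIZE - 1)) != 0; }

/* checksum resilience: sector_size must be >= 512 and a power of two */
static int checksum_sector_size_ok(uint32_t sector_size)
{
	return sector_size >= SECTOR_SIZE && !notpow2(sector_size);
}

/* datashift resilience: shift_size must be non-zero and 512-aligned */
static int datashift_shift_size_ok(uint64_t shift_size)
{
	return shift_size != 0 && !misaligned_512(shift_size);
}
```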


@@ -1,9 +1,9 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS1 conversion code
*
* Copyright (C) 2015-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2015-2018, Milan Broz. All rights reserved.
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Ondrej Kozina
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -24,6 +24,14 @@
#include "../luks1/luks.h"
#include "../luks1/af.h"
int LUKS2_check_cipher(struct crypt_device *cd,
size_t keylength,
const char *cipher,
const char *cipher_mode)
{
return LUKS_check_cipher(cd, keylength, cipher, cipher_mode);
}
static int json_luks1_keyslot(const struct luks_phdr *hdr_v1, int keyslot, struct json_object **keyslot_object)
{
char *base64_str, cipher[LUKS_CIPHERNAME_L+LUKS_CIPHERMODE_L];
@@ -93,24 +101,22 @@ static int json_luks1_keyslot(const struct luks_phdr *hdr_v1, int keyslot, struc
static int json_luks1_keyslots(const struct luks_phdr *hdr_v1, struct json_object **keyslots_object)
{
char keyslot_str[2];
int key_slot, r;
int keyslot, r;
struct json_object *keyslot_obj, *field;
keyslot_obj = json_object_new_object();
if (!keyslot_obj)
return -ENOMEM;
for (key_slot = 0; key_slot < LUKS_NUMKEYS; key_slot++) {
if (hdr_v1->keyblock[key_slot].active != LUKS_KEY_ENABLED)
for (keyslot = 0; keyslot < LUKS_NUMKEYS; keyslot++) {
if (hdr_v1->keyblock[keyslot].active != LUKS_KEY_ENABLED)
continue;
r = json_luks1_keyslot(hdr_v1, key_slot, &field);
r = json_luks1_keyslot(hdr_v1, keyslot, &field);
if (r) {
json_object_put(keyslot_obj);
return r;
}
(void) snprintf(keyslot_str, sizeof(keyslot_str), "%d", key_slot);
json_object_object_add(keyslot_obj, keyslot_str, field);
json_object_object_add_by_uint(keyslot_obj, keyslot, field);
}
*keyslots_object = keyslot_obj;
@@ -190,7 +196,6 @@ static int json_luks1_segment(const struct luks_phdr *hdr_v1, struct json_object
static int json_luks1_segments(const struct luks_phdr *hdr_v1, struct json_object **segments_object)
{
char num[16];
int r;
struct json_object *segments_obj, *field;
@@ -203,8 +208,7 @@ static int json_luks1_segments(const struct luks_phdr *hdr_v1, struct json_objec
json_object_put(segments_obj);
return r;
}
snprintf(num, sizeof(num), "%u", CRYPT_DEFAULT_SEGMENT);
json_object_object_add(segments_obj, num, field);
json_object_object_add_by_uint(segments_obj, 0, field);
*segments_object = segments_obj;
return 0;
@@ -423,46 +427,45 @@ static void move_keyslot_offset(json_object *jobj, int offset_add)
static int move_keyslot_areas(struct crypt_device *cd, off_t offset_from,
off_t offset_to, size_t buf_size)
{
int devfd, r = -EIO;
struct device *device = crypt_metadata_device(cd);
void *buf = NULL;
int r = -EIO, devfd = -1;
log_dbg("Moving keyslot areas of size %zu from %jd to %jd.",
log_dbg(cd, "Moving keyslot areas of size %zu from %jd to %jd.",
buf_size, (intmax_t)offset_from, (intmax_t)offset_to);
if (posix_memalign(&buf, crypt_getpagesize(), buf_size))
return -ENOMEM;
devfd = device_open(device, O_RDWR);
if (devfd == -1) {
devfd = device_open(cd, device, O_RDWR);
if (devfd < 0) {
free(buf);
return -EIO;
}
/* This can safely fail (for block devices). It only allocates space if it is possible. */
if (posix_fallocate(devfd, offset_to, buf_size))
log_dbg("Preallocation (fallocate) of new keyslot area not available.");
log_dbg(cd, "Preallocation (fallocate) of new keyslot area not available.");
/* Try to read *new* area to check that area is there (trimmed backup). */
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_to)!= (ssize_t)buf_size)
goto out;
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_from)!= (ssize_t)buf_size)
goto out;
if (write_lseek_blockwise(devfd, device_block_size(device),
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_to) != (ssize_t)buf_size)
goto out;
r = 0;
out:
device_sync(device, devfd);
close(devfd);
device_sync(cd, device);
crypt_memzero(buf, buf_size);
free(buf);
@@ -473,7 +476,7 @@ static int luks_header_in_use(struct crypt_device *cd)
{
int r;
r = lookup_dm_dev_by_uuid(crypt_get_uuid(cd), crypt_get_type(cd));
r = lookup_dm_dev_by_uuid(cd, crypt_get_uuid(cd), crypt_get_type(cd));
if (r < 0)
log_err(cd, _("Can not check status of device with uuid: %s."), crypt_get_uuid(cd));
@@ -483,29 +486,28 @@ static int luks_header_in_use(struct crypt_device *cd)
/* Check if there is a luksmeta area (foreign metadata created by the luksmeta package) */
static int luksmeta_header_present(struct crypt_device *cd, off_t luks1_size)
{
int devfd, r = 0;
static const uint8_t LM_MAGIC[] = { 'L', 'U', 'K', 'S', 'M', 'E', 'T', 'A' };
struct device *device = crypt_metadata_device(cd);
void *buf = NULL;
int devfd, r = 0;
if (posix_memalign(&buf, crypt_getpagesize(), sizeof(LM_MAGIC)))
return -ENOMEM;
devfd = device_open(device, O_RDONLY);
if (devfd == -1) {
devfd = device_open(cd, device, O_RDONLY);
if (devfd < 0) {
free(buf);
return -EIO;
}
/* Note: we must not detect failure as problem here, header can be trimmed. */
if (read_lseek_blockwise(devfd, device_block_size(device), device_alignment(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device), device_alignment(device),
buf, sizeof(LM_MAGIC), luks1_size) == (ssize_t)sizeof(LM_MAGIC) &&
!memcmp(LM_MAGIC, buf, sizeof(LM_MAGIC))) {
log_err(cd, _("Unable to convert header with LUKSMETA additional metadata."));
r = -EBUSY;
}
close(devfd);
free(buf);
return r;
}
@@ -528,14 +530,14 @@ int LUKS2_luks1_to_luks2(struct crypt_device *cd, struct luks_phdr *hdr1, struct
return -EINVAL;
if (LUKS_keyslots_offset(hdr1) != (LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE)) {
log_dbg("Unsupported keyslots material offset: %zu.", LUKS_keyslots_offset(hdr1));
log_dbg(cd, "Unsupported keyslots material offset: %zu.", LUKS_keyslots_offset(hdr1));
return -EINVAL;
}
if (luksmeta_header_present(cd, luks1_size))
return -EINVAL;
log_dbg("Max size: %" PRIu64 ", LUKS1 (full) header size %zu , required shift: %zu",
log_dbg(cd, "Max size: %" PRIu64 ", LUKS1 (full) header size %zu , required shift: %zu",
max_size, luks1_size, luks1_shift);
if ((max_size - luks1_size) < luks1_shift) {
log_err(cd, _("Unable to move keyslot area. Not enough space."));
@@ -585,16 +587,18 @@ int LUKS2_luks1_to_luks2(struct crypt_device *cd, struct luks_phdr *hdr1, struct
// Write JSON hdr2
r = LUKS2_hdr_write(cd, hdr2);
out:
LUKS2_hdr_free(hdr2);
LUKS2_hdr_free(cd, hdr2);
return r;
}
static int keyslot_LUKS1_compatible(struct luks2_hdr *hdr, int keyslot, uint32_t key_size)
static int keyslot_LUKS1_compatible(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, uint32_t key_size, const char *hash)
{
json_object *jobj_keyslot, *jobj, *jobj_kdf, *jobj_af;
uint64_t l2_offset, l2_length;
int ks_key_size;
size_t ks_key_size;
const char *ks_cipher, *data_cipher;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
@@ -608,7 +612,9 @@ static int keyslot_LUKS1_compatible(struct luks2_hdr *hdr, int keyslot, uint32_t
jobj = NULL;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_object_get_ex(jobj_kdf, "type", &jobj) ||
strcmp(json_object_get_string(jobj), CRYPT_KDF_PBKDF2))
strcmp(json_object_get_string(jobj), CRYPT_KDF_PBKDF2) ||
!json_object_object_get_ex(jobj_kdf, "hash", &jobj) ||
strcmp(json_object_get_string(jobj), hash))
return 0;
jobj = NULL;
@@ -619,14 +625,16 @@ static int keyslot_LUKS1_compatible(struct luks2_hdr *hdr, int keyslot, uint32_t
jobj = NULL;
if (!json_object_object_get_ex(jobj_af, "hash", &jobj) ||
crypt_hash_size(json_object_get_string(jobj)) < 0)
(crypt_hash_size(json_object_get_string(jobj)) < 0) ||
strcmp(json_object_get_string(jobj), hash))
return 0;
/* FIXME: should this go to validation code instead (aka invalid luks2 header if assigned to segment 0)? */
/* FIXME: check all keyslots are assigned to segment id 0, and segments count == 1 */
ks_key_size = LUKS2_get_keyslot_key_size(hdr, keyslot);
if (ks_key_size < 0 || (int)key_size != LUKS2_get_keyslot_key_size(hdr, keyslot)) {
log_dbg("Key length in keyslot %d is different from volume key length", keyslot);
ks_cipher = LUKS2_get_keyslot_cipher(hdr, keyslot, &ks_key_size);
data_cipher = LUKS2_get_cipher(hdr, CRYPT_DEFAULT_SEGMENT);
if (!ks_cipher || !data_cipher || key_size != ks_key_size || strcmp(ks_cipher, data_cipher)) {
log_dbg(cd, "Cipher in keyslot %d is different from volume key encryption.", keyslot);
return 0;
}
@@ -634,7 +642,7 @@ static int keyslot_LUKS1_compatible(struct luks2_hdr *hdr, int keyslot, uint32_t
return 0;
if (l2_length != (size_round_up(AF_split_sectors(key_size, LUKS_STRIPES) * SECTOR_SIZE, 4096))) {
log_dbg("Area length in LUKS2 keyslot (%d) is not compatible with LUKS1", keyslot);
log_dbg(cd, "Area length in LUKS2 keyslot (%d) is not compatible with LUKS1", keyslot);
return 0;
}
@@ -647,6 +655,7 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct
size_t buf_size, buf_offset;
char cipher[LUKS_CIPHERNAME_L-1], cipher_mode[LUKS_CIPHERMODE_L-1];
char digest[LUKS_DIGESTSIZE], digest_salt[LUKS_SALTSIZE];
const char *hash;
size_t len;
json_object *jobj_keyslot, *jobj_digest, *jobj_segment, *jobj_kdf, *jobj_area, *jobj1, *jobj2;
uint32_t key_size;
@@ -662,6 +671,11 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct
if (!jobj_segment)
return -EINVAL;
if (json_segment_get_sector_size(jobj_segment) != SECTOR_SIZE) {
log_err(cd, _("Cannot convert to LUKS1 format - default segment encryption sector size is not 512 bytes."));
return -EINVAL;
}
json_object_object_get_ex(hdr2->jobj, "digests", &jobj1);
if (!json_object_object_get_ex(jobj_digest, "type", &jobj2) ||
strcmp(json_object_get_string(jobj2), "pbkdf2") ||
@@ -669,6 +683,9 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct
log_err(cd, _("Cannot convert to LUKS1 format - key slot digests are not LUKS1 compatible."));
return -EINVAL;
}
if (!json_object_object_get_ex(jobj_digest, "hash", &jobj2))
return -EINVAL;
hash = json_object_get_string(jobj2);
r = crypt_parse_name_and_mode(LUKS2_get_cipher(hdr2, CRYPT_DEFAULT_SEGMENT), cipher, NULL, cipher_mode);
if (r < 0)
@@ -706,7 +723,7 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct
return -EINVAL;
}
if (!keyslot_LUKS1_compatible(hdr2, i, key_size)) {
if (!keyslot_LUKS1_compatible(cd, hdr2, i, key_size, hash)) {
log_err(cd, _("Cannot convert to LUKS1 format - keyslot %u is not LUKS1 compatible."), i);
return -EINVAL;
}
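The area-length check in `keyslot_LUKS1_compatible()` expects the LUKS2 keyslot area to be exactly the size of the LUKS1 anti-forensic-split key material, rounded up to 4096 bytes. A standalone sketch of that arithmetic, assuming LUKS1's usual stripe count of 4000 and the common definitions of `AF_split_sectors()` and `size_round_up()` (reimplemented here for illustration, not the cryptsetup originals):

```c
#include <stdint.h>
#include <stddef.h>

#define SECTOR_SIZE 512
#define LUKS_STRIPES 4000 /* LUKS1 anti-forensic stripe count */

/* Sectors needed for AF-split key material: key_size * stripes, rounded up. */
static uint64_t af_split_sectors(size_t blocksize, unsigned blocknumbers)
{
	return ((uint64_t)blocksize * blocknumbers + SECTOR_SIZE - 1) / SECTOR_SIZE;
}

/* Round size up to a multiple of block. */
static uint64_t size_round_up(uint64_t size, uint64_t block)
{
	return ((size + block - 1) / block) * block;
}

/* Expected LUKS2 keyslot area length for a LUKS1-compatible keyslot. */
static uint64_t luks1_area_length(size_t key_size)
{
	return size_round_up(af_split_sectors(key_size, LUKS_STRIPES) * SECTOR_SIZE, 4096);
}
```

For a 256-bit (32-byte) volume key this yields 128000 bytes of split material, rounded up to 131072; a 512-bit key needs 258048 bytes.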

lib/luks2/luks2_reencrypt.c (new file, 3310 lines)

File diff suppressed because it is too large.

lib/luks2/luks2_segment.c (new file, 454 lines)
@@ -0,0 +1,454 @@
/*
* LUKS - Linux Unified Key Setup v2, internal segment handling
*
* Copyright (C) 2018-2019, Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019, Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
json_object *json_get_segments_jobj(json_object *hdr_jobj)
{
json_object *jobj_segments;
if (!hdr_jobj || !json_object_object_get_ex(hdr_jobj, "segments", &jobj_segments))
return NULL;
return jobj_segments;
}
/* use only on already validated 'segments' object */
uint64_t json_segments_get_minimal_offset(json_object *jobj_segments, unsigned blockwise)
{
uint64_t tmp, min = blockwise ? UINT64_MAX >> SECTOR_SHIFT : UINT64_MAX;
if (!jobj_segments)
return 0;
json_object_object_foreach(jobj_segments, key, val) {
UNUSED(key);
if (json_segment_is_backup(val))
continue;
tmp = json_segment_get_offset(val, blockwise);
if (!tmp)
return tmp;
if (tmp < min)
min = tmp;
}
return min;
}
uint64_t json_segment_get_offset(json_object *jobj_segment, unsigned blockwise)
{
json_object *jobj;
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "offset", &jobj))
return 0;
return blockwise ? json_object_get_uint64(jobj) >> SECTOR_SHIFT : json_object_get_uint64(jobj);
}
const char *json_segment_type(json_object *jobj_segment)
{
json_object *jobj;
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "type", &jobj))
return NULL;
return json_object_get_string(jobj);
}
uint64_t json_segment_get_iv_offset(json_object *jobj_segment)
{
json_object *jobj;
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "iv_tweak", &jobj))
return 0;
return json_object_get_uint64(jobj);
}
uint64_t json_segment_get_size(json_object *jobj_segment, unsigned blockwise)
{
json_object *jobj;
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "size", &jobj))
return 0;
return blockwise ? json_object_get_uint64(jobj) >> SECTOR_SHIFT : json_object_get_uint64(jobj);
}
const char *json_segment_get_cipher(json_object *jobj_segment)
{
json_object *jobj;
/* FIXME: Pseudo "null" cipher should be handled elsewhere */
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "encryption", &jobj))
return "null";
return json_object_get_string(jobj);
}
int json_segment_get_sector_size(json_object *jobj_segment)
{
json_object *jobj;
if (!jobj_segment ||
!json_object_object_get_ex(jobj_segment, "sector_size", &jobj))
return -1;
return json_object_get_int(jobj);
}
json_object *json_segment_get_flags(json_object *jobj_segment)
{
json_object *jobj;
if (!jobj_segment || !(json_object_object_get_ex(jobj_segment, "flags", &jobj)))
return NULL;
return jobj;
}
static bool json_segment_contains_flag(json_object *jobj_segment, const char *flag_str, size_t len)
{
int r, i;
json_object *jobj, *jobj_flags = json_segment_get_flags(jobj_segment);
if (!jobj_flags)
return false;
for (i = 0; i < (int)json_object_array_length(jobj_flags); i++) {
jobj = json_object_array_get_idx(jobj_flags, i);
if (len)
r = strncmp(json_object_get_string(jobj), flag_str, len);
else
r = strcmp(json_object_get_string(jobj), flag_str);
if (!r)
return true;
}
return false;
}
bool json_segment_is_backup(json_object *jobj_segment)
{
return json_segment_contains_flag(jobj_segment, "backup-", 7);
}
bool json_segment_is_reencrypt(json_object *jobj_segment)
{
return json_segment_contains_flag(jobj_segment, "in-reencryption", 0);
}
json_object *json_segments_get_segment(json_object *jobj_segments, int segment)
{
json_object *jobj;
char segment_name[16];
if (snprintf(segment_name, sizeof(segment_name), "%u", segment) < 1)
return NULL;
if (!json_object_object_get_ex(jobj_segments, segment_name, &jobj))
return NULL;
return jobj;
}
int json_segments_count(json_object *jobj_segments)
{
int count = 0;
if (!jobj_segments)
return -EINVAL;
json_object_object_foreach(jobj_segments, slot, val) {
UNUSED(slot);
if (!json_segment_is_backup(val))
count++;
}
return count;
}
static void _get_segment_or_id_by_flag(json_object *jobj_segments, const char *flag, unsigned id, void *retval)
{
json_object *jobj_flags, **jobj_ret = (json_object **)retval;
int *ret = (int *)retval;
if (!flag)
return;
json_object_object_foreach(jobj_segments, key, value) {
if (!json_object_object_get_ex(value, "flags", &jobj_flags))
continue;
if (LUKS2_array_jobj(jobj_flags, flag)) {
if (id)
*ret = atoi(key);
else
*jobj_ret = value;
return;
}
}
}
json_object *json_segments_get_segment_by_flag(json_object *jobj_segments, const char *flag)
{
json_object *jobj_segment = NULL;
if (jobj_segments)
_get_segment_or_id_by_flag(jobj_segments, flag, 0, &jobj_segment);
return jobj_segment;
}
void json_segment_remove_flag(json_object *jobj_segment, const char *flag)
{
json_object *jobj_flags, *jobj_flags_new;
if (!jobj_segment)
return;
jobj_flags = json_segment_get_flags(jobj_segment);
if (!jobj_flags)
return;
jobj_flags_new = LUKS2_array_remove(jobj_flags, flag);
if (!jobj_flags_new)
return;
if (json_object_array_length(jobj_flags_new) <= 0) {
json_object_put(jobj_flags_new);
json_object_object_del(jobj_segment, "flags");
} else
json_object_object_add(jobj_segment, "flags", jobj_flags_new);
}
static json_object *_segment_create_generic(const char *type, uint64_t offset, const uint64_t *length)
{
json_object *jobj = json_object_new_object();
if (!jobj)
return NULL;
json_object_object_add(jobj, "type", json_object_new_string(type));
json_object_object_add(jobj, "offset", json_object_new_uint64(offset));
json_object_object_add(jobj, "size", length ? json_object_new_uint64(*length) : json_object_new_string("dynamic"));
return jobj;
}
json_object *json_segment_create_linear(uint64_t offset, const uint64_t *length, unsigned reencryption)
{
json_object *jobj = _segment_create_generic("linear", offset, length);
if (reencryption)
LUKS2_segment_set_flag(jobj, "in-reencryption");
return jobj;
}
json_object *json_segment_create_crypt(uint64_t offset,
uint64_t iv_offset, const uint64_t *length,
const char *cipher, uint32_t sector_size,
unsigned reencryption)
{
json_object *jobj = _segment_create_generic("crypt", offset, length);
if (!jobj)
return NULL;
json_object_object_add(jobj, "iv_tweak", json_object_new_uint64(iv_offset));
json_object_object_add(jobj, "encryption", json_object_new_string(cipher));
json_object_object_add(jobj, "sector_size", json_object_new_int(sector_size));
if (reencryption)
LUKS2_segment_set_flag(jobj, "in-reencryption");
return jobj;
}
uint64_t LUKS2_segment_offset(struct luks2_hdr *hdr, int segment, unsigned blockwise)
{
return json_segment_get_offset(LUKS2_get_segment_jobj(hdr, segment), blockwise);
}
int json_segments_segment_in_reencrypt(json_object *jobj_segments)
{
json_object *jobj_flags;
json_object_object_foreach(jobj_segments, slot, val) {
if (!json_object_object_get_ex(val, "flags", &jobj_flags) ||
!LUKS2_array_jobj(jobj_flags, "in-reencryption"))
continue;
return atoi(slot);
}
return -1;
}
uint64_t LUKS2_segment_size(struct luks2_hdr *hdr, int segment, unsigned blockwise)
{
return json_segment_get_size(LUKS2_get_segment_jobj(hdr, segment), blockwise);
}
int LUKS2_segment_is_type(struct luks2_hdr *hdr, int segment, const char *type)
{
return !strcmp(json_segment_type(LUKS2_get_segment_jobj(hdr, segment)) ?: "", type);
}
int LUKS2_last_segment_by_type(struct luks2_hdr *hdr, const char *type)
{
json_object *jobj_segments;
int last_found = -1;
if (!type)
return -1;
if (!json_object_object_get_ex(hdr->jobj, "segments", &jobj_segments))
return -1;
json_object_object_foreach(jobj_segments, slot, val) {
if (json_segment_is_backup(val))
continue;
if (strcmp(type, json_segment_type(val) ?: ""))
continue;
if (atoi(slot) > last_found)
last_found = atoi(slot);
}
return last_found;
}
int LUKS2_segment_by_type(struct luks2_hdr *hdr, const char *type)
{
json_object *jobj_segments;
int first_found = -1;
if (!type)
return -EINVAL;
if (!json_object_object_get_ex(hdr->jobj, "segments", &jobj_segments))
return -EINVAL;
json_object_object_foreach(jobj_segments, slot, val) {
if (json_segment_is_backup(val))
continue;
if (strcmp(type, json_segment_type(val) ?: ""))
continue;
if (first_found < 0)
first_found = atoi(slot);
else if (atoi(slot) < first_found)
first_found = atoi(slot);
}
return first_found;
}
int LUKS2_segment_first_unused_id(struct luks2_hdr *hdr)
{
json_object *jobj_segments;
int id, last_id = -1;
if (!json_object_object_get_ex(hdr->jobj, "segments", &jobj_segments))
return -EINVAL;
json_object_object_foreach(jobj_segments, slot, val) {
UNUSED(val);
id = atoi(slot);
if (id > last_id)
last_id = id;
}
return last_id + 1;
}
int LUKS2_segment_set_flag(json_object *jobj_segment, const char *flag)
{
json_object *jobj_flags;
if (!jobj_segment || !flag)
return -EINVAL;
if (!json_object_object_get_ex(jobj_segment, "flags", &jobj_flags)) {
jobj_flags = json_object_new_array();
if (!jobj_flags)
return -ENOMEM;
json_object_object_add(jobj_segment, "flags", jobj_flags);
}
if (LUKS2_array_jobj(jobj_flags, flag))
return 0;
json_object_array_add(jobj_flags, json_object_new_string(flag));
return 0;
}
int LUKS2_segments_set(struct crypt_device *cd, struct luks2_hdr *hdr,
json_object *jobj_segments, int commit)
{
json_object_object_add(hdr->jobj, "segments", jobj_segments);
return commit ? LUKS2_hdr_write(cd, hdr) : 0;
}
int LUKS2_get_segment_id_by_flag(struct luks2_hdr *hdr, const char *flag)
{
int ret = -ENOENT;
json_object *jobj_segments = LUKS2_get_segments_jobj(hdr);
if (jobj_segments)
_get_segment_or_id_by_flag(jobj_segments, flag, 1, &ret);
return ret;
}
json_object *LUKS2_get_segment_by_flag(struct luks2_hdr *hdr, const char *flag)
{
json_object *jobj_segment = NULL,
*jobj_segments = LUKS2_get_segments_jobj(hdr);
if (jobj_segments)
_get_segment_or_id_by_flag(jobj_segments, flag, 0, &jobj_segment);
return jobj_segment;
}
json_object *LUKS2_get_ignored_segments(struct luks2_hdr *hdr)
{
json_object *jobj_segments, *jobj = json_object_new_object();
int i = 0;
if (!jobj || !json_object_object_get_ex(hdr->jobj, "segments", &jobj_segments))
return NULL;
json_object_object_foreach(jobj_segments, key, value) {
UNUSED(key);
if (json_segment_is_backup(value))
json_object_object_add_by_uint(jobj, i++, json_object_get(value));
}
return jobj;
}
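The scan in `json_segments_get_minimal_offset()` above skips backup segments, short-circuits to 0 as soon as any segment reports a zero offset, and otherwise tracks the minimum (shifted to sectors when `blockwise` is set). The same logic can be sketched over a flat array — the `struct seg` type and function name below are illustrative, not part of cryptsetup:

```c
#include <stdint.h>
#include <stddef.h>

#define SECTOR_SHIFT 9

/* Illustrative flat stand-in for a LUKS2 segment JSON object. */
struct seg {
	uint64_t offset; /* bytes */
	int backup;      /* segment carries a "backup-" flag */
};

/* Minimal non-backup offset, in sectors when blockwise is nonzero. */
static uint64_t minimal_offset(const struct seg *s, size_t n, unsigned blockwise)
{
	uint64_t tmp, min = blockwise ? UINT64_MAX >> SECTOR_SHIFT : UINT64_MAX;
	size_t i;

	for (i = 0; i < n; i++) {
		if (s[i].backup)
			continue;
		tmp = blockwise ? s[i].offset >> SECTOR_SHIFT : s[i].offset;
		if (!tmp)
			return 0; /* zero offset short-circuits, as in the original */
		if (tmp < min)
			min = tmp;
	}
	return min;
}
```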


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, token handling
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Milan Broz. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -45,21 +45,19 @@ int crypt_token_register(const crypt_token_handler *handler)
int i;
if (is_builtin_candidate(handler->name)) {
log_dbg("'" LUKS2_BUILTIN_TOKEN_PREFIX "' is reserved prefix for builtin tokens.");
log_dbg(NULL, "'" LUKS2_BUILTIN_TOKEN_PREFIX "' is reserved prefix for builtin tokens.");
return -EINVAL;
}
for (i = 0; i < LUKS2_TOKENS_MAX && token_handlers[i].h; i++) {
if (!strcmp(token_handlers[i].h->name, handler->name)) {
log_dbg("Keyslot handler %s is already registered.", handler->name);
log_dbg(NULL, "Keyslot handler %s is already registered.", handler->name);
return -EINVAL;
}
}
if (i == LUKS2_TOKENS_MAX) {
log_dbg("No more space for another token handler.");
if (i == LUKS2_TOKENS_MAX)
return -EINVAL;
}
token_handlers[i].h = handler;
return 0;
@@ -149,21 +147,20 @@ int LUKS2_token_create(struct crypt_device *cd,
if (!json_object_object_get_ex(hdr->jobj, "tokens", &jobj_tokens))
return -EINVAL;
snprintf(num, sizeof(num), "%d", token);
/* Remove token */
if (!json) {
snprintf(num, sizeof(num), "%d", token);
if (!json)
json_object_object_del(jobj_tokens, num);
} else {
else {
jobj = json_tokener_parse_verbose(json, &jerr);
if (!jobj) {
log_dbg("Token JSON parse failed.");
log_dbg(cd, "Token JSON parse failed.");
return -EINVAL;
}
snprintf(num, sizeof(num), "%d", token);
if (LUKS2_token_validate(hdr->jobj, jobj, num)) {
if (LUKS2_token_validate(cd, hdr->jobj, jobj, num)) {
json_object_put(jobj);
return -EINVAL;
}
@@ -172,7 +169,7 @@ int LUKS2_token_create(struct crypt_device *cd,
if (is_builtin_candidate(json_object_get_string(jobj_type))) {
th = LUKS2_token_handler_type_internal(cd, json_object_get_string(jobj_type));
if (!th || !th->set) {
log_dbg("%s is builtin token candidate with missing handler", json_object_get_string(jobj_type));
log_dbg(cd, "%s is builtin token candidate with missing handler", json_object_get_string(jobj_type));
json_object_put(jobj);
return -EINVAL;
}
@@ -182,13 +179,13 @@ int LUKS2_token_create(struct crypt_device *cd,
if (h && h->validate && h->validate(cd, json)) {
json_object_put(jobj);
log_dbg("Token type %s validation failed.", h->name);
log_dbg(cd, "Token type %s validation failed.", h->name);
return -EINVAL;
}
json_object_object_add(jobj_tokens, num, jobj);
if (LUKS2_check_json_size(hdr)) {
log_dbg("Not enough space in header json area for new token.");
if (LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new token.");
json_object_object_del(jobj_tokens, num);
return -ENOSPC;
}
@@ -252,7 +249,6 @@ int LUKS2_builtin_token_create(struct crypt_device *cd,
int commit)
{
const token_handler *th;
char num[16];
int r;
json_object *jobj_token, *jobj_tokens;
@@ -267,7 +263,6 @@ int LUKS2_builtin_token_create(struct crypt_device *cd,
}
if (token < 0 || token >= LUKS2_TOKENS_MAX)
return -EINVAL;
snprintf(num, sizeof(num), "%u", token);
r = th->set(&jobj_token, params);
if (r) {
@@ -276,17 +271,17 @@ int LUKS2_builtin_token_create(struct crypt_device *cd,
}
// builtin tokens must produce valid json
r = LUKS2_token_validate(hdr->jobj, jobj_token, "new");
r = LUKS2_token_validate(cd, hdr->jobj, jobj_token, "new");
assert(!r);
r = th->h->validate(cd, json_object_to_json_string_ext(jobj_token,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE));
assert(!r);
json_object_object_get_ex(hdr->jobj, "tokens", &jobj_tokens);
json_object_object_add(jobj_tokens, num, jobj_token);
if (LUKS2_check_json_size(hdr)) {
log_dbg("Not enough space in header json area for new %s token.", type);
json_object_object_del(jobj_tokens, num);
json_object_object_add_by_uint(jobj_tokens, token, jobj_token);
if (LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new %s token.", type);
json_object_object_del_by_uint(jobj_tokens, token);
return -ENOSPC;
}
@@ -315,14 +310,14 @@ static int LUKS2_token_open(struct crypt_device *cd,
return -EINVAL;
if (h->validate(cd, json)) {
log_dbg("Token %d (%s) validation failed.", token, h->name);
log_dbg(cd, "Token %d (%s) validation failed.", token, h->name);
return -EINVAL;
}
}
r = h->open(cd, token, buffer, buffer_len, usrptr);
if (r < 0)
log_dbg("Token %d (%s) open failed with %d.", token, h->name, r);
log_dbg(cd, "Token %d (%s) open failed with %d.", token, h->name, r);
return r;
}
@@ -352,7 +347,7 @@ static int LUKS2_keyslot_open_by_token(struct crypt_device *cd,
{
const crypt_token_handler *h;
json_object *jobj_token, *jobj_token_keyslots, *jobj;
const char *num = NULL;
unsigned int num = 0;
int i, r;
if (!(h = LUKS2_token_handler(cd, token)))
@@ -370,15 +365,15 @@ static int LUKS2_keyslot_open_by_token(struct crypt_device *cd,
r = -EINVAL;
for (i = 0; i < (int) json_object_array_length(jobj_token_keyslots) && r < 0; i++) {
jobj = json_object_array_get_idx(jobj_token_keyslots, i);
num = json_object_get_string(jobj);
log_dbg("Trying to open keyslot %s with token %d (type %s).", num, token, h->name);
r = LUKS2_keyslot_open(cd, atoi(num), segment, buffer, buffer_len, vk);
num = atoi(json_object_get_string(jobj));
log_dbg(cd, "Trying to open keyslot %u with token %d (type %s).", num, token, h->name);
r = LUKS2_keyslot_open(cd, num, segment, buffer, buffer_len, vk);
}
if (r >= 0 && num)
return atoi(num);
if (r < 0)
return r;
return r;
return num;
}
int LUKS2_token_open_and_activate(struct crypt_device *cd,
@@ -409,14 +404,16 @@ int LUKS2_token_open_and_activate(struct crypt_device *cd,
keyslot = r;
if ((name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd))
r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot);
if ((name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd)) {
if (!(r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot)))
flags |= CRYPT_ACTIVATE_KEYRING_KEY;
}
if (r >= 0 && name)
r = LUKS2_activate(cd, name, vk, flags);
if (r < 0 && vk)
crypt_drop_keyring_key(cd, vk->key_description);
if (r < 0)
crypt_drop_keyring_key(cd, vk);
crypt_free_volume_key(vk);
return r < 0 ? r : keyslot;
@@ -454,14 +451,16 @@ int LUKS2_token_open_and_activate_any(struct crypt_device *cd,
keyslot = r;
if (r >= 0 && (name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd))
r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot);
if (r >= 0 && (name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd)) {
if (!(r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot)))
flags |= CRYPT_ACTIVATE_KEYRING_KEY;
}
if (r >= 0 && name)
r = LUKS2_activate(cd, name, vk, flags);
if (r < 0 && vk)
crypt_drop_keyring_key(cd, vk->key_description);
if (r < 0)
crypt_drop_keyring_key(cd, vk);
crypt_free_volume_key(vk);
return r < 0 ? r : keyslot;
@@ -501,7 +500,7 @@ static int assign_one_keyslot(struct crypt_device *cd, struct luks2_hdr *hdr,
json_object *jobj1, *jobj_token, *jobj_token_keyslots;
char num[16];
log_dbg("Keyslot %i %s token %i.", keyslot, assign ? "assigned to" : "unassigned from", token);
log_dbg(cd, "Keyslot %i %s token %i.", keyslot, assign ? "assigned to" : "unassigned from", token);
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)


@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup v2, kernel keyring token
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -44,10 +44,10 @@ static int keyring_open(struct crypt_device *cd,
r = keyring_get_passphrase(json_object_get_string(jobj_key), buffer, buffer_len);
if (r == -ENOTSUP) {
log_dbg("Kernel keyring features disabled.");
log_dbg(cd, "Kernel keyring features disabled.");
return -EINVAL;
} else if (r < 0) {
log_dbg("keyring_get_passphrase failed (error %d)", r);
log_dbg(cd, "keyring_get_passphrase failed (error %d)", r);
return -EINVAL;
}
@@ -61,26 +61,26 @@ static int keyring_validate(struct crypt_device *cd __attribute__((unused)),
json_object *jobj_token, *jobj_key;
int r = 1;
log_dbg("Validating keyring token json");
log_dbg(cd, "Validating keyring token json");
jobj_token = json_tokener_parse_verbose(json, &jerr);
if (!jobj_token) {
log_dbg("Keyring token JSON parse failed.");
log_dbg(cd, "Keyring token JSON parse failed.");
return r;
}
if (json_object_object_length(jobj_token) != 3) {
log_dbg("Keyring token is expected to have exactly 3 fields.");
log_dbg(cd, "Keyring token is expected to have exactly 3 fields.");
goto out;
}
if (!json_object_object_get_ex(jobj_token, "key_description", &jobj_key)) {
log_dbg("missing key_description field.");
log_dbg(cd, "missing key_description field.");
goto out;
}
if (!json_object_is_type(jobj_key, json_type_string)) {
log_dbg("key_description is not a string.");
log_dbg(cd, "key_description is not a string.");
goto out;
}


@@ -1,7 +1,7 @@
/*
* cryptsetup kernel RNG access functions
*
* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

File diff suppressed because it is too large.


@@ -1,8 +1,8 @@
/*
* TCRYPT (TrueCrypt-compatible) and VeraCrypt volume handling
*
* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2018, Milan Broz
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -202,7 +202,8 @@ static struct tcrypt_algs tcrypt_cipher[] = {
{}
};
static int TCRYPT_hdr_from_disk(struct tcrypt_phdr *hdr,
static int TCRYPT_hdr_from_disk(struct crypt_device *cd,
struct tcrypt_phdr *hdr,
struct crypt_params_tcrypt *params,
int kdf_index, int cipher_index)
{
@@ -214,14 +215,14 @@ static int TCRYPT_hdr_from_disk(struct tcrypt_phdr *hdr,
crc32 = crypt_crc32(~0, (unsigned char*)&hdr->d, size) ^ ~0;
if (be16_to_cpu(hdr->d.version) > 3 &&
crc32 != be32_to_cpu(hdr->d.header_crc32)) {
log_dbg("TCRYPT header CRC32 mismatch.");
log_dbg(cd, "TCRYPT header CRC32 mismatch.");
return -EINVAL;
}
/* Check CRC32 of keys */
crc32 = crypt_crc32(~0, (unsigned char*)hdr->d.keys, sizeof(hdr->d.keys)) ^ ~0;
if (crc32 != be32_to_cpu(hdr->d.keys_crc32)) {
log_dbg("TCRYPT keys CRC32 mismatch.");
log_dbg(cd, "TCRYPT keys CRC32 mismatch.");
return -EINVAL;
}
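The header checks above call `crypt_crc32(~0, data, size) ^ ~0`, i.e. the standard reflected CRC-32 (IEEE polynomial 0xEDB88320) with an all-ones initial value and final XOR, matching what TrueCrypt stores in `header_crc32` and `keys_crc32`. A minimal table-free sketch of that convention (the real `crypt_crc32` lives in cryptsetup's crypto backend; this bitwise variant is only for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise reflected CRC-32 (polynomial 0xEDB88320).
 * Called as crc32(~0, buf, len) ^ ~0 to match the TCRYPT header check. */
static uint32_t crc32(uint32_t seed, const unsigned char *buf, size_t len)
{
	uint32_t crc = seed;
	size_t i;
	int j;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (j = 0; j < 8; j++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
	}
	return crc;
}
```

With the all-ones pre/post conditioning, the well-known check input "123456789" must hash to 0xCBF43926.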
@@ -433,7 +434,7 @@ static int TCRYPT_decrypt_hdr(struct crypt_device *cd, struct tcrypt_phdr *hdr,
for (i = 0; tcrypt_cipher[i].chain_count; i++) {
if (!(flags & CRYPT_TCRYPT_LEGACY_MODES) && tcrypt_cipher[i].legacy)
continue;
log_dbg("TCRYPT: trying cipher %s-%s",
log_dbg(cd, "TCRYPT: trying cipher %s-%s",
tcrypt_cipher[i].long_name, tcrypt_cipher[i].mode);
memcpy(&hdr2.e, &hdr->e, TCRYPT_HDR_LEN);
@@ -450,7 +451,7 @@ static int TCRYPT_decrypt_hdr(struct crypt_device *cd, struct tcrypt_phdr *hdr,
}
if (r < 0) {
log_dbg("TCRYPT: returned error %d, skipped.", r);
log_dbg(cd, "TCRYPT: returned error %d, skipped.", r);
if (r == -ENOTSUP)
break;
r = -ENOENT;
@@ -458,14 +459,14 @@ static int TCRYPT_decrypt_hdr(struct crypt_device *cd, struct tcrypt_phdr *hdr,
}
if (!strncmp(hdr2.d.magic, TCRYPT_HDR_MAGIC, TCRYPT_HDR_MAGIC_LEN)) {
log_dbg("TCRYPT: Signature magic detected.");
log_dbg(cd, "TCRYPT: Signature magic detected.");
memcpy(&hdr->e, &hdr2.e, TCRYPT_HDR_LEN);
r = i;
break;
}
if ((flags & CRYPT_TCRYPT_VERA_MODES) &&
!strncmp(hdr2.d.magic, VCRYPT_HDR_MAGIC, TCRYPT_HDR_MAGIC_LEN)) {
log_dbg("TCRYPT: Signature magic detected (Veracrypt).");
log_dbg(cd, "TCRYPT: Signature magic detected (Veracrypt).");
memcpy(&hdr->e, &hdr2.e, TCRYPT_HDR_LEN);
r = i;
break;
@@ -485,7 +486,7 @@ static int TCRYPT_pool_keyfile(struct crypt_device *cd,
int i, j, fd, data_size, r = -EIO;
uint32_t crc;
log_dbg("TCRYPT: using keyfile %s.", keyfile);
log_dbg(cd, "TCRYPT: using keyfile %s.", keyfile);
data = malloc(TCRYPT_KEYFILE_LEN);
if (!data)
@@ -573,7 +574,7 @@ static int TCRYPT_init_hdr(struct crypt_device *cd,
iterations = tcrypt_kdf[i].iterations;
/* Derive header key */
log_dbg("TCRYPT: trying KDF: %s-%s-%d%s.",
log_dbg(cd, "TCRYPT: trying KDF: %s-%s-%d%s.",
tcrypt_kdf[i].name, tcrypt_kdf[i].hash, tcrypt_kdf[i].iterations,
params->veracrypt_pim && tcrypt_kdf[i].veracrypt ? "-PIM" : "");
r = crypt_pbkdf(tcrypt_kdf[i].name, tcrypt_kdf[i].hash,
@@ -608,15 +609,15 @@ static int TCRYPT_init_hdr(struct crypt_device *cd,
if (r < 0)
goto out;
r = TCRYPT_hdr_from_disk(hdr, params, i, r);
r = TCRYPT_hdr_from_disk(cd, hdr, params, i, r);
if (!r) {
log_dbg("TCRYPT: Magic: %s, Header version: %d, req. %d, sector %d"
log_dbg(cd, "TCRYPT: Magic: %s, Header version: %d, req. %d, sector %d"
", mk_offset %" PRIu64 ", hidden_size %" PRIu64
", volume size %" PRIu64, tcrypt_kdf[i].veracrypt ?
VCRYPT_HDR_MAGIC : TCRYPT_HDR_MAGIC,
(int)hdr->d.version, (int)hdr->d.version_tc, (int)hdr->d.sector_size,
hdr->d.mk_offset, hdr->d.hidden_volume_size, hdr->d.volume_size);
log_dbg("TCRYPT: Header cipher %s-%s, key size %zu",
log_dbg(cd, "TCRYPT: Header cipher %s-%s, key size %zu",
params->cipher, params->mode, params->key_size);
}
out:
@@ -634,29 +635,29 @@ int TCRYPT_read_phdr(struct crypt_device *cd,
struct device *base_device, *device = crypt_metadata_device(cd);
ssize_t hdr_size = sizeof(struct tcrypt_phdr);
char *base_device_path;
int devfd = 0, r;
int devfd, r;
assert(sizeof(struct tcrypt_phdr) == 512);
log_dbg("Reading TCRYPT header of size %zu bytes from device %s.",
log_dbg(cd, "Reading TCRYPT header of size %zu bytes from device %s.",
hdr_size, device_path(device));
if (params->flags & CRYPT_TCRYPT_SYSTEM_HEADER &&
crypt_dev_is_partition(device_path(device))) {
base_device_path = crypt_get_base_device(device_path(device));
log_dbg("Reading TCRYPT system header from device %s.", base_device_path ?: "?");
log_dbg(cd, "Reading TCRYPT system header from device %s.", base_device_path ?: "?");
if (!base_device_path)
return -EINVAL;
r = device_alloc(&base_device, base_device_path);
r = device_alloc(cd, &base_device, base_device_path);
free(base_device_path);
if (r < 0)
return r;
devfd = device_open(base_device, O_RDONLY);
device_free(base_device);
devfd = device_open(cd, base_device, O_RDONLY);
device_free(cd, base_device);
} else
devfd = device_open(device, O_RDONLY);
devfd = device_open(cd, device, O_RDONLY);
if (devfd < 0) {
log_err(cd, _("Cannot open device %s."), device_path(device));
@@ -665,37 +666,36 @@ int TCRYPT_read_phdr(struct crypt_device *cd,
r = -EIO;
if (params->flags & CRYPT_TCRYPT_SYSTEM_HEADER) {
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size,
TCRYPT_HDR_SYSTEM_OFFSET) == hdr_size) {
r = TCRYPT_init_hdr(cd, hdr, params);
}
} else if (params->flags & CRYPT_TCRYPT_HIDDEN_HEADER) {
if (params->flags & CRYPT_TCRYPT_BACKUP_HEADER) {
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size,
TCRYPT_HDR_HIDDEN_OFFSET_BCK) == hdr_size)
r = TCRYPT_init_hdr(cd, hdr, params);
} else {
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size,
TCRYPT_HDR_HIDDEN_OFFSET) == hdr_size)
r = TCRYPT_init_hdr(cd, hdr, params);
if (r && read_lseek_blockwise(devfd, device_block_size(device),
if (r && read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size,
TCRYPT_HDR_HIDDEN_OFFSET_OLD) == hdr_size)
r = TCRYPT_init_hdr(cd, hdr, params);
}
} else if (params->flags & CRYPT_TCRYPT_BACKUP_HEADER) {
if (read_lseek_blockwise(devfd, device_block_size(device),
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size,
TCRYPT_HDR_OFFSET_BCK) == hdr_size)
r = TCRYPT_init_hdr(cd, hdr, params);
} else if (read_blockwise(devfd, device_block_size(device),
device_alignment(device), hdr, hdr_size) == hdr_size)
} else if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr, hdr_size, 0) == hdr_size)
r = TCRYPT_init_hdr(cd, hdr, params);
close(devfd);
if (r < 0)
memset(hdr, 0, sizeof (*hdr));
return r;
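On the error path the header buffer is cleared with `memset()`. That is fine here because `hdr` is caller-owned and observable afterwards, but as a general pattern a plain `memset()` on a buffer that is never read again may legally be elided by the optimizer; crypto code therefore usually wipes through a `volatile` pointer (cryptsetup carries its own helper for this). A hedged sketch of the volatile-wipe idiom, not the library's actual implementation:

```c
#include <stddef.h>

/* Volatile-pointer wipe that the optimizer cannot legally elide,
 * unlike a plain memset() on a buffer whose value is never read again. */
static void wipe_buffer(void *buf, size_t len)
{
	volatile unsigned char *p = buf;

	while (len--)
		*p++ = 0;
}
```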
@@ -722,28 +722,22 @@ int TCRYPT_activate(struct crypt_device *cd,
struct crypt_params_tcrypt *params,
uint32_t flags)
{
char cipher[MAX_CIPHER_LEN], dm_name[PATH_MAX], dm_dev_name[PATH_MAX];
char dm_name[PATH_MAX], dm_dev_name[PATH_MAX], cipher_spec[MAX_CIPHER_LEN*2+1];
char *part_path;
struct device *device = NULL, *part_device = NULL;
unsigned int i;
int r;
uint32_t req_flags, dmc_flags;
struct tcrypt_algs *algs;
enum devcheck device_check;
uint64_t offset = crypt_get_data_offset(cd);
struct volume_key *vk = NULL;
struct device *ptr_dev = crypt_data_device(cd), *device = NULL, *part_device = NULL;
struct crypt_dm_active_device dmd = {
.target = DM_CRYPT,
.size = 0,
.data_device = crypt_data_device(cd),
.u.crypt = {
.cipher = cipher,
.offset = crypt_get_data_offset(cd),
.iv_offset = crypt_get_iv_offset(cd),
.sector_size = crypt_get_sector_size(cd),
}
.flags = flags
};
if (!hdr->d.version) {
log_dbg("TCRYPT: this function is not supported without encrypted header load.");
log_dbg(cd, "TCRYPT: this function is not supported without encrypted header load.");
return -ENOTSUP;
}
@@ -778,20 +772,20 @@ int TCRYPT_activate(struct crypt_device *cd,
dmd.size = hdr->d.volume_size / hdr->d.sector_size;
if (dmd.flags & CRYPT_ACTIVATE_SHARED)
device_check = DEV_SHARED;
device_check = DEV_OK;
else
device_check = DEV_EXCL;
if ((params->flags & CRYPT_TCRYPT_SYSTEM_HEADER) &&
!crypt_dev_is_partition(device_path(dmd.data_device))) {
part_path = crypt_get_partition_device(device_path(dmd.data_device),
dmd.u.crypt.offset, dmd.size);
!crypt_dev_is_partition(device_path(crypt_data_device(cd)))) {
part_path = crypt_get_partition_device(device_path(crypt_data_device(cd)),
crypt_get_data_offset(cd), dmd.size);
if (part_path) {
if (!device_alloc(&part_device, part_path)) {
if (!device_alloc(cd, &part_device, part_path)) {
log_verbose(cd, _("Activating TCRYPT system encryption for partition %s."),
part_path);
dmd.data_device = part_device;
dmd.u.crypt.offset = 0;
ptr_dev = part_device;
offset = 0;
}
free(part_path);
} else
@@ -799,22 +793,20 @@ int TCRYPT_activate(struct crypt_device *cd,
* System encryption use the whole device mapping, there can
* be active partitions.
*/
device_check = DEV_SHARED;
device_check = DEV_OK;
}
r = device_block_adjust(cd, dmd.data_device, device_check,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
if (r) {
device_free(part_device);
return r;
}
r = device_block_adjust(cd, ptr_dev, device_check,
offset, &dmd.size, &dmd.flags);
if (r)
goto out;
/* From here, key size for every cipher must be the same */
dmd.u.crypt.vk = crypt_alloc_volume_key(algs->cipher[0].key_size +
algs->cipher[0].key_extra_size, NULL);
if (!dmd.u.crypt.vk) {
device_free(part_device);
return -ENOMEM;
vk = crypt_alloc_volume_key(algs->cipher[0].key_size +
algs->cipher[0].key_extra_size, NULL);
if (!vk) {
r = -ENOMEM;
goto out;
}
for (i = algs->chain_count; i > 0; i--) {
@@ -827,27 +819,39 @@ int TCRYPT_activate(struct crypt_device *cd,
dmd.flags = flags | CRYPT_ACTIVATE_PRIVATE;
}
snprintf(cipher, sizeof(cipher), "%s-%s",
algs->cipher[i-1].name, algs->mode);
TCRYPT_copy_key(&algs->cipher[i-1], algs->mode,
dmd.u.crypt.vk->key, hdr->d.keys);
vk->key, hdr->d.keys);
if (algs->chain_count != i) {
snprintf(dm_dev_name, sizeof(dm_dev_name), "%s/%s_%d",
dm_get_dir(), name, i);
r = device_alloc(&device, dm_dev_name);
r = device_alloc(cd, &device, dm_dev_name);
if (r)
break;
dmd.data_device = device;
dmd.u.crypt.offset = 0;
ptr_dev = device;
offset = 0;
}
log_dbg("Trying to activate TCRYPT device %s using cipher %s.",
dm_name, dmd.u.crypt.cipher);
r = dm_create_device(cd, dm_name, CRYPT_TCRYPT, &dmd, 0);
r = snprintf(cipher_spec, sizeof(cipher_spec), "%s-%s", algs->cipher[i-1].name, algs->mode);
if (r < 0 || (size_t)r >= sizeof(cipher_spec)) {
r = -ENOMEM;
break;
}
device_free(device);
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size, ptr_dev, vk,
cipher_spec, crypt_get_iv_offset(cd), offset,
crypt_get_integrity(cd),
crypt_get_integrity_tag_size(cd),
crypt_get_sector_size(cd));
if (r)
break;
log_dbg(cd, "Trying to activate TCRYPT device %s using cipher %s.",
dm_name, dmd.segment.u.crypt.cipher);
r = dm_create_device(cd, dm_name, CRYPT_TCRYPT, &dmd);
dm_targets_free(cd, &dmd);
device_free(cd, device);
device = NULL;
if (r)
@@ -855,20 +859,22 @@ int TCRYPT_activate(struct crypt_device *cd,
}
if (r < 0 &&
(dm_flags(DM_CRYPT, &dmc_flags) || ((dmc_flags & req_flags) != req_flags))) {
(dm_flags(cd, DM_CRYPT, &dmc_flags) || ((dmc_flags & req_flags) != req_flags))) {
log_err(cd, _("Kernel doesn't support TCRYPT compatible mapping."));
r = -ENOTSUP;
}
device_free(part_device);
crypt_free_volume_key(dmd.u.crypt.vk);
out:
crypt_free_volume_key(vk);
device_free(cd, device);
device_free(cd, part_device);
return r;
}
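The reworked `TCRYPT_activate()` above builds the cipher specification with `r = snprintf(...)` and treats `r < 0 || (size_t)r >= sizeof(cipher_spec)` as failure. This relies on the C99 contract that `snprintf` returns the length it *would* have written, so a return value at or beyond the buffer size signals truncation. A standalone sketch of the same check (`format_cipher_spec` is a hypothetical name, not a cryptsetup function):

```c
#include <stdio.h>

/* Join "name-mode" into out, failing on truncation instead of silently
 * using a clipped cipher specification. C99 snprintf returns the
 * would-be length, so r >= out_size means the result did not fit. */
static int format_cipher_spec(char *out, size_t out_size,
                              const char *name, const char *mode)
{
	int r = snprintf(out, out_size, "%s-%s", name, mode);

	if (r < 0 || (size_t)r >= out_size)
		return -1; /* encoding error or truncated */
	return 0;
}
```

"aes-xts-plain64" is exactly 15 characters, so it fits a 16-byte buffer but not an 8-byte one.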
static int TCRYPT_remove_one(struct crypt_device *cd, const char *name,
const char *base_uuid, int index, uint32_t flags)
{
struct crypt_dm_active_device dmd = {};
struct crypt_dm_active_device dmd;
char dm_name[PATH_MAX];
int r;
@@ -889,7 +895,7 @@ static int TCRYPT_remove_one(struct crypt_device *cd, const char *name,
int TCRYPT_deactivate(struct crypt_device *cd, const char *name, uint32_t flags)
{
struct crypt_dm_active_device dmd = {};
struct crypt_dm_active_device dmd;
int r;
r = dm_query_device(cd, name, DM_ACTIVE_UUID, &dmd);
@@ -907,19 +913,19 @@ int TCRYPT_deactivate(struct crypt_device *cd, const char *name, uint32_t flags)
goto out;
r = TCRYPT_remove_one(cd, name, dmd.uuid, 2, flags);
if (r < 0)
goto out;
out:
free(CONST_CAST(void*)dmd.uuid);
return (r == -ENODEV) ? 0 : r;
}
static int TCRYPT_status_one(struct crypt_device *cd, const char *name,
const char *base_uuid, int index,
size_t *key_size, char *cipher,
uint64_t *data_offset, struct device **device)
const char *base_uuid, int index,
size_t *key_size, char *cipher,
struct tcrypt_phdr *tcrypt_hdr,
struct device **device)
{
struct crypt_dm_active_device dmd = {};
struct crypt_dm_active_device dmd;
struct dm_target *tgt = &dmd.segment;
char dm_name[PATH_MAX], *c;
int r;
@@ -934,30 +940,35 @@ static int TCRYPT_status_one(struct crypt_device *cd, const char *name,
DM_ACTIVE_UUID |
DM_ACTIVE_CRYPT_CIPHER |
DM_ACTIVE_CRYPT_KEYSIZE, &dmd);
if (r > 0)
r = 0;
if (!r && !strncmp(dmd.uuid, base_uuid, strlen(base_uuid))) {
if ((c = strchr(dmd.u.crypt.cipher, '-')))
*c = '\0';
strcat(cipher, "-");
strncat(cipher, dmd.u.crypt.cipher, MAX_CIPHER_LEN);
*key_size += dmd.u.crypt.vk->keylength;
*data_offset = dmd.u.crypt.offset * SECTOR_SIZE;
device_free(*device);
*device = dmd.data_device;
} else {
device_free(dmd.data_device);
r = -ENODEV;
if (r < 0)
return r;
if (!single_segment(&dmd) || tgt->type != DM_CRYPT) {
r = -ENOTSUP;
goto out;
}
r = 0;
if (!strncmp(dmd.uuid, base_uuid, strlen(base_uuid))) {
if ((c = strchr(tgt->u.crypt.cipher, '-')))
*c = '\0';
strcat(cipher, "-");
strncat(cipher, tgt->u.crypt.cipher, MAX_CIPHER_LEN);
*key_size += tgt->u.crypt.vk->keylength;
tcrypt_hdr->d.mk_offset = tgt->u.crypt.offset * SECTOR_SIZE;
device_free(cd, *device);
MOVE_REF(*device, tgt->data_device);
} else
r = -ENODEV;
out:
dm_targets_free(cd, &dmd);
free(CONST_CAST(void*)dmd.uuid);
free(CONST_CAST(void*)dmd.u.crypt.cipher);
crypt_free_volume_key(dmd.u.crypt.vk);
return r;
}
int TCRYPT_init_by_name(struct crypt_device *cd, const char *name,
const struct crypt_dm_active_device *dmd,
const char *uuid,
const struct dm_target *tgt,
struct device **device,
struct crypt_params_tcrypt *tcrypt_params,
struct tcrypt_phdr *tcrypt_hdr)
@@ -970,9 +981,9 @@ int TCRYPT_init_by_name(struct crypt_device *cd, const char *name,
memset(tcrypt_params, 0, sizeof(*tcrypt_params));
memset(tcrypt_hdr, 0, sizeof(*tcrypt_hdr));
tcrypt_hdr->d.sector_size = SECTOR_SIZE;
tcrypt_hdr->d.mk_offset = dmd->u.crypt.offset * SECTOR_SIZE;
tcrypt_hdr->d.mk_offset = tgt->u.crypt.offset * SECTOR_SIZE;
strncpy(cipher, dmd->u.crypt.cipher, MAX_CIPHER_LEN);
strncpy(cipher, tgt->u.crypt.cipher, MAX_CIPHER_LEN);
tmp = strchr(cipher, '-');
if (!tmp)
return -EINVAL;
@@ -980,12 +991,12 @@ int TCRYPT_init_by_name(struct crypt_device *cd, const char *name,
mode[MAX_CIPHER_LEN] = '\0';
strncpy(mode, ++tmp, MAX_CIPHER_LEN);
key_size = dmd->u.crypt.vk->keylength;
r = TCRYPT_status_one(cd, name, dmd->uuid, 1, &key_size,
cipher, &tcrypt_hdr->d.mk_offset, device);
key_size = tgt->u.crypt.vk->keylength;
r = TCRYPT_status_one(cd, name, uuid, 1, &key_size,
cipher, tcrypt_hdr, device);
if (!r)
r = TCRYPT_status_one(cd, name, dmd->uuid, 2, &key_size,
cipher, &tcrypt_hdr->d.mk_offset, device);
r = TCRYPT_status_one(cd, name, uuid, 2, &key_size,
cipher, tcrypt_hdr, device);
if (r < 0 && r != -ENODEV)
return r;


@@ -1,8 +1,8 @@
/*
* TCRYPT (TrueCrypt-compatible) header definition
*
* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2018, Milan Broz
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -75,6 +75,7 @@ struct tcrypt_phdr {
struct crypt_device;
struct crypt_params_tcrypt;
struct crypt_dm_active_device;
struct dm_target;
struct volume_key;
struct device;
@@ -83,7 +84,8 @@ int TCRYPT_read_phdr(struct crypt_device *cd,
struct crypt_params_tcrypt *params);
int TCRYPT_init_by_name(struct crypt_device *cd, const char *name,
const struct crypt_dm_active_device *dmd,
const char *uuid,
const struct dm_target *tgt,
struct device **device,
struct crypt_params_tcrypt *tcrypt_params,
struct tcrypt_phdr *tcrypt_hdr);


@@ -1,10 +1,10 @@
/*
* utils - miscellaneous device utilities for cryptsetup
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -70,9 +70,9 @@ static int _memlock_count = 0;
int crypt_memlock_inc(struct crypt_device *ctx)
{
if (!_memlock_count++) {
log_dbg("Locking memory.");
log_dbg(ctx, "Locking memory.");
if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
log_dbg("Cannot lock memory with mlockall.");
log_dbg(ctx, "Cannot lock memory with mlockall.");
_memlock_count--;
return 0;
}
@@ -81,7 +81,7 @@ int crypt_memlock_inc(struct crypt_device *ctx)
log_err(ctx, _("Cannot get process priority."));
else
if (setpriority(PRIO_PROCESS, 0, DEFAULT_PROCESS_PRIORITY))
log_dbg("setpriority %d failed: %s",
log_dbg(ctx, "setpriority %d failed: %s",
DEFAULT_PROCESS_PRIORITY, strerror(errno));
}
return _memlock_count ? 1 : 0;
@@ -90,11 +90,11 @@ int crypt_memlock_inc(struct crypt_device *ctx)
int crypt_memlock_dec(struct crypt_device *ctx)
{
if (_memlock_count && (!--_memlock_count)) {
log_dbg("Unlocking memory.");
log_dbg(ctx, "Unlocking memory.");
if (munlockall() == -1)
log_err(ctx, _("Cannot unlock memory."));
if (setpriority(PRIO_PROCESS, 0, _priority))
log_dbg("setpriority %d failed: %s", _priority, strerror(errno));
log_dbg(ctx, "setpriority %d failed: %s", _priority, strerror(errno));
}
return _memlock_count ? 1 : 0;
}


@@ -1,8 +1,8 @@
/*
* libcryptsetup - cryptsetup library, cipher benchmark
*
* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2018, Milan Broz
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -21,165 +21,9 @@
#include <stdlib.h>
#include <errno.h>
#include <time.h>
#include "internal.h"
/*
* This is not simulating storage, so using disk block causes extreme overhead.
* Let's use some fixed block size where results are more reliable...
*/
#define CIPHER_BLOCK_BYTES 65536
/*
* If the measured value is lower, encrypted buffer is probably too small
* and calculated values are not reliable.
*/
#define CIPHER_TIME_MIN_MS 0.001
/*
* The whole test depends on Linux kernel usermode crypto API for now.
* (The same implementations are used in dm-crypt though.)
*/
struct cipher_perf {
char name[32];
char mode[32];
char *key;
size_t key_length;
char *iv;
size_t iv_length;
size_t buffer_size;
};
static int time_ms(struct timespec *start, struct timespec *end, double *ms)
{
double start_ms, end_ms;
start_ms = start->tv_sec * 1000.0 + start->tv_nsec / (1000.0 * 1000);
end_ms = end->tv_sec * 1000.0 + end->tv_nsec / (1000.0 * 1000);
*ms = end_ms - start_ms;
return 0;
}
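The `time_ms()` helper above converts each `CLOCK_MONOTONIC` sample to milliseconds before subtracting. An equivalent sketch that subtracts the `timespec` fields first (a minor variant, not the library's code; subtracting first keeps full nanosecond resolution even for very large uptime values):

```c
#include <time.h>

/* Elapsed wall-clock milliseconds between two CLOCK_MONOTONIC samples,
 * mirroring the time_ms() helper: seconds scaled up, nanoseconds down. */
static double elapsed_ms(const struct timespec *start, const struct timespec *end)
{
	return (end->tv_sec - start->tv_sec) * 1000.0 +
	       (end->tv_nsec - start->tv_nsec) / 1000000.0;
}
```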
static int cipher_perf_one(struct cipher_perf *cp, char *buf,
size_t buf_size, int enc)
{
struct crypt_cipher *cipher = NULL;
size_t done = 0, block = CIPHER_BLOCK_BYTES;
int r;
if (buf_size < block)
block = buf_size;
r = crypt_cipher_init(&cipher, cp->name, cp->mode, cp->key, cp->key_length);
if (r < 0) {
log_dbg("Cannot initialise cipher %s, mode %s.", cp->name, cp->mode);
return r;
}
while (done < buf_size) {
if ((done + block) > buf_size)
block = buf_size - done;
if (enc)
r = crypt_cipher_encrypt(cipher, &buf[done], &buf[done],
block, cp->iv, cp->iv_length);
else
r = crypt_cipher_decrypt(cipher, &buf[done], &buf[done],
block, cp->iv, cp->iv_length);
if (r < 0)
break;
done += block;
}
crypt_cipher_destroy(cipher);
return r;
}
static int cipher_measure(struct cipher_perf *cp, char *buf,
size_t buf_size, int encrypt, double *ms)
{
struct timespec start, end;
int r;
/*
* Using getrusage would be better here but the precision
* is not adequate, so better stick with CLOCK_MONOTONIC
*/
if (clock_gettime(CLOCK_MONOTONIC, &start) < 0)
return -EINVAL;
r = cipher_perf_one(cp, buf, buf_size, encrypt);
if (r < 0)
return r;
if (clock_gettime(CLOCK_MONOTONIC, &end) < 0)
return -EINVAL;
r = time_ms(&start, &end, ms);
if (r < 0)
return r;
if (*ms < CIPHER_TIME_MIN_MS) {
log_dbg("Measured cipher runtime (%1.6f) is too low.", *ms);
return -ERANGE;
}
return 0;
}
static double speed_mbs(unsigned long bytes, double ms)
{
double speed = bytes, s = ms / 1000.;
return speed / (1024 * 1024) / s;
}
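`speed_mbs()` above is the unit conversion the benchmark loop feeds with `buffer_size * repeat` bytes and the accumulated milliseconds: bytes over 2^20 gives MiB, milliseconds over 1000 gives seconds. A worked restatement of the same arithmetic (`mib_per_s` is just an illustrative name):

```c
/* MiB/s from a byte count and elapsed milliseconds, as in speed_mbs():
 * bytes / 2^20 yields MiB, ms / 1000 yields seconds. */
static double mib_per_s(unsigned long bytes, double ms)
{
	return ((double)bytes / (1024.0 * 1024.0)) / (ms / 1000.0);
}
```

For example, 64 MiB processed in 500 ms is 128 MiB/s.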
static int cipher_perf(struct cipher_perf *cp,
double *encryption_mbs, double *decryption_mbs)
{
double ms_enc, ms_dec, ms;
int r, repeat_enc, repeat_dec;
void *buf = NULL;
if (posix_memalign(&buf, crypt_getpagesize(), cp->buffer_size))
return -ENOMEM;
ms_enc = 0.0;
repeat_enc = 1;
while (ms_enc < 1000.0) {
r = cipher_measure(cp, buf, cp->buffer_size, 1, &ms);
if (r < 0) {
free(buf);
return r;
}
ms_enc += ms;
repeat_enc++;
}
ms_dec = 0.0;
repeat_dec = 1;
while (ms_dec < 1000.0) {
r = cipher_measure(cp, buf, cp->buffer_size, 0, &ms);
if (r < 0) {
free(buf);
return r;
}
ms_dec += ms;
repeat_dec++;
}
free(buf);
*encryption_mbs = speed_mbs(cp->buffer_size * repeat_enc, ms_enc);
*decryption_mbs = speed_mbs(cp->buffer_size * repeat_dec, ms_dec);
return 0;
}
int crypt_benchmark(struct crypt_device *cd,
const char *cipher,
const char *cipher_mode,
@@ -189,12 +33,8 @@ int crypt_benchmark(struct crypt_device *cd,
double *encryption_mbs,
double *decryption_mbs)
{
struct cipher_perf cp = {
.key_length = volume_key_size,
.iv_length = iv_size,
.buffer_size = buffer_size,
};
char *c;
void *buffer = NULL;
char *iv = NULL, *key = NULL, mode[MAX_CIPHER_LEN], *c;
int r;
if (!cipher || !cipher_mode || !volume_key_size || !encryption_mbs || !decryption_mbs)
@@ -205,29 +45,40 @@ int crypt_benchmark(struct crypt_device *cd,
return r;
r = -ENOMEM;
if (iv_size) {
cp.iv = malloc(iv_size);
if (!cp.iv)
goto out;
crypt_random_get(cd, cp.iv, iv_size, CRYPT_RND_NORMAL);
}
cp.key = malloc(volume_key_size);
if (!cp.key)
if (posix_memalign(&buffer, crypt_getpagesize(), buffer_size))
goto out;
crypt_random_get(cd, cp.key, volume_key_size, CRYPT_RND_NORMAL);
strncpy(cp.name, cipher, sizeof(cp.name)-1);
strncpy(cp.mode, cipher_mode, sizeof(cp.mode)-1);
if (iv_size) {
iv = malloc(iv_size);
if (!iv)
goto out;
crypt_random_get(cd, iv, iv_size, CRYPT_RND_NORMAL);
}
key = malloc(volume_key_size);
if (!key)
goto out;
crypt_random_get(cd, key, volume_key_size, CRYPT_RND_NORMAL);
strncpy(mode, cipher_mode, sizeof(mode)-1);
/* Ignore IV generator */
if ((c = strchr(cp.mode, '-')))
if ((c = strchr(mode, '-')))
*c = '\0';
r = cipher_perf(&cp, encryption_mbs, decryption_mbs);
r = crypt_cipher_perf_kernel(cipher, cipher_mode, buffer, buffer_size, key, volume_key_size,
iv, iv_size, encryption_mbs, decryption_mbs);
if (r == -ERANGE)
log_dbg(cd, "Measured cipher runtime is too low.");
else if (r == -ENOTSUP || r == -ENOENT)
log_dbg(cd, "Cannot initialise cipher %s, mode %s.", cipher, cipher_mode);
out:
free(cp.key);
free(cp.iv);
free(buffer);
free(key);
free(iv);
return r;
}
@@ -253,7 +104,7 @@ int crypt_benchmark_pbkdf(struct crypt_device *cd,
kdf_opt = !strcmp(pbkdf->type, CRYPT_KDF_PBKDF2) ? pbkdf->hash : "";
log_dbg("Running %s(%s) benchmark.", pbkdf->type, kdf_opt);
log_dbg(cd, "Running %s(%s) benchmark.", pbkdf->type, kdf_opt);
r = crypt_pbkdf_perf(pbkdf->type, pbkdf->hash, password, password_size,
salt, salt_size, volume_key_size, pbkdf->time_ms,
@@ -261,19 +112,24 @@ int crypt_benchmark_pbkdf(struct crypt_device *cd,
&pbkdf->iterations, &pbkdf->max_memory_kb, progress, usrptr);
if (!r)
log_dbg("Benchmark returns %s(%s) %u iterations, %u memory, %u threads (for %zu-bits key).",
log_dbg(cd, "Benchmark returns %s(%s) %u iterations, %u memory, %u threads (for %zu-bits key).",
pbkdf->type, kdf_opt, pbkdf->iterations, pbkdf->max_memory_kb,
pbkdf->parallel_threads, volume_key_size * 8);
return r;
}
struct benchmark_usrptr {
struct crypt_device *cd;
struct crypt_pbkdf_type *pbkdf;
};
static int benchmark_callback(uint32_t time_ms, void *usrptr)
{
struct crypt_pbkdf_type *pbkdf = usrptr;
struct benchmark_usrptr *u = usrptr;
log_dbg("PBKDF benchmark: memory cost = %u, iterations = %u, "
"threads = %u (took %u ms)", pbkdf->max_memory_kb,
pbkdf->iterations, pbkdf->parallel_threads, time_ms);
log_dbg(u->cd, "PBKDF benchmark: memory cost = %u, iterations = %u, "
"threads = %u (took %u ms)", u->pbkdf->max_memory_kb,
u->pbkdf->iterations, u->pbkdf->parallel_threads, time_ms);
return 0;
}
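The `benchmark_usrptr` change above stops passing the `crypt_pbkdf_type` directly as the callback's `usrptr` and instead bundles it with the `crypt_device` so the callback can log through `log_dbg(cd, ...)`. This is the usual C pattern for threading extra state through a `void *` callback argument. A hedged sketch with stand-in fields (the struct and names here are illustrative, not the library's):

```c
#include <stdint.h>
#include <stdio.h>

/* Context struct threaded through a void *usrptr: the callback
 * recovers typed state with a single assignment/cast. */
struct bench_ctx {
	const char *label;   /* stands in for struct crypt_device * */
	unsigned iterations; /* stands in for crypt_pbkdf_type fields */
};

static int on_progress(uint32_t time_ms, void *usrptr)
{
	struct bench_ctx *ctx = usrptr;

	printf("%s: %u iterations (took %u ms)\n",
	       ctx->label, ctx->iterations, (unsigned)time_ms);
	return 0; /* non-zero would abort the benchmark */
}
```

Callers build the context on the stack and pass its address, exactly as `crypt_benchmark_pbkdf_internal()` does with `&u`.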
@@ -293,6 +149,10 @@ int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
double PBKDF2_tmp;
uint32_t ms_tmp;
int r = -EINVAL;
struct benchmark_usrptr u = {
.cd = cd,
.pbkdf = pbkdf
};
r = crypt_pbkdf_get_limits(pbkdf->type, &pbkdf_limits);
if (r)
@@ -300,7 +160,7 @@ int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
if (pbkdf->flags & CRYPT_PBKDF_NO_BENCHMARK) {
if (pbkdf->iterations) {
log_dbg("Reusing PBKDF values (no benchmark flag is set).");
log_dbg(cd, "Reusing PBKDF values (no benchmark flag is set).");
return 0;
}
log_err(cd, _("PBKDF benchmark disabled but iterations not set."));
@@ -319,7 +179,7 @@ int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
pbkdf->max_memory_kb = 0; /* N/A in PBKDF2 */
r = crypt_benchmark_pbkdf(cd, pbkdf, "foo", 3, "bar", 3,
volume_key_size, &benchmark_callback, pbkdf);
volume_key_size, &benchmark_callback, &u);
pbkdf->time_ms = ms_tmp;
if (r < 0) {
log_err(cd, _("Not compatible PBKDF2 options (using hash algorithm %s)."),
@@ -334,13 +194,13 @@ int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
} else {
/* Already benchmarked */
if (pbkdf->iterations) {
log_dbg("Reusing PBKDF values.");
log_dbg(cd, "Reusing PBKDF values.");
return 0;
}
r = crypt_benchmark_pbkdf(cd, pbkdf, "foo", 3,
"0123456789abcdef0123456789abcdef", 32,
volume_key_size, &benchmark_callback, pbkdf);
volume_key_size, &benchmark_callback, &u);
if (r < 0)
log_err(cd, _("Not compatible PBKDF options."));
}


@@ -1,7 +1,7 @@
/*
* blkid probe utilities
*
* Copyright (C) 2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License


@@ -1,7 +1,7 @@
/*
* blkid probe utilities
*
* Copyright (C) 2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,9 +1,9 @@
/*
* utils_crypt - cipher utilities for cryptsetup
*
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,9 +1,9 @@
/*
* utils_crypt - cipher utilities for cryptsetup
*
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,10 +1,10 @@
/*
* device backend utilities
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -46,10 +46,14 @@ struct device {
char *file_path;
int loop_fd;
int ro_dev_fd;
int dev_fd;
int dev_fd_excl;
struct crypt_lock_handle *lh;
unsigned int o_direct:1;
unsigned int init_done:1;
unsigned int init_done:1; /* path is bdev or loop already initialized */
/* cached values */
size_t alignment;
@@ -153,14 +157,14 @@ static int device_read_test(int devfd)
* The read test is needed to detect broken configurations (seen with remote
* block devices) that allow open with direct-io but then fail on read.
*/
static int device_ready(struct device *device)
static int device_ready(struct crypt_device *cd, struct device *device)
{
int devfd = -1, r = 0;
struct stat st;
size_t tmp_size;
if (device->o_direct) {
log_dbg("Trying to open and read device %s with direct-io.",
log_dbg(cd, "Trying to open and read device %s with direct-io.",
device_path(device));
device->o_direct = 0;
devfd = open(device_path(device), O_RDONLY | O_DIRECT);
@@ -175,13 +179,13 @@ static int device_ready(struct device *device)
}
if (devfd < 0) {
log_dbg("Trying to open device %s without direct-io.",
log_dbg(cd, "Trying to open device %s without direct-io.",
device_path(device));
devfd = open(device_path(device), O_RDONLY);
}
if (devfd < 0) {
log_err(NULL, _("Device %s doesn't exist or access denied."),
log_err(cd, _("Device %s doesn't exist or access denied."),
device_path(device));
return -EINVAL;
}
@@ -191,7 +195,7 @@ static int device_ready(struct device *device)
else if (!S_ISBLK(st.st_mode))
r = S_ISREG(st.st_mode) ? -ENOTBLK : -EINVAL;
if (r == -EINVAL) {
log_err(NULL, _("Device %s is not compatible."),
log_err(cd, _("Device %s is not compatible."),
device_path(device));
close(devfd);
return r;
@@ -210,14 +214,14 @@ static int device_ready(struct device *device)
return r;
}
static int _open_locked(struct device *device, int flags)
static int _open_locked(struct crypt_device *cd, struct device *device, int flags)
{
int fd;
log_dbg("Opening locked device %s", device_path(device));
log_dbg(cd, "Opening locked device %s", device_path(device));
if ((flags & O_ACCMODE) != O_RDONLY && device_locked_readonly(device->lh)) {
log_dbg("Can not open locked device %s in write mode. Read lock held.", device_path(device));
log_dbg(cd, "Can not open locked device %s in write mode. Read lock held.", device_path(device));
return -EAGAIN;
}
@@ -225,10 +229,10 @@ static int _open_locked(struct device *device, int flags)
if (fd < 0)
return -errno;
if (device_locked_verify(fd, device->lh)) {
if (device_locked_verify(cd, fd, device->lh)) {
/* fd doesn't correspond to a locked resource */
close(fd);
log_dbg("Failed to verify lock resource for device %s.", device_path(device));
log_dbg(cd, "Failed to verify lock resource for device %s.", device_path(device));
return -EINVAL;
}
@@ -237,12 +241,14 @@ static int _open_locked(struct device *device, int flags)
/*
* Common wrapper for device sync.
* FIXME: file descriptor will be in struct later.
*/
void device_sync(struct device *device, int devfd)
void device_sync(struct crypt_device *cd, struct device *device)
{
if (fsync(devfd) == -1)
log_dbg("Cannot sync device %s.", device_path(device));
if (!device || device->dev_fd < 0)
return;
if (fsync(device->dev_fd) == -1)
log_dbg(cd, "Cannot sync device %s.", device_path(device));
}
/*
@@ -254,36 +260,103 @@ void device_sync(struct device *device, int devfd)
* -EINVAL : invalid lock fd state
* -1 : all other errors
*/
static int device_open_internal(struct device *device, int flags)
static int device_open_internal(struct crypt_device *cd, struct device *device, int flags)
{
int devfd;
int access, devfd;
if (device->o_direct)
flags |= O_DIRECT;
access = flags & O_ACCMODE;
if (access == O_WRONLY)
access = O_RDWR;
if (access == O_RDONLY && device->ro_dev_fd >= 0) {
log_dbg(cd, "Reusing open r%c fd on device %s", 'o', device_path(device));
return device->ro_dev_fd;
} else if (access == O_RDWR && device->dev_fd >= 0) {
log_dbg(cd, "Reusing open r%c fd on device %s", 'w', device_path(device));
return device->dev_fd;
}
if (device_locked(device->lh))
devfd = _open_locked(device, flags);
devfd = _open_locked(cd, device, flags);
else
devfd = open(device_path(device), flags);
if (devfd < 0)
log_dbg("Cannot open device %s%s.",
if (devfd < 0) {
log_dbg(cd, "Cannot open device %s%s.",
device_path(device),
(flags & O_ACCMODE) != O_RDONLY ? " for write" : "");
access != O_RDONLY ? " for write" : "");
return devfd;
}
if (access == O_RDONLY)
device->ro_dev_fd = devfd;
else
device->dev_fd = devfd;
return devfd;
}
int device_open(struct device *device, int flags)
int device_open(struct crypt_device *cd, struct device *device, int flags)
{
assert(!device_locked(device->lh));
return device_open_internal(device, flags);
return device_open_internal(cd, device, flags);
}
int device_open_locked(struct device *device, int flags)
int device_open_excl(struct crypt_device *cd, struct device *device, int flags)
{
const char *path;
struct stat st;
if (!device)
return -EINVAL;
assert(!device_locked(device->lh));
if (device->dev_fd_excl < 0) {
path = device_path(device);
if (stat(path, &st))
return -EINVAL;
if (!S_ISBLK(st.st_mode))
log_dbg(cd, "%s is not a block device. Can't open in exclusive mode.",
path);
else {
/* open(2) with O_EXCL (w/o O_CREAT) on regular file is undefined behaviour according to man page */
/* coverity[toctou] */
device->dev_fd_excl = open(path, O_RDONLY | O_EXCL);
if (device->dev_fd_excl < 0)
return errno == EBUSY ? -EBUSY : device->dev_fd_excl;
if (fstat(device->dev_fd_excl, &st) || !S_ISBLK(st.st_mode)) {
log_dbg(cd, "%s is not a block device. Can't open in exclusive mode.",
path);
close(device->dev_fd_excl);
device->dev_fd_excl = -1;
} else
log_dbg(cd, "Device %s is blocked for exclusive open.", path);
}
}
return device_open_internal(cd, device, flags);
}
void device_release_excl(struct crypt_device *cd, struct device *device)
{
if (device && device->dev_fd_excl >= 0) {
if (close(device->dev_fd_excl))
log_dbg(cd, "Failed to release exclusive handle on device %s.",
device_path(device));
else
log_dbg(cd, "Closed exclusive fd for %s.", device_path(device));
device->dev_fd_excl = -1;
}
}
int device_open_locked(struct crypt_device *cd, struct device *device, int flags)
{
assert(!crypt_metadata_locking_enabled() || device_locked(device->lh));
return device_open_internal(device, flags);
return device_open_internal(cd, device, flags);
}
/* Avoid any read from device, expects direct-io to work. */
@@ -307,13 +380,16 @@ int device_alloc_no_check(struct device **device, const char *path)
return -ENOMEM;
}
dev->loop_fd = -1;
dev->ro_dev_fd = -1;
dev->dev_fd = -1;
dev->dev_fd_excl = -1;
dev->o_direct = 1;
*device = dev;
return 0;
}
int device_alloc(struct device **device, const char *path)
int device_alloc(struct crypt_device *cd, struct device **device, const char *path)
{
struct device *dev;
int r;
@@ -323,7 +399,7 @@ int device_alloc(struct device **device, const char *path)
return r;
if (dev) {
r = device_ready(dev);
r = device_ready(cd, dev);
if (!r) {
dev->init_done = 1;
} else if (r == -ENOTBLK) {
@@ -339,17 +415,24 @@ int device_alloc(struct device **device, const char *path)
return 0;
}
void device_free(struct device *device)
void device_free(struct crypt_device *cd, struct device *device)
{
if (!device)
return;
device_close(cd, device);
if (device->dev_fd_excl != -1) {
log_dbg(cd, "Closed exclusive fd for %s.", device_path(device));
close(device->dev_fd_excl);
}
if (device->loop_fd != -1) {
log_dbg("Closed loop %s (%s).", device->path, device->file_path);
log_dbg(cd, "Closed loop %s (%s).", device->path, device->file_path);
close(device->loop_fd);
}
assert (!device_locked(device->lh));
assert(!device_locked(device->lh));
free(device->file_path);
free(device->path);
@@ -399,10 +482,11 @@ const char *device_path(const struct device *device)
#define BLKALIGNOFF _IO(0x12,122)
#endif
void device_topology_alignment(struct device *device,
unsigned long *required_alignment, /* bytes */
unsigned long *alignment_offset, /* bytes */
unsigned long default_alignment)
void device_topology_alignment(struct crypt_device *cd,
struct device *device,
unsigned long *required_alignment, /* bytes */
unsigned long *alignment_offset, /* bytes */
unsigned long default_alignment)
{
int dev_alignment_offset = 0;
unsigned int min_io_size = 0, opt_io_size = 0;
@@ -421,7 +505,7 @@ void device_topology_alignment(struct device *device,
/* minimum io size */
if (ioctl(fd, BLKIOMIN, &min_io_size) == -1) {
log_dbg("Topology info for %s not supported, using default offset %lu bytes.",
log_dbg(cd, "Topology info for %s not supported, using default offset %lu bytes.",
device->path, default_alignment);
goto out;
}
@@ -446,13 +530,13 @@ void device_topology_alignment(struct device *device,
if (temp_alignment && (default_alignment % temp_alignment))
*required_alignment = temp_alignment;
log_dbg("Topology: IO (%u/%u), offset = %lu; Required alignment is %lu bytes.",
log_dbg(cd, "Topology: IO (%u/%u), offset = %lu; Required alignment is %lu bytes.",
min_io_size, opt_io_size, *alignment_offset, *required_alignment);
out:
(void)close(fd);
}
size_t device_block_size(struct device *device)
size_t device_block_size(struct crypt_device *cd, struct device *device)
{
int fd;
@@ -469,7 +553,7 @@ size_t device_block_size(struct device *device)
}
if (!device->block_size)
log_dbg("Cannot get block size for device %s.", device_path(device));
log_dbg(cd, "Cannot get block size for device %s.", device_path(device));
return device->block_size;
}
@@ -524,11 +608,11 @@ int device_fallocate(struct device *device, uint64_t size)
int devfd, r = -EINVAL;
devfd = open(device_path(device), O_RDWR);
if(devfd == -1)
if (devfd == -1)
return -EINVAL;
if (!fstat(devfd, &st) && S_ISREG(st.st_mode) &&
!posix_fallocate(devfd, 0, size)) {
((uint64_t)st.st_size >= size || !posix_fallocate(devfd, 0, size))) {
r = 0;
if (device->file_path && crypt_loop_resize(device->path))
r = -EINVAL;
@@ -538,6 +622,32 @@ int device_fallocate(struct device *device, uint64_t size)
return r;
}
int device_check_size(struct crypt_device *cd,
struct device *device,
uint64_t req_offset, int falloc)
{
uint64_t dev_size;
if (device_size(device, &dev_size)) {
log_dbg(cd, "Cannot get device size for device %s.", device_path(device));
return -EIO;
}
log_dbg(cd, "Device size %" PRIu64 ", offset %" PRIu64 ".", dev_size, req_offset);
if (req_offset > dev_size) {
/* If it is header file, increase its size */
if (falloc && !device_fallocate(device, req_offset))
return 0;
log_err(cd, _("Device %s is too small. Need at least %" PRIu64 " bytes."),
device_path(device), req_offset);
return -EINVAL;
}
return 0;
}
static int device_info(struct crypt_device *cd,
struct device *device,
enum devcheck device_check,
@@ -646,7 +756,7 @@ static int device_internal_prepare(struct crypt_device *cd, struct device *devic
return -ENOTSUP;
}
log_dbg("Allocating a free loop device.");
log_dbg(cd, "Allocating a free loop device.");
/* Keep the loop open, detached on last close. */
loop_fd = crypt_loop_attach(&loop_device, device->path, 0, 1, &readonly);
@@ -660,7 +770,7 @@ static int device_internal_prepare(struct crypt_device *cd, struct device *devic
file_path = device->path;
device->path = loop_device;
r = device_ready(device);
r = device_ready(cd, device);
if (r < 0) {
device->path = file_path;
crypt_loop_detach(loop_device);
@@ -713,7 +823,7 @@ int device_block_adjust(struct crypt_device *cd,
/* in case of size is set by parameter */
if (size && ((real_size - device_offset) < *size)) {
log_dbg("Device %s: offset = %" PRIu64 " requested size = %" PRIu64
log_dbg(cd, "Device %s: offset = %" PRIu64 " requested size = %" PRIu64
", backing device size = %" PRIu64,
device->path, device_offset, *size, real_size);
log_err(cd, _("Device %s is too small."), device_path(device));
@@ -724,7 +834,7 @@ int device_block_adjust(struct crypt_device *cd,
*flags |= CRYPT_ACTIVATE_READONLY;
if (size)
log_dbg("Calculated device size is %" PRIu64" sectors (%s), offset %" PRIu64 ".",
log_dbg(cd, "Calculated device size is %" PRIu64" sectors (%s), offset %" PRIu64 ".",
*size, real_readonly ? "RO" : "RW", device_offset);
return 0;
}
@@ -745,15 +855,29 @@ int device_direct_io(const struct device *device)
return device->o_direct;
}
static dev_t device_devno(const struct device *device)
{
struct stat st;
if (stat(device->path, &st) || !S_ISBLK(st.st_mode))
return 0;
return st.st_rdev;
}
int device_is_identical(struct device *device1, struct device *device2)
{
if (!device1 || !device2)
return 0;
if (device1 == device2)
return 1;
if (!device1 || !device2 || !device_path(device1) || !device_path(device2))
if (device1->init_done && device2->init_done)
return (device_devno(device1) == device_devno(device2));
else if (device1->init_done || device2->init_done)
return 0;
/* This should be a better check (major/minor for block device etc.) */
if (!strcmp(device_path(device1), device_path(device2)))
return 1;
@@ -788,21 +912,25 @@ size_t device_alignment(struct device *device)
return device->alignment;
}
void device_set_lock_handle(struct device *device, struct crypt_lock_handle *h)
{
device->lh = h;
}
struct crypt_lock_handle *device_get_lock_handle(struct device *device)
{
return device->lh;
}
int device_read_lock(struct crypt_device *cd, struct device *device)
{
if (!crypt_metadata_locking_enabled())
return 0;
assert(!device_locked(device->lh));
if (device_read_lock_internal(cd, device))
return -EBUSY;
device->lh = device_read_lock_handle(cd, device_path(device));
if (device_locked(device->lh)) {
log_dbg("Device %s READ lock taken.", device_path(device));
return 0;
}
return -EBUSY;
return 0;
}
int device_write_lock(struct crypt_device *cd, struct device *device)
@@ -810,42 +938,52 @@ int device_write_lock(struct crypt_device *cd, struct device *device)
if (!crypt_metadata_locking_enabled())
return 0;
assert(!device_locked(device->lh));
assert(!device_locked(device->lh) || !device_locked_readonly(device->lh));
device->lh = device_write_lock_handle(cd, device_path(device));
if (device_locked(device->lh)) {
log_dbg("Device %s WRITE lock taken.", device_path(device));
return 0;
}
return -EBUSY;
return device_write_lock_internal(cd, device);
}
void device_read_unlock(struct device *device)
void device_read_unlock(struct crypt_device *cd, struct device *device)
{
if (!crypt_metadata_locking_enabled())
return;
assert(device_locked(device->lh) && device_locked_readonly(device->lh));
assert(device_locked(device->lh));
device_unlock_handle(device->lh);
log_dbg("Device %s READ lock released.", device_path(device));
device->lh = NULL;
device_unlock_internal(cd, device);
}
void device_write_unlock(struct device *device)
void device_write_unlock(struct crypt_device *cd, struct device *device)
{
if (!crypt_metadata_locking_enabled())
return;
assert(device_locked(device->lh) && !device_locked_readonly(device->lh));
device_unlock_handle(device->lh);
log_dbg("Device %s WRITE lock released.", device_path(device));
device->lh = NULL;
device_unlock_internal(cd, device);
}
bool device_is_locked(struct device *device)
{
return device ? device_locked(device->lh) : 0;
}
void device_close(struct crypt_device *cd, struct device *device)
{
if (!device)
return;
if (device->ro_dev_fd != -1) {
log_dbg(cd, "Closing read only fd for %s.", device_path(device));
if (close(device->ro_dev_fd))
log_dbg(cd, "Failed to close read only fd for %s.", device_path(device));
device->ro_dev_fd = -1;
}
if (device->dev_fd != -1) {
log_dbg(cd, "Closing read write fd for %s.", device_path(device));
if (close(device->dev_fd))
log_dbg(cd, "Failed to close read write fd for %s.", device_path(device));
device->dev_fd = -1;
}
}

View File

@@ -1,8 +1,8 @@
/*
* Metadata on-disk locking for processes serialization
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -33,6 +33,7 @@
# include <sys/sysmacros.h> /* for major, minor */
#endif
#include <libgen.h>
#include <assert.h>
#include "internal.h"
#include "utils_device_locking.h"
@@ -41,22 +42,44 @@
((buf1).st_ino == (buf2).st_ino && \
(buf1).st_dev == (buf2).st_dev)
#ifndef __GNUC__
# define __typeof__ typeof
#endif
enum lock_type {
DEV_LOCK_READ = 0,
DEV_LOCK_WRITE
};
enum lock_mode {
DEV_LOCK_FILE = 0,
DEV_LOCK_BDEV,
DEV_LOCK_NAME
};
struct crypt_lock_handle {
dev_t devno;
unsigned refcnt;
int flock_fd;
enum lock_type type;
__typeof__( ((struct stat*)0)->st_mode) mode;
enum lock_mode mode;
union {
struct {
dev_t devno;
} bdev;
struct {
char *name;
} name;
} u;
};
static int resource_by_name(char *res, size_t res_size, const char *name, bool fullpath)
{
int r;
if (fullpath)
r = snprintf(res, res_size, "%s/LN_%s", DEFAULT_LUKS2_LOCK_PATH, name);
else
r = snprintf(res, res_size, "LN_%s", name);
return (r < 0 || (size_t)r >= res_size) ? -EINVAL : 0;
}
static int resource_by_devno(char *res, size_t res_size, dev_t devno, unsigned fullpath)
{
int r;
@@ -75,7 +98,7 @@ static int open_lock_dir(struct crypt_device *cd, const char *dir, const char *b
dirfd = open(dir, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
if (dirfd < 0) {
log_dbg("Failed to open directory %s: (%d: %s).", dir, errno, strerror(errno));
log_dbg(cd, "Failed to open directory %s: (%d: %s).", dir, errno, strerror(errno));
if (errno == ENOTDIR || errno == ENOENT)
log_err(cd, _("Locking aborted. The locking path %s/%s is unusable (not a directory or missing)."), dir, base);
return -EINVAL;
@@ -88,11 +111,11 @@ static int open_lock_dir(struct crypt_device *cd, const char *dir, const char *b
/* success or failure w/ errno == EEXIST either way just try to open the 'base' directory again */
if (mkdirat(dirfd, base, DEFAULT_LUKS2_LOCK_DIR_PERMS) && errno != EEXIST)
log_dbg("Failed to create directory %s in %s (%d: %s).", base, dir, errno, strerror(errno));
log_dbg(cd, "Failed to create directory %s in %s (%d: %s).", base, dir, errno, strerror(errno));
else
lockdfd = openat(dirfd, base, O_RDONLY | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
} else {
log_dbg("Failed to open directory %s/%s: (%d: %s)", dir, base, errno, strerror(errno));
log_dbg(cd, "Failed to open directory %s/%s: (%d: %s)", dir, base, errno, strerror(errno));
if (errno == ENOTDIR || errno == ELOOP)
log_err(cd, _("Locking aborted. The locking path %s/%s is unusable (%s is not a directory)."), dir, base, base);
}
@@ -112,7 +135,7 @@ static int open_resource(struct crypt_device *cd, const char *res)
if (lockdir_fd < 0)
return -EINVAL;
log_dbg("Opening lock resource file %s/%s", DEFAULT_LUKS2_LOCK_PATH, res);
log_dbg(cd, "Opening lock resource file %s/%s", DEFAULT_LUKS2_LOCK_PATH, res);
r = openat(lockdir_fd, res, O_CREAT | O_NOFOLLOW | O_RDWR | O_CLOEXEC, 0777);
err = errno;
@@ -121,13 +144,13 @@ static int open_resource(struct crypt_device *cd, const char *res)
return r < 0 ? -err : r;
}
static int acquire_lock_handle(struct crypt_device *cd, const char *device_path, struct crypt_lock_handle *h)
static int acquire_lock_handle(struct crypt_device *cd, struct device *device, struct crypt_lock_handle *h)
{
char res[PATH_MAX];
int dev_fd, fd;
struct stat st;
dev_fd = open(device_path, O_RDONLY | O_NONBLOCK | O_CLOEXEC);
dev_fd = open(device_path(device), O_RDONLY | O_NONBLOCK | O_CLOEXEC);
if (dev_fd < 0)
return -EINVAL;
@@ -148,45 +171,85 @@ static int acquire_lock_handle(struct crypt_device *cd, const char *device_path,
return fd;
h->flock_fd = fd;
h->devno = st.st_rdev;
h->u.bdev.devno = st.st_rdev;
h->mode = DEV_LOCK_BDEV;
} else if (S_ISREG(st.st_mode)) {
// FIXME: workaround for nfsv4
fd = open(device_path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
fd = open(device_path(device), O_RDWR | O_NONBLOCK | O_CLOEXEC);
if (fd < 0)
h->flock_fd = dev_fd;
else {
h->flock_fd = fd;
close(dev_fd);
}
h->mode = DEV_LOCK_FILE;
} else {
/* Wrong device type */
close(dev_fd);
return -EINVAL;
}
h->mode = st.st_mode;
return 0;
}
static int acquire_lock_handle_by_name(struct crypt_device *cd, const char *name, struct crypt_lock_handle *h)
{
char res[PATH_MAX];
int fd;
h->u.name.name = strdup(name);
if (!h->u.name.name)
return -ENOMEM;
if (resource_by_name(res, sizeof(res), name, false)) {
free(h->u.name.name);
return -EINVAL;
}
fd = open_resource(cd, res);
if (fd < 0) {
free(h->u.name.name);
return fd;
}
h->flock_fd = fd;
h->mode = DEV_LOCK_NAME;
return 0;
}
static void release_lock_handle(struct crypt_lock_handle *h)
static void release_lock_handle(struct crypt_device *cd, struct crypt_lock_handle *h)
{
char res[PATH_MAX];
struct stat buf_a, buf_b;
if (S_ISBLK(h->mode) && /* was it block device */
if ((h->mode == DEV_LOCK_NAME) && /* was it name lock */
!flock(h->flock_fd, LOCK_EX | LOCK_NB) && /* lock to drop the file */
!resource_by_devno(res, sizeof(res), h->devno, 1) && /* acquire lock resource name */
!resource_by_name(res, sizeof(res), h->u.name.name, true) && /* acquire lock resource name */
!fstat(h->flock_fd, &buf_a) && /* read inode id referred by fd */
!stat(res, &buf_b) && /* does path file still exist? */
same_inode(buf_a, buf_b)) { /* is it same id as the one referenced by fd? */
/* coverity[toctou] */
if (unlink(res)) /* yes? unlink the file */
log_dbg("Failed to unlink resource file: %s", res);
log_dbg(cd, "Failed to unlink resource file: %s", res);
}
if ((h->mode == DEV_LOCK_BDEV) && /* was it block device */
!flock(h->flock_fd, LOCK_EX | LOCK_NB) && /* lock to drop the file */
!resource_by_devno(res, sizeof(res), h->u.bdev.devno, 1) && /* acquire lock resource name */
!fstat(h->flock_fd, &buf_a) && /* read inode id referred by fd */
!stat(res, &buf_b) && /* does path file still exist? */
same_inode(buf_a, buf_b)) { /* is it same id as the one referenced by fd? */
/* coverity[toctou] */
if (unlink(res)) /* yes? unlink the file */
log_dbg(cd, "Failed to unlink resource file: %s", res);
}
if (h->mode == DEV_LOCK_NAME)
free(h->u.name.name);
if (close(h->flock_fd))
log_dbg("Failed to close resource fd (%d).", h->flock_fd);
log_dbg(cd, "Failed to close lock resource fd (%d).", h->flock_fd);
}
int device_locked(struct crypt_lock_handle *h)
@@ -205,10 +268,16 @@ static int verify_lock_handle(const char *device_path, struct crypt_lock_handle
struct stat lck_st, res_st;
/* we locked a regular file, check during device_open() instead. No reason to check now */
if (S_ISREG(h->mode))
if (h->mode == DEV_LOCK_FILE)
return 0;
if (resource_by_devno(res, sizeof(res), h->devno, 1))
if (h->mode == DEV_LOCK_NAME) {
if (resource_by_name(res, sizeof(res), h->u.name.name, true))
return -EINVAL;
} else if (h->mode == DEV_LOCK_BDEV) {
if (resource_by_devno(res, sizeof(res), h->u.bdev.devno, true))
return -EINVAL;
} else
return -EINVAL;
if (fstat(h->flock_fd, &lck_st))
@@ -217,109 +286,217 @@ static int verify_lock_handle(const char *device_path, struct crypt_lock_handle
return (stat(res, &res_st) || !same_inode(lck_st, res_st)) ? -EAGAIN : 0;
}
struct crypt_lock_handle *device_read_lock_handle(struct crypt_device *cd, const char *device_path)
static unsigned device_lock_inc(struct crypt_lock_handle *h)
{
return ++h->refcnt;
}
static unsigned device_lock_dec(struct crypt_lock_handle *h)
{
assert(h->refcnt);
return --h->refcnt;
}
static int acquire_and_verify(struct crypt_device *cd, struct device *device, const char *resource, int flock_op, struct crypt_lock_handle **lock)
{
int r;
struct crypt_lock_handle *h = malloc(sizeof(*h));
struct crypt_lock_handle *h;
if (!h)
return NULL;
if (device && resource)
return -EINVAL;
if (!(h = malloc(sizeof(*h))))
return -ENOMEM;
do {
r = acquire_lock_handle(cd, device_path, h);
if (r)
r = device ? acquire_lock_handle(cd, device, h) : acquire_lock_handle_by_name(cd, resource, h);
if (r < 0)
break;
log_dbg("Acquiring read lock for device %s.", device_path);
if (flock(h->flock_fd, LOCK_SH)) {
log_dbg("Shared flock failed with errno %d.", errno);
r = -EINVAL;
release_lock_handle(h);
if (flock(h->flock_fd, flock_op)) {
log_dbg(cd, "Flock on fd %d failed with errno %d.", h->flock_fd, errno);
r = (errno == EWOULDBLOCK) ? -EBUSY : -EINVAL;
release_lock_handle(cd, h);
break;
}
log_dbg("Verifying read lock handle for device %s.", device_path);
log_dbg(cd, "Verifying lock handle for %s.", device ? device_path(device) : resource);
/*
* check whether another libcryptsetup process removed resource file before this
* one managed to flock() it. See release_lock_handle() for details
*/
r = verify_lock_handle(device_path, h);
if (r) {
flock(h->flock_fd, LOCK_UN);
release_lock_handle(h);
log_dbg("Read lock handle verification failed.");
r = verify_lock_handle(device_path(device), h);
if (r < 0) {
if (flock(h->flock_fd, LOCK_UN))
log_dbg(cd, "flock on fd %d failed.", h->flock_fd);
release_lock_handle(cd, h);
log_dbg(cd, "Lock handle verification failed.");
}
} while (r == -EAGAIN);
if (r) {
if (r < 0) {
free(h);
return NULL;
return r;
}
*lock = h;
return 0;
}
int device_read_lock_internal(struct crypt_device *cd, struct device *device)
{
int r;
struct crypt_lock_handle *h;
if (!device)
return -EINVAL;
h = device_get_lock_handle(device);
if (device_locked(h)) {
device_lock_inc(h);
log_dbg(cd, "Device %s READ lock (or higher) already held.", device_path(device));
return 0;
}
log_dbg(cd, "Acquiring read lock for device %s.", device_path(device));
r = acquire_and_verify(cd, device, NULL, LOCK_SH, &h);
if (r < 0)
return r;
h->type = DEV_LOCK_READ;
h->refcnt = 1;
device_set_lock_handle(device, h);
return h;
log_dbg(cd, "Device %s READ lock taken.", device_path(device));
return 0;
}
struct crypt_lock_handle *device_write_lock_handle(struct crypt_device *cd, const char *device_path)
int device_write_lock_internal(struct crypt_device *cd, struct device *device)
{
int r;
struct crypt_lock_handle *h = malloc(sizeof(*h));
struct crypt_lock_handle *h;
if (!h)
return NULL;
if (!device)
return -EINVAL;
do {
r = acquire_lock_handle(cd, device_path, h);
if (r)
break;
h = device_get_lock_handle(device);
log_dbg("Acquiring write lock for device %s.", device_path);
if (flock(h->flock_fd, LOCK_EX)) {
log_dbg("Exclusive flock failed with errno %d.", errno);
r = -EINVAL;
release_lock_handle(h);
break;
}
log_dbg("Verifying write lock handle for device %s.", device_path);
/*
* check whether another libcryptsetup process removed resource file before this
* one managed to flock() it. See release_lock_handle() for details
*/
r = verify_lock_handle(device_path, h);
if (r) {
flock(h->flock_fd, LOCK_UN);
release_lock_handle(h);
log_dbg("Write lock handle verification failed.");
}
} while (r == -EAGAIN);
if (r) {
free(h);
return NULL;
if (device_locked(h)) {
log_dbg(cd, "Device %s WRITE lock already held.", device_path(device));
return device_lock_inc(h);
}
h->type = DEV_LOCK_WRITE;
log_dbg(cd, "Acquiring write lock for device %s.", device_path(device));
return h;
r = acquire_and_verify(cd, device, NULL, LOCK_EX, &h);
if (r < 0)
return r;
h->type = DEV_LOCK_WRITE;
h->refcnt = 1;
device_set_lock_handle(device, h);
log_dbg(cd, "Device %s WRITE lock taken.", device_path(device));
return 1;
}
void device_unlock_handle(struct crypt_lock_handle *h)
int crypt_read_lock(struct crypt_device *cd, const char *resource, bool blocking, struct crypt_lock_handle **lock)
{
int r;
struct crypt_lock_handle *h;
if (!resource)
return -EINVAL;
log_dbg(cd, "Acquiring %sblocking read lock for resource %s.", blocking ? "" : "non", resource);
r = acquire_and_verify(cd, NULL, resource, LOCK_SH | (blocking ? 0 : LOCK_NB), &h);
if (r < 0)
return r;
h->type = DEV_LOCK_READ;
h->refcnt = 1;
log_dbg(cd, "READ lock for resource %s taken.", resource);
*lock = h;
return 0;
}
int crypt_write_lock(struct crypt_device *cd, const char *resource, bool blocking, struct crypt_lock_handle **lock)
{
int r;
struct crypt_lock_handle *h;
if (!resource)
return -EINVAL;
log_dbg(cd, "Acquiring %sblocking write lock for resource %s.", blocking ? "" : "non", resource);
r = acquire_and_verify(cd, NULL, resource, LOCK_EX | (blocking ? 0 : LOCK_NB), &h);
if (r < 0)
return r;
h->type = DEV_LOCK_WRITE;
h->refcnt = 1;
log_dbg(cd, "WRITE lock for resource %s taken.", resource);
*lock = h;
return 0;
}
static void unlock_internal(struct crypt_device *cd, struct crypt_lock_handle *h)
{
if (flock(h->flock_fd, LOCK_UN))
log_dbg("flock on fd %d failed.", h->flock_fd);
release_lock_handle(h);
log_dbg(cd, "flock on fd %d failed.", h->flock_fd);
release_lock_handle(cd, h);
free(h);
}
int device_locked_verify(int dev_fd, struct crypt_lock_handle *h)
void crypt_unlock_internal(struct crypt_device *cd, struct crypt_lock_handle *h)
{
if (!h)
return;
/* nested locks are illegal */
assert(!device_lock_dec(h));
log_dbg(cd, "Unlocking %s lock for resource %s.",
device_locked_readonly(h) ? "READ" : "WRITE", h->u.name.name);
unlock_internal(cd, h);
}
void device_unlock_internal(struct crypt_device *cd, struct device *device)
{
bool readonly;
struct crypt_lock_handle *h = device_get_lock_handle(device);
unsigned u = device_lock_dec(h);
if (u)
return;
readonly = device_locked_readonly(h);
unlock_internal(cd, h);
log_dbg(cd, "Device %s %s lock released.", device_path(device),
readonly ? "READ" : "WRITE");
device_set_lock_handle(device, NULL);
}
int device_locked_verify(struct crypt_device *cd, int dev_fd, struct crypt_lock_handle *h)
{
char res[PATH_MAX];
struct stat dev_st, lck_st, st;
@@ -329,11 +506,11 @@ int device_locked_verify(int dev_fd, struct crypt_lock_handle *h)
/* if device handle is regular file the handle must match the lock handle */
if (S_ISREG(dev_st.st_mode)) {
log_dbg("Verifying locked device handle (regular file)");
log_dbg(cd, "Verifying locked device handle (regular file)");
if (!same_inode(dev_st, lck_st))
return 1;
} else if (S_ISBLK(dev_st.st_mode)) {
log_dbg("Verifying locked device handle (bdev)");
log_dbg(cd, "Verifying locked device handle (bdev)");
if (resource_by_devno(res, sizeof(res), dev_st.st_rdev, 1) ||
stat(res, &st) ||
!same_inode(lck_st, st))

View File

@@ -1,8 +1,8 @@
/*
* Metadata on-disk locking for processes serialization
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -24,14 +24,24 @@
struct crypt_device;
struct crypt_lock_handle;
struct device;
int device_locked_readonly(struct crypt_lock_handle *h);
int device_locked(struct crypt_lock_handle *h);
struct crypt_lock_handle *device_read_lock_handle(struct crypt_device *cd, const char *device_path);
struct crypt_lock_handle *device_write_lock_handle(struct crypt_device *cd, const char *device_path);
void device_unlock_handle(struct crypt_lock_handle *h);
int device_read_lock_internal(struct crypt_device *cd, struct device *device);
int device_write_lock_internal(struct crypt_device *cd, struct device *device);
void device_unlock_internal(struct crypt_device *cd, struct device *device);
int device_locked_verify(int fd, struct crypt_lock_handle *h);
int device_locked_verify(struct crypt_device *cd, int fd, struct crypt_lock_handle *h);
int crypt_read_lock(struct crypt_device *cd, const char *name, bool blocking, struct crypt_lock_handle **lock);
int crypt_write_lock(struct crypt_device *cd, const char *name, bool blocking, struct crypt_lock_handle **lock);
void crypt_unlock_internal(struct crypt_device *cd, struct crypt_lock_handle *h);
/* Used only in device internal allocation */
void device_set_lock_handle(struct device *device, struct crypt_lock_handle *h);
struct crypt_lock_handle *device_get_lock_handle(struct device *device);
#endif

View File

@@ -1,10 +1,10 @@
/*
* devname - search for device name
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -111,7 +111,7 @@ static char *lookup_dev_old(int major, int minor)
return result;
/* If it is dm, try DM dir */
if (dm_is_dm_device(major, minor)) {
if (dm_is_dm_device(major)) {
strncpy(buf, dm_get_dir(), PATH_MAX);
if ((result = __lookup_dev(buf, dev, 0, 0)))
return result;

View File

@@ -1,10 +1,10 @@
/*
* libdevmapper - device-mapper backend for cryptsetup
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -31,6 +31,18 @@ struct crypt_device;
struct volume_key;
struct crypt_params_verity;
struct device;
struct crypt_params_integrity;
/* Device mapper internal flags */
#define DM_RESUME_PRIVATE (1 << 4) /* CRYPT_ACTIVATE_PRIVATE */
#define DM_SUSPEND_SKIP_LOCKFS (1 << 5)
#define DM_SUSPEND_WIPE_KEY (1 << 6)
#define DM_SUSPEND_NOFLUSH (1 << 7)
static inline uint32_t act2dmflags(uint32_t act_flags)
{
return (act_flags & DM_RESUME_PRIVATE);
}
/* Device mapper backend - kernel support flags */
#define DM_KEY_WIPE_SUPPORTED (1 << 0) /* key wipe message */
@@ -49,10 +61,13 @@ struct device;
#define DM_SECTOR_SIZE_SUPPORTED (1 << 13) /* support for sector size setting in dm-crypt/dm-integrity */
#define DM_CAPI_STRING_SUPPORTED (1 << 14) /* support for cryptoapi format cipher definition */
#define DM_DEFERRED_SUPPORTED (1 << 15) /* deferred removal of device */
#define DM_INTEGRITY_RECALC_SUPPORTED (1 << 16) /* dm-integrity automatic recalculation supported */
#define DM_INTEGRITY_BITMAP_SUPPORTED (1 << 17) /* dm-integrity bitmap mode supported */
typedef enum { DM_CRYPT = 0, DM_VERITY, DM_INTEGRITY, DM_UNKNOWN } dm_target_type;
typedef enum { DM_CRYPT = 0, DM_VERITY, DM_INTEGRITY, DM_LINEAR, DM_ERROR, DM_UNKNOWN } dm_target_type;
enum tdirection { TARGET_SET = 1, TARGET_QUERY };
int dm_flags(dm_target_type target, uint32_t *flags);
int dm_flags(struct crypt_device *cd, dm_target_type target, uint32_t *flags);
#define DM_ACTIVE_DEVICE (1 << 0)
#define DM_ACTIVE_UUID (1 << 1)
@@ -68,13 +83,12 @@ int dm_flags(dm_target_type target, uint32_t *flags);
#define DM_ACTIVE_INTEGRITY_PARAMS (1 << 9)
struct crypt_dm_active_device {
dm_target_type target;
uint64_t size; /* active device size */
uint32_t flags; /* activation flags */
const char *uuid;
struct dm_target {
dm_target_type type;
enum tdirection direction;
uint64_t offset;
uint64_t size;
struct device *data_device;
unsigned holders:1;
union {
struct {
const char *cipher;
@@ -121,12 +135,55 @@ struct crypt_dm_active_device {
const char *journal_crypt;
struct volume_key *journal_crypt_key;
struct device *meta_device;
} integrity;
struct {
uint64_t offset;
} linear;
} u;
char *params;
struct dm_target *next;
};
void dm_backend_init(void);
void dm_backend_exit(void);
struct crypt_dm_active_device {
uint64_t size; /* active device size */
uint32_t flags; /* activation flags */
const char *uuid;
unsigned holders:1; /* device holders detected (on query only) */
struct dm_target segment;
};
static inline bool single_segment(const struct crypt_dm_active_device *dmd)
{
return dmd && !dmd->segment.next;
}
void dm_backend_init(struct crypt_device *cd);
void dm_backend_exit(struct crypt_device *cd);
int dm_targets_allocate(struct dm_target *first, unsigned count);
void dm_targets_free(struct crypt_device *cd, struct crypt_dm_active_device *dmd);
int dm_crypt_target_set(struct dm_target *tgt, size_t seg_offset, size_t seg_size,
struct device *data_device, struct volume_key *vk, const char *cipher,
size_t iv_offset, size_t data_offset, const char *integrity,
uint32_t tag_size, uint32_t sector_size);
int dm_verity_target_set(struct dm_target *tgt, size_t seg_offset, size_t seg_size,
struct device *data_device, struct device *hash_device, struct device *fec_device,
const char *root_hash, uint32_t root_hash_size, uint64_t hash_offset_block,
uint64_t hash_blocks, struct crypt_params_verity *vp);
int dm_integrity_target_set(struct dm_target *tgt, size_t seg_offset, size_t seg_size,
struct device *meta_device,
struct device *data_device, uint64_t tag_size, uint64_t offset, uint32_t sector_size,
struct volume_key *vk,
struct volume_key *journal_crypt_key, struct volume_key *journal_mac_key,
const struct crypt_params_integrity *ip);
int dm_linear_target_set(struct dm_target *tgt, size_t seg_offset, size_t seg_size,
struct device *data_device, size_t data_offset);
int dm_remove_device(struct crypt_device *cd, const char *name, uint32_t flags);
int dm_status_device(struct crypt_device *cd, const char *name);
@@ -135,20 +192,25 @@ int dm_status_verity_ok(struct crypt_device *cd, const char *name);
int dm_status_integrity_failures(struct crypt_device *cd, const char *name, uint64_t *count);
int dm_query_device(struct crypt_device *cd, const char *name,
uint32_t get_flags, struct crypt_dm_active_device *dmd);
int dm_device_deps(struct crypt_device *cd, const char *name, const char *prefix,
char **names, size_t names_length);
int dm_create_device(struct crypt_device *cd, const char *name,
const char *type, struct crypt_dm_active_device *dmd,
int reload);
int dm_suspend_device(struct crypt_device *cd, const char *name);
int dm_suspend_and_wipe_key(struct crypt_device *cd, const char *name);
const char *type, struct crypt_dm_active_device *dmd);
int dm_reload_device(struct crypt_device *cd, const char *name,
struct crypt_dm_active_device *dmd, uint32_t dmflags, unsigned resume);
int dm_suspend_device(struct crypt_device *cd, const char *name, uint32_t dmflags);
int dm_resume_device(struct crypt_device *cd, const char *name, uint32_t dmflags);
int dm_resume_and_reinstate_key(struct crypt_device *cd, const char *name,
const struct volume_key *vk);
int dm_error_device(struct crypt_device *cd, const char *name);
int dm_clear_device(struct crypt_device *cd, const char *name);
const char *dm_get_dir(void);
int lookup_dm_dev_by_uuid(const char *uuid, const char *type);
int lookup_dm_dev_by_uuid(struct crypt_device *cd, const char *uuid, const char *type);
/* These are DM helpers used only by utils_devpath file */
int dm_is_dm_device(int major, int minor);
int dm_is_dm_device(int major);
int dm_is_dm_kernel_name(const char *name);
char *dm_device_path(const char *prefix, int major, int minor);

View File

@@ -1,7 +1,7 @@
/*
* FIPS mode utilities
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,7 +1,7 @@
/*
* FIPS mode utilities
*
* Copyright (C) 2011-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,10 +1,10 @@
/*
* utils - miscellaneous I/O utilities for cryptsetup
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,10 +1,10 @@
/*
* utils - miscellaneous I/O utilities for cryptsetup
*
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,8 +1,8 @@
/*
* kernel keyring utilities
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -34,8 +34,20 @@ typedef int32_t key_serial_t;
#include "utils_crypt.h"
#include "utils_keyring.h"
#ifndef ARRAY_SIZE
# define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#endif
#ifdef KERNEL_KEYRING
static const struct {
key_type_t type;
const char *type_name;
} key_types[] = {
{ LOGON_KEY, "logon" },
{ USER_KEY, "user" },
};
#include <linux/keyctl.h>
/* request_key */
@@ -86,12 +98,16 @@ int keyring_check(void)
#endif
}
int keyring_add_key_in_thread_keyring(const char *key_desc, const void *key, size_t key_size)
int keyring_add_key_in_thread_keyring(key_type_t ktype, const char *key_desc, const void *key, size_t key_size)
{
#ifdef KERNEL_KEYRING
key_serial_t kid;
const char *type_name = key_type_name(ktype);
kid = add_key("logon", key_desc, key, key_size, KEY_SPEC_THREAD_KEYRING);
if (!type_name || !key_desc)
return -EINVAL;
kid = add_key(type_name, key_desc, key, key_size, KEY_SPEC_THREAD_KEYRING);
if (kid < 0)
return -errno;
@@ -101,6 +117,34 @@ int keyring_add_key_in_thread_keyring(const char *key_desc, const void *key, siz
#endif
}
/* currently used in client utilities only */
int keyring_add_key_in_user_keyring(key_type_t ktype, const char *key_desc, const void *key, size_t key_size)
{
#ifdef KERNEL_KEYRING
const char *type_name = key_type_name(ktype);
key_serial_t kid;
if (!type_name || !key_desc)
return -EINVAL;
kid = add_key(type_name, key_desc, key, key_size, KEY_SPEC_USER_KEYRING);
if (kid < 0)
return -errno;
return 0;
#else
return -ENOTSUP;
#endif
}
/* alias for the same code */
int keyring_get_key(const char *key_desc,
char **key,
size_t *key_size)
{
return keyring_get_passphrase(key_desc, key, key_size);
}
int keyring_get_passphrase(const char *key_desc,
char **passphrase,
size_t *passphrase_len)
@@ -113,7 +157,7 @@ int keyring_get_passphrase(const char *key_desc,
size_t len = 0;
do
kid = request_key("user", key_desc, NULL, 0);
kid = request_key(key_type_name(USER_KEY), key_desc, NULL, 0);
while (kid < 0 && errno == EINTR);
if (kid < 0)
@@ -148,13 +192,16 @@ int keyring_get_passphrase(const char *key_desc,
#endif
}
int keyring_revoke_and_unlink_key(const char *key_desc)
static int keyring_revoke_and_unlink_key_type(const char *type_name, const char *key_desc)
{
#ifdef KERNEL_KEYRING
key_serial_t kid;
if (!type_name || !key_desc)
return -EINVAL;
do
kid = request_key("logon", key_desc, NULL, 0);
kid = request_key(type_name, key_desc, NULL, 0);
while (kid < 0 && errno == EINTR);
if (kid < 0)
@@ -177,3 +224,20 @@ int keyring_revoke_and_unlink_key(const char *key_desc)
return -ENOTSUP;
#endif
}
const char *key_type_name(key_type_t type)
{
#ifdef KERNEL_KEYRING
unsigned int i;
for (i = 0; i < ARRAY_SIZE(key_types); i++)
if (type == key_types[i].type)
return key_types[i].type_name;
#endif
return NULL;
}
int keyring_revoke_and_unlink_key(key_type_t ktype, const char *key_desc)
{
return keyring_revoke_and_unlink_key_type(key_type_name(ktype), key_desc);
}

View File

@@ -1,8 +1,8 @@
/*
* kernel keyring syscall wrappers
*
* Copyright (C) 2016-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2018, Ondrej Kozina. All rights reserved.
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -24,17 +24,32 @@
#include <stddef.h>
typedef enum { LOGON_KEY = 0, USER_KEY } key_type_t;
const char *key_type_name(key_type_t ktype);
int keyring_check(void);
int keyring_get_key(const char *key_desc,
char **key,
size_t *key_size);
int keyring_get_passphrase(const char *key_desc,
char **passphrase,
size_t *passphrase_len);
int keyring_add_key_in_thread_keyring(
key_type_t ktype,
const char *key_desc,
const void *key,
size_t key_size);
int keyring_revoke_and_unlink_key(const char *key_desc);
int keyring_add_key_in_user_keyring(
key_type_t ktype,
const char *key_desc,
const void *key,
size_t key_size);
int keyring_revoke_and_unlink_key(key_type_t ktype, const char *key_desc);
#endif

View File

@@ -1,8 +1,8 @@
/*
* loopback block device utilities
*
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,8 +1,8 @@
/*
* loopback block device utilities
*
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License

View File

@@ -1,8 +1,8 @@
/*
* utils_pbkdf - PBKDF ssettings for libcryptsetup
* utils_pbkdf - PBKDF settings for libcryptsetup
*
* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2018, Milan Broz
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -24,20 +24,43 @@
#include "internal.h"
const struct crypt_pbkdf_type default_luks2 = {
.type = DEFAULT_LUKS2_PBKDF,
const struct crypt_pbkdf_type default_pbkdf2 = {
.type = CRYPT_KDF_PBKDF2,
.hash = DEFAULT_LUKS1_HASH,
.time_ms = DEFAULT_LUKS1_ITER_TIME
};
const struct crypt_pbkdf_type default_argon2i = {
.type = CRYPT_KDF_ARGON2I,
.hash = DEFAULT_LUKS1_HASH,
.time_ms = DEFAULT_LUKS2_ITER_TIME,
.max_memory_kb = DEFAULT_LUKS2_MEMORY_KB,
.parallel_threads = DEFAULT_LUKS2_PARALLEL_THREADS
};
const struct crypt_pbkdf_type default_luks1 = {
.type = CRYPT_KDF_PBKDF2,
const struct crypt_pbkdf_type default_argon2id = {
.type = CRYPT_KDF_ARGON2ID,
.hash = DEFAULT_LUKS1_HASH,
.time_ms = DEFAULT_LUKS1_ITER_TIME
.time_ms = DEFAULT_LUKS2_ITER_TIME,
.max_memory_kb = DEFAULT_LUKS2_MEMORY_KB,
.parallel_threads = DEFAULT_LUKS2_PARALLEL_THREADS
};
const struct crypt_pbkdf_type *crypt_get_pbkdf_type_params(const char *pbkdf_type)
{
if (!pbkdf_type)
return NULL;
if (!strcmp(pbkdf_type, CRYPT_KDF_PBKDF2))
return &default_pbkdf2;
else if (!strcmp(pbkdf_type, CRYPT_KDF_ARGON2I))
return &default_argon2i;
else if (!strcmp(pbkdf_type, CRYPT_KDF_ARGON2ID))
return &default_argon2id;
return NULL;
}
static uint32_t adjusted_phys_memory(void)
{
uint64_t memory_kb = crypt_getphysmemory_kb();
@@ -156,10 +179,19 @@ int init_pbkdf_type(struct crypt_device *cd,
uint32_t old_flags, memory_kb;
int r;
if (crypt_fips_mode()) {
if (pbkdf && strcmp(pbkdf->type, CRYPT_KDF_PBKDF2)) {
log_err(cd, _("Only PBKDF2 is supported in FIPS mode."));
return -EINVAL;
}
if (!pbkdf)
pbkdf = crypt_get_pbkdf_type_params(CRYPT_KDF_PBKDF2);
}
if (!pbkdf && dev_type && !strcmp(dev_type, CRYPT_LUKS2))
pbkdf = &default_luks2;
pbkdf = crypt_get_pbkdf_type_params(DEFAULT_LUKS2_PBKDF);
else if (!pbkdf)
pbkdf = &default_luks1;
pbkdf = crypt_get_pbkdf_type_params(CRYPT_KDF_PBKDF2);
r = verify_pbkdf_params(cd, pbkdf);
if (r)
@@ -201,7 +233,7 @@ int init_pbkdf_type(struct crypt_device *cd,
cd_pbkdf->parallel_threads = pbkdf->parallel_threads;
if (cd_pbkdf->parallel_threads > pbkdf_limits.max_parallel) {
log_dbg("Maximum PBKDF threads is %d (requested %d).",
log_dbg(cd, "Maximum PBKDF threads is %d (requested %d).",
pbkdf_limits.max_parallel, cd_pbkdf->parallel_threads);
cd_pbkdf->parallel_threads = pbkdf_limits.max_parallel;
}
@@ -209,7 +241,7 @@ int init_pbkdf_type(struct crypt_device *cd,
if (cd_pbkdf->parallel_threads) {
cpus = crypt_cpusonline();
if (cd_pbkdf->parallel_threads > cpus) {
log_dbg("Only %u active CPUs detected, "
log_dbg(cd, "Only %u active CPUs detected, "
"PBKDF threads decreased from %d to %d.",
cpus, cd_pbkdf->parallel_threads, cpus);
cd_pbkdf->parallel_threads = cpus;
@@ -219,16 +251,20 @@ int init_pbkdf_type(struct crypt_device *cd,
if (cd_pbkdf->max_memory_kb) {
memory_kb = adjusted_phys_memory();
if (cd_pbkdf->max_memory_kb > memory_kb) {
log_dbg("Not enough physical memory detected, "
log_dbg(cd, "Not enough physical memory detected, "
"PBKDF max memory decreased from %dkB to %dkB.",
cd_pbkdf->max_memory_kb, memory_kb);
cd_pbkdf->max_memory_kb = memory_kb;
}
}
log_dbg("PBKDF %s, hash %s, time_ms %u (iterations %u), max_memory_kb %u, parallel_threads %u.",
cd_pbkdf->type ?: "(none)", cd_pbkdf->hash ?: "(none)", cd_pbkdf->time_ms,
cd_pbkdf->iterations, cd_pbkdf->max_memory_kb, cd_pbkdf->parallel_threads);
if (!strcmp(pbkdf->type, CRYPT_KDF_PBKDF2))
log_dbg(cd, "PBKDF %s-%s, time_ms %u (iterations %u).",
cd_pbkdf->type, cd_pbkdf->hash, cd_pbkdf->time_ms, cd_pbkdf->iterations);
else
log_dbg(cd, "PBKDF %s, time_ms %u (iterations %u), max_memory_kb %u, parallel_threads %u.",
cd_pbkdf->type, cd_pbkdf->time_ms, cd_pbkdf->iterations,
cd_pbkdf->max_memory_kb, cd_pbkdf->parallel_threads);
return 0;
}
@@ -241,7 +277,7 @@ int crypt_set_pbkdf_type(struct crypt_device *cd, const struct crypt_pbkdf_type
return -EINVAL;
if (!pbkdf)
log_dbg("Resetting pbkdf type to default");
log_dbg(cd, "Resetting pbkdf type to default");
crypt_get_pbkdf(cd)->flags = 0;
@@ -261,10 +297,10 @@ const struct crypt_pbkdf_type *crypt_get_pbkdf_default(const char *type)
if (!type)
return NULL;
if (!strcmp(type, CRYPT_LUKS1))
return &default_luks1;
if (!strcmp(type, CRYPT_LUKS1) || crypt_fips_mode())
return crypt_get_pbkdf_type_params(CRYPT_KDF_PBKDF2);
else if (!strcmp(type, CRYPT_LUKS2))
return &default_luks2;
return crypt_get_pbkdf_type_params(DEFAULT_LUKS2_PBKDF);
return NULL;
}
@@ -283,7 +319,7 @@ void crypt_set_iteration_time(struct crypt_device *cd, uint64_t iteration_time_m
if (pbkdf->type && verify_pbkdf_params(cd, pbkdf)) {
pbkdf->time_ms = old_time_ms;
log_dbg("Invalid iteration time.");
log_dbg(cd, "Invalid iteration time.");
return;
}
@@ -293,5 +329,5 @@ void crypt_set_iteration_time(struct crypt_device *cd, uint64_t iteration_time_m
pbkdf->flags &= ~(CRYPT_PBKDF_NO_BENCHMARK);
pbkdf->iterations = 0;
log_dbg("Iteration time set to %" PRIu64 " milliseconds.", iteration_time_ms);
log_dbg(cd, "Iteration time set to %" PRIu64 " milliseconds.", iteration_time_ms);
}

View File

@@ -0,0 +1,395 @@
/*
* Generic wrapper for storage functions
* (experimental only)
*
* Copyright (C) 2018, Ondrej Kozina
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>
#include <sys/stat.h>
#include <sys/types.h>
#include "utils_storage_wrappers.h"
#include "internal.h"
struct crypt_storage_wrapper {
crypt_storage_wrapper_type type;
int dev_fd;
int block_size;
size_t mem_alignment;
uint64_t data_offset;
union {
struct {
struct crypt_storage *s;
uint64_t iv_start;
} cb;
struct {
int dmcrypt_fd;
char name[PATH_MAX];
} dm;
} u;
};
static int crypt_storage_backend_init(struct crypt_device *cd,
struct crypt_storage_wrapper *w,
uint64_t iv_start,
int sector_size,
const char *cipher,
const char *cipher_mode,
const struct volume_key *vk,
uint32_t flags)
{
int r;
struct crypt_storage *s;
/* iv_start, sector_size */
r = crypt_storage_init(&s, sector_size, cipher, cipher_mode, vk->key, vk->keylength);
if (r)
return r;
if ((flags & DISABLE_KCAPI) && crypt_storage_kernel_only(s)) {
log_dbg(cd, "Could not initialize userspace block cipher and kernel fallback is disabled.");
crypt_storage_destroy(s);
return -ENOTSUP;
}
w->type = USPACE;
w->u.cb.s = s;
w->u.cb.iv_start = iv_start;
return 0;
}
static int crypt_storage_dmcrypt_init(
struct crypt_device *cd,
struct crypt_storage_wrapper *cw,
struct device *device,
uint64_t device_offset,
uint64_t iv_start,
int sector_size,
const char *cipher_spec,
struct volume_key *vk,
int open_flags)
{
static int counter = 0;
char path[PATH_MAX];
struct crypt_dm_active_device dmd = {
.flags = CRYPT_ACTIVATE_PRIVATE,
};
int mode, r, fd = -1;
log_dbg(cd, "Using temporary dmcrypt to access data.");
if (snprintf(cw->u.dm.name, sizeof(cw->u.dm.name), "temporary-cryptsetup-%d-%d", getpid(), counter++) < 0)
return -ENOMEM;
if (snprintf(path, sizeof(path), "%s/%s", dm_get_dir(), cw->u.dm.name) < 0)
return -ENOMEM;
r = device_block_adjust(cd, device, DEV_OK,
device_offset, &dmd.size, &dmd.flags);
if (r < 0) {
log_err(cd, _("Device %s doesn't exist or access denied."),
device_path(device));
return -EIO;
}
mode = open_flags | O_DIRECT;
if (dmd.flags & CRYPT_ACTIVATE_READONLY)
mode = (open_flags & ~O_ACCMODE) | O_RDONLY;
if (vk->key_description)
dmd.flags |= CRYPT_ACTIVATE_KEYRING_KEY;
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size,
device,
vk,
cipher_spec,
iv_start,
device_offset,
NULL,
0,
sector_size);
if (r)
return r;
r = dm_create_device(cd, cw->u.dm.name, "TEMP", &dmd);
if (r < 0) {
if (r != -EACCES && r != -ENOTSUP)
log_dbg(cd, "error hint would be nice");
r = -EIO;
}
dm_targets_free(cd, &dmd);
if (r)
return r;
fd = open(path, mode);
if (fd < 0) {
log_dbg(cd, "Failed to open %s", path);
dm_remove_device(cd, cw->u.dm.name, CRYPT_DEACTIVATE_FORCE);
return -EINVAL;
}
cw->type = DMCRYPT;
cw->u.dm.dmcrypt_fd = fd;
return 0;
}
int crypt_storage_wrapper_init(struct crypt_device *cd,
struct crypt_storage_wrapper **cw,
struct device *device,
uint64_t data_offset,
uint64_t iv_start,
int sector_size,
const char *cipher,
struct volume_key *vk,
uint32_t flags)
{
int open_flags, r;
char _cipher[MAX_CIPHER_LEN], mode[MAX_CIPHER_LEN];
struct crypt_storage_wrapper *w;
/* device-mapper restrictions */
if (data_offset & ((1 << SECTOR_SHIFT) - 1))
return -EINVAL;
if (crypt_parse_name_and_mode(cipher, _cipher, NULL, mode))
return -EINVAL;
open_flags = O_CLOEXEC | ((flags & OPEN_READONLY) ? O_RDONLY : O_RDWR);
w = malloc(sizeof(*w));
if (!w)
return -ENOMEM;
memset(w, 0, sizeof(*w));
w->data_offset = data_offset;
w->mem_alignment = device_alignment(device);
w->block_size = device_block_size(cd, device);
if (!w->block_size || !w->mem_alignment) {
log_dbg(cd, "block size or alignment error.");
r = -EINVAL;
goto err;
}
w->dev_fd = device_open(cd, device, open_flags);
if (w->dev_fd < 0) {
r = -EINVAL;
goto err;
}
if (!strcmp(_cipher, "cipher_null")) {
log_dbg(cd, "Requested cipher_null, switching to noop wrapper.");
w->type = NONE;
*cw = w;
return 0;
}
if (!vk) {
log_dbg(cd, "no key passed.");
r = -EINVAL;
goto err;
}
r = crypt_storage_backend_init(cd, w, iv_start, sector_size, _cipher, mode, vk, flags);
if (!r) {
*cw = w;
return 0;
}
log_dbg(cd, "Failed to initialize userspace block cipher.");
if ((r != -ENOTSUP && r != -ENOENT) || (flags & DISABLE_DMCRYPT))
goto err;
r = crypt_storage_dmcrypt_init(cd, w, device, data_offset >> SECTOR_SHIFT, iv_start,
sector_size, cipher, vk, open_flags);
if (r) {
log_dbg(cd, "Dm-crypt backend failed to initialize.");
goto err;
}
*cw = w;
return 0;
err:
crypt_storage_wrapper_destroy(w);
/* wrapper destroy */
return r;
}
/* offset is relative to sector_start */
ssize_t crypt_storage_wrapper_read(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
return read_lseek_blockwise(cw->dev_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
cw->data_offset + offset);
}
ssize_t crypt_storage_wrapper_read_decrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
int r;
ssize_t read;
if (cw->type == DMCRYPT)
return read_lseek_blockwise(cw->u.dm.dmcrypt_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
offset);
read = read_lseek_blockwise(cw->dev_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
cw->data_offset + offset);
if (cw->type == NONE || read < 0)
return read;
r = crypt_storage_decrypt(cw->u.cb.s,
cw->u.cb.iv_start + (offset >> SECTOR_SHIFT),
read,
buffer);
if (r)
return -EINVAL;
return read;
}
ssize_t crypt_storage_wrapper_decrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
int r;
ssize_t read;
if (cw->type == NONE)
return 0;
if (cw->type == DMCRYPT) {
/* there's nothing we can do, just read/decrypt via dm-crypt */
read = crypt_storage_wrapper_read_decrypt(cw, offset, buffer, buffer_length);
if (read < 0 || (size_t)read != buffer_length)
return -EINVAL;
return 0;
}
r = crypt_storage_decrypt(cw->u.cb.s,
cw->u.cb.iv_start + (offset >> SECTOR_SHIFT),
buffer_length,
buffer);
if (r)
return r;
return 0;
}
ssize_t crypt_storage_wrapper_write(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
return write_lseek_blockwise(cw->dev_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
cw->data_offset + offset);
}
ssize_t crypt_storage_wrapper_encrypt_write(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
if (cw->type == DMCRYPT)
return write_lseek_blockwise(cw->u.dm.dmcrypt_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
offset);
if (cw->type == USPACE &&
crypt_storage_encrypt(cw->u.cb.s,
cw->u.cb.iv_start + (offset >> SECTOR_SHIFT),
buffer_length, buffer))
return -EINVAL;
return write_lseek_blockwise(cw->dev_fd,
cw->block_size,
cw->mem_alignment,
buffer,
buffer_length,
cw->data_offset + offset);
}
ssize_t crypt_storage_wrapper_encrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length)
{
if (cw->type == NONE)
return 0;
if (cw->type == DMCRYPT)
return -ENOTSUP;
if (crypt_storage_encrypt(cw->u.cb.s,
cw->u.cb.iv_start + (offset >> SECTOR_SHIFT),
buffer_length,
buffer))
return -EINVAL;
return 0;
}
void crypt_storage_wrapper_destroy(struct crypt_storage_wrapper *cw)
{
if (!cw)
return;
if (cw->type == USPACE)
crypt_storage_destroy(cw->u.cb.s);
if (cw->type == DMCRYPT) {
close(cw->u.dm.dmcrypt_fd);
dm_remove_device(NULL, cw->u.dm.name, CRYPT_DEACTIVATE_FORCE);
}
free(cw);
}
int crypt_storage_wrapper_datasync(const struct crypt_storage_wrapper *cw)
{
if (!cw)
return -EINVAL;
if (cw->type == DMCRYPT)
return fdatasync(cw->u.dm.dmcrypt_fd);
else
return fdatasync(cw->dev_fd);
}
crypt_storage_wrapper_type crypt_storage_wrapper_get_type(const struct crypt_storage_wrapper *cw)
{
return cw ? cw->type : NONE;
}

View File

@@ -0,0 +1,71 @@
/*
* Generic wrapper for storage functions
* (experimental only)
*
* Copyright (C) 2018, Ondrej Kozina
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef _UTILS_STORAGE_WRAPPERS_H
#define _UTILS_STORAGE_WRAPPERS_H
struct crypt_storage_wrapper;
struct device;
struct volume_key;
struct crypt_device;
#define DISABLE_USPACE (1 << 0)
#define DISABLE_KCAPI (1 << 1)
#define DISABLE_DMCRYPT (1 << 2)
#define OPEN_READONLY (1 << 3)
typedef enum {
NONE = 0,
USPACE,
DMCRYPT
} crypt_storage_wrapper_type;
int crypt_storage_wrapper_init(struct crypt_device *cd,
struct crypt_storage_wrapper **cw,
struct device *device,
uint64_t data_offset,
uint64_t iv_start,
int sector_size,
const char *cipher,
struct volume_key *vk,
uint32_t flags);
void crypt_storage_wrapper_destroy(struct crypt_storage_wrapper *cw);
/* !!! when doing 'read' or 'write' all offset values are RELATIVE to data_offset !!! */
ssize_t crypt_storage_wrapper_read(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
ssize_t crypt_storage_wrapper_read_decrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
ssize_t crypt_storage_wrapper_decrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
ssize_t crypt_storage_wrapper_write(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
ssize_t crypt_storage_wrapper_encrypt_write(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
ssize_t crypt_storage_wrapper_encrypt(struct crypt_storage_wrapper *cw,
off_t offset, void *buffer, size_t buffer_length);
int crypt_storage_wrapper_datasync(const struct crypt_storage_wrapper *cw);
crypt_storage_wrapper_type crypt_storage_wrapper_get_type(const struct crypt_storage_wrapper *cw);
#endif


@@ -1,9 +1,9 @@
/*
* utils_wipe - wipe a device
*
-* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
-* Copyright (C) 2009-2018, Red Hat, Inc. All rights reserved.
-* Copyright (C) 2009-2018, Milan Broz
+* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
+* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
+* Copyright (C) 2009-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -52,7 +52,8 @@ static void wipeSpecial(char *buffer, size_t buffer_size, unsigned int turn)
}
}
-static int crypt_wipe_special(int fd, size_t bsize, size_t alignment, char *buffer,
+static int crypt_wipe_special(struct crypt_device *cd, int fd, size_t bsize,
+size_t alignment, char *buffer,
uint64_t offset, size_t size)
{
int r;
@@ -61,12 +62,12 @@ static int crypt_wipe_special(int fd, size_t bsize, size_t alignment, char *buff
for (i = 0; i < 39; ++i) {
if (i < 5) {
-r = crypt_random_get(NULL, buffer, size, CRYPT_RND_NORMAL);
+r = crypt_random_get(cd, buffer, size, CRYPT_RND_NORMAL);
} else if (i >= 5 && i < 32) {
wipeSpecial(buffer, size, i - 5);
r = 0;
} else if (i >= 32 && i < 38) {
-r = crypt_random_get(NULL, buffer, size, CRYPT_RND_NORMAL);
+r = crypt_random_get(cd, buffer, size, CRYPT_RND_NORMAL);
} else if (i >= 38 && i < 39) {
memset(buffer, 0xFF, size);
r = 0;
@@ -81,7 +82,7 @@ static int crypt_wipe_special(int fd, size_t bsize, size_t alignment, char *buff
}
/* Rewrite it finally with random */
-if (crypt_random_get(NULL, buffer, size, CRYPT_RND_NORMAL) < 0)
+if (crypt_random_get(cd, buffer, size, CRYPT_RND_NORMAL) < 0)
return -EIO;
written = write_lseek_blockwise(fd, bsize, alignment, buffer, size, offset);
@@ -91,14 +92,14 @@ static int crypt_wipe_special(int fd, size_t bsize, size_t alignment, char *buff
return 0;
}
-static int wipe_block(int devfd, crypt_wipe_pattern pattern, char *sf,
-size_t device_block_size, size_t alignment,
+static int wipe_block(struct crypt_device *cd, int devfd, crypt_wipe_pattern pattern,
+char *sf, size_t device_block_size, size_t alignment,
size_t wipe_block_size, uint64_t offset, bool *need_block_init)
{
int r;
if (pattern == CRYPT_WIPE_SPECIAL)
-return crypt_wipe_special(devfd, device_block_size, alignment,
+return crypt_wipe_special(cd, devfd, device_block_size, alignment,
sf, offset, wipe_block_size);
if (*need_block_init) {
@@ -107,12 +108,12 @@ static int wipe_block(int devfd, crypt_wipe_pattern pattern, char *sf,
*need_block_init = false;
r = 0;
} else if (pattern == CRYPT_WIPE_RANDOM) {
-r = crypt_random_get(NULL, sf, wipe_block_size,
+r = crypt_random_get(cd, sf, wipe_block_size,
CRYPT_RND_NORMAL) ? -EIO : 0;
*need_block_init = true;
} else if (pattern == CRYPT_WIPE_ENCRYPTED_ZERO) {
// FIXME
-r = crypt_random_get(NULL, sf, wipe_block_size,
+r = crypt_random_get(cd, sf, wipe_block_size,
CRYPT_RND_NORMAL) ? -EIO : 0;
*need_block_init = true;
} else
@@ -138,14 +139,14 @@ int crypt_wipe_device(struct crypt_device *cd,
int (*progress)(uint64_t size, uint64_t offset, void *usrptr),
void *usrptr)
{
-int r, devfd = -1;
+int r, devfd;
size_t bsize, alignment;
char *sf = NULL;
uint64_t dev_size;
bool need_block_init = true;
/* Note: LUKS1 calls it with wipe_block not aligned to multiple of bsize */
-bsize = device_block_size(device);
+bsize = device_block_size(cd, device);
alignment = device_alignment(device);
if (!bsize || !alignment || !wipe_block_size)
return -EINVAL;
@@ -156,23 +157,24 @@ int crypt_wipe_device(struct crypt_device *cd,
if (MISALIGNED_512(offset) || MISALIGNED_512(length) || MISALIGNED_512(wipe_block_size))
return -EINVAL;
-devfd = device_open(device, O_RDWR);
+if (device_is_locked(device))
+devfd = device_open_locked(cd, device, O_RDWR);
+else
+devfd = device_open(cd, device, O_RDWR);
if (devfd < 0)
return errno ? -errno : -EINVAL;
r = device_size(device, &dev_size);
if (r || dev_size == 0)
goto out;
if (length)
dev_size = offset + length;
else {
r = device_size(device, &dev_size);
if (r)
goto out;
if (dev_size < length)
length = 0;
if (length) {
if ((dev_size <= offset) || (dev_size - offset) < length) {
if (dev_size <= offset) {
r = -EINVAL;
goto out;
}
dev_size = offset + length;
}
r = posix_memalign((void **)&sf, alignment, wipe_block_size);
@@ -180,7 +182,7 @@ int crypt_wipe_device(struct crypt_device *cd,
goto out;
if (lseek64(devfd, offset, SEEK_SET) < 0) {
-log_err(cd, "Cannot seek to device offset.");
+log_err(cd, _("Cannot seek to device offset."));
r = -EINVAL;
goto out;
}
@@ -191,7 +193,7 @@ int crypt_wipe_device(struct crypt_device *cd,
}
if (pattern == CRYPT_WIPE_SPECIAL && !device_is_rotational(device)) {
-log_dbg("Non-rotational device, using random data wipe mode.");
+log_dbg(cd, "Non-rotational device, using random data wipe mode.");
pattern = CRYPT_WIPE_RANDOM;
}
@@ -201,10 +203,10 @@ int crypt_wipe_device(struct crypt_device *cd,
//log_dbg("Wipe %012" PRIu64 "-%012" PRIu64 " bytes", offset, offset + wipe_block_size);
-r = wipe_block(devfd, pattern, sf, bsize, alignment,
+r = wipe_block(cd, devfd, pattern, sf, bsize, alignment,
wipe_block_size, offset, &need_block_init);
if (r) {
-log_err(cd, "Device wipe error, offset %" PRIu64 ".", offset);
+log_err(cd, _("Device wipe error, offset %" PRIu64 "."), offset);
break;
}
@@ -216,9 +218,8 @@ int crypt_wipe_device(struct crypt_device *cd,
}
}
-device_sync(device, devfd);
+device_sync(cd, device);
out:
-close(devfd);
free(sf);
return r;
}
@@ -253,14 +254,14 @@ int crypt_wipe(struct crypt_device *cd,
if (!wipe_block_size)
wipe_block_size = 1024*1024;
-log_dbg("Wipe [%u] device %s, offset %" PRIu64 ", length %" PRIu64 ", block %zu.",
+log_dbg(cd, "Wipe [%u] device %s, offset %" PRIu64 ", length %" PRIu64 ", block %zu.",
(unsigned)pattern, device_path(device), offset, length, wipe_block_size);
r = crypt_wipe_device(cd, device, pattern, offset, length,
wipe_block_size, progress, usrptr);
if (dev_path)
-device_free(device);
+device_free(cd, device);
return r;
}


@@ -3,7 +3,7 @@
*
* Copyright (C) 2004 Phil Karn, KA9Q
* libcryptsetup modifications
-* Copyright (C) 2017-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2017-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -3,7 +3,7 @@
*
* Copyright (C) 2002, Phil Karn, KA9Q
* libcryptsetup modifications
-* Copyright (C) 2017-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2017-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -3,7 +3,7 @@
*
* Copyright (C) 2002, Phil Karn, KA9Q
* libcryptsetup modifications
-* Copyright (C) 2017-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2017-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,7 +1,7 @@
/*
* dm-verity volume handling
*
-* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -60,9 +60,9 @@ int VERITY_read_sb(struct crypt_device *cd,
struct device *device = crypt_metadata_device(cd);
struct verity_sb sb = {};
ssize_t hdr_size = sizeof(struct verity_sb);
-int devfd = 0, sb_version;
+int devfd, sb_version;
-log_dbg("Reading VERITY header of size %zu on device %s, offset %" PRIu64 ".",
+log_dbg(cd, "Reading VERITY header of size %zu on device %s, offset %" PRIu64 ".",
sizeof(struct verity_sb), device_path(device), sb_offset);
if (params->flags & CRYPT_VERITY_NO_HEADER) {
@@ -76,19 +76,16 @@ int VERITY_read_sb(struct crypt_device *cd,
return -EINVAL;
}
-devfd = device_open(device, O_RDONLY);
+devfd = device_open(cd, device, O_RDONLY);
if (devfd < 0) {
log_err(cd, _("Cannot open device %s."), device_path(device));
return -EINVAL;
}
-if (read_lseek_blockwise(devfd, device_block_size(device),
+if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), &sb, hdr_size,
-sb_offset) < hdr_size) {
-close(devfd);
+sb_offset) < hdr_size)
return -EIO;
-}
-close(devfd);
if (memcmp(sb.signature, VERITY_SIGNATURE, sizeof(sb.signature))) {
log_err(cd, _("Device %s is not a valid VERITY device."),
@@ -160,9 +157,9 @@ int VERITY_write_sb(struct crypt_device *cd,
ssize_t hdr_size = sizeof(struct verity_sb);
char *algorithm;
uuid_t uuid;
-int r, devfd = 0;
+int r, devfd;
-log_dbg("Updating VERITY header of size %zu on device %s, offset %" PRIu64 ".",
+log_dbg(cd, "Updating VERITY header of size %zu on device %s, offset %" PRIu64 ".",
sizeof(struct verity_sb), device_path(device), sb_offset);
if (!uuid_string || uuid_parse(uuid_string, uuid) == -1) {
@@ -177,7 +174,7 @@ int VERITY_write_sb(struct crypt_device *cd,
return -EINVAL;
}
-devfd = device_open(device, O_RDWR);
+devfd = device_open(cd, device, O_RDWR);
if (devfd < 0) {
log_err(cd, _("Cannot open device %s."), device_path(device));
return -EINVAL;
@@ -196,14 +193,13 @@ int VERITY_write_sb(struct crypt_device *cd,
memcpy(sb.salt, params->salt, params->salt_size);
memcpy(sb.uuid, uuid, sizeof(sb.uuid));
-r = write_lseek_blockwise(devfd, device_block_size(device), device_alignment(device),
+r = write_lseek_blockwise(devfd, device_block_size(cd, device), device_alignment(device),
(char*)&sb, hdr_size, sb_offset) < hdr_size ? -EIO : 0;
if (r)
log_err(cd, _("Error during update of verity header on device %s."),
device_path(device));
-device_sync(device, devfd);
-close(devfd);
+device_sync(cd, device);
return r;
}
@@ -226,7 +222,8 @@ int VERITY_UUID_generate(struct crypt_device *cd, char **uuid_string)
{
uuid_t uuid;
-if (!(*uuid_string = malloc(40)))
+*uuid_string = malloc(40);
+if (!*uuid_string)
return -ENOMEM;
uuid_generate(uuid);
uuid_unparse(uuid, *uuid_string);
@@ -242,20 +239,24 @@ int VERITY_activate(struct crypt_device *cd,
struct crypt_params_verity *verity_hdr,
uint32_t activation_flags)
{
-struct crypt_dm_active_device dmd;
uint32_t dmv_flags;
unsigned int fec_errors = 0;
int r;
+struct crypt_dm_active_device dmd = {
+.size = verity_hdr->data_size * verity_hdr->data_block_size / 512,
+.flags = activation_flags,
+.uuid = crypt_get_uuid(cd),
+};
-log_dbg("Trying to activate VERITY device %s using hash %s.",
+log_dbg(cd, "Trying to activate VERITY device %s using hash %s.",
name ?: "[none]", verity_hdr->hash_name);
if (verity_hdr->flags & CRYPT_VERITY_CHECK_HASH) {
-log_dbg("Verification of data in userspace required.");
+log_dbg(cd, "Verification of data in userspace required.");
r = VERITY_verify(cd, verity_hdr, root_hash, root_hash_size);
if (r == -EPERM && fec_device) {
-log_dbg("Verification failed, trying to repair with FEC device.");
+log_dbg(cd, "Verification failed, trying to repair with FEC device.");
r = VERITY_FEC_process(cd, verity_hdr, fec_device, 1, &fec_errors);
if (r < 0)
log_err(cd, _("Errors cannot be repaired with FEC device."));
@@ -271,50 +272,48 @@ int VERITY_activate(struct crypt_device *cd,
if (!name)
return 0;
-dmd.target = DM_VERITY;
-dmd.data_device = crypt_data_device(cd);
-dmd.u.verity.hash_device = crypt_metadata_device(cd);
-dmd.u.verity.fec_device = fec_device;
-dmd.u.verity.root_hash = root_hash;
-dmd.u.verity.root_hash_size = root_hash_size;
-dmd.u.verity.hash_offset = VERITY_hash_offset_block(verity_hdr);
-dmd.u.verity.fec_offset = verity_hdr->fec_area_offset / verity_hdr->hash_block_size;
-dmd.u.verity.hash_blocks = VERITY_hash_blocks(cd, verity_hdr);
-dmd.flags = activation_flags;
-dmd.size = verity_hdr->data_size * verity_hdr->data_block_size / 512;
-dmd.uuid = crypt_get_uuid(cd);
-dmd.u.verity.vp = verity_hdr;
-r = device_block_adjust(cd, dmd.u.verity.hash_device, DEV_OK,
+r = device_block_adjust(cd, crypt_metadata_device(cd), DEV_OK,
0, NULL, NULL);
if (r)
return r;
-r = device_block_adjust(cd, dmd.data_device, DEV_EXCL,
+r = device_block_adjust(cd, crypt_data_device(cd), DEV_EXCL,
0, &dmd.size, &dmd.flags);
if (r)
return r;
-if (dmd.u.verity.fec_device) {
-r = device_block_adjust(cd, dmd.u.verity.fec_device, DEV_OK,
+if (fec_device) {
+r = device_block_adjust(cd, fec_device, DEV_OK,
0, NULL, NULL);
if (r)
return r;
}
-r = dm_create_device(cd, name, CRYPT_VERITY, &dmd, 0);
-if (r < 0 && (dm_flags(DM_VERITY, &dmv_flags) || !(dmv_flags & DM_VERITY_SUPPORTED))) {
+r = dm_verity_target_set(&dmd.segment, 0, dmd.size, crypt_data_device(cd),
+crypt_metadata_device(cd), fec_device, root_hash,
+root_hash_size, VERITY_hash_offset_block(verity_hdr),
+VERITY_hash_blocks(cd, verity_hdr), verity_hdr);
+if (r)
+return r;
+r = dm_create_device(cd, name, CRYPT_VERITY, &dmd);
+if (r < 0 && (dm_flags(cd, DM_VERITY, &dmv_flags) || !(dmv_flags & DM_VERITY_SUPPORTED))) {
log_err(cd, _("Kernel doesn't support dm-verity mapping."));
-return -ENOTSUP;
+r = -ENOTSUP;
}
if (r < 0)
-return r;
+goto out;
r = dm_status_verity_ok(cd, name);
if (r < 0)
-return r;
+goto out;
if (!r)
log_err(cd, _("Verity device detected corruption after activation."));
-return 0;
+r = 0;
+out:
+dm_targets_free(cd, &dmd);
+return r;
}


@@ -1,7 +1,7 @@
/*
* dm-verity volume handling
*
-* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,8 +1,8 @@
/*
* dm-verity Forward Error Correction (FEC) support
*
-* Copyright (C) 2015, Google, Inc. All rights reserved.
-* Copyright (C) 2017-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2015 Google, Inc. All rights reserved.
+* Copyright (C) 2017-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -244,7 +244,7 @@ int VERITY_FEC_process(struct crypt_device *cd,
}
if (lseek(fd, params->fec_area_offset, SEEK_SET) < 0) {
-log_dbg("Cannot seek to requested position in FEC device.");
+log_dbg(cd, "Cannot seek to requested position in FEC device.");
goto out;
}


@@ -1,7 +1,7 @@
/*
* dm-verity volume handling
*
-* Copyright (C) 2012-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -51,7 +51,7 @@ static int verify_zero(struct crypt_device *cd, FILE *wr, size_t bytes)
size_t i;
if (fread(block, bytes, 1, wr) != 1) {
-log_dbg("EIO while reading spare area.");
+log_dbg(cd, "EIO while reading spare area.");
return -EIO;
}
for (i = 0; i < bytes; i++)
@@ -162,12 +162,12 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
}
if (fseeko(rd, seek_rd, SEEK_SET)) {
-log_dbg("Cannot seek to requested position in data device.");
+log_dbg(cd, "Cannot seek to requested position in data device.");
return -EIO;
}
if (wr && fseeko(wr, seek_wr, SEEK_SET)) {
-log_dbg("Cannot seek to requested position in hash device.");
+log_dbg(cd, "Cannot seek to requested position in hash device.");
return -EIO;
}
@@ -179,7 +179,7 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
break;
blocks--;
if (fread(data_buffer, data_block_size, 1, rd) != 1) {
-log_dbg("Cannot read data device block.");
+log_dbg(cd, "Cannot read data device block.");
return -EIO;
}
@@ -193,7 +193,7 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
break;
if (verify) {
if (fread(read_digest, digest_size, 1, wr) != 1) {
-log_dbg("Cannot read digest form hash device.");
+log_dbg(cd, "Cannot read digest form hash device.");
return -EIO;
}
if (memcmp(read_digest, calculated_digest, digest_size)) {
@@ -203,7 +203,7 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
}
} else {
if (fwrite(calculated_digest, digest_size, 1, wr) != 1) {
-log_dbg("Cannot write digest to hash device.");
+log_dbg(cd, "Cannot write digest to hash device.");
return -EIO;
}
}
@@ -216,7 +216,7 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
if (r)
return r;
} else if (fwrite(left_block, digest_size_full - digest_size, 1, wr) != 1) {
-log_dbg("Cannot write spare area to hash device.");
+log_dbg(cd, "Cannot write spare area to hash device.");
return -EIO;
}
}
@@ -229,7 +229,7 @@ static int create_or_verify(struct crypt_device *cd, FILE *rd, FILE *wr,
if (r)
return r;
} else if (fwrite(left_block, left_bytes, 1, wr) != 1) {
-log_dbg("Cannot write remaining spare area to hash device.");
+log_dbg(cd, "Cannot write remaining spare area to hash device.");
return -EIO;
}
}
@@ -263,7 +263,7 @@ static int VERITY_create_or_verify_hash(struct crypt_device *cd,
uint64_t dev_size;
int levels, i, r;
-log_dbg("Hash %s %s, data device %s, data blocks %" PRIu64
+log_dbg(cd, "Hash %s %s, data device %s, data blocks %" PRIu64
", hash_device %s, offset %" PRIu64 ".",
verify ? "verification" : "creation", hash_name,
device_path(data_device), data_blocks,
@@ -294,14 +294,14 @@ static int VERITY_create_or_verify_hash(struct crypt_device *cd,
return -EINVAL;
}
-log_dbg("Using %d hash levels.", levels);
+log_dbg(cd, "Using %d hash levels.", levels);
if (mult_overflow(&hash_device_size, hash_position, hash_block_size)) {
log_err(cd, _("Device offset overflow."));
return -EINVAL;
}
-log_dbg("Data device size required: %" PRIu64 " bytes.",
+log_dbg(cd, "Data device size required: %" PRIu64 " bytes.",
data_device_size);
data_file = fopen(device_path(data_device), "r");
if (!data_file) {
@@ -312,7 +312,7 @@ static int VERITY_create_or_verify_hash(struct crypt_device *cd,
goto out;
}
-log_dbg("Hash device size required: %" PRIu64 " bytes.",
+log_dbg(cd, "Hash device size required: %" PRIu64 " bytes.",
hash_device_size);
hash_file = fopen(device_path(hash_device), verify ? "r" : "r+");
if (!hash_file) {
@@ -369,12 +369,12 @@ out:
if (r)
log_err(cd, _("Verification of data area failed."));
else {
-log_dbg("Verification of data area succeeded.");
+log_dbg(cd, "Verification of data area succeeded.");
r = memcmp(root_hash, calculated_digest, digest_size) ? -EPERM : 0;
if (r)
log_err(cd, _("Verification of root hash failed."));
else
-log_dbg("Verification of root hash succeeded.");
+log_dbg(cd, "Verification of root hash succeeded.");
}
} else {
if (r == -EIO)


@@ -1,8 +1,8 @@
/*
* cryptsetup volume key implementation
*
-* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
-* Copyright (C) 2010-2018, Red Hat, Inc. All rights reserved.
+* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
+* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -39,6 +39,8 @@ struct volume_key *crypt_alloc_volume_key(size_t keylength, const char *key)
vk->key_description = NULL;
vk->keylength = keylength;
+vk->id = -1;
+vk->next = NULL;
/* keylength 0 is valid => no key */
if (vk->keylength) {
@@ -64,13 +66,66 @@ int crypt_volume_key_set_description(struct volume_key *vk, const char *key_desc
return 0;
}
+void crypt_volume_key_set_id(struct volume_key *vk, int id)
+{
+if (vk && id >= 0)
+vk->id = id;
+}
+int crypt_volume_key_get_id(const struct volume_key *vk)
+{
+return vk ? vk->id : -1;
+}
+struct volume_key *crypt_volume_key_by_id(struct volume_key *vks, int id)
+{
+struct volume_key *vk = vks;
+if (id < 0)
+return NULL;
+while (vk && vk->id != id)
+vk = vk->next;
+return vk;
+}
+void crypt_volume_key_add_next(struct volume_key **vks, struct volume_key *vk)
+{
+struct volume_key *tmp;
+if (!vks)
+return;
+if (!*vks) {
+*vks = vk;
+return;
+}
+tmp = *vks;
+while (tmp->next)
+tmp = tmp->next;
+tmp->next = vk;
+}
+struct volume_key *crypt_volume_key_next(struct volume_key *vk)
+{
+return vk ? vk->next : NULL;
+}
void crypt_free_volume_key(struct volume_key *vk)
{
-if (vk) {
+struct volume_key *vk_next;
+while (vk) {
crypt_memzero(vk->key, vk->keylength);
vk->keylength = 0;
free(CONST_CAST(void*)vk->key_description);
+vk_next = vk->next;
free(vk);
+vk = vk_next;
}
}


@@ -1,4 +1,4 @@
-.TH CRYPTSETUP-REENCRYPT "8" "January 2018" "cryptsetup-reencrypt" "Maintenance Commands"
+.TH CRYPTSETUP-REENCRYPT "8" "January 2019" "cryptsetup-reencrypt" "Maintenance Commands"
.SH NAME
cryptsetup-reencrypt - tool for offline LUKS device re-encryption
.SH SYNOPSIS
@@ -41,7 +41,7 @@ To start (or continue) re-encryption for <device> use:
\-\-progress-frequency, \-\-use-directio, \-\-use-random | \-\-use-urandom, \-\-use-fsync,
\-\-uuid, \-\-verbose, \-\-write-log]
-To encrypt data on (not yet encrypted) device, use \fI\-\-new\fR with combination
+To encrypt data on (not yet encrypted) device, use \fI\-\-new\fR in combination
with \fI\-\-reduce-device-size\fR or with \fI\-\-header\fR option for detached header.
To remove encryption from device, use \fI\-\-decrypt\fR.
@@ -281,9 +281,9 @@ Please attach the output of the failed command with the
.SH AUTHORS
Cryptsetup-reencrypt was written by Milan Broz <gmazyland@gmail.com>.
.SH COPYRIGHT
-Copyright \(co 2012-2018 Milan Broz
+Copyright \(co 2012-2019 Milan Broz
.br
-Copyright \(co 2012-2018 Red Hat, Inc.
+Copyright \(co 2012-2019 Red Hat, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


@@ -1,4 +1,4 @@
-.TH CRYPTSETUP "8" "January 2018" "cryptsetup" "Maintenance Commands"
+.TH CRYPTSETUP "8" "January 2019" "cryptsetup" "Maintenance Commands"
.SH NAME
cryptsetup - manage plain dm-crypt and LUKS encrypted volumes
.SH SYNOPSIS
@@ -130,6 +130,71 @@ With LUKS2 device additional \fB<options>\fR can be [\-\-token\-id, \-\-token\-o
\-\-key\-slot, \-\-key\-file, \-\-keyfile\-size, \-\-keyfile\-offset, \-\-timeout,
\-\-disable\-locks, \-\-disable\-keyring].
.PP
+\fIrefresh\fR <name>
+.IP
+Refreshes parameters of active mapping <name>.
+Updates parameters of active device <name> without need to deactivate the device
+(and unmount the filesystem). Currently it supports parameter refresh on the following
+device types: LUKS1, LUKS2 (including authenticated encryption), plain crypt
+and loopaes.
+Mandatory parameters are identical to those of an open action for the respective
+device type.
+You may change the following parameters on all devices: \-\-perf\-same_cpu_crypt,
+\-\-perf\-submit_from_crypt_cpus and \-\-allow\-discards.
+Refreshing a device without any optional parameter will refresh the device
+with default settings (respective to device type).
+\fBLUKS2 only:\fR
+\-\-integrity\-no\-journal parameter affects only LUKS2 devices with
+underlying dm-integrity device.
+Adding option \-\-persistent stores any combination of the device parameters
+above in LUKS2 metadata (only after a successful refresh operation).
+\-\-disable\-keyring parameter refreshes a device with the volume key passed
+in the dm-crypt driver.
+.PP
+\fIreencrypt\fR <device> or --active-name <name>
+.IP
+Run resilient reencryption (LUKS2 device only).
+There are 3 basic modes of operation:
+\(bu device reencryption (\fIreencrypt\fR)
+\(bu device encryption (\fIreencrypt\fR \-\-encrypt)
+\(bu device decryption (\fIreencrypt\fR \-\-decrypt)
+<device> or --active-name <name> is a mandatory parameter.
+With the <device> parameter cryptsetup looks up the active <device> dm mapping.
+If no active mapping is detected, it starts offline reencryption; otherwise online
+reencryption takes place.
+The reencryption process may be safely interrupted by a user via SIGTERM signal (ctrl+c).
+To resume already initialized or interrupted reencryption, just run the cryptsetup
+\fIreencrypt\fR command again to continue the reencryption operation.
+Reencryption may be resumed with different \-\-resilience or \-\-hotzone\-size unless
+the implicit datashift resilience mode is used (reencrypt \-\-encrypt with \-\-reduce-device-size
+option).
+If the reencryption process was interrupted abruptly (reencryption process crash, system crash, poweroff)
+it may require recovery. The recovery is currently run automatically on next activation (action \fIopen\fR)
+when needed.
+The action supports the following additional \fB<options>\fR [\-\-encrypt, \-\-decrypt, \-\-device\-size,
+\-\-resilience, \-\-resilience-hash, \-\-hotzone-size, \-\-init\-only, \-\-resume\-only,
+\-\-reduce\-device\-size].
.SH PLAIN MODE
Plain dm-crypt encrypts the device sector-by-sector with a
single, non-salted hash of the passphrase. No checks
@@ -148,7 +213,8 @@ Opens (creates a mapping with) <name> backed by device <device>.
\fB<options>\fR can be [\-\-hash, \-\-cipher, \-\-verify-passphrase,
\-\-sector\-size, \-\-key-file, \-\-keyfile-offset, \-\-key-size,
-\-\-offset, \-\-skip, \-\-size, \-\-readonly, \-\-shared, \-\-allow\-discards]
+\-\-offset, \-\-skip, \-\-size, \-\-readonly, \-\-shared, \-\-allow\-discards,
+\-\-refresh]
Example: 'cryptsetup open \-\-type plain /dev/sda10 e1' maps the raw
encrypted device /dev/sda10 to the mapped (decrypted) device
@@ -223,7 +289,9 @@ For LUKS2, additional \fB<options>\fR can be
[\-\-integrity, \-\-integrity\-no\-wipe, \-\-sector\-size,
\-\-label, \-\-subsystem,
\-\-pbkdf, \-\-pbkdf\-memory, \-\-pbkdf\-parallel,
-\-\-disable\-locks, \-\-disable\-keyring].
+\-\-disable\-locks, \-\-disable\-keyring,
+\-\-luks2\-metadata\-size, \-\-luks2\-keyslots\-size,
+\-\-keyslot\-cipher, \-\-keyslot\-key\-size].
\fBWARNING:\fR Doing a luksFormat on an existing LUKS container will
make all data the old container permanently irretrievable unless
@@ -243,7 +311,8 @@ the command prompts for it interactively.
\fB<options>\fR can be [\-\-key\-file, \-\-keyfile\-offset,
\-\-keyfile\-size, \-\-readonly, \-\-test\-passphrase,
\-\-allow\-discards, \-\-header, \-\-key-slot, \-\-master\-key\-file, \-\-token\-id,
-\-\-token\-only, \-\-disable\-keyring, \-\-disable\-locks, \-\-type].
+\-\-token\-only, \-\-disable\-keyring, \-\-disable\-locks, \-\-type, \-\-refresh,
+\-\-serialize\-memory\-hard\-pbkdf].
.PP
\fIluksSuspend\fR <name>
.IP
@@ -284,8 +353,9 @@ is not required.
\fB<options>\fR can be [\-\-key\-file, \-\-keyfile\-offset,
\-\-keyfile\-size, \-\-new\-keyfile\-offset,
\-\-new\-keyfile\-size, \-\-key\-slot, \-\-master\-key\-file,
-\-\-iter\-time, \-\-force\-password, \-\-header, \-\-disable\-locks,
-\-\-unbound, \-\-type].
+\-\-force\-password, \-\-header, \-\-disable\-locks,
+\-\-iter-time, \-\-pbkdf, \-\-pbkdf\-force\-iterations,
+\-\-unbound, \-\-type, \-\-keyslot\-cipher, \-\-keyslot\-key\-size].
.PP
\fIluksRemoveKey\fR <device> [<key file with passphrase to be removed>]
.IP
@@ -327,8 +397,9 @@ inaccessible.
\fB<options>\fR can be [\-\-key\-file, \-\-keyfile\-offset,
\-\-keyfile\-size, \-\-new\-keyfile\-offset,
+\-\-iter-time, \-\-pbkdf, \-\-pbkdf\-force\-iterations,
\-\-new\-keyfile\-size, \-\-key\-slot, \-\-force\-password, \-\-header,
-\-\-disable\-locks, \-\-type].
+\-\-disable\-locks, \-\-type, \-\-keyslot\-cipher, \-\-keyslot\-key\-size].
.PP
\fIluksConvertKey\fR <device>
@@ -352,7 +423,8 @@ parameters have been wiped and make the LUKS container inaccessible.
\fB<options>\fR can be [\-\-key\-file, \-\-keyfile\-offset,
\-\-keyfile\-size, \-\-key\-slot, \-\-header, \-\-disable\-locks,
\-\-iter-time, \-\-pbkdf, \-\-pbkdf\-force\-iterations,
-\-\-pbkdf\-memory, \-\-pbkdf\-parallel].
+\-\-pbkdf\-memory, \-\-pbkdf\-parallel,
+\-\-keyslot\-cipher, \-\-keyslot\-key\-size].
.PP
\fIluksKillSlot\fR <device> <key slot number>
.IP
@@ -551,7 +623,7 @@ passphrase hashing (otherwise it is detected according to key
size).
\fB<options>\fR can be [\-\-key\-file, \-\-key\-size, \-\-offset, \-\-skip,
-\-\-hash, \-\-readonly, \-\-allow\-discards].
+\-\-hash, \-\-readonly, \-\-allow\-discards, \-\-refresh].
.PP
See also section 7 of the FAQ and \fBhttp://loop-aes.sourceforge.net\fR
for more information regarding loop-AES.
@@ -706,9 +778,10 @@ If you are configuring kernel yourself, enable
.B "\-\-verbose, \-v"
Print more information on command execution.
.TP
-.B "\-\-debug"
+.B "\-\-debug or \-\-debug\-json"
Run in debug mode with full diagnostic logs. Debug output
lines are always prefixed by '#'.
+If \-\-debug\-json is used, additional LUKS2 JSON data structures are printed.
.TP
.B "\-\-type <device-type>
Specifies required device type, for more info
@@ -892,7 +965,11 @@ actions.
.B "\-\-offset, \-o <number of 512 byte sectors>"
Start offset in the backend device in 512-byte sectors.
This option is only relevant for the \fIopen\fR action with plain
-or loopaes device types.
+or loopaes device types or for LUKS devices in \fIluksFormat\fR.
+For LUKS, the \-\-offset option sets the data offset (payload) of the data
+device and must be aligned to 4096-byte sectors (must be a multiple of 8).
+This option cannot be combined with the \-\-align\-payload option.
.TP
.B "\-\-skip, \-p <number of 512 byte sectors>"
Start offset used in IV calculation in 512-byte sectors
@@ -904,6 +981,19 @@ Hence, if \-\-offset \fIn\fR, and \-\-skip \fIs\fR, sector \fIn\fR
(the first sector of the encrypted device) will get a sector number
of \fIs\fR for the IV calculation.
.TP
+.B "\-\-device-size \fIsize[units]\fR"
+Instead of the real device size, use the specified value.
+It means that only the specified area (from the start of the device
+to the specified size) will be reencrypted.
+If no unit suffix is specified, the size is in bytes.
+Unit suffix can be S for 512-byte sectors, K/M/G/T (or KiB,MiB,GiB,TiB)
+for units with 1024 base or KB/MB/GB/TB for 1000 base (SI scale).
+\fBWARNING:\fR This is a destructive operation when used with the reencrypt command.
+.TP
.B "\-\-readonly, \-r"
set up a read-only mapping.
.TP
@@ -1014,6 +1104,11 @@ data is by default aligned to a 1MiB boundary (i.e. 2048 512-byte sectors).
For a detached LUKS header, this option specifies the offset on the
data device. See also the \-\-header option.
+\fBWARNING:\fR This option is DEPRECATED and often has unexpected impact
+on the data offset and keyslot area size (for LUKS2) due to the complex rounding.
+For a fixed data device offset use the \fI\-\-offset\fR option instead.
.TP
.B "\-\-uuid=\fIUUID\fR"
Use the provided \fIUUID\fR for the \fIluksFormat\fR command
@@ -1115,8 +1210,8 @@ a restricted environment where locking is impossible to perform
(where /run directory cannot be used).
.TP
.B "\-\-disable\-keyring"
-Do not load volume key in kernel keyring but use store key directly
-in the dm-crypt target.
+Do not load volume key in kernel keyring and store it directly
+in the dm-crypt target instead.
This option is supported only for the LUKS2 format.
.TP
.B "\-\-key\-description <text>"
@@ -1164,6 +1259,10 @@ Only \fI\-\-allow-discards\fR, \fI\-\-perf\-same_cpu_crypt\fR,
\fI\-\-perf\-submit_from_crypt_cpus\fR and \fI\-\-integrity\-no\-journal\fR
can be stored persistently.
.TP
+.B "\-\-refresh"
+Refreshes an active device with a new set of parameters. See action \fIrefresh\fR description
+for more details.
+.TP
.B "\-\-label <LABEL>"
.B "\-\-subsystem <SUBSYSTEM>"
Set label and subsystem description for LUKS2 device, can be used
@@ -1181,6 +1280,26 @@ in "Cryptographic API" section (CONFIG_CRYPTO_USER_API_AEAD .config option).
For more info, see \fIAUTHENTICATED DISK ENCRYPTION\fR section.
.TP
.B "\-\-luks2\-metadata\-size <size>"
This option can be used to enlarge the LUKS2 metadata (JSON) area.
The size includes 4096 bytes for binary metadata (usable JSON area is smaller
of the binary area).
According to LUKS2 specification, only these values are valid:
16, 32, 64, 128, 256, 512, 1024, 2048 and 4096 kB
The <size> can be specified with unit suffix (for example 128k).
.TP
.B "\-\-luks2\-keyslots\-size <size>"
This option can be used to set a specific size of the LUKS2 binary keyslot area
(the encrypted key material is stored there). The value must be aligned to
a multiple of 4096 bytes, with a maximum size of 128 MB.
The <size> can be specified with unit suffix (for example 128k).
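The two size constraints above can be checked before calling cryptsetup; a minimal
shell sketch, where \fImetadata_kb\fR and \fIkeyslots_bytes\fR are hypothetical
example values, not defaults:

```shell
# Check a proposed --luks2-metadata-size value (in kB) against the list
# permitted by the LUKS2 specification, and a proposed
# --luks2-keyslots-size value (in bytes) against the 4096-byte alignment
# and 128 MB maximum. Example values only, not cryptsetup defaults.
metadata_kb=128
keyslots_bytes=$((16 * 1024 * 1024))

valid_metadata=no
for v in 16 32 64 128 256 512 1024 2048 4096; do
    [ "$metadata_kb" -eq "$v" ] && valid_metadata=yes
done

valid_keyslots=no
if [ $((keyslots_bytes % 4096)) -eq 0 ] && \
   [ "$keyslots_bytes" -le $((128 * 1024 * 1024)) ]; then
    valid_keyslots=yes
fi

echo "metadata=$valid_metadata keyslots=$valid_keyslots"
```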
.TP
.B "\-\-keyslot\-cipher <cipher\-spec>"
This option can be used to set a specific cipher for the LUKS2 keyslot area.
.TP
.B "\-\-keyslot\-key\-size <bits>"
This option can be used to set a specific key size for the LUKS2 keyslot area.
.TP
.B "\-\-integrity\-no\-journal"
Activate device with integrity protection without using data journal (direct
write of data and integrity tags).
@@ -1216,6 +1335,68 @@ See \fITCRYPT\fR section for more info.
Use a custom Personal Iteration Multiplier (PIM) for VeraCrypt device.
See \fITCRYPT\fR section for more info.
.TP
.B "\-\-serialize\-memory\-hard\-pbkdf"
Use a global lock to serialize unlocking of keyslots using memory-hard PBKDF.
\fBNOTE:\fR This is an (ugly) workaround for a specific situation when multiple
devices are activated in parallel and the system, instead of reporting out of memory,
unconditionally starts killing processes using the out-of-memory killer.
\fBDO NOT USE\fR this switch unless you are implementing a boot environment
with parallel device activation!
.TP
.B "\-\-encrypt"
Initialize (and run) device encryption (\fIreencrypt\fR action parameter)
.TP
.B "\-\-decrypt"
Initialize (and run) device decryption (\fIreencrypt\fR action parameter)
.TP
.B "\-\-init\-only"
Initialize reencryption (any variant) operation in LUKS2 metadata only and exit. If any
reencrypt operation is already initialized in metadata, the command with \-\-init\-only
parameter fails.
.TP
.B "\-\-resume\-only"
Resume a reencryption (any variant) operation already described in LUKS2 metadata. If no
reencrypt operation is initialized, the command with the \-\-resume\-only
parameter fails. Useful for resuming a reencrypt operation without accidentally triggering
a new reencryption operation.
.TP
.B "\-\-resilience <mode>"
Reencryption resilience mode can be one of \fIchecksum\fR, \fIjournal\fR or \fInone\fR.
\fIchecksum\fR: the default mode, where individual checksums of ciphertext hotzone sectors are stored,
so the recovery process can detect which sectors were already reencrypted. It requires that the device sector write is atomic.
\fIjournal\fR: the hotzone is journaled in the binary area (so the data are written twice).
\fInone\fR: performance mode. There is no protection, and the only safe way to interrupt
the reencryption is the same as with the old offline reencryption utility (Ctrl+C).
The option is ignored if reencryption in datashift mode is in progress.
.TP
.B "\-\-resilience-hash <hash>"
The hash algorithm used with "\-\-resilience checksum" only. The default hash is sha256. With other resilience modes, the hash parameter is ignored.
.TP
.B "\-\-hotzone-size <size>"
This option can be used to set an upper limit on the size of the reencryption area (hotzone).
The <size> can be specified with a unit suffix (for example 50M). Note that the actual hotzone
size may be less than the specified <size> due to other limitations (free space in the keyslots
area or available memory).
.TP
.B "\-\-reduce\-device\-size <size>"
Initialize LUKS2 reencryption with data device size reduction (currently only the \-\-encrypt variant is supported).
The last <size> sectors of <device> will be used to properly initialize device reencryption, which means any
data in the last <size> sectors will be lost.
This can be useful if you added some space to the underlying partition or logical volume (so the last <size> sectors contain no data).
The recommended minimal size is twice the default LUKS2 header size (\-\-reduce\-device\-size 32M) for the \-\-encrypt use case. Be sure to
have enough free space (at least the \-\-reduce\-device\-size value) at the end of <device>.
\fBWARNING:\fR This is a destructive operation and cannot be reverted. Use with extreme care: accidentally overwritten filesystems are usually unrecoverable.
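The recommended minimum above follows from the default LUKS2 header size; a minimal
sketch of the arithmetic, assuming the 16 MiB default header size (a custom
\-\-luks2\-metadata\-size or \-\-luks2\-keyslots\-size changes it):

```shell
# Recommended minimal --reduce-device-size for the --encrypt case:
# twice the default LUKS2 header size (assumed 16 MiB here), shown in
# bytes and in 512-byte sectors.
default_header_bytes=$((16 * 1024 * 1024))
reduce_bytes=$((2 * default_header_bytes))
reduce_sectors=$((reduce_bytes / 512))

echo "reduce=${reduce_bytes} bytes (${reduce_sectors} sectors)"
# prints: reduce=33554432 bytes (65536 sectors), i.e. the 32M above
```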
.TP
.B "\-\-version"
Show the program version.
.TP
@@ -1446,11 +1627,11 @@ Copyright \(co 2004 Jana Saout
.br
Copyright \(co 2004-2006 Clemens Fruhwirth
.br
Copyright \(co 2012-2014 Arno Wagner
.br
Copyright \(co 2009-2019 Red Hat, Inc.
.br
Copyright \(co 2009-2019 Milan Broz
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


@@ -1,4 +1,4 @@
.TH INTEGRITYSETUP "8" "January 2019" "integritysetup" "Maintenance Commands"
.SH NAME
integritysetup - manage dm-integrity (block level integrity) volumes
.SH SYNOPSIS
@@ -19,9 +19,9 @@ Integritysetup supports these operations:
.IP
Formats <device> (calculates the space for the dm-integrity superblock and wipes the device).
\fB<options>\fR can be [\-\-data\-device, \-\-batch\-mode, \-\-no\-wipe, \-\-journal\-size,
\-\-interleave\-sectors, \-\-tag\-size, \-\-integrity, \-\-integrity\-key\-size,
\-\-integrity\-key\-file, \-\-sector\-size, \-\-progress-frequency]
.PP
\fIopen\fR <device> <name>
@@ -30,9 +30,10 @@ Formats <device> (calculates space and dm-integrity superblock and wipes the dev
.IP
Open a mapping with <name> backed by device <device>.
\fB<options>\fR can be [\-\-data\-device, \-\-batch\-mode, \-\-journal\-watermark,
\-\-journal\-commit\-time, \-\-buffer\-sectors, \-\-integrity, \-\-integrity\-key\-size,
\-\-integrity\-key\-file, \-\-integrity\-no\-journal, \-\-integrity\-recalculate,
\-\-integrity\-recovery\-mode]
.PP
\fIclose\fR <name>
@@ -77,6 +78,12 @@ Size of the journal.
.B "\-\-interleave\-sectors SECTORS"
The number of interleaved sectors.
.TP
.B "\-\-integrity\-recalculate"
Automatically recalculate integrity tags in the kernel on activation.
The device can be used during automatic integrity recalculation but becomes fully
integrity protected only after the background operation is finished.
This option is available since Linux kernel version 4.19.
.TP
.B "\-\-journal\-watermark PERCENT"
Journal watermark in percent. When the size of the journal exceeds this watermark,
a journal flush will be started.
@@ -91,6 +98,10 @@ Size of the integrity tag per-sector (here the integrity function will store aut
\fBNOTE:\fR The size can be smaller than the output size of the hash function; in that case,
only part of the hash will be stored.
.TP
.B "\-\-data\-device"
Specify a separate data device that contains existing data. The <device> will then contain
the calculated integrity tags and the journal for this data device.
.TP
.B "\-\-sector\-size, \-s BYTES"
Sector size (power of two: 512, 1024, 2048, 4096).
.TP
@@ -114,6 +125,21 @@ The file with the integrity key.
.TP
.B "\-\-integrity\-no\-journal, \-D"
Disable journal for integrity device.
.TP
.B "\-\-integrity\-bitmap\-mode, \-B"
Use alternate bitmap mode (available since Linux kernel 5.2) where dm-integrity uses bitmap
instead of a journal. If a bit in the bitmap is 1, the corresponding region's data and integrity tags
are not synchronized - if the machine crashes, the unsynchronized regions will be recalculated.
The bitmap mode is faster than the journal mode because the data do not have to be written
twice, but it is also less reliable: if data corruption happens when the machine crashes,
it may not be detected.
.TP
.B "\-\-bitmap\-sectors\-per\-bit SECTORS"
Number of 512-byte sectors per bitmap bit; the value must be a power of two.
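The power-of-two requirement can be verified with the usual bitwise test; a minimal
sketch, where \fIsectors_per_bit\fR is a hypothetical example value, not a default:

```shell
# A positive integer n is a power of two iff (n & (n - 1)) == 0.
# Example value for --bitmap-sectors-per-bit; not a dm-integrity default.
sectors_per_bit=32768

if [ "$sectors_per_bit" -gt 0 ] && \
   [ $((sectors_per_bit & (sectors_per_bit - 1))) -eq 0 ]; then
    pow2=yes
else
    pow2=no
fi
echo "power-of-two: $pow2"
```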
.TP
.B "\-\-bitmap\-flush\-time MS"
Bitmap flush time in milliseconds.
.TP
\fBWARNING:\fR
In case of a crash, it is possible that the data and integrity tag do not match
@@ -197,9 +223,9 @@ Please attach the output of the failed command with the
The integritysetup tool is written by Milan Broz <gmazyland@gmail.com>
and is part of the cryptsetup project.
.SH COPYRIGHT
Copyright \(co 2016-2019 Red Hat, Inc.
.br
Copyright \(co 2016-2019 Milan Broz
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
