Compare commits

...

41 Commits

Author SHA1 Message Date
Milan Broz
b4e9252270 Redirect lib API docs. 2018-12-19 11:57:40 +01:00
Milan Broz
21c4d1507a Update README.md. 2018-12-03 10:36:39 +01:00
Milan Broz
3e763e1cd2 Update LUKS2 docs. 2018-12-03 09:34:35 +01:00
Milan Broz
060c807bc8 Add 2.0.6 release notes. 2018-12-03 09:34:27 +01:00
Milan Broz
0f82f90e14 Update po files. 2018-12-02 19:00:56 +01:00
Ondrej Kozina
66b6808cb8 Add validation tests for non-default metadata. 2018-12-02 18:58:36 +01:00
Ondrej Kozina
99b3a69e52 Update LUKS2 test images.
- update test images for validation fixes
  from previous commits

- erase leftover json data between the secondary
  header and the keyslot areas.
2018-11-28 17:05:34 +01:00
Ondrej Kozina
1a940a49cb Remove redundant check in keyslot areas validation.
Due to the previous fix it is no longer necessary to sum
all keyslot area lengths and check whether the result
is lower than keyslots_size.

(We already check the lower limit, the upper limit and
overlapping areas.)
2018-11-28 17:05:02 +01:00
Ondrej Kozina
645c8b6026 Fix keyslot areas validation.
This commit fixes two problems:

a) Replace the hardcoded 16 KiB metadata variant as the lower limit
   for keyslot area offsets with the current value set in the config
   section (already validated).

b) Replace the segment offset (if not zero) as the upper limit for
   keyslot area offset + size with the value calculated as
   2 * metadata size + keyslots_size, as acquired from the
   config section (also already validated). Both bounds are sketched below.
2018-11-28 17:03:34 +01:00
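The two bounds from the commit above reduce to simple comparisons against the (already validated) config values. A minimal standalone sketch, not the project's code (names are illustrative; the real logic is in the hdr_validate_config and validate_intervals hunks further down):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: area_offset/area_length describe one keyslot area,
     * metadata_size and keyslots_size come from the validated config section. */
    static bool keyslot_area_in_bounds(uint64_t area_offset, uint64_t area_length,
                                       uint64_t metadata_size, uint64_t keyslots_size)
    {
        /* a) lower limit: areas must start behind both metadata copies */
        if (area_offset < 2 * metadata_size)
            return false;

        /* b) upper limit: areas must end within the binary keyslots area,
         *    i.e. before 2 * metadata_size + keyslots_size */
        if (area_offset + area_length > 2 * metadata_size + keyslots_size)
            return false;

        return true;
    }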
Ondrej Kozina
00fc4beac1 Reshuffle config and keyslots areas validation code.
Swap config and keyslot areas validation code order.

Also split the original keyslots_size validation code between
the config and keyslot areas routines to prepare for further
changes in the code later. This commit has no functional
impact.
2018-11-28 17:00:55 +01:00
Ondrej Kozina
b220bef821 Do not validate keyslot areas so frantically.
Keyslot areas were validated from each keyslot
validation routine and then one more time
in the general header validation routine. The call
from the header validation routine is sufficient.
2018-11-28 16:55:20 +01:00
Ondrej Kozina
d1cfdc7fd7 Test cryptsetup can handle all LUKS2 metadata variants.
The following tests are run for each metadata variant:

add keyslot
test passphrase
unlock device
store token in metadata
read token from metadata
2018-11-27 22:35:00 +01:00
Ondrej Kozina
ccfbd302bd Add LUKS2 metadata test images.
The test archive contains images with all supported
LUKS2 metadata size configurations. Every image has
one active keyslot 0 that can be unlocked with the
following passphrase (ignore the quotation
marks): "Qx3qn46vq0v"
2018-11-27 22:34:51 +01:00
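For reference, a minimal libcryptsetup snippet that verifies this passphrase against one of the extracted images without activating anything (the image file name is hypothetical; passing NULL as the device name makes crypt_activate_by_passphrase() only test the passphrase):

    #include <libcryptsetup.h>
    #include <string.h>

    int main(void)
    {
        struct crypt_device *cd;
        const char *pass = "Qx3qn46vq0v";   /* passphrase from the commit message */
        int r;

        /* hypothetical file name of one extracted test image */
        if (crypt_init(&cd, "luks2-mda-image.img") < 0)
            return 1;

        r = crypt_load(cd, CRYPT_LUKS2, NULL);
        if (r >= 0)   /* NULL name: check the passphrase only, do not activate */
            r = crypt_activate_by_passphrase(cd, NULL, CRYPT_ANY_SLOT,
                                             pass, strlen(pass), 0);
        crypt_free(cd);
        return r < 0 ? 1 : 0;
    }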
Ondrej Kozina
0dda2b0e33 Add validation tests for non-default json area size.
Run both primary and secondary header validation tests
with a non-default LUKS2 json area size.

Check validation rejects config.keyslots_size with zero value.

Check validation rejects mismatching values for metadata size
set in binary header and in config json section.
2018-11-27 22:34:35 +01:00
Ondrej Kozina
4e70b9ce16 Extend baseline LUKS2 validation image to 16 MiB. 2018-11-27 22:34:10 +01:00
Ondrej Kozina
7d8a62b7d5 Move some validation tests into a new section. 2018-11-27 22:33:56 +01:00
Ondrej Kozina
b383e11372 Drop needless size restriction on keyslots size. 2018-11-27 11:54:35 +01:00
Milan Broz
a6e9399f7b Update POTFILES. 2018-11-25 16:03:40 +01:00
Milan Broz
e4fd2fafed Fix signed/unsigned comparison warning. 2018-11-25 15:12:22 +01:00
Milan Broz
e31b20d8d8 Set 2.0.6 version. 2018-11-25 15:04:24 +01:00
Milan Broz
838c91fef3 Update po file. 2018-11-25 15:03:23 +01:00
Milan Broz
be8c39749f Fix setting of integrity persistent flags (no-journal).
We have to query and set flags for the underlying dm-integrity device as well,
otherwise activation flags applied there are ignored.
2018-11-25 15:01:29 +01:00
Milan Broz
cec5f8a8bf Check for algorithms string lengths in crypt_cipher_check().
The kernel check will fail anyway if the string is truncated, but this
makes some compilers happier.
2018-11-25 15:01:14 +01:00
Milan Broz
f6dde0f39e Fix LUKS2_hdr_validate function definition. 2018-11-25 15:00:58 +01:00
Milan Broz
2f265f81e7 Properly handle interrupt in cryptsetup-reencrypt and remove log.
Fixes #419.
2018-11-25 15:00:43 +01:00
Milan Broz
9da865e685 Fix sector-size tests for older kernels. 2018-11-25 15:00:28 +01:00
Milan Broz
8d4e794d39 Check for device size and sector size misalignment.
The kernel prevents activation of a device that is not aligned
to the requested sector size.

Add an early check to the plain and LUKS2 formats to disallow
creation of such a device.
(Activation would fail in the kernel later anyway.)

Fixes #390.
2018-11-25 15:00:12 +01:00
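The added check boils down to simple modular arithmetic on the usable payload size. A standalone sketch of the idea, not the project's code (illustrative names; the real checks are in the _crypt_format_plain and _crypt_format_luks2 hunks further down):

    #include <stdint.h>
    #include <stdbool.h>

    #define SECTOR_SIZE 512   /* classic 512-byte device sector */

    /* Illustrative only: the payload (device size minus data offset) must be
     * a multiple of the requested encryption sector size, otherwise the
     * kernel refuses to activate the mapping. */
    static bool sector_size_compatible(uint64_t dev_size_bytes,
                                       uint64_t data_offset_sectors,
                                       uint32_t sector_size)
    {
        uint64_t payload = dev_size_bytes - data_offset_sectors * SECTOR_SIZE;
        return (payload % sector_size) == 0;
    }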
Milan Broz
018486cea0 Add support for Adiantum cipher mode. 2018-11-25 14:57:25 +01:00
Milan Broz
96a3dc0a66 Try to check if AEAD cipher is available through kernel crypto API. 2018-11-25 14:42:50 +01:00
Milan Broz
efeada291a Fix unsigned return value. 2018-11-25 14:29:09 +01:00
Milan Broz
fb6935385c Properly propagate error from AF diffuse function. 2018-11-25 14:28:31 +01:00
Milan Broz
599748bc9f Check hash value in pbkdf setting early. 2018-11-25 14:27:59 +01:00
Milan Broz
d0d507e325 Fallback to default keyslot algorithm if backend does not know the cipher. 2018-11-25 14:27:37 +01:00
Ondrej Kozina
7d8f64fe21 Remove unused crypt_dm_active_device member. 2018-11-25 14:27:11 +01:00
Ondrej Kozina
a52dbc43d3 Secondary header offset must match header size. 2018-11-25 14:26:53 +01:00
Ondrej Kozina
7df458b74e Check json size matches value from binary LUKS2 header.
The maximum json area length parameter is stored twice: in
the LUKS2 binary header and in the json metadata. Those two values
must match.
2018-11-25 14:26:38 +01:00
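The resulting validation is just a comparison of the two stored values plus a check that the serialized json actually fits into the declared area. A standalone sketch, not the project's code (illustrative names; the real code is in the hdr_validate_json_size hunk further down):

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    /* Illustrative only: hdr_json_size is derived from the binary header
     * (hdr_size minus the binary header length), json_area_size comes from
     * the config.json_size field, json is the serialized metadata. */
    static bool json_size_valid(uint64_t hdr_json_size, uint64_t json_area_size,
                                const char *json)
    {
        if (hdr_json_size != json_area_size)
            return false;                       /* stored values must match */

        return strlen(json) <= json_area_size;  /* json must fit in the area */
    }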
Ondrej Kozina
bcd7527938 Change max json area length type to unsigned.
We use uint64_t for the max json length everywhere else,
including the config.json_size field in LUKS2 metadata.

Also rename some misleading parameters.
2018-11-25 14:26:23 +01:00
Ondrej Kozina
e7141383e3 Enable all supported metadata sizes in LUKS2 validation code.
The LUKS2 specification allows various sizes of LUKS2 metadata.
A single metadata instance is composed of the LUKS2 binary header
(4096 bytes) and the immediately following json area. The resulting
assembled metadata size has to be one of the following values,
all in KiB (see the sketch of the check below):

16, 32, 64, 128, 256, 512, 1024, 2048 or 4096
2018-11-25 14:25:59 +01:00
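With the 4096-byte binary header included, the validation is a plain whitelist of the sizes listed in the commit above; compare the LUKS2_check_metadata_area_size hunk further down. A standalone sketch, not the project's code:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: accepted total metadata sizes (binary header plus
     * json area) are 16, 32, 64, 128, 256, 512, 1024, 2048 and 4096 KiB. */
    static bool metadata_size_supported(uint64_t metadata_size)
    {
        static const uint64_t valid[] = {
            0x004000, 0x008000, 0x010000, 0x020000, 0x040000,
            0x080000, 0x100000, 0x200000, 0x400000
        };

        for (size_t i = 0; i < sizeof(valid) / sizeof(valid[0]); i++)
            if (metadata_size == valid[i])
                return true;
        return false;
    }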
Milan Broz
cd968551d6 Add workaround for benchmarking Adiantum cipher. 2018-11-25 14:24:37 +01:00
Milan Broz
6a3e585141 Fix ext4 image to work without CONFIG_LBDAF. 2018-11-25 14:24:02 +01:00
Milan Broz
6f48bdf9e5 Add branch v2_0_x to Travis. 2018-11-19 13:26:41 +01:00
64 changed files with 5653 additions and 2913 deletions

View File

@@ -14,6 +14,7 @@ branches:
only:
- master
- wip-luks2
- v2_0_x
before_install:
- uname -a

View File

@@ -41,13 +41,16 @@ Download
--------
All release tarballs and release notes are hosted on [kernel.org](https://www.kernel.org/pub/linux/utils/cryptsetup/).
**The latest cryptsetup version is 2.0.5**
* [cryptsetup-2.0.5.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.xz)
* Signature [cryptsetup-2.0.5.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.sign)
**The latest cryptsetup version is 2.0.6**
* [cryptsetup-2.0.6.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.xz)
* Signature [cryptsetup-2.0.6.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.sign)
_(You need to decompress file first to check signature.)_
* [Cryptsetup 2.0.5 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.5-ReleaseNotes).
* [Cryptsetup 2.0.6 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.6-ReleaseNotes).
Previous versions
* [Version 2.0.5](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.5-ReleaseNotes).
* [Version 2.0.4](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.4-ReleaseNotes).
@@ -87,7 +90,7 @@ Source and API docs
For development version code, please refer to [source](https://gitlab.com/cryptsetup/cryptsetup/tree/master) page,
mirror on [kernel.org](https://git.kernel.org/cgit/utils/cryptsetup/cryptsetup.git/) or [GitHub](https://github.com/mbroz/cryptsetup).
For libcryptsetup documentation see [libcryptsetup API](https://gitlab.com/cryptsetup/cryptsetup/wikis/API/index.html) page.
For libcryptsetup documentation see [libcryptsetup API](https://mbroz.fedorapeople.org/libcryptsetup_API/) page.
The libcryptsetup API/ABI changes are tracked in [compatibility report](https://abi-laboratory.pro/tracker/timeline/cryptsetup/).

View File

@@ -1,5 +1,5 @@
AC_PREREQ([2.67])
AC_INIT([cryptsetup],[2.0.5])
AC_INIT([cryptsetup],[2.0.6])
dnl library version from <major>.<minor>.<release>[-<suffix>]
LIBCRYPTSETUP_VERSION=$(echo $PACKAGE_VERSION | cut -f1 -d-)

Binary file not shown.

97
docs/v2.0.6-ReleaseNotes Normal file
View File

@@ -0,0 +1,97 @@
Cryptsetup 2.0.6 Release Notes
==============================
Stable bug-fix release.
All users of cryptsetup 2.0.x should upgrade to this version.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS format (referenced as LUKS1) will be fully supported
forever, as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
Please do not use LUKS2 without a properly configured backup, or in
production systems that need to be compatible with older systems.
Changes since version 2.0.5
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Fix support of larger metadata areas in LUKS2 header.
This release properly supports all specified metadata areas, as documented
in the LUKS2 format description (see docs/on-disk-format-luks2.pdf in the archive).
Currently, only the default metadata area size is used (in format or convert).
Later cryptsetup versions will allow increasing this metadata area size.
* If AEAD (authenticated encryption) is used, cryptsetup now tries to check
if the requested AEAD algorithm with the specified key size is available
in the kernel crypto API.
This change avoids formatting a device that cannot be activated later.
For this function, the kernel must be compiled with the
CONFIG_CRYPTO_USER_API_AEAD option enabled.
Note that kernel user crypto API options (CONFIG_CRYPTO_USER_API and
CONFIG_CRYPTO_USER_API_SKCIPHER) are already mandatory for LUKS2.
* Fix setting of integrity no-journal flag.
Now you can store this flag in the metadata using the --persistent option.
* Fix cryptsetup-reencrypt to not keep temporary reencryption headers
if interrupted during initial password prompt.
* Add an early check to plain and LUKS2 formats to disallow device format
if the device size is not aligned to the requested sector size.
Previously it was possible, and the kernel later rejected activation of
such a device.
* Check the availability of hash algorithms for PBKDF early.
Previously the LUKS2 format allowed a non-existent hash algorithm, creating
an invalid keyslot and preventing the device from being activated.
* Allow Adiantum cipher construction (a non-authenticated length-preserving
fast encryption scheme), so it can be used both for data encryption and
keyslot encryption in LUKS1/2 devices.
For benchmark, use:
# cryptsetup benchmark -c xchacha12,aes-adiantum
# cryptsetup benchmark -c xchacha20,aes-adiantum
For LUKS format:
# cryptsetup luksFormat -c xchacha20,aes-adiantum-plain64 -s 256 <device>
The support for Adiantum will be merged in Linux kernel 4.21.
For more info see the paper https://eprint.iacr.org/2018/720.
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Authenticated encryption should use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
AEGIS and MORUS are already available in kernel 4.18.
For more info about LUKS2 authenticated encryption, please see our paper
https://arxiv.org/abs/1807.00309
Please note that authenticated encryption is still an experimental feature
and can have performance problems on high-speed devices and devices
with larger IO blocks (like RAID).
* Authenticated encryption does not set encryption for the dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after journal replay. (That corruption will be
detected, though.)
* There are examples of user-defined tokens in the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be REMOVED in version 2.1
in favor of python bindings to the libblockdev library.
See https://github.com/storaged-project/libblockdev/releases, which
already supports LUKS2 and VeraCrypt device handling through libcryptsetup.

View File

@@ -26,53 +26,58 @@
struct cipher_alg {
const char *name;
const char *mode;
int blocksize;
bool wrapped_key;
};
/* FIXME: Getting block size should be dynamic from cipher backend. */
static const struct cipher_alg cipher_algs[] = {
{ "cipher_null", 16, false },
{ "aes", 16, false },
{ "serpent", 16, false },
{ "twofish", 16, false },
{ "anubis", 16, false },
{ "blowfish", 8, false },
{ "camellia", 16, false },
{ "cast5", 8, false },
{ "cast6", 16, false },
{ "des", 8, false },
{ "des3_ede", 8, false },
{ "khazad", 8, false },
{ "seed", 16, false },
{ "tea", 8, false },
{ "xtea", 8, false },
{ "paes", 16, true }, /* protected AES, s390 wrapped key scheme */
{ NULL, 0, false }
{ "cipher_null", NULL, 16, false },
{ "aes", NULL, 16, false },
{ "serpent", NULL, 16, false },
{ "twofish", NULL, 16, false },
{ "anubis", NULL, 16, false },
{ "blowfish", NULL, 8, false },
{ "camellia", NULL, 16, false },
{ "cast5", NULL, 8, false },
{ "cast6", NULL, 16, false },
{ "des", NULL, 8, false },
{ "des3_ede", NULL, 8, false },
{ "khazad", NULL, 8, false },
{ "seed", NULL, 16, false },
{ "tea", NULL, 8, false },
{ "xtea", NULL, 8, false },
{ "paes", NULL, 16, true }, /* protected AES, s390 wrapped key scheme */
{ "xchacha12,aes", "adiantum", 32, false },
{ "xchacha20,aes", "adiantum", 32, false },
{ NULL, NULL, 0, false }
};
static const struct cipher_alg *_get_alg(const char *name)
static const struct cipher_alg *_get_alg(const char *name, const char *mode)
{
int i = 0;
while (name && cipher_algs[i].name) {
if (!strcasecmp(name, cipher_algs[i].name))
return &cipher_algs[i];
if (!mode || !cipher_algs[i].mode ||
!strncasecmp(mode, cipher_algs[i].mode, strlen(cipher_algs[i].mode)))
return &cipher_algs[i];
i++;
}
return NULL;
}
int crypt_cipher_blocksize(const char *name)
int crypt_cipher_ivsize(const char *name, const char *mode)
{
const struct cipher_alg *ca = _get_alg(name);
const struct cipher_alg *ca = _get_alg(name, mode);
return ca ? ca->blocksize : -EINVAL;
}
int crypt_cipher_wrapped_key(const char *name)
int crypt_cipher_wrapped_key(const char *name, const char *mode)
{
const struct cipher_alg *ca = _get_alg(name);
const struct cipher_alg *ca = _get_alg(name, mode);
return ca ? (int)ca->wrapped_key : 0;
}

View File

@@ -99,8 +99,8 @@ int argon2(const char *type, const char *password, size_t password_length,
uint32_t crypt_crc32(uint32_t seed, const unsigned char *buf, size_t len);
/* ciphers */
int crypt_cipher_blocksize(const char *name);
int crypt_cipher_wrapped_key(const char *name);
int crypt_cipher_ivsize(const char *name, const char *mode);
int crypt_cipher_wrapped_key(const char *name, const char *mode);
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length);
void crypt_cipher_destroy(struct crypt_cipher *ctx);
@@ -111,6 +111,10 @@ int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
/* Check availability of a cipher */
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length);
/* storage encryption wrappers */
int crypt_storage_init(struct crypt_storage **ctx, uint64_t sector_start,
const char *cipher, const char *cipher_mode,

View File

@@ -22,6 +22,7 @@
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
@@ -51,22 +52,16 @@ struct crypt_cipher {
* ENOTSUP - AF_ALG family not available
* (but cannot check specifically for skcipher API)
*/
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
static int _crypt_cipher_init(struct crypt_cipher **ctx,
const void *key, size_t key_length,
struct sockaddr_alg *sa)
{
struct crypt_cipher *h;
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "skcipher",
};
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
snprintf((char *)sa.salg_name, sizeof(sa.salg_name),
"%s(%s)", mode, name);
h->opfd = -1;
h->tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
if (h->tfmfd < 0) {
@@ -74,14 +69,11 @@ int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
return -ENOTSUP;
}
if (bind(h->tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
if (bind(h->tfmfd, (struct sockaddr *)sa, sizeof(*sa)) < 0) {
crypt_cipher_destroy(h);
return -ENOENT;
}
if (!strcmp(name, "cipher_null"))
key_length = 0;
if (setsockopt(h->tfmfd, SOL_ALG, ALG_SET_KEY, key, key_length) < 0) {
crypt_cipher_destroy(h);
return -EINVAL;
@@ -97,6 +89,22 @@ int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
return 0;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "skcipher",
};
if (!strcmp(name, "cipher_null"))
key_length = 0;
snprintf((char *)sa.salg_name, sizeof(sa.salg_name), "%s(%s)", mode, name);
return _crypt_cipher_init(ctx, key, key_length, &sa);
}
/* The in/out should be aligned to page boundary */
static int crypt_cipher_crypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
@@ -191,6 +199,68 @@ void crypt_cipher_destroy(struct crypt_cipher *ctx)
free(ctx);
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
struct crypt_cipher *c = NULL;
char mode_name[64], tmp_salg_name[180], *real_mode = NULL, *cipher_iv = NULL, *key;
const char *salg_type;
bool aead;
int r;
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
};
aead = integrity && strcmp(integrity, "none");
/* Remove IV if present */
if (mode) {
strncpy(mode_name, mode, sizeof(mode_name));
mode_name[sizeof(mode_name) - 1] = 0;
cipher_iv = strchr(mode_name, '-');
if (cipher_iv) {
*cipher_iv = '\0';
real_mode = mode_name;
}
}
salg_type = aead ? "aead" : "skcipher";
snprintf((char *)sa.salg_type, sizeof(sa.salg_type), "%s", salg_type);
memset(tmp_salg_name, 0, sizeof(tmp_salg_name));
/* FIXME: this is duplicating a part of devmapper backend */
if (aead && !strcmp(integrity, "poly1305"))
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "rfc7539(%s,%s)", name, integrity);
else if (!real_mode)
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "%s", name);
else if (aead && !strcmp(real_mode, "ccm"))
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "rfc4309(%s(%s))", real_mode, name);
else
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "%s(%s)", real_mode, name);
if (r <= 0 || r > (int)(sizeof(sa.salg_name) - 1))
return -EINVAL;
memcpy(sa.salg_name, tmp_salg_name, sizeof(sa.salg_name));
key = malloc(key_length);
if (!key)
return -ENOMEM;
r = crypt_backend_rng(key, key_length, CRYPT_RND_NORMAL, 0);
if (r < 0) {
free (key);
return r;
}
r = _crypt_cipher_init(&c, key, key_length, &sa);
if (c)
crypt_cipher_destroy(c);
free(key);
return r;
}
#else /* ENABLE_AF_ALG */
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *buffer, size_t length)
@@ -215,4 +285,9 @@ int crypt_cipher_decrypt(struct crypt_cipher *ctx,
{
return -EINVAL;
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
return 0;
}
#endif

View File

@@ -60,7 +60,7 @@ static int crypt_sector_iv_init(struct crypt_sector_iv *ctx,
{
memset(ctx, 0, sizeof(*ctx));
ctx->iv_size = crypt_cipher_blocksize(cipher_name);
ctx->iv_size = crypt_cipher_ivsize(cipher_name, mode_name);
if (ctx->iv_size < 8)
return -ENOENT;

View File

@@ -48,7 +48,7 @@ int crypt_pbkdf_get_limits(const char *kdf, struct crypt_pbkdf_limits *limits)
limits->min_parallel = 0; /* N/A */
limits->max_parallel = 0; /* N/A */
return 0;
} else if (!strncmp(kdf, "argon2", 6)) {
} else if (!strcmp(kdf, "argon2i") || !strcmp(kdf, "argon2id")) {
limits->min_iterations = 4;
limits->max_iterations = UINT32_MAX;
limits->min_memory = 32;

View File

@@ -64,31 +64,34 @@ out:
/* diffuse: Information spreading over the whole dataset with
* the help of hash function.
*/
static int diffuse(char *src, char *dst, size_t size, const char *hash_name)
{
int hash_size = crypt_hash_size(hash_name);
int r, hash_size = crypt_hash_size(hash_name);
unsigned int digest_size;
unsigned int i, blocks, padding;
if (hash_size <= 0)
return 1;
return -EINVAL;
digest_size = hash_size;
blocks = size / digest_size;
padding = size % digest_size;
for (i = 0; i < blocks; i++)
if(hash_buf(src + digest_size * i,
for (i = 0; i < blocks; i++) {
r = hash_buf(src + digest_size * i,
dst + digest_size * i,
i, (size_t)digest_size, hash_name))
return 1;
i, (size_t)digest_size, hash_name);
if (r < 0)
return r;
}
if(padding)
if(hash_buf(src + digest_size * i,
if (padding) {
r = hash_buf(src + digest_size * i,
dst + digest_size * i,
i, (size_t)padding, hash_name))
return 1;
i, (size_t)padding, hash_name);
if (r < 0)
return r;
}
return 0;
}
@@ -104,17 +107,19 @@ int AF_split(const char *src, char *dst, size_t blocksize,
{
unsigned int i;
char *bufblock;
int r = -EINVAL;
int r;
if((bufblock = calloc(blocksize, 1)) == NULL) return -ENOMEM;
/* process everything except the last block */
for(i=0; i<blocknumbers-1; i++) {
r = crypt_random_get(NULL, dst+(blocksize*i), blocksize, CRYPT_RND_NORMAL);
if(r < 0) goto out;
if (r < 0)
goto out;
XORblock(dst+(blocksize*i),bufblock,bufblock,blocksize);
if(diffuse(bufblock, bufblock, blocksize, hash))
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
goto out;
}
/* the last block is computed */
@@ -130,7 +135,7 @@ int AF_merge(const char *src, char *dst, size_t blocksize,
{
unsigned int i;
char *bufblock;
int r = -EINVAL;
int r;
if((bufblock = calloc(blocksize, 1)) == NULL)
return -ENOMEM;
@@ -138,7 +143,8 @@ int AF_merge(const char *src, char *dst, size_t blocksize,
memset(bufblock,0,blocksize);
for(i=0; i<blocknumbers-1; i++) {
XORblock(src+(blocksize*i),bufblock,bufblock,blocksize);
if(diffuse(bufblock, bufblock, blocksize, hash))
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
goto out;
}
XORblock(src + blocksize * i, bufblock, dst, blocksize);

View File

@@ -331,6 +331,9 @@ int LUKS2_generate_hdr(
unsigned int alignOffset,
int detached_metadata_device);
int LUKS2_check_metadata_area_size(uint64_t metadata_size);
int LUKS2_check_keyslots_area_size(uint64_t keyslots_size);
int LUKS2_wipe_header_areas(struct crypt_device *cd,
struct luks2_hdr *hdr);

View File

@@ -26,12 +26,13 @@
/*
* Helper functions
*/
json_object *parse_json_len(const char *json_area, int length, int *end_offset)
json_object *parse_json_len(const char *json_area, uint64_t max_length, int *json_len)
{
json_object *jobj;
struct json_tokener *jtok;
if (!json_area || length <= 0)
/* INT32_MAX is internal (json-c) json_tokener_parse_ex() limit */
if (!json_area || max_length > INT32_MAX)
return NULL;
jtok = json_tokener_new();
@@ -40,13 +41,13 @@ json_object *parse_json_len(const char *json_area, int length, int *end_offset)
return NULL;
}
jobj = json_tokener_parse_ex(jtok, json_area, length);
jobj = json_tokener_parse_ex(jtok, json_area, max_length);
if (!jobj)
log_dbg("ERROR: Failed to parse json data (%d): %s",
json_tokener_get_error(jtok),
json_tokener_error_desc(json_tokener_get_error(jtok)));
else
*end_offset = jtok->char_offset;
*json_len = jtok->char_offset;
json_tokener_free(jtok);
@@ -204,6 +205,12 @@ static int hdr_disk_sanity_check_pre(struct luks2_hdr_disk *hdr,
return -EINVAL;
}
if (secondary && (offset != be64_to_cpu(hdr->hdr_size))) {
log_dbg("LUKS2 offset 0x%04x in secondary header doesn't match size 0x%04x.",
(unsigned)offset, (unsigned)be64_to_cpu(hdr->hdr_size));
return -EINVAL;
}
/* FIXME: sanity check checksum alg. */
log_dbg("LUKS2 header version %u of size %u bytes, checksum %s.",
@@ -388,11 +395,6 @@ int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct
return -EINVAL;
}
if (hdr->hdr_size != LUKS2_HDR_16K_LEN) {
log_dbg("Unsupported LUKS2 header size (%zu).", hdr->hdr_size);
return -EINVAL;
}
r = LUKS2_check_device_size(cd, crypt_metadata_device(cd), LUKS2_hdr_and_areas_size(hdr->jobj), 1);
if (r)
return r;
@@ -449,7 +451,7 @@ int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct
return r;
}
static int validate_json_area(const char *json_area, int start, int length)
static int validate_json_area(const char *json_area, uint64_t json_len, uint64_t max_length)
{
char c;
@@ -459,7 +461,7 @@ static int validate_json_area(const char *json_area, int start, int length)
return -EINVAL;
}
if (start >= length) {
if (json_len >= max_length) {
log_dbg("ERROR: Missing trailing null byte beyond parsed json data string.");
return -EINVAL;
}
@@ -467,22 +469,22 @@ static int validate_json_area(const char *json_area, int start, int length)
/*
* TODO:
* validate there are legal json format characters between
* 'json_area' and 'json_area + start'
* 'json_area' and 'json_area + json_len'
*/
do {
c = *(json_area + start);
c = *(json_area + json_len);
if (c != '\0') {
log_dbg("ERROR: Forbidden ascii code 0x%02hhx found beyond json data string at offset %d.",
c, start);
log_dbg("ERROR: Forbidden ascii code 0x%02hhx found beyond json data string at offset %" PRIu64,
c, json_len);
return -EINVAL;
}
} while (++start < length);
} while (++json_len < max_length);
return 0;
}
static int validate_luks2_json_object(json_object *jobj_hdr)
static int validate_luks2_json_object(json_object *jobj_hdr, uint64_t length)
{
int r;
@@ -493,14 +495,14 @@ static int validate_luks2_json_object(json_object *jobj_hdr)
return r;
}
r = LUKS2_hdr_validate(jobj_hdr);
r = LUKS2_hdr_validate(jobj_hdr, length);
if (r) {
log_dbg("Repairing JSON metadata.");
/* try to correct known glitches */
LUKS2_hdr_repair(jobj_hdr);
/* run validation again */
r = LUKS2_hdr_validate(jobj_hdr);
r = LUKS2_hdr_validate(jobj_hdr, length);
}
if (r)
@@ -509,20 +511,20 @@ static int validate_luks2_json_object(json_object *jobj_hdr)
return r;
}
static json_object *parse_and_validate_json(const char *json_area, int length)
static json_object *parse_and_validate_json(const char *json_area, uint64_t max_length)
{
int offset, r;
json_object *jobj = parse_json_len(json_area, length, &offset);
int json_len, r;
json_object *jobj = parse_json_len(json_area, max_length, &json_len);
if (!jobj)
return NULL;
/* successful parse_json_len must not return offset <= 0 */
assert(offset > 0);
assert(json_len > 0);
r = validate_json_area(json_area, offset, length);
r = validate_json_area(json_area, json_len, max_length);
if (!r)
r = validate_luks2_json_object(jobj);
r = validate_luks2_json_object(jobj, max_length);
if (r) {
json_object_put(jobj);

View File

@@ -58,7 +58,7 @@ json_object *LUKS2_get_tokens_jobj(struct luks2_hdr *hdr);
void hexprint_base64(struct crypt_device *cd, json_object *jobj,
const char *sep, const char *line_sep);
json_object *parse_json_len(const char *json_area, int length, int *end_offset);
json_object *parse_json_len(const char *json_area, uint64_t max_length, int *json_len);
uint64_t json_object_get_uint64(json_object *jobj);
uint32_t json_object_get_uint32(json_object *jobj);
json_object *json_object_new_uint64(uint64_t value);
@@ -73,7 +73,7 @@ void JSON_DBG(json_object *jobj, const char *desc);
json_object *json_contains(json_object *jobj, const char *name, const char *section,
const char *key, json_type type);
int LUKS2_hdr_validate(json_object *hdr_jobj);
int LUKS2_hdr_validate(json_object *hdr_jobj, uint64_t json_size);
int LUKS2_keyslot_validate(json_object *hdr_jobj, json_object *hdr_keyslot, const char *key);
int LUKS2_check_json_size(const struct luks2_hdr *hdr);
int LUKS2_token_validate(json_object *hdr_jobj, json_object *jobj_token, const char *key);

View File

@@ -114,6 +114,22 @@ int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
return 0;
}
int LUKS2_check_metadata_area_size(uint64_t metadata_size)
{
/* see LUKS2_HDR2_OFFSETS */
return (metadata_size != 0x004000 &&
metadata_size != 0x008000 && metadata_size != 0x010000 &&
metadata_size != 0x020000 && metadata_size != 0x040000 &&
metadata_size != 0x080000 && metadata_size != 0x100000 &&
metadata_size != 0x200000 && metadata_size != 0x400000);
}
int LUKS2_check_keyslots_area_size(uint64_t keyslots_size)
{
return (MISALIGNED_4K(keyslots_size) ||
keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE);
}
int LUKS2_generate_hdr(
struct crypt_device *cd,
struct luks2_hdr *hdr,
@@ -242,7 +258,7 @@ int LUKS2_wipe_header_areas(struct crypt_device *cd,
length = LUKS2_get_data_offset(hdr) * SECTOR_SIZE;
wipe_block = 1024 * 1024;
if (LUKS2_hdr_validate(hdr->jobj))
if (LUKS2_hdr_validate(hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
/* On detached header wipe at least the first 4k */

View File

@@ -363,12 +363,13 @@ static json_bool segment_has_digest(const char *segment_name, json_object *jobj_
return FALSE;
}
static json_bool validate_intervals(int length, const struct interval *ix, uint64_t *data_offset)
static json_bool validate_intervals(int length, const struct interval *ix,
uint64_t metadata_size, uint64_t keyslots_area_end)
{
int j, i = 0;
while (i < length) {
if (ix[i].offset < 2 * LUKS2_HDR_16K_LEN) {
if (ix[i].offset < 2 * metadata_size) {
log_dbg("Illegal area offset: %" PRIu64 ".", ix[i].offset);
return FALSE;
}
@@ -378,10 +379,9 @@ static json_bool validate_intervals(int length, const struct interval *ix, uint6
return FALSE;
}
/* first segment at offset 0 means we have detached header. Do not check then. */
if (*data_offset && (ix[i].offset + ix[i].length) > *data_offset) {
log_dbg("Area [%" PRIu64 ", %" PRIu64 "] intersects with segment starting at offset: %" PRIu64,
ix[i].offset, ix[i].offset + ix[i].length, *data_offset);
if ((ix[i].offset + ix[i].length) > keyslots_area_end) {
log_dbg("Area [%" PRIu64 ", %" PRIu64 "] overflows binary keyslots area (ends at offset: %" PRIu64 ").",
ix[i].offset, ix[i].offset + ix[i].length, keyslots_area_end);
return FALSE;
}
@@ -402,7 +402,6 @@ static json_bool validate_intervals(int length, const struct interval *ix, uint6
return TRUE;
}
static int hdr_validate_areas(json_object *hdr_jobj);
int LUKS2_keyslot_validate(json_object *hdr_jobj, json_object *hdr_keyslot, const char *key)
{
json_object *jobj_key_size;
@@ -419,9 +418,6 @@ int LUKS2_keyslot_validate(json_object *hdr_jobj, json_object *hdr_keyslot, cons
return 1;
}
if (hdr_validate_areas(hdr_jobj))
return 1;
return 0;
}
@@ -446,7 +442,7 @@ int LUKS2_token_validate(json_object *hdr_jobj, json_object *jobj_token, const c
return 0;
}
static int hdr_validate_json_size(json_object *hdr_jobj)
static int hdr_validate_json_size(json_object *hdr_jobj, uint64_t hdr_json_size)
{
json_object *jobj, *jobj1;
const char *json;
@@ -460,12 +456,22 @@ static int hdr_validate_json_size(json_object *hdr_jobj)
json_area_size = json_object_get_uint64(jobj1);
json_size = (uint64_t)strlen(json);
return json_size > json_area_size ? 1 : 0;
if (hdr_json_size != json_area_size) {
log_dbg("JSON area size doesn't match value in binary header.");
return 1;
}
if (json_size > json_area_size) {
log_dbg("JSON doesn't fit in the designated area.");
return 1;
}
return 0;
}
int LUKS2_check_json_size(const struct luks2_hdr *hdr)
{
return hdr_validate_json_size(hdr->jobj);
return hdr_validate_json_size(hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN);
}
static int hdr_validate_keyslots(json_object *hdr_jobj)
@@ -624,12 +630,24 @@ static int hdr_validate_segments(json_object *hdr_jobj)
return 0;
}
static uint64_t LUKS2_metadata_size(json_object *jobj)
{
json_object *jobj1, *jobj2;
uint64_t json_size;
json_object_object_get_ex(jobj, "config", &jobj1);
json_object_object_get_ex(jobj1, "json_size", &jobj2);
json_str_to_uint64(jobj2, &json_size);
return json_size + LUKS2_HDR_BIN_LEN;
}
static int hdr_validate_areas(json_object *hdr_jobj)
{
struct interval *intervals;
json_object *jobj_keyslots, *jobj_offset, *jobj_length, *jobj_segments, *jobj_area;
int length, ret, i = 0;
uint64_t first_offset;
uint64_t metadata_size;
if (!json_object_object_get_ex(hdr_jobj, "keyslots", &jobj_keyslots))
return 1;
@@ -638,6 +656,9 @@ static int hdr_validate_areas(json_object *hdr_jobj)
if (!json_object_object_get_ex(hdr_jobj, "segments", &jobj_segments))
return 1;
/* config is already validated */
metadata_size = LUKS2_metadata_size(hdr_jobj);
length = json_object_object_length(jobj_keyslots);
/* Empty section */
@@ -681,9 +702,7 @@ static int hdr_validate_areas(json_object *hdr_jobj)
return 1;
}
first_offset = get_first_data_offset(jobj_segments, NULL);
ret = validate_intervals(length, intervals, &first_offset) ? 0 : 1;
ret = validate_intervals(length, intervals, metadata_size, LUKS2_hdr_and_areas_size(hdr_jobj)) ? 0 : 1;
free(intervals);
@@ -725,56 +744,11 @@ static int hdr_validate_digests(json_object *hdr_jobj)
return 0;
}
/* requires keyslots and segments sections being already validated */
static int validate_keyslots_size(json_object *hdr_jobj, json_object *jobj_keyslots_size)
{
json_object *jobj_keyslots, *jobj, *jobj1;
uint64_t keyslots_size, segment_offset, keyslots_area_sum = 0;
if (!json_str_to_uint64(jobj_keyslots_size, &keyslots_size))
return 1;
if (MISALIGNED_4K(keyslots_size)) {
log_dbg("keyslots_size is not 4 KiB aligned");
return 1;
}
if (keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE) {
log_dbg("keyslots_size is too large. The cap is %" PRIu64 " bytes", (uint64_t) LUKS2_MAX_KEYSLOTS_SIZE);
return 1;
}
json_object_object_get_ex(hdr_jobj, "segments", &jobj);
segment_offset = get_first_data_offset(jobj, "crypt");
if (segment_offset &&
(segment_offset < keyslots_size ||
(segment_offset - keyslots_size) < (2 * LUKS2_HDR_16K_LEN))) {
log_dbg("keyslots_size is too large %" PRIu64 " (bytes). Data offset: %" PRIu64 ", keyslots offset: %d", keyslots_size, segment_offset, 2 * LUKS2_HDR_16K_LEN);
return 1;
}
json_object_object_get_ex(hdr_jobj, "keyslots", &jobj_keyslots);
json_object_object_foreach(jobj_keyslots, key, val) {
UNUSED(key);
json_object_object_get_ex(val, "area", &jobj);
json_object_object_get_ex(jobj, "size", &jobj1);
keyslots_area_sum += json_object_get_uint64(jobj1);
}
if (keyslots_area_sum > keyslots_size) {
log_dbg("Sum of all keyslot area sizes (%" PRIu64 ") is greater than value in config section %" PRIu64, keyslots_area_sum, keyslots_size);
return 1;
}
return 0;
}
static int hdr_validate_config(json_object *hdr_jobj)
{
json_object *jobj_config, *jobj, *jobj1;
int i;
uint64_t json_size;
uint64_t keyslots_size, metadata_size, segment_offset;
if (!json_object_object_get_ex(hdr_jobj, "config", &jobj_config)) {
log_dbg("Missing config section.");
@@ -782,25 +756,40 @@ static int hdr_validate_config(json_object *hdr_jobj)
}
if (!(jobj = json_contains(jobj_config, "section", "Config", "json_size", json_type_string)) ||
!json_str_to_uint64(jobj, &json_size))
!json_str_to_uint64(jobj, &metadata_size))
return 1;
/* currently it's hardcoded */
if (json_size != (LUKS2_HDR_16K_LEN - LUKS2_HDR_BIN_LEN)) {
log_dbg("Invalid json_size %" PRIu64, json_size);
/* single metadata instance is assembled from json area size plus
* binary header size */
metadata_size += LUKS2_HDR_BIN_LEN;
if (!(jobj = json_contains(jobj_config, "section", "Config", "keyslots_size", json_type_string)) ||
!json_str_to_uint64(jobj, &keyslots_size))
return 1;
if (LUKS2_check_metadata_area_size(metadata_size)) {
log_dbg("Unsupported LUKS2 header size (%" PRIu64 ").", metadata_size);
return 1;
}
if (MISALIGNED_4K(json_size)) {
log_dbg("Json area is not properly aligned to 4 KiB.");
if (LUKS2_check_keyslots_area_size(keyslots_size)) {
log_dbg("Unsupported LUKS2 keyslots size (%" PRIu64 ").", keyslots_size);
return 1;
}
if (!(jobj = json_contains(jobj_config, "section", "Config", "keyslots_size", json_type_string)))
return 1;
if (validate_keyslots_size(hdr_jobj, jobj))
/*
* validate keyslots_size fits in between (2 * metadata_size) and first
* segment_offset (except detached header)
*/
json_object_object_get_ex(hdr_jobj, "segments", &jobj);
segment_offset = get_first_data_offset(jobj, "crypt");
if (segment_offset &&
(segment_offset < keyslots_size ||
(segment_offset - keyslots_size) < (2 * metadata_size))) {
log_dbg("keyslots_size is too large %" PRIu64 " (bytes). Data offset: %" PRIu64
", keyslots offset: %" PRIu64, keyslots_size, segment_offset, 2 * metadata_size);
return 1;
}
/* Flags array is optional */
if (json_object_object_get_ex(jobj_config, "flags", &jobj)) {
@@ -833,7 +822,7 @@ static int hdr_validate_config(json_object *hdr_jobj)
return 0;
}
int LUKS2_hdr_validate(json_object *hdr_jobj)
int LUKS2_hdr_validate(json_object *hdr_jobj, uint64_t json_size)
{
struct {
int (*validate)(json_object *);
@@ -842,8 +831,8 @@ int LUKS2_hdr_validate(json_object *hdr_jobj)
{ hdr_validate_digests },
{ hdr_validate_segments },
{ hdr_validate_keyslots },
{ hdr_validate_areas },
{ hdr_validate_config },
{ hdr_validate_areas },
{ NULL }
};
int i;
@@ -855,10 +844,8 @@ int LUKS2_hdr_validate(json_object *hdr_jobj)
if (checks[i].validate && checks[i].validate(hdr_jobj))
return 1;
if (hdr_validate_json_size(hdr_jobj)) {
log_dbg("Json header is too large.");
if (hdr_validate_json_size(hdr_jobj, json_size))
return 1;
}
/* validate keyslot implementations */
if (LUKS2_keyslots_validate(hdr_jobj))
@@ -906,7 +893,7 @@ int LUKS2_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr)
/* erase unused digests (no assigned keyslot or segment) */
LUKS2_digests_erase_unused(cd, hdr);
if (LUKS2_hdr_validate(hdr->jobj))
if (LUKS2_hdr_validate(hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
return LUKS2_disk_hdr_write(cd, hdr, crypt_metadata_device(cd));
@@ -966,14 +953,7 @@ uint64_t LUKS2_keyslots_size(json_object *jobj)
uint64_t LUKS2_hdr_and_areas_size(json_object *jobj)
{
json_object *jobj1, *jobj2;
uint64_t json_size;
json_object_object_get_ex(jobj, "config", &jobj1);
json_object_object_get_ex(jobj1, "json_size", &jobj2);
json_str_to_uint64(jobj2, &json_size);
return 2 * (json_size + LUKS2_HDR_BIN_LEN) + LUKS2_keyslots_size(jobj);
return 2 * LUKS2_metadata_size(jobj) + LUKS2_keyslots_size(jobj);
}
int LUKS2_hdr_backup(struct crypt_device *cd, struct luks2_hdr *hdr,
@@ -1266,9 +1246,11 @@ int LUKS2_config_set_flags(struct crypt_device *cd, struct luks2_hdr *hdr, uint3
jobj_flags = json_object_new_array();
for (i = 0; persistent_flags[i].description; i++) {
if (flags & persistent_flags[i].flag)
if (flags & persistent_flags[i].flag) {
log_dbg("Setting persistent flag: %s.", persistent_flags[i].description);
json_object_array_add(jobj_flags,
json_object_new_string(persistent_flags[i].description));
}
}
/* Replace or add new flags array */
@@ -1915,7 +1897,7 @@ int LUKS2_activate(struct crypt_device *cd,
}
snprintf(dm_int_name, sizeof(dm_int_name), "%s_dif", name);
r = INTEGRITY_activate(cd, dm_int_name, NULL, NULL, NULL, NULL, flags);
r = INTEGRITY_activate(cd, dm_int_name, NULL, NULL, NULL, NULL, dmd.flags);
if (r)
return r;

View File

@@ -114,13 +114,18 @@ int LUKS2_keyslot_active_count(struct luks2_hdr *hdr, int segment)
int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd)
{
const char *cipher = crypt_get_cipher(cd);
const char *cipher_mode = crypt_get_cipher_mode(cd);
/* Keyslot is already authenticated; we cannot use integrity tags here */
if (crypt_get_integrity_tag_size(cd) || !cipher)
return 1;
/* Wrapped key schemes cannot be used for keyslot encryption */
if (crypt_cipher_wrapped_key(cipher))
if (crypt_cipher_wrapped_key(cipher, cipher_mode))
return 1;
/* Check if crypto backend can use the cipher */
if (crypt_cipher_ivsize(cipher, cipher_mode) < 0)
return 1;
return 0;

View File

@@ -674,7 +674,7 @@ int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct
if (r < 0)
return r;
if (crypt_cipher_wrapped_key(cipher)) {
if (crypt_cipher_wrapped_key(cipher, cipher_mode)) {
log_err(cd, _("Cannot convert to LUKS1 format - device uses wrapped key cipher %s."), cipher);
return -EINVAL;
}

View File

@@ -1348,6 +1348,7 @@ static int _crypt_format_plain(struct crypt_device *cd,
struct crypt_params_plain *params)
{
unsigned int sector_size = params ? params->sector_size : SECTOR_SIZE;
uint64_t dev_size;
if (!cipher || !cipher_mode) {
log_err(cd, _("Invalid plain crypt parameters."));
@@ -1374,6 +1375,15 @@ static int _crypt_format_plain(struct crypt_device *cd,
return -EINVAL;
}
if (sector_size > SECTOR_SIZE && !device_size(cd->device, &dev_size)) {
if (params && params->offset)
dev_size -= (params->offset * SECTOR_SIZE);
if (dev_size % sector_size) {
log_err(cd, _("Device size is not aligned to requested sector size."));
return -EINVAL;
}
}
if (!(cd->type = strdup(CRYPT_PLAIN)))
return -ENOMEM;
@@ -1499,6 +1509,7 @@ static int _crypt_format_luks2(struct crypt_device *cd,
unsigned long alignment_offset = 0;
unsigned int sector_size = params ? params->sector_size : SECTOR_SIZE;
const char *integrity = params ? params->integrity : NULL;
uint64_t dev_size;
cd->u.luks2.hdr.jobj = NULL;
@@ -1585,8 +1596,16 @@ static int _crypt_format_luks2(struct crypt_device *cd,
goto out;
}
/* FIXME: we have no way how to check AEAD ciphers,
* only length preserving mode or authenc() composed modes */
/* FIXME: allow this later also for normal ciphers (check AF_ALG availability. */
if (integrity && !integrity_key_size) {
r = crypt_cipher_check(cipher, cipher_mode, integrity, volume_key_size);
if (r < 0) {
log_err(cd, _("Cipher %s-%s (key size %zd bits) is not available."),
cipher, cipher_mode, volume_key_size * 8);
goto out;
}
}
if ((!integrity || integrity_key_size) && !LUKS2_keyslot_cipher_incompatible(cd)) {
r = LUKS_check_cipher(cd, volume_key_size - integrity_key_size,
cipher, cipher_mode);
@@ -1604,6 +1623,15 @@ static int _crypt_format_luks2(struct crypt_device *cd,
if (r < 0)
goto out;
if (!integrity && sector_size > SECTOR_SIZE && !device_size(crypt_data_device(cd), &dev_size)) {
dev_size -= (crypt_get_data_offset(cd) * SECTOR_SIZE);
if (dev_size % sector_size) {
log_err(cd, _("Device size is not aligned to requested sector size."));
r = -EINVAL;
goto out;
}
}
if (params && (params->label || params->subsystem)) {
r = LUKS2_hdr_labels(cd, &cd->u.luks2.hdr,
params->label, params->subsystem, 0);
@@ -2360,7 +2388,7 @@ int crypt_suspend(struct crypt_device *cd,
key_desc = crypt_get_device_key_description(name);
/* we can't simply wipe wrapped keys */
if (crypt_cipher_wrapped_key(crypt_get_cipher(cd)))
if (crypt_cipher_wrapped_key(crypt_get_cipher(cd), crypt_get_cipher_mode(cd)))
r = dm_suspend_device(cd, name);
else
r = dm_suspend_and_wipe_key(cd, name);
@@ -3348,13 +3376,14 @@ int crypt_deactivate(struct crypt_device *cd, const char *name)
int crypt_get_active_device(struct crypt_device *cd, const char *name,
struct crypt_active_device *cad)
{
struct crypt_dm_active_device dmd;
struct crypt_dm_active_device dmd = {}, dmdi = {};
const char *namei = NULL;
int r;
if (!cd || !name || !cad)
return -EINVAL;
r = dm_query_device(cd, name, 0, &dmd);
r = dm_query_device(cd, name, DM_ACTIVE_DEVICE, &dmd);
if (r < 0)
return r;
@@ -3363,6 +3392,14 @@ int crypt_get_active_device(struct crypt_device *cd, const char *name,
dmd.target != DM_INTEGRITY)
return -ENOTSUP;
/* For LUKS2 with integrity we need flags from underlying dm-integrity */
if (isLUKS2(cd->type) && crypt_get_integrity_tag_size(cd)) {
namei = device_dm_name(dmd.data_device);
if (namei && dm_query_device(cd, namei, 0, &dmdi) >= 0)
dmd.flags |= dmdi.flags;
}
device_free(dmd.data_device);
if (cd && isTCRYPT(cd->type)) {
cad->offset = TCRYPT_get_data_offset(cd, &cd->u.tcrypt.hdr, &cd->u.tcrypt.params);
cad->iv_offset = TCRYPT_get_iv_offset(cd, &cd->u.tcrypt.hdr, &cd->u.tcrypt.params);
@@ -3412,7 +3449,8 @@ int crypt_volume_key_get(struct crypt_device *cd,
return -EINVAL;
/* wrapped keys or unbound keys may be exported */
if (crypt_fips_mode() && !crypt_cipher_wrapped_key(crypt_get_cipher(cd))) {
if (crypt_fips_mode() &&
!crypt_cipher_wrapped_key(crypt_get_cipher(cd), crypt_get_cipher_mode(cd))) {
if (!isLUKS2(cd->type) || keyslot == CRYPT_ANY_SLOT ||
!LUKS2_keyslot_for_segment(&cd->u.luks2.hdr, keyslot, CRYPT_DEFAULT_SEGMENT)) {
log_err(cd, _("Function not available in FIPS mode."));

View File

@@ -79,7 +79,6 @@ struct crypt_dm_active_device {
struct {
const char *cipher;
const char *integrity;
char *key_description;
/* Active key for device */
struct volume_key *vk;

View File

@@ -63,7 +63,11 @@ int verify_pbkdf_params(struct crypt_device *cd,
{
struct crypt_pbkdf_limits pbkdf_limits;
const char *pbkdf_type;
int r = 0;
int r;
r = init_crypto(cd);
if (r < 0)
return r;
if (!pbkdf->type ||
(!pbkdf->hash && !strcmp(pbkdf->type, "pbkdf2")))
@@ -74,13 +78,17 @@ int verify_pbkdf_params(struct crypt_device *cd,
return -EINVAL;
}
/* TODO: initialise crypto and check the hash and pbkdf are both available */
r = crypt_parse_pbkdf(pbkdf->type, &pbkdf_type);
if (r < 0) {
log_err(cd, _("Unknown PBKDF type %s."), pbkdf->type);
return r;
}
if (pbkdf->hash && crypt_hash_size(pbkdf->hash) < 0) {
log_err(cd, _("Requested hash %s is not supported."), pbkdf->hash);
return -EINVAL;
}
r = crypt_pbkdf_get_limits(pbkdf->type, &pbkdf_limits);
if (r < 0)
return r;
@@ -161,11 +169,6 @@ int init_pbkdf_type(struct crypt_device *cd,
if (r < 0)
return r;
/*
* Crypto backend may be not initialized here,
* cannot check if algorithms are really available.
* It will fail later anyway :-)
*/
type = strdup(pbkdf->type);
hash = pbkdf->hash ? strdup(pbkdf->hash) : NULL;

View File

@@ -1176,6 +1176,9 @@ Specify integrity algorithm to be used for authenticated disk encryption in LUKS
\fBWARNING: This extension is EXPERIMENTAL\fR and requires dm-integrity
kernel target (available since kernel version 4.12).
For native AEAD modes, also enable "User-space interface for AEAD cipher algorithms"
in "Cryptographic API" section (CONFIG_CRYPTO_USER_API_AEAD .config option).
For more info, see \fIAUTHENTICATED DISK ENCRYPTION\fR section.
.TP
.B "\-\-integrity\-no\-journal"

View File

@@ -14,6 +14,8 @@ lib/utils_benchmark.c
lib/utils_device_locking.c
lib/utils_wipe.c
lib/utils_keyring.c
lib/utils_blkid.c
lib/utils_io.c
lib/luks1/af.c
lib/luks1/keyencryption.c
lib/luks1/keymanage.c
@@ -39,3 +41,4 @@ src/integritysetup.c
src/cryptsetup_reencrypt.c
src/utils_tools.c
src/utils_password.c
src/utils_luks2.c

1252
po/cs.po

File diff suppressed because it is too large.

842
po/es.po

File diff suppressed because it is too large.

857
po/fr.po

File diff suppressed because it is too large.

842
po/pl.po

File diff suppressed because it is too large.

File diff suppressed because it is too large.

857
po/uk.po

File diff suppressed because it is too large.

View File

@@ -769,7 +769,7 @@ static int action_benchmark(void)
char cipher[MAX_CIPHER_LEN], cipher_mode[MAX_CIPHER_LEN];
double enc_mbr = 0, dec_mbr = 0;
int key_size = (opt_key_size ?: DEFAULT_PLAIN_KEYBITS) / 8;
int iv_size = 16, skipped = 0;
int iv_size = 16, skipped = 0, width;
char *c;
int i, r;
@@ -796,13 +796,19 @@ static int action_benchmark(void)
if (!strcmp(cipher_mode, "ecb"))
iv_size = 0;
if (!strcmp(cipher_mode, "adiantum"))
iv_size = 32;
r = benchmark_cipher_loop(cipher, cipher_mode,
key_size, iv_size,
&enc_mbr, &dec_mbr);
if (!r) {
width = strlen(cipher) + strlen(cipher_mode) + 1;
if (width < 11)
width = 11;
/* TRANSLATORS: The string is header of a table and must be exactly (right side) aligned. */
log_std(_("# Algorithm | Key | Encryption | Decryption\n"));
log_std("%11s-%s %9db %10.1f MiB/s %10.1f MiB/s\n",
log_std(_("#%*s Algorithm | Key | Encryption | Decryption\n"), width - 11, "");
log_std("%*s-%s %9db %10.1f MiB/s %10.1f MiB/s\n", width - (int)strlen(cipher_mode) - 1,
cipher, cipher_mode, key_size*8, enc_mbr, dec_mbr);
} else if (r == -ENOENT)
log_err(_("Cipher %s is not available."), opt_cipher);

View File

@@ -588,8 +588,9 @@ static int create_new_header(struct reenc_ctx *rc, struct crypt_device *cd_old,
goto out;
}
if ((r = crypt_format(cd_new, type, cipher, cipher_mode,
uuid, key, key_size, params)))
r = crypt_format(cd_new, type, cipher, cipher_mode, uuid, key, key_size, params);
check_signal(&r);
if (r < 0)
goto out;
log_verbose(_("New LUKS header for device %s created."), rc->device);
@@ -598,6 +599,7 @@ static int create_new_header(struct reenc_ctx *rc, struct crypt_device *cd_old,
continue;
r = create_new_keyslot(rc, i, cd_old, cd_new);
check_signal(&r);
if (r < 0)
goto out;
tools_keyslot_msg(r, CREATED);
@@ -835,11 +837,13 @@ static int backup_fake_header(struct reenc_ctx *rc)
r = crypt_format(cd_new, CRYPT_LUKS1, "cipher_null", "ecb",
NO_UUID, NULL, opt_key_size / 8, &params);
check_signal(&r);
if (r < 0)
goto out;
r = crypt_keyslot_add_by_volume_key(cd_new, rc->keyslot, NULL, 0,
rc->p[rc->keyslot].password, rc->p[rc->keyslot].passwordLen);
check_signal(&r);
if (r < 0)
goto out;
@@ -1535,6 +1539,8 @@ static int run_reencrypt(const char *device)
.stained = 1
};
set_int_handler(0);
if (initialize_context(&rc, device))
goto out;
@@ -1654,8 +1660,6 @@ int main(int argc, const char **argv)
crypt_set_log_callback(NULL, tool_log, NULL);
set_int_block(1);
setlocale(LC_ALL, "");
bindtextdomain(PACKAGE, LOCALEDIR);
textdomain(PACKAGE);

View File

@@ -41,6 +41,7 @@ EXTRA_DIST = compatimage.img.xz compatv10image.img.xz \
luks2_valid_hdr.img.xz \
luks2_header_requirements.xz \
luks2_header_requirements_free.xz \
luks2_mda_images.tar.xz \
evil_hdr-payload_overwrite.xz \
evil_hdr-stripes_payload_dmg.xz \
evil_hdr-luks_hdr_damage.xz \

View File

@@ -3,6 +3,7 @@
CRYPTSETUP="../cryptsetup"
DEV=""
DEV_STACKED="luks0xbabe"
DEV_NAME="dummyalign"
MNT_DIR="./mnt_luks"
PWD1="93R4P4pIqAH8"
PWD2="mymJeD8ivEhE"
@@ -15,6 +16,7 @@ cleanup() {
rmdir $MNT_DIR 2>/dev/null
fi
[ -b /dev/mapper/$DEV_STACKED ] && dmsetup remove --retry $DEV_STACKED >/dev/null 2>&1
[ -b /dev/mapper/$DEV_NAME ] && dmsetup remove --retry $DEV_NAME >/dev/null 2>&1
# FIXME scsi_debug sometimes in-use here
sleep 1
rmmod scsi_debug 2>/dev/null
@@ -35,6 +37,29 @@ skip()
exit 0
}
function dm_crypt_features()
{
VER_STR=$(dmsetup targets | grep crypt | cut -f2 -dv)
[ -z "$VER_STR" ] && fail "Failed to parse dm-crypt version."
VER_MAJ=$(echo $VER_STR | cut -f 1 -d.)
VER_MIN=$(echo $VER_STR | cut -f 2 -d.)
VER_PTC=$(echo $VER_STR | cut -f 3 -d.)
[ $VER_MAJ -lt 1 ] && return
[ $VER_MAJ -gt 1 ] && {
DM_PERF_CPU=1
DM_SECTOR_SIZE=1
return
}
[ $VER_MIN -lt 14 ] && return
DM_PERF_CPU=1
if [ $VER_MIN -ge 17 -o \( $VER_MIN -eq 14 -a $VER_PTC -ge 5 \) ]; then
DM_SECTOR_SIZE=1
fi
}
add_device() {
modprobe scsi_debug $@ delay=0
if [ $? -ne 0 ] ; then
@@ -59,12 +84,16 @@ format() # key_bits expected [forced]
{
if [ -z "$3" ] ; then
echo -n "Formatting using topology info ($1 bits key)..."
echo $PWD1 | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c aes-cbc-essiv:sha256 -s $1
echo $PWD1 | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c aes-cbc-essiv:sha256 -s $1 || fail
else
echo -n "Formatting using forced sector alignment $3 ($1 bits key)..."
echo $PWD1 | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -s $1 -c aes-cbc-essiv:sha256 --align-payload=$3
echo $PWD1 | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -s $1 -c aes-cbc-essiv:sha256 --align-payload=$3 ||fail
fi
# check the device can be activated
echo $PWD1 | $CRYPTSETUP luksOpen $DEV $DEV_NAME || fail
$CRYPTSETUP close $DEV_NAME || fail
ALIGN=$($CRYPTSETUP luksDump $DEV |grep "Payload offset" | sed -e s/.*\\t//)
#echo "ALIGN = $ALIGN"
@@ -90,12 +119,16 @@ format_null()
{
if [ $3 -eq 0 ] ; then
echo -n "Formatting using topology info ($1 bits key) [slot 0"
echo | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c null -s $1
echo | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c null -s $1 || fail
else
echo -n "Formatting using forced sector alignment $3 ($1 bits key) [slot 0"
echo | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c null -s $1 --align-payload=$3
echo | $CRYPTSETUP luksFormat --type luks1 $DEV -q $FAST_PBKDF -c null -s $1 --align-payload=$3 || fail
fi
# check the device can be activated
echo | $CRYPTSETUP luksOpen $DEV $DEV_NAME || fail
$CRYPTSETUP close $DEV_NAME || fail
POFF=$(get_offsets "Payload offset")
[ -z "$POFF" ] && fail
[ $POFF != $2 ] && fail "Expected data offset differs: expected $2 != detected $POFF"
@@ -114,11 +147,35 @@ format_null()
echo "]...PASSED"
}
format_plain() # sector size
{
echo -n "Formatting plain device (sector size $1)..."
if [ -n "$DM_SECTOR_SIZE" ] ; then
echo $PWD1 | $CRYPTSETUP open --type plain --sector-size $1 $DEV $DEV_NAME || fail
$CRYPTSETUP close $DEV_NAME || fail
echo "PASSED"
else
echo "N/A"
fi
}
format_plain_fail() # sector size
{
echo -n "Formatting plain device (sector size $1, must fail)..."
if [ -n "$DM_SECTOR_SIZE" ] ; then
echo $PWD1 | $CRYPTSETUP open --type plain --sector-size $1 $DEV $DEV_NAME >/dev/null 2>&1 && fail
echo "PASSED"
else
echo "N/A"
fi
}
if [ $(id -u) != 0 ]; then
echo "WARNING: You must be root to run this test, test skipped."
exit 77
fi
dm_crypt_features
modprobe --dry-run scsi_debug || exit 77
cleanup
@@ -175,6 +232,26 @@ format 128 1032 8
format 128 8192 8192
cleanup
echo "# Create classic 512B drive and stack dm-linear (plain mode)"
add_device dev_size_mb=16 sector_size=512 num_tgts=1
DEV2=$DEV
DEV=/dev/mapper/$DEV_STACKED
dmsetup create $DEV_STACKED --table "0 32768 linear $DEV2 0"
format_plain 512
format_plain 1024
format_plain 2048
format_plain 4096
format_plain_fail 1111
format_plain_fail 8192
echo "# Create classic 512B drive, unaligned to 4096 and stack dm-linear (plain mode)"
dmsetup remove --retry $DEV_STACKED >/dev/null 2>&1
dmsetup create $DEV_STACKED --table "0 32762 linear $DEV2 0"
format_plain 512
format_plain 1024
format_plain_fail 2048
format_plain_fail 4096
cleanup
echo "# Offset check: 512B sector drive"
add_device dev_size_mb=16 sector_size=512 num_tgts=1
# |k| expO reqO expected slot offsets

View File

@@ -3,6 +3,7 @@
CRYPTSETUP="../cryptsetup"
DEV=""
DEV_STACKED="luks0xbabe"
DEV_NAME="dummyalign"
MNT_DIR="./mnt_luks"
PWD1="93R4P4pIqAH8"
PWD2="mymJeD8ivEhE"
@@ -17,6 +18,7 @@ cleanup() {
rmdir $MNT_DIR 2>/dev/null
fi
[ -b /dev/mapper/$DEV_STACKED ] && dmsetup remove --retry $DEV_STACKED >/dev/null 2>&1
[ -b /dev/mapper/$DEV_NAME ] && dmsetup remove --retry $DEV_NAME >/dev/null 2>&1
# FIXME scsi_debug sometimes in-use here
sleep 1
rmmod scsi_debug 2>/dev/null
@@ -38,6 +40,29 @@ skip()
exit 0
}
function dm_crypt_features()
{
VER_STR=$(dmsetup targets | grep crypt | cut -f2 -dv)
[ -z "$VER_STR" ] && fail "Failed to parse dm-crypt version."
VER_MAJ=$(echo $VER_STR | cut -f 1 -d.)
VER_MIN=$(echo $VER_STR | cut -f 2 -d.)
VER_PTC=$(echo $VER_STR | cut -f 3 -d.)
[ $VER_MAJ -lt 1 ] && return
[ $VER_MAJ -gt 1 ] && {
DM_PERF_CPU=1
DM_SECTOR_SIZE=1
return
}
[ $VER_MIN -lt 14 ] && return
DM_PERF_CPU=1
if [ $VER_MIN -ge 17 -o \( $VER_MIN -eq 14 -a $VER_PTC -ge 5 \) ]; then
DM_SECTOR_SIZE=1
fi
}
add_device() {
modprobe scsi_debug $@ delay=0
if [ $? -ne 0 ] ; then
@@ -81,6 +106,12 @@ format() # expected [forced] [encryption_sector_size]
echo $PWD1 | $CRYPTSETUP luksFormat $FAST_PBKDF --type luks2 $DEV -q -c aes-cbc-essiv:sha256 --align-payload=$2 --sector-size $_sec_size || fail
fi
# check the device can be activated
if [ -n "$DM_SECTOR_SIZE" ] ; then
echo $PWD1 | $CRYPTSETUP luksOpen $DEV $DEV_NAME || fail
$CRYPTSETUP close $DEV_NAME || fail
fi
ALIGN=$($CRYPTSETUP luksDump $DEV | tee /tmp/last_dump | grep -A1 "0: crypt" | grep "offset:" | cut -d ' ' -f2)
# echo "ALIGN = $ALIGN"
@@ -98,11 +129,38 @@ format() # expected [forced] [encryption_sector_size]
echo "PASSED"
}
format_fail() # expected [forced] [encryption_sector_size]
{
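# Arguments mirror format(): expected offset, optional forced alignment and an
# optional encryption sector size passed as "sNNN" (e.g. s4096); the parsing
# below strips the "s" prefix from whichever argument carries it.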
local _sec_size=512
local _exp=$1
if [ "${2:0:1}" = "s" ]; then
_sec_size=${2:1}
shift
fi
test "${3:0:1}" = "s" && _sec_size=${3:1}
test $_sec_size -eq 512 || local _smsg=" (encryption sector size $_sec_size)"
if [ -z "$2" ] ; then
echo -n "Formatting using topology info$_smsg (must fail)..."
echo $PWD1 | $CRYPTSETUP luksFormat $FAST_PBKDF --type luks2 $DEV -q -c aes-cbc-essiv:sha256 --sector-size $_sec_size >/dev/null 2>&1 && fail
else
echo -n "Formatting using forced sector alignment $2$_smsg (must fail)..."
echo $PWD1 | $CRYPTSETUP luksFormat $FAST_PBKDF --type luks2 $DEV -q -c aes-cbc-essiv:sha256 --align-payload=$2 --sector-size $_sec_size >/dev/null 2>&1 && fail
fi
echo "PASSED"
}
if [ $(id -u) != 0 ]; then
echo "WARNING: You must be root to run this test, test skipped."
exit 77
fi
dm_crypt_features
modprobe --dry-run scsi_debug || exit 77
cleanup
@@ -122,9 +180,9 @@ format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048
@@ -147,9 +205,9 @@ format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048
@@ -160,9 +218,9 @@ echo "# Create desktop-class 4K drive w/ 1-sector shift (original bug report)"
echo "# (logical_block_size=512, physical_block_size=4096, alignment_offset=512)"
add_device dev_size_mb=16 sector_size=512 physblk_exp=3 lowest_aligned=1 num_tgts=1
format $((EXPCT+1))
format $((EXPCT+1)) s1024
format $((EXPCT+1)) s2048
format $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) s4096
format $EXPCT 1
format $EXPCT 1 s1024
format $EXPCT 1 s2048
@@ -172,9 +230,9 @@ format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048
@@ -185,9 +243,9 @@ echo "# Create desktop-class 4K drive w/ 63-sector DOS partition compensation"
echo "# (logical_block_size=512, physical_block_size=4096, alignment_offset=3584)"
add_device dev_size_mb=16 sector_size=512 physblk_exp=3 lowest_aligned=7 num_tgts=1
format $((EXPCT+7))
format $((EXPCT+7)) s1024
format $((EXPCT+7)) s2048
format $((EXPCT+7)) s4096
format_fail $((EXPCT+7)) s1024
format_fail $((EXPCT+7)) s2048
format_fail $((EXPCT+7)) s4096
format $EXPCT 1
format $EXPCT 1 s1024
format $EXPCT 1 s2048
@@ -197,9 +255,9 @@ format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048
@@ -221,10 +279,11 @@ format $EXPCT 8
format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
#FIXME: kernel limits issue?
##format $((EXPCT+1)) $((EXPCT+1))
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048
@@ -250,9 +309,9 @@ format $EXPCT 8 s1024
format $EXPCT 8 s2048
format $EXPCT 8 s4096
format $((EXPCT+1)) $((EXPCT+1))
format $((EXPCT+1)) $((EXPCT+1)) s1024
format $((EXPCT+1)) $((EXPCT+1)) s2048
format $((EXPCT+1)) $((EXPCT+1)) s4096
format_fail $((EXPCT+1)) $((EXPCT+1)) s1024
format_fail $((EXPCT+1)) $((EXPCT+1)) s2048
format_fail $((EXPCT+1)) $((EXPCT+1)) s4096
format $EXPCT $EXPCT
format $EXPCT $EXPCT s1024
format $EXPCT $EXPCT s2048

View File

@@ -2269,9 +2269,8 @@ static void Pbkdf(void)
bad.type = NULL;
bad.hash = DEFAULT_LUKS1_HASH;
FAIL_(crypt_set_pbkdf_type(cd, &bad), "Pbkdf type member is empty");
// following test fails atm
// bad.hash = "hamster_hash";
// FAIL_(crypt_set_pbkdf_type(cd, &pbkdf2), "Unknown hash member");
bad.hash = "hamster_hash";
FAIL_(crypt_set_pbkdf_type(cd, &bad), "Unknown hash member");
crypt_free(cd);
// test whether crypt_get_pbkdf_type() behaves accordingly after second crypt_load() call
OK_(crypt_init(&cd, DEVICE_1));

View File

@@ -79,7 +79,11 @@ run_all_in_fs() {
echo "Run tests in $file put on top block device."
xz -d -c $file | dd of=$DEV bs=1M 2>/dev/null || fail "bad image"
[ ! -d $MNT_DIR ] && mkdir $MNT_DIR
mount $DEV $MNT_DIR || skip "Mounting image $file failed."
mount $DEV $MNT_DIR
if [ $? -ne 0 ]; then
echo "Mounting image $file failed, skipped."
continue;
fi
rm -rf $MNT_DIR/* 2>/dev/null
local tfile=$MNT_DIR/bwunit_tstfile
falloc $DEVSIZEMB $tfile || fail "enospc?"

View File

@@ -23,6 +23,7 @@ PWD0="compatkey"
PWD1="93R4P4pIqAH8"
PWD2="mymJeD8ivEhE"
PWD3="ocMakf3fAcQO"
PWD4="Qx3qn46vq0v"
PWDW="rUkL4RUryBom"
TEST_KEYRING_NAME="compattest2_keyring"
TEST_TOKEN0="compattest2_desc0"
@@ -50,7 +51,7 @@ function remove_mapping()
[ -b /dev/mapper/$DEV_NAME2 ] && dmsetup remove $DEV_NAME2
[ -b /dev/mapper/$DEV_NAME ] && dmsetup remove $DEV_NAME
losetup -d $LOOPDEV >/dev/null 2>&1
rm -f $ORIG_IMG $IMG $IMG10 $KEY1 $KEY2 $KEY5 $KEYE $HEADER_IMG $HEADER_KEYU $VK_FILE $HEADER_LUKS2_PV missing-file $TOKEN_FILE0 $TOKEN_FILE1 >/dev/null 2>&1
rm -f $ORIG_IMG $IMG $IMG10 $KEY1 $KEY2 $KEY5 $KEYE $HEADER_IMG $HEADER_KEYU $VK_FILE $HEADER_LUKS2_PV missing-file $TOKEN_FILE0 $TOKEN_FILE1 test_image_* >/dev/null 2>&1
# unlink whole test keyring
[ -n "$TEST_KEYRING" ] && keyctl unlink $TEST_KEYRING "@u" >/dev/null
@@ -854,5 +855,21 @@ $CRYPTSETUP luksDump $LOOPDEV | grep -q "2: luks2 (unbound)" && fail
$CRYPTSETUP luksKillSlot -q $LOOPDEV 3
$CRYPTSETUP luksDump $LOOPDEV | grep -q "3: luks2 (unbound)" && fail
prepare "[39] LUKS2 metadata variants" wipe
tar xJf luks2_mda_images.tar.xz
echo -n "$IMPORT_TOKEN" > $TOKEN_FILE0
for mda in 16 32 64 128 256 512 1024 2048 4096 ; do
echo -n "[$mda KiB]"
echo $PWD4 | $CRYPTSETUP open test_image_$mda $DEV_NAME || fail
$CRYPTSETUP close $DEV_NAME || fail
echo -e "$PWD4\n$PWD3" | $CRYPTSETUP luksAddKey -S9 $FAST_PBKDF_OPT test_image_$mda || fail
echo $PWD4 | $CRYPTSETUP open --test-passphrase test_image_$mda || fail
echo $PWD3 | $CRYPTSETUP open -S9 --test-passphrase test_image_$mda || fail
echo -n "$IMPORT_TOKEN" | $CRYPTSETUP token import test_image_$mda --token-id 10 || fail
$CRYPTSETUP token export test_image_$mda --token-id 10 | diff --from-file - $TOKEN_FILE0 || fail
echo -n "[OK]"
done
echo
remove_mapping
exit 0

View File

@@ -0,0 +1,85 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a config json size mismatching the
# value in the binary header
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
JS=$(((LUKS2_HDR_SIZE-LUKS2_BIN_HDR_SIZE)*512))
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_32K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
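# the json text is reused unchanged, so config.json_size still reports the default
# area size computed in JS above, while the binary headers written below advertise
# a 32 KiB metadata area - this is the mismatch the header validation should reject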
json_str=$(jq -c '.' $TMPDIR/json0)
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0
local str_res1=$(head -c 4 $TMPDIR/hdr_res0)
test "$str_res1" = "LUKS" || exit 2
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 4 $TMPDIR/hdr_res1)
test "$str_res1" = "SKUL" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0
jq -c --arg js $JS 'if .config.json_size != ( $js | tostring )
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 128 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_128K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
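# the jq program below shifts every keyslot area offset by JSON_DIFF (the growth of
# the two header copies), sets config.json_size to the new json area size and keeps
# the data segment pinned at DATA_OFFSET (16 MiB), so the generated layout stays
# internally consistent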
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
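# the secondary header moves when the metadata area grows, so its offset field has
# to point at the new location as well (the primary header stays at offset 0)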
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 128 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_128K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 16 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
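# with the default 16 KiB metadata size JSON_DIFF evaluates to 0, so the jq shift
# below is a no-op and this variant simply exercises the default layout through the
# same recovery path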
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 1 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_1M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 1 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_1M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 256 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_256K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 256 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_256K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,96 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 2 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_2M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 2 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_2M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 32 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_32K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a non-default metadata json_size.
# Only a limited set of values is allowed as the json size in
# the config section of LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 32 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_32K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,96 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 4 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_4M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 4 MiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_4M
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 512 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_512K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 512 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_512K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a non-default metadata json_size
# and the keyslots area trespassing into the json area.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 64 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_64K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024-1))
# offsets shifted one byte less so the keyslot areas overlap the json area by exactly one byte
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024-1))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,96 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a non-default metadata json_size
# and a keyslot area overflowing out of the keyslots area.
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 64 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_64K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
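# keyslot 7 is moved to (2 * metadata size + keyslots_size - area size + 1), i.e.
# its area would end one byte past the end of the keyslots area - a layout the
# keyslot area validation should refuse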
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
--arg mda $((2*TEST_MDA_SIZE_BYTES)) \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.keyslots."7".area.offset = ( ((.config.keyslots_size | tonumber) + ($mda | tonumber) - (.keyslots."7".area.size | tonumber) + 1) | tostring ) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
# .keyslots.7.area.offset = ( ((.config.keyslots_size | tonumber) + ($mda | tonumber) - (.keyslots.7.area.size | tonumber) + 1) | tostring ) |
jq -c --arg mda $((2*TEST_MDA_SIZE_BYTES)) --arg jsize $JSON_SIZE \
'if (.keyslots."7".area.offset != ( ((.config.keyslots_size | tonumber) + ($mda | tonumber) - (.keyslots."7".area.size | tonumber) + 1) | tostring )) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,96 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate primary header with a predefined json_size where keyslots_size
# overflows into the data area (segment offset)
#
# the secondary header is corrupted on purpose as well
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 64 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_64K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
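# config.keyslots_size is inflated to (data offset - 2 * metadata size + 4096), so
# the declared keyslots area would run 4 KiB past the start of the data segment,
# which the config section validation should reject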
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
--arg mda $((2*TEST_MDA_SIZE_BYTES)) \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.config.keyslots_size = (((($off | tonumber) - ($mda | tonumber) + 4096)) | tostring ) |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE --arg off $DATA_OFFSET --arg mda $((2*TEST_MDA_SIZE_BYTES)) \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize) or
(.config.keyslots_size != (((($off | tonumber) - ($mda | tonumber) + 4096)) | tostring ))
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup

View File

@@ -0,0 +1,97 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate secondary header with one of the allowed json area
# sizes. Test whether the auto-recovery code is able
# to validate a secondary header with a non-default json area
# size.
#
# the primary header is corrupted on purpose.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 64 KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_64K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
write_bin_hdr_offset $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area0
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr_res0 $TEST_MDA_SIZE
local str_res0=$(head -c 6 $TMPDIR/hdr_res0)
test "$str_res0" = "VACUUM" || exit 2
read_luks2_json1 $TGT_IMG $TMPDIR/json_res1 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res1 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup
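
With 64 KiB metadata the secondary binary header no longer sits at the default 16 KiB offset, which is why generate() also calls write_bin_hdr_offset on hdr1. A minimal sketch of the on-disk layout this generator produces, with byte offsets derived from the constants above:

#        0 -    65535  primary metadata area (binary header wiped, starts with "VACUUM")
#    65536 -   131071  secondary metadata area (valid; its hdr_offset field is set to 65536)
#   131072 - 16777215  keyslots area (keyslot area offsets shifted by JSON_DIFF)
# 16777216 -           data segment (segments."0".offset)

The header validation and auto-recovery code is expected to accept the secondary header despite the non-default json area size.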

@@ -0,0 +1,94 @@
#!/bin/bash
. lib.sh
#
# *** Description ***
#
# generate a primary header with a predefined json_size. Only a limited
# set of values is allowed as the json size in the config section of
# LUKS2 metadata.
#
# The secondary header is corrupted on purpose as well.
#
# $1 full target dir
# $2 full source luks2 image
function prepare()
{
cp $SRC_IMG $TGT_IMG
test -d $TMPDIR || mkdir $TMPDIR
read_luks2_json0 $TGT_IMG $TMPDIR/json0
read_luks2_bin_hdr0 $TGT_IMG $TMPDIR/hdr0
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr1
}
function generate()
{
# 64KiB metadata
TEST_MDA_SIZE=$LUKS2_HDR_SIZE_64K
TEST_MDA_SIZE_BYTES=$((TEST_MDA_SIZE*512))
TEST_JSN_SIZE=$((TEST_MDA_SIZE-LUKS2_BIN_HDR_SIZE))
KEYSLOTS_OFFSET=$((TEST_MDA_SIZE*1024))
JSON_DIFF=$(((TEST_MDA_SIZE-LUKS2_HDR_SIZE)*1024))
JSON_SIZE=$((TEST_JSN_SIZE*512))
DATA_OFFSET=16777216
json_str=$(jq -c --arg jdiff $JSON_DIFF --arg jsize $JSON_SIZE --arg off $DATA_OFFSET \
'.keyslots[].area.offset |= ( . | tonumber + ($jdiff | tonumber) | tostring) |
.config.json_size = $jsize |
.segments."0".offset = $off' $TMPDIR/json0)
test -n "$json_str" || exit 2
test ${#json_str} -lt $((LUKS2_JSON_SIZE*512)) || exit 2
write_luks2_json "$json_str" $TMPDIR/json0 $TEST_JSN_SIZE
write_bin_hdr_size $TMPDIR/hdr0 $TEST_MDA_SIZE_BYTES
write_bin_hdr_size $TMPDIR/hdr1 $TEST_MDA_SIZE_BYTES
merge_bin_hdr_with_json $TMPDIR/hdr0 $TMPDIR/json0 $TMPDIR/area0 $TEST_JSN_SIZE
merge_bin_hdr_with_json $TMPDIR/hdr1 $TMPDIR/json0 $TMPDIR/area1 $TEST_JSN_SIZE
erase_checksum $TMPDIR/area0
chks0=$(calc_sha256_checksum_file $TMPDIR/area0)
write_checksum $chks0 $TMPDIR/area0
erase_checksum $TMPDIR/area1
chks0=$(calc_sha256_checksum_file $TMPDIR/area1)
write_checksum $chks0 $TMPDIR/area1
kill_bin_hdr $TMPDIR/area1
write_luks2_hdr0 $TMPDIR/area0 $TGT_IMG $TEST_MDA_SIZE
write_luks2_hdr1 $TMPDIR/area1 $TGT_IMG $TEST_MDA_SIZE
}
function check()
{
read_luks2_bin_hdr1 $TGT_IMG $TMPDIR/hdr_res1 $TEST_MDA_SIZE
local str_res1=$(head -c 6 $TMPDIR/hdr_res1)
test "$str_res1" = "VACUUM" || exit 2
read_luks2_json0 $TGT_IMG $TMPDIR/json_res0 $TEST_JSN_SIZE
jq -c --arg koff $KEYSLOTS_OFFSET --arg jsize $JSON_SIZE \
'if ([.keyslots[].area.offset] | map(tonumber) | min | tostring != $koff) or
(.config.json_size != $jsize)
then error("Unexpected value in result json") else empty end' $TMPDIR/json_res0 || exit 5
}
function cleanup()
{
rm -f $TMPDIR/*
rm -fd $TMPDIR
}
test $# -eq 2 || exit 1
TGT_IMG=$1/$(test_img_name $0)
SRC_IMG=$2
prepare
generate
check
cleanup
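
The "limited set of values" mentioned in the description corresponds to the metadata sizes enumerated in lib.sh below; the json area is always the metadata area minus the 4 KiB binary header:

# metadata:  16K  32K  64K  128K  256K  512K  1M     2M     4M
# json area: 12K  28K  60K  124K  252K  508K  1020K  2044K  4092K

A json_size outside this set, or one that disagrees with the hdr_size field of the binary header, is expected to be rejected by validation (see the luks2-invalid-json-size-c* cases in the test runner below).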

@@ -1,9 +1,17 @@
#!/bin/bash
# all in 512 bytes blocks
# LUKS2 with 16KiB header
LUKS2_HDR_SIZE=32 # 16 KiB
LUKS2_BIN_HDR_SIZE=8 # 4096 B
# all in 512 bytes blocks (including binary hdr (4KiB))
LUKS2_HDR_SIZE=32 # 16 KiB
LUKS2_HDR_SIZE_32K=64 # 32 KiB
LUKS2_HDR_SIZE_64K=128 # 64 KiB
LUKS2_HDR_SIZE_128K=256 # 128 KiB
LUKS2_HDR_SIZE_256K=512 # 256 KiB
LUKS2_HDR_SIZE_512K=1024 # 512 KiB
LUKS2_HDR_SIZE_1M=2048 # 1 MiB
LUKS2_HDR_SIZE_2M=4096 # 2 MiB
LUKS2_HDR_SIZE_4M=8192 # 4 MiB
LUKS2_BIN_HDR_SIZE=8 # 4 KiB
LUKS2_JSON_SIZE=$((LUKS2_HDR_SIZE-LUKS2_BIN_HDR_SIZE))
LUKS2_BIN_HDR_CHKS_OFFSET=0x1C0
@@ -30,57 +38,88 @@ function test_img_name()
echo $str
}
# read primary bin hdr
# 1:from 2:to
function read_luks2_bin_hdr0()
{
_dd if=$1 of=$2 bs=512 count=$LUKS2_BIN_HDR_SIZE
}
# read primary json area
# 1:from 2:to 3:[json only size (defaults to 12KiB)]
function read_luks2_json0()
{
_dd if=$1 of=$2 bs=512 skip=$LUKS2_BIN_HDR_SIZE count=$LUKS2_JSON_SIZE
local _js=${3:-$LUKS2_JSON_SIZE}
local _js=$((_js*512/4096))
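# the json area starts right after the 4 KiB binary header, hence bs=4096 skip=1
# (_js was converted above from 512-byte blocks to 4 KiB blocks)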
_dd if=$1 of=$2 bs=4096 skip=1 count=$_js
}
# read secondary bin hdr
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function read_luks2_bin_hdr1()
{
_dd if=$1 of=$2 skip=$LUKS2_HDR_SIZE bs=512 count=$LUKS2_BIN_HDR_SIZE
_dd if=$1 of=$2 skip=${3:-$LUKS2_HDR_SIZE} bs=512 count=$LUKS2_BIN_HDR_SIZE
}
# read secondary json area
# 1:from 2:to 3:[json only size (defaults to 12KiB)]
function read_luks2_json1()
{
_dd if=$1 of=$2 bs=512 skip=$((LUKS2_BIN_HDR_SIZE+LUKS2_HDR_SIZE)) count=$LUKS2_JSON_SIZE
local _js=${3:-$LUKS2_JSON_SIZE}
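# the secondary json area starts after: primary bin hdr + primary json area +
# secondary bin hdr, i.e. 2*LUKS2_BIN_HDR_SIZE + _js blocks of 512 bytes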
_dd if=$1 of=$2 bs=512 skip=$((2*LUKS2_BIN_HDR_SIZE+_js)) count=$_js
}
# read primary metadata area (bin + json)
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function read_luks2_hdr_area0()
{
_dd if=$1 of=$2 bs=512 count=$LUKS2_HDR_SIZE
local _as=${3:-$LUKS2_HDR_SIZE}
local _as=$((_as*512))
_dd if=$1 of=$2 bs=$_as count=1
}
# read secondary metadata area (bin + json)
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function read_luks2_hdr_area1()
{
_dd if=$1 of=$2 bs=512 skip=$LUKS2_HDR_SIZE count=$LUKS2_HDR_SIZE
local _as=${3:-$LUKS2_HDR_SIZE}
local _as=$((_as*512))
_dd if=$1 of=$2 bs=$_as skip=1 count=1
}
# write secondary bin hdr
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function write_luks2_bin_hdr1()
{
_dd if=$1 of=$2 bs=512 seek=$LUKS2_HDR_SIZE count=$LUKS2_BIN_HDR_SIZE conv=notrunc
_dd if=$1 of=$2 bs=512 seek=${3:-$LUKS2_HDR_SIZE} count=$LUKS2_BIN_HDR_SIZE conv=notrunc
}
# write primary metadata area (bin + json)
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function write_luks2_hdr0()
{
_dd if=$1 of=$2 bs=512 count=$LUKS2_HDR_SIZE conv=notrunc
local _as=${3:-$LUKS2_HDR_SIZE}
local _as=$((_as*512))
_dd if=$1 of=$2 bs=$_as count=1 conv=notrunc
}
# write secondary metadata area (bin + json)
# 1:from 2:to 3:[metadata size (defaults to 16KiB)]
function write_luks2_hdr1()
{
_dd if=$1 of=$2 bs=512 seek=$LUKS2_HDR_SIZE count=$LUKS2_HDR_SIZE conv=notrunc
local _as=${3:-$LUKS2_HDR_SIZE}
local _as=$((_as*512))
_dd if=$1 of=$2 bs=$_as seek=1 count=1 conv=notrunc
}
# 1 - json str
# write json (includes padding)
# 1:json_string 2:to 3:[json size (defaults to 12KiB)]
function write_luks2_json()
{
local _js=${3:-$LUKS2_JSON_SIZE}
local len=${#1}
printf '%s' "$1" | _dd of=$2 bs=1 count=$len conv=notrunc
_dd if=/dev/zero of=$2 bs=1 seek=$len count=$((LUKS2_JSON_SIZE*512-len))
_dd if=/dev/zero of=$2 bs=$((_js*512)) count=1
printf '%s' "$1" | _dd of=$2 bs=$len count=1 conv=notrunc
}
function kill_bin_hdr()
@@ -117,13 +156,14 @@ function calc_sha256_checksum_stdin()
sha256sum - | cut -d ' ' -f 1
}
# 1 - bin
# 2 - json
# 3 - luks2_hdr_area
# merge bin hdr with json to form metadata area
# 1:bin_hdr 2:json 3:to 4:[json size (defaults to 12KiB)]
function merge_bin_hdr_with_json()
{
_dd if=$1 of=$3 bs=512 count=$LUKS2_BIN_HDR_SIZE
_dd if=$2 of=$3 bs=512 seek=$LUKS2_BIN_HDR_SIZE count=$LUKS2_JSON_SIZE
local _js=${4:-$LUKS2_JSON_SIZE}
local _js=$((_js*512/4096))
_dd if=$1 of=$3 bs=4096 count=1
_dd if=$2 of=$3 bs=4096 seek=1 count=$_js
}
function _dd()
@@ -131,3 +171,11 @@ function _dd()
dd $@ 2>/dev/null
#dd $@
}
function write_bin_hdr_size() {
printf '%016x' $2 | xxd -r -p -l 16 | _dd of=$1 bs=8 count=1 seek=1 conv=notrunc
}
function write_bin_hdr_offset() {
printf '%016x' $2 | xxd -r -p -l 16 | _dd of=$1 bs=8 count=1 seek=32 conv=notrunc
}
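
Both helpers patch 64-bit big-endian fields of the LUKS2 binary header in place: with bs=8, seek=1 lands on byte offset 8 (the hdr_size field) and seek=32 on byte offset 256 (the hdr_offset field recording where this copy of the header lives on disk). A quick, optional way to eyeball a patched header (file name illustrative):

xxd -s 8 -l 8 $TMPDIR/hdr1     # hdr_size, big endian
xxd -s 256 -l 8 $TMPDIR/hdr1   # hdr_offset, big endian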

Binary file not shown.

@@ -8,9 +8,6 @@ CRYPTSETUP=../cryptsetup
CRYPTSETUP_VALGRIND=../.libs/cryptsetup
CRYPTSETUP_LIB_VALGRIND=../.libs
DM_CRYPT_SECTOR=512
LUKS2_HDR_SIZE=2112 # 16 KiB version, stored twice, including luks2 areas with keyslots
START_DIR=$(pwd)
IMG=luks2-backend.img
@@ -19,6 +16,8 @@ TST_IMGS=$START_DIR/luks2-images
GEN_DIR=generators
FAILS=0
[ -z "$srcdir" ] && srcdir="."
function remove_mapping()
@@ -35,6 +34,12 @@ function fail()
exit 2
}
function fail_count()
{
echo "$1"
FAILS=$((FAILS+1))
}
function skip()
{
[ -n "$1" ] && echo "$1"
@@ -61,23 +66,24 @@ function test_load()
case "$1" in
R)
if [ -n "$_debug" ]; then
$CRYPTSETUP luksDump $_debug $IMG || fail "$2"
$CRYPTSETUP luksDump $_debug $IMG
else
$CRYPTSETUP luksDump $_debug $IMG > /dev/null || fail "$2"
$CRYPTSETUP luksDump $_debug $IMG > /dev/null 2>&1
fi
test $? -eq 0 || return 1
;;
F)
if [ -n "$_debug" ]; then
$CRYPTSETUP luksDump $_debug $IMG && fail "$2"
$CRYPTSETUP luksDump $_debug $IMG
else
$CRYPTSETUP luksDump $_debug $IMG > /dev/null 2>&1 && fail "$2"
$CRYPTSETUP luksDump $_debug $IMG > /dev/null 2>&1
fi
test $? -ne 0 || return 1
;;
*)
fail "Internal test error"
;;
esac
}
function RUN()
@@ -85,7 +91,11 @@ function RUN()
echo -n "Test image: $1..."
cp $TST_IMGS/$1 $IMG || fail "Missing test image"
test_load $2 "$3"
echo "OK"
if [ $? -ne 0 ]; then
fail_count "$3"
else
echo "OK"
fi
}
function valgrind_setup()
@@ -158,11 +168,6 @@ RUN luks2-area-in-json-hdr-space-json0.img "F" "Failed to detect area referenci
RUN luks2-missing-keyslot-referenced-in-digest.img "F" "Failed to detect missing keyslot referenced in digest"
RUN luks2-missing-segment-referenced-in-digest.img "F" "Failed to detect missing segment referenced in digest"
RUN luks2-missing-keyslot-referenced-in-token.img "F" "Failed to detect missing keyslots referenced in token"
RUN luks2-invalid-keyslots-size-c0.img "F" "Failed to detect too large keyslots_size in config section"
RUN luks2-invalid-keyslots-size-c1.img "F" "Failed to detect unaligned keyslots_size in config section"
RUN luks2-invalid-keyslots-size-c2.img "F" "Failed to detect too small keyslots_size config section"
RUN luks2-invalid-json-size-c0.img "F" "Failed to detect invalid json_size config section"
RUN luks2-invalid-json-size-c1.img "F" "Failed to detect invalid json_size config section"
RUN luks2-keyslot-missing-digest.img "F" "Failed to detect missing keyslot digest."
RUN luks2-keyslot-too-many-digests.img "F" "Failed to detect keyslot has too many digests."
@@ -193,4 +198,34 @@ RUN luks2-segment-two.img "R" "Validation rejected two valid segments"
RUN luks2-segment-wrong-flags.img "F" "Failed to detect invalid flags field"
RUN luks2-segment-wrong-flags-element.img "F" "Failed to detect invalid flags content"
echo "[6] Test metadata size and keyslots size (config section)"
RUN luks2-invalid-keyslots-size-c0.img "F" "Failed to detect too large keyslots_size in config section"
RUN luks2-invalid-keyslots-size-c1.img "F" "Failed to detect unaligned keyslots_size in config section"
RUN luks2-invalid-keyslots-size-c2.img "F" "Failed to detect too small keyslots_size config section"
RUN luks2-invalid-json-size-c0.img "F" "Failed to detect invalid json_size config section"
RUN luks2-invalid-json-size-c1.img "F" "Failed to detect invalid json_size config section"
RUN luks2-invalid-json-size-c2.img "F" "Failed to detect mismatching json size in config and binary hdr"
RUN luks2-metadata-size-32k.img "R" "Valid 32KiB metadata size failed to validate"
RUN luks2-metadata-size-64k.img "R" "Valid 64KiB metadata size failed to validate"
RUN luks2-metadata-size-64k-inv-area-c0.img "F" "Failed to detect keyslot area trespassing in json area"
RUN luks2-metadata-size-64k-inv-area-c1.img "F" "Failed to detect keyslot area overflowing keyslots area"
RUN luks2-metadata-size-64k-inv-keyslots-size-c0.img "F" "Failed to detect keyslots size overflowing in data area"
RUN luks2-metadata-size-128k.img "R" "Valid 128KiB metadata size failed to validate"
RUN luks2-metadata-size-256k.img "R" "Valid 256KiB metadata size failed to validate"
RUN luks2-metadata-size-512k.img "R" "Valid 512KiB metadata size failed to validate"
RUN luks2-metadata-size-1m.img "R" "Valid 1MiB metadata size failed to validate"
RUN luks2-metadata-size-2m.img "R" "Valid 2MiB metadata size failed to validate"
RUN luks2-metadata-size-4m.img "R" "Valid 4MiB metadata size failed to validate"
RUN luks2-metadata-size-16k-secondary.img "R" "Valid 16KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-32k-secondary.img "R" "Valid 32KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-64k-secondary.img "R" "Valid 64KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-128k-secondary.img "R" "Valid 128KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-256k-secondary.img "R" "Valid 256KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-512k-secondary.img "R" "Valid 512KiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-1m-secondary.img "R" "Valid 1MiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-2m-secondary.img "R" "Valid 2MiB metadata size in secondary hdr failed to validate"
RUN luks2-metadata-size-4m-secondary.img "R" "Valid 4MiB metadata size in secondary hdr failed to validate"
remove_mapping
test $FAILS -eq 0 || fail "($FAILS wrong result(s) in total)"
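
A single failing case can also be reproduced by hand outside the harness; the generator and image names below are illustrative, following the test_img_name convention the generators use:

bash generators/generate-luks2-metadata-size-64k.img.sh <target-dir> <source-luks2-image>
./cryptsetup luksDump <target-dir>/luks2-metadata-size-64k.img

A zero exit code from luksDump corresponds to the "R" expectation passed to RUN, a non-zero one to "F".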

Binary file not shown.

Binary file not shown.

@@ -73,7 +73,7 @@ dmcrypt_check() # device outstring
dmremove $1
}
dmcrypt_check_sum() # cipher device outstring
dmcrypt_check_sum() # cipher device
{
EXPSUM="c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29"
# Fill device with zeroes and reopen it
@@ -99,28 +99,35 @@ dmcrypt()
{
OUT=$2
[ -z "$OUT" ] && OUT=$1
printf "%-25s" "$1"
printf "%-31s" "$1"
echo $PASSWORD | $CRYPTSETUP create -h sha256 -c $1 -s 256 "$DEV_NAME"_"$1" /dev/mapper/$DEV_NAME >/dev/null 2>&1
echo $PASSWORD | $CRYPTSETUP create -h sha256 -c $1 -s 256 "$DEV_NAME"_tstdev /dev/mapper/$DEV_NAME >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo -n -e "PLAIN:"
dmcrypt_check "$DEV_NAME"_"$1" $OUT
dmcrypt_check "$DEV_NAME"_tstdev $OUT
else
echo -n "[n/a]"
fi
echo $PASSWORD | $CRYPTSETUP luksFormat -i 1 -c $1 -s 256 /dev/mapper/$DEV_NAME >/dev/null 2>&1
echo $PASSWORD | $CRYPTSETUP luksFormat --type luks1 -i 1 -c $1 -s 256 /dev/mapper/$DEV_NAME >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo -n -e " LUKS:"
echo $PASSWORD | $CRYPTSETUP luksOpen /dev/mapper/$DEV_NAME "$DEV_NAME"_"$1" >/dev/null 2>&1
dmcrypt_check "$DEV_NAME"_"$1" $OUT
echo -n -e " LUKS1:"
echo $PASSWORD | $CRYPTSETUP luksOpen /dev/mapper/$DEV_NAME "$DEV_NAME"_tstdev >/dev/null 2>&1
dmcrypt_check "$DEV_NAME"_tstdev $OUT
fi
echo $PASSWORD | $CRYPTSETUP luksFormat --type luks2 --pbkdf pbkdf2 -i 1 -c $1 -s 256 /dev/mapper/$DEV_NAME >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo -n -e " LUKS2:"
echo $PASSWORD | $CRYPTSETUP luksOpen /dev/mapper/$DEV_NAME "$DEV_NAME"_tstdev >/dev/null 2>&1
dmcrypt_check "$DEV_NAME"_tstdev $OUT
fi
# repeated device creation must return the same checksum
echo $PASSWORD | $CRYPTSETUP create -h sha256 -c $1 -s 256 "$DEV_NAME"_"$1" /dev/mapper/$DEV_NAME >/dev/null 2>&1
echo $PASSWORD | $CRYPTSETUP create -h sha256 -c $1 -s 256 "$DEV_NAME"_tstdev /dev/mapper/$DEV_NAME >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo -n -e " CHECKSUM:"
dmcrypt_check_sum "$1" "$DEV_NAME"_"$1"
dmcrypt_check_sum "$1" "$DEV_NAME"_tstdev
fi
echo
}
@@ -154,4 +161,7 @@ for cipher in $CIPHERS ; do
done
done
dmcrypt xchacha12,aes-adiantum-plain64
dmcrypt xchacha20,aes-adiantum-plain64
cleanup
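
The two added rows exercise the Adiantum constructions (XChaCha12 or XChaCha20 combined with AES), available in the Linux kernel crypto API since version 5.0; on older kernels the corresponding mappings simply cannot be created and the columns are skipped or reported as [n/a]. For reference, the LUKS2 column for one of them boils down to:

echo $PASSWORD | $CRYPTSETUP luksFormat --type luks2 --pbkdf pbkdf2 -i 1 \
        -c xchacha20,aes-adiantum-plain64 -s 256 /dev/mapper/$DEV_NAME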