Compare commits

..

67 Commits

Author SHA1 Message Date
Milan Broz
25e185f6f5 Set 1.7.3 version. 2016-10-28 12:18:22 +02:00
Milan Broz
db09bc58fc Update 1.7.3 Release notes. 2016-10-28 12:11:40 +02:00
Milan Broz
0061ce298a Verify passphrase in cryptsetup-reencrypt when encrypting new drive. 2016-10-28 12:07:35 +02:00
Milan Broz
c8da0a76aa Fix keylength = 0 (no key) case. 2016-10-28 11:55:20 +02:00
Milan Broz
7dbb47f76a Fix crypt_generate_volume_key to use size_t for keylength. 2016-10-28 11:54:58 +02:00
Tobias Stoeckmann
d68d981f36 Avoid integer overflows during memory allocation.
It is possible to overflow integers during memory allocation with
insanely large "key bytes" values specified in a LUKS header.

Although it could be argued that LUKS headers should be properly
validated while they are parsed, it is still a good idea to prevent any
form of overflow attack against cryptsetup in these allocation functions.
2016-10-28 11:54:18 +02:00
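
A minimal C sketch of the kind of check such a fix introduces; the limit, names and layout below are illustrative, not the actual cryptsetup code:

    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_KEY_BYTES (1024 * 1024)   /* hypothetical sanity limit */

    /* Allocate a buffer for a key length read from an untrusted header,
     * rejecting values that would overflow the size arithmetic. */
    static void *alloc_key_buffer(uint64_t key_bytes, size_t extra)
    {
        if (key_bytes == 0 || key_bytes > MAX_KEY_BYTES)
            return NULL;                       /* absurd header value */
        if ((size_t)key_bytes > SIZE_MAX - extra)
            return NULL;                       /* addition would overflow */
        return malloc((size_t)key_bytes + extra);
    }
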
Tobias Stoeckmann
f65dbd5a07 Avoid buffer overflow in uuid_or_device.
The function uuid_or_device is prone to a buffer overflow if a very long
spec has been defined. The range check happens against PATH_MAX, with
i being set to 5 (due to the "UUID=" offset of spec), but "/dev/disk/by-uuid"
has already been written into device.

The difference between "/dev/disk/by-uuid" and "UUID=" is 13; therefore
the correct range check must happen against PATH_MAX - 13.
2016-10-28 11:52:54 +02:00
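
As an illustration of that bound, here is a simplified C sketch of such a prefix-rewriting helper with the corrected check; the function name and structure are assumptions, not the actual cryptsetup source, and the extra byte accounts for the terminating NUL:

    #include <limits.h>
    #include <string.h>

    static const char *uuid_to_path(const char *spec)
    {
        static char device[PATH_MAX];
        size_t i, j;

        if (!spec || strncmp(spec, "UUID=", 5))
            return spec;                    /* not a UUID= spec */

        strcpy(device, "/dev/disk/by-uuid/");   /* 18 bytes, 13 more than "UUID=" */
        j = strlen(device);

        /* stop 13 bytes (plus NUL) short of PATH_MAX */
        for (i = 5; spec[i] && i < PATH_MAX - 14; i++)
            device[j++] = spec[i];
        device[j] = '\0';

        return device;
    }
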
Milan Broz
2c7c527990 Add 1.7.3 Release Notes. 2016-10-28 11:18:33 +02:00
Milan Broz
3cf86ec1be Update po files. 2016-10-28 11:00:00 +02:00
Eduardo Villanueva Che
274c417e56 Fixed veritysetup bug with hash offsets bigger than 2 GB.
The lseek() call in write_blockwise() can return a value larger than an
int can hold, so storing the result in an int can overflow and fail the
whole write.
[comment added by mbroz]
2016-10-22 09:34:02 +02:00
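
A minimal sketch of this class of fix, assuming large-file support: keep the lseek() result in off_t instead of int so offsets above 2 GB are not truncated (the helper name is illustrative):

    #include <sys/types.h>
    #include <unistd.h>

    static int seek_to_offset(int fd, off_t offset)
    {
        off_t pos = lseek(fd, offset, SEEK_SET); /* keep the full 64-bit result */

        return (pos < 0 || pos != offset) ? -1 : 0;
    }
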
Jonas Meurer
337b20a4ed Fix several minor spelling errors found by Lintian
* lib/setup.c: miliseconds -> milliseconds
* lib/utils_wipe.c: Unsuported -> Unsupported
* man/crypsetup.8: implicitely -> implicitly
* man/veritysetup.8: verion -> version
* python/pycryptsetup.c: miliseconds -> milliseconds
2016-10-22 09:33:30 +02:00
Milan Broz
35ab06c61c Set configured default iteration time early in crypt_init constructor. 2016-10-20 14:41:15 +02:00
Milan Broz
3e5e9eb620 Rephrase UUID error message for cryptsetup-reencrypt. 2016-10-20 14:36:58 +02:00
Milan Broz
e856bc37bb Fix error path after conversion to OpenSSL 1.1.0. 2016-10-20 08:26:56 +02:00
Milan Broz
f594435298 Support OpenSSL 1.1.0 in cryptsetup backend. 2016-10-20 08:26:45 +02:00
Milan Broz
a1fb77b8b3 Fix Nettle crypto backend definitions. 2016-10-19 21:17:03 +02:00
Milan Broz
8e3d5bbd70 Try to find python$VERSION-config. 2016-10-19 12:47:53 +02:00
Per x Johansson
443a8b806f Fix memory leak when using the OpenSSL backend
Fixes a memory leak in the OpenSSL backend caused by mismatched
calls to EVP_DigestInit and EVP_DigestFinal_ex.
2016-10-18 14:44:51 +02:00
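
For context, a self-contained sketch of balanced OpenSSL EVP digest usage (OpenSSL 1.0.x names; this is not the cryptsetup backend itself): every context that is created and initialized is also finalized and destroyed, so no digest state leaks:

    #include <stddef.h>
    #include <openssl/evp.h>

    static int sha256_once(const void *buf, size_t len, unsigned char out[32])
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();
        unsigned int out_len = 0;
        int ok = 0;

        if (!ctx)
            return -1;

        if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) == 1 &&
            EVP_DigestUpdate(ctx, buf, len) == 1 &&
            EVP_DigestFinal_ex(ctx, out, &out_len) == 1)
            ok = 1;

        EVP_MD_CTX_destroy(ctx);        /* always released, even on error */
        return ok ? 0 : -1;
    }
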
Milan Broz
2fc8b6a306 Fix PBKDF2 benchmark to not double iteration count for a corner case.
If the measurement function returns exactly 500 ms, the iteration
calculation loop doubles the iteration count, but instead of repeating
the measurement it uses this value directly.

Thanks to Ondrej Mosnacek for the bug report.
2016-06-23 09:53:22 +02:00
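
A sketch of the intended loop structure (illustrative, not the real cryptsetup benchmark code): the iteration count is only doubled when the run was faster than the 500 ms target, and a doubled count is always re-measured before it is used:

    /* measure_ms() is an assumed callback returning how long the given
     * number of PBKDF2 iterations took, in milliseconds. */
    static unsigned long benchmark_iterations(unsigned long (*measure_ms)(unsigned long))
    {
        unsigned long iters = 1000, ms;

        for (;;) {
            ms = measure_ms(iters);
            if (ms >= 500)          /* target reached, including exactly 500 ms */
                break;
            iters *= 2;             /* too fast: double and measure again */
        }
        return iters;               /* always corresponds to a real measurement */
    }
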
Milan Broz
94f4f6b1b6 Force test to read the device to detect corrupted blocks.
(If udev scanning is switched off, there is no real activity on the device yet.)
2016-06-23 09:53:14 +02:00
Milan Broz
af1ce99a6f Update Readme.md. 2016-06-04 14:21:04 +02:00
Milan Broz
602d7f0bb0 Workaround for align test for scsi_debug kernel in-use issue. 2016-06-04 13:13:33 +02:00
Milan Broz
53c4fbac2d Fix possible leak if reencryption is interrupted. 2016-06-04 13:13:24 +02:00
Milan Broz
acc846ceba Revert soname change. 2016-06-04 13:13:15 +02:00
Milan Broz
89bce3d21b Prepare version 1.7.2.
Bump libcryptsetup version (new defines, all backward compatible).
2016-06-04 11:40:44 +02:00
Milan Broz
1de98c12a6 Add 1.7.2 Release notes. 2016-06-04 11:37:11 +02:00
Milan Broz
4d62ef49de Update po files. 2016-06-02 19:18:46 +02:00
Milan Broz
de14f78e25 Update po files. 2016-05-25 15:16:54 +02:00
Milan Broz
a2d33996f4 Fix error message. 2016-05-25 15:16:08 +02:00
Milan Broz
d59d935308 Update po files. 2016-05-19 13:12:41 +02:00
Milan Broz
7c62c82c8f Fix help text for cipher benchmark specification. 2016-05-19 12:59:46 +02:00
Ondrej Kozina
664f48e29d keymanage: eliminate double close() call
Fix double close() cases in the LUKS_hdr_backup() and LUKS_hdr_restore()
functions. It should be harmless unless libcryptsetup is used in a
multi-threaded setup, which is not supported anyway.
2016-05-19 12:59:33 +02:00
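
A minimal sketch of the usual defensive pattern (not the actual keymanage code): invalidate the descriptor after closing it so a second cleanup pass cannot close a file descriptor number that another thread may have reused:

    #include <unistd.h>

    static void close_once(int *fd)
    {
        if (*fd >= 0) {
            close(*fd);
            *fd = -1;       /* later calls become no-ops */
        }
    }
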
Milan Broz
96896efed4 Add ABI tracker output link. 2016-05-19 12:59:17 +02:00
Milan Broz
bdf16abc53 Update LUKS doc format.
Clarify fixed sector size and keyslots alignment.
2016-05-19 12:58:56 +02:00
Milan Broz
8030bd0593 Support activation options for error handling modes in dm-verity.
This patch adds veritysetup support for these Linux kernel dm-verity options:

  --ignore-corruption - dm-verity just logs detected corruption
  --restart-on-corruption - dm-verity restarts the kernel if corruption is detected

  If the options above are not specified, the default dm-verity behaviour remains:
  the I/O operation fails with an I/O error if a corrupted block is detected.

  --ignore-zero-blocks - instructs dm-verity not to verify blocks that are expected
   to contain zeroes and to always return zeroes directly instead.

NOTE that these options can have serious security or functional impacts;
do not use them without assessing the risks!
2016-05-19 12:58:39 +02:00
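
For applications using libcryptsetup directly, the same behaviour is selected with activation flags. A hedged sketch, assuming a crypt_device already initialized and loaded as a CRYPT_VERITY device (error handling omitted):

    #include <libcryptsetup.h>

    static int activate_verity(struct crypt_device *cd, const char *name,
                               const char *root_hash, size_t root_hash_size)
    {
        /* default (no flags): I/O fails with an I/O error on corruption */
        uint32_t flags = CRYPT_ACTIVATE_IGNORE_CORRUPTION |   /* log only */
                         CRYPT_ACTIVATE_IGNORE_ZERO_BLOCKS;   /* skip zero blocks */

        return crypt_activate_by_volume_key(cd, name, root_hash,
                                            root_hash_size, flags);
    }
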
Milan Broz
a89e6e6e89 Fix dm-verity test typo. 2016-05-19 12:58:06 +02:00
Ondrej Kozina
a5ed08f2d4 dracut_90reencrypt: fix warnings reported by static analysis
- moddir is assigned in the parent script run by dracut (warning was
  silenced)

- fix a defect with respect to assigning a variable and making it local
  on the same line. The variable cwd was first assigned by a subshell,
  and any error originating in the subshell was later masked by
  making the variable local (which always returns 'true')
2016-05-19 12:57:53 +02:00
Milan Broz
f92786a044 Avoid possible divide-by-zero warnings. 2016-05-19 12:57:31 +02:00
Milan Broz
b282cb2366 Fix warnings reported by static analysis.
- ensure that strings are \0 terminated (most of this is already
handled at a higher level anyway)

- fix a resource leak in an error path in tcrypt.c

- fix a time-of-check/time-of-use race in sysfs path processing

- instruct the Coverity scanner to ignore a constant expression in random.c
(it is intended to stop compile-time misconfiguration of the RNG that would be fatal)
2016-05-19 12:56:51 +02:00
Milan Broz
883bde3f1b Avoid tar archive warnings if tests are run as superuser. 2016-05-19 12:56:16 +02:00
Milan Broz
e969eba2bb Include sys/sysmacros.h if present.
Needed for major/minor definitions.

Thanks Mike Frysinger for pointing this out.
2016-05-19 12:55:54 +02:00
Milan Broz
3c3756fbd7 Link reencryption utility to uuid library.
(Fixes last patch.)
2016-05-19 12:55:36 +02:00
VittGam
b8359b3652 Fix off-by-one error in maximum keyfile size.
Allow keyfiles up to DEFAULT_KEYFILE_SIZE_MAXKB * 1024 bytes in size, and not that value minus one.

Signed-off-by: Vittorio Gambaletta <git-cryptsetup@vittgam.net>
2016-05-19 12:54:57 +02:00
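
The corrected bound, as a trivial sketch (the constant value shown here is only illustrative):

    #define DEFAULT_KEYFILE_SIZE_MAXKB 8192   /* illustrative value */

    static int keyfile_size_ok(long long size_bytes)
    {
        /* inclusive: a file of exactly MAXKB * 1024 bytes is accepted */
        return size_bytes >= 0 &&
               size_bytes <= (long long)DEFAULT_KEYFILE_SIZE_MAXKB * 1024;
    }
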
Ondrej Kozina
75eaac3fef cryptsetup-reencrypt: enable resume of decryption
To enable resume of an interrupted decryption, the user has to pass
the UUID of the former LUKS device. That UUID is used to resume the
operation if the temporary LUKS-* files still exist.
2016-05-19 12:54:39 +02:00
Milan Broz
d70e2ba18d Update po files. 2016-05-19 12:54:20 +02:00
Arno Wagner
3a27ce636a sync to WIKI version 2016-05-19 12:53:49 +02:00
Milan Broz
0a951da27f Disable DIRECT_IO for LUKS header with unaligned keyslots.
Fixes issue #287.

Such a header is very rare; it is not worth doing more detection here.
2016-05-19 12:53:23 +02:00
Athira Rajeev
be6ab40fb9 Fix device_block_size_fd to return bsize correctly in case of files.
This patch is for issue #287.

The code for returning the block size (device_block_size_fd in lib/utils_device.c)
always returns zero in case of files, and device_read_test is not executed.

This patch fixes device_block_size_fd to return the block size correctly in case of files.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
2016-05-19 12:52:57 +02:00
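
One plausible shape of such a fix, as a hedged sketch (Linux-specific; not the exact upstream code): the BLKSSZGET ioctl only works for block devices, so regular files need their own branch instead of silently yielding zero:

    #include <sys/ioctl.h>
    #include <sys/stat.h>
    #include <linux/fs.h>

    static int block_size_fd(int fd)
    {
        struct stat st;
        int bsize = 0;

        if (fstat(fd, &st) < 0)
            return 0;

        if (S_ISREG(st.st_mode))
            return 4096;            /* page-size style fallback for files */

        if (ioctl(fd, BLKSSZGET, &bsize) < 0)
            bsize = 512;            /* conservative default */
        return bsize;
    }
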
Milan Broz
29ecd515ac Update README for 1.7.1. 2016-05-19 12:52:20 +02:00
Milan Broz
0c7ce6215b Set devel version. 2016-02-28 14:46:13 +01:00
Milan Broz
ddd587d78d Prepare version 1.7.1. 2016-02-28 13:40:11 +01:00
Milan Broz
e6ef5bb698 Add 1.7.1 release notes. 2016-02-28 13:39:13 +01:00
Milan Broz
b4cf5e2dab Fix align test for new scsi_debug defaults. 2016-02-28 11:14:09 +01:00
Ondrej Kozina
a1683189da cryptsetup-reencrypt: harden checks for hdr backups removal
There are various situations where the hdr backups, together with the log
file, may get removed even when the hdr was already marked unusable. This
patch fixes the most severe case already reported and generally tries
harder to protect the log file and both hdr backups.
2016-02-28 11:13:10 +01:00
Ondrej Kozina
a0fc06280e cryptsetup-reencrypt: drop unreachable code path
MAKE_USABLE flag is never used in device_check()
2016-02-28 09:45:48 +01:00
Milan Broz
830edb22cf Update po files. 2016-02-28 09:45:31 +01:00
Milan Broz
26bf547bbc Update po files. 2016-02-23 17:42:33 +01:00
Ondrej Kozina
cec31efee2 Clarify the reencrypt_keyslot= option 2016-02-21 18:58:06 +01:00
Milan Broz
4ad075e928 Fix kernel crypto backend to set key before accept call even for HMAC. 2016-02-21 18:57:49 +01:00
Milan Broz
10a6318b1f Fix cipher_null key setting in kernel crypto backend. 2016-02-21 18:57:15 +01:00
Ondrej Kozina
18528edc31 Fix hang in low-level device-mapper code.
udev cookies should be set right before the dm_task_run()
call; otherwise we risk a hang while waiting for a cookie
associated with a dm task that was never executed.

For example: failing to add a table line (dm_task_add_target())
results in such a hang.
2016-02-21 18:57:06 +01:00
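
A hedged libdevmapper sketch of the ordering this fix enforces (simplified; not the cryptsetup device-mapper layer itself): the udev cookie is attached only after every step that can fail, immediately before dm_task_run(), so dm_udev_wait() never blocks on a cookie for a task that was never submitted:

    #include <stdint.h>
    #include <libdevmapper.h>

    static int create_mapping(const char *name, uint64_t size,
                              const char *target, const char *params)
    {
        struct dm_task *dmt;
        uint32_t cookie = 0;
        int r = 0;

        if (!(dmt = dm_task_create(DM_DEVICE_CREATE)))
            return 0;

        if (!dm_task_set_name(dmt, name) ||
            !dm_task_add_target(dmt, 0, size, target, params))
            goto out;                       /* no cookie set yet, safe to bail */

        if (!dm_task_set_cookie(dmt, &cookie, 0))
            goto out;

        r = dm_task_run(dmt);
        (void) dm_udev_wait(cookie);        /* cookie was really submitted */
    out:
        dm_task_destroy(dmt);
        return r;
    }
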
Milan Broz
2b91d7c385 Set skcipher key before accept() call in kernel crypto backend.
Also relax input errno checking to catch all errors.
2016-02-21 18:56:50 +01:00
Loui Chang
8d7235b9a9 Update version control history url
Signed-off-by: Loui Chang <louipc.ist@gmail.com>
2016-02-21 18:56:20 +01:00
Loui Chang
02295bed47 Man page typo
Signed-off-by: Loui Chang <louipc.ist@gmail.com>
2016-02-21 18:56:05 +01:00
Milan Broz
0657956351 Update sr.po. 2016-02-21 18:54:29 +01:00
Milan Broz
9f50fd2980 Allow special "-" (standard input) keyfile handling even for TCRYPT devices.
Fail if more than one keyfile is specified for a non-TCRYPT device.

Fixes issue #269.
2016-01-01 19:19:44 +01:00
Milan Broz
e32376acf1 Fix luksKillSlot to not suppress a provided password in batch mode.
Batch mode should enable no-query keyslot wipe, but only if the user
did not provide a password or keyfile explicitly.

Fixes issue #265.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
2015-11-22 12:57:07 +01:00
320 changed files with 18629 additions and 77607 deletions

56
.gitignore

@@ -1,56 +0,0 @@
po/*gmo
*~
Makefile
Makefile.in
Makefile.in.in
*.lo
*.la
*.o
**/*.dirstamp
.deps/
.libs/
src/cryptsetup
src/veritysetup
ABOUT-NLS
aclocal.m4
autom4te.cache/
compile
config.guess
config.h
config.h.in
config.log
config.rpath
config.status
config.sub
configure
cryptsetup
cryptsetup-reencrypt
depcomp
install-sh
integritysetup
lib/libcryptsetup.pc
libtool
ltmain.sh
m4/
missing
po/Makevars.template
po/POTFILES
po/Rules-quot
po/*.pot
po/*.header
po/*.sed
po/*.sin
po/stamp-po
scripts/cryptsetup.conf
stamp-h1
veritysetup
tests/valglog.*
*/*.dirstamp
*-debug-luks2-backup*
tests/api-test
tests/api-test-2
tests/differ
tests/luks1-images
tests/tcrypt-images
tests/unit-utils-io
tests/vectors-test

.travis-functions.sh

@@ -1,160 +0,0 @@
#!/bin/bash
#
# .travis-functions.sh:
# - helper functions to be sourced from .travis.yml
# - designed to respect travis' environment but testing locally is possible
# - modified copy from util-linux project
#
if [ ! -f "configure.ac" ]; then
echo ".travis-functions.sh must be sourced from source dir" >&2
return 1 || exit 1
fi
## some config settings
# travis docs say we get 1.5 CPUs
MAKE="make -j2"
DUMP_CONFIG_LOG="short"
export TS_OPT_parsable="yes"
function configure_travis
{
./configure "$@"
err=$?
if [ "$DUMP_CONFIG_LOG" = "short" ]; then
grep -B1 -A10000 "^## Output variables" config.log | grep -v "_FALSE="
elif [ "$DUMP_CONFIG_LOG" = "full" ]; then
cat config.log
fi
return $err
}
function check_nonroot
{
local cfg_opts="$1"
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
|| return
$MAKE || return
make check
}
function check_root
{
local cfg_opts="$1"
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
|| return
$MAKE || return
# FIXME: we should use -E option here
sudo make check
}
function check_nonroot_compile_only
{
local cfg_opts="$1"
[ -z "$cfg_opts" ] && return
configure_travis \
--enable-cryptsetup-reencrypt \
--enable-internal-sse-argon2 \
"$cfg_opts" \
|| return
$MAKE
}
function travis_install_script
{
# install some packages from Ubuntu's default sources
sudo apt-get -qq update
sudo apt-get install -qq >/dev/null \
sharutils \
libgcrypt20-dev \
libssl-dev \
libdevmapper-dev \
libpopt-dev \
uuid-dev \
libsepol1-dev \
libtool \
dmsetup \
autoconf \
automake \
pkg-config \
autopoint \
gettext \
expect \
keyutils \
libjson-c-dev \
libblkid-dev \
|| return
}
function travis_before_script
{
set -o xtrace
./autogen.sh
ret=$?
set +o xtrace
return $ret
}
function travis_script
{
local ret
set -o xtrace
case "$MAKE_CHECK" in
gcrypt)
check_nonroot "--with-crypto_backend=gcrypt" && \
check_root "--with-crypto_backend=gcrypt"
;;
gcrypt_compile)
check_nonroot_compile_only "--with-crypto_backend=gcrypt"
;;
openssl)
check_nonroot "--with-crypto_backend=openssl" && \
check_root "--with-crypto_backend=openssl"
;;
openssl_compile)
check_nonroot_compile_only "--with-crypto_backend=openssl"
;;
kernel)
check_nonroot "--with-crypto_backend=kernel" && \
check_root "--with-crypto_backend=kernel"
;;
kernel_compile)
check_nonroot_compile_only "--with-crypto_backend=kernel"
;;
*)
echo "error, check environment (travis.yml)" >&2
false
;;
esac
ret=$?
set +o xtrace
return $ret
}
function travis_after_script
{
return 0
}

.travis.yml

@@ -1,40 +0,0 @@
language: c
sudo: required
dist: trusty
compiler:
- gcc
env:
- MAKE_CHECK="gcrypt"
- MAKE_CHECK="openssl"
- MAKE_CHECK="kernel"
branches:
only:
- master
- wip-luks2
- v2_0_x
before_install:
- uname -a
- $CC --version
- which $CC
# workaround clang not system wide, fail on sudo make install
- export CC=`which $CC`
# workaround travis-ci issue #5301
- unset PYTHON_CFLAGS
install:
- source ./.travis-functions.sh
- travis_install_script
before_script:
- travis_before_script
script:
- travis_script
after_script:
- travis_after_script

1111
ABOUT-NLS (new file)

File diff suppressed because it is too large

214
FAQ

@@ -9,8 +9,7 @@ Sections
6. Backup and Data Recovery
7. Interoperability with other Disk Encryption Tools
8. Issues with Specific Versions of cryptsetup
9. The Initrd question
10. References and Further Reading
9. References and Further Reading
A. Contributors
1. General Questions
@@ -128,7 +127,7 @@ A. Contributors
recommended to not install Ubuntu on a system with existing LUKS
containers without complete backups.
Update 11/2014: There seem to be other problems with existing LUKS
Update 11/2014: There seem to be other problems withe existing LUKS
containers and Ubuntu as well, be extra careful when using LUKS
on Ubuntu in any way, but exactly as the Ubuntu installer does.
@@ -209,7 +208,7 @@ A. Contributors
to send it from your list address.
The mailing list archive is here:
https://marc.info/?l=dm-crypt
http://dir.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt
* 1.8 Unsubscribe from the mailing-list
@@ -311,7 +310,7 @@ A. Contributors
and adjust the iteration count accordingly by creating the container
again with a different iteration time (the number after '-i' is the
iteration time in milliseconds) until your requirements are met.
iteration time in milicesonds) until your requirements are met.
05) Map the container. Here it will be mapped to /dev/mapper/c1:
@@ -976,7 +975,7 @@ A. Contributors
In order to find out whether a key-slot is damaged one has to look
for "non-random looking" data in it. There is a tool that
automates this in the cryptsetup distribution from version 1.6.0
automatizes this in the cryptsetup distribution from version 1.6.0
onwards. It is located in misc/keyslot_checker/. Instructions how
to use and how to interpret results are in the README file. Note
that this tool requires a libcryptsetup from cryptsetup 1.6.0 or
@@ -1607,33 +1606,11 @@ A. Contributors
* 5.18 What about Plausible Deniability?
First let me attempt a definition for the case of encrypted
filesystems: Plausible deniability is when you store data
filesystems: Plausible deniability is when you hide encrypted data
inside an encrypted container and it is not possible to prove it is
there without having a special passphrase. And at the same time
it must be "plausible" that there actually is no hidden data there.
As a simple entropy-analysis will show that here may be data there,
the second part is what makes it tricky.
There seem to be a lot of misunderstandings what that
means, so let me make clear that this refers to the situation where
the attackers can prove that there is data that may be random or
may be part of a plausible-deniability scheme, they just cannot
prove which one it is. Hence a plausible-deniability
scheme must hold up when the attackers know there is
something potentially fishy. If you just hide data and rely on
it not being found, that is just simple deniability, not "plausible"
deniability and I am not talking about that in the following.
Simple deniability against a low-competence attacker may
be as simple as renaming a file or putting data into an unused
part of a disk. Simple deniability against a high-skill attacker
with time to invest is usually pointless though unless you go
for advanced steganographic techniques, which have their own
drawbacks, such as low data capacity.
Now, the idea of plausible deniability is compelling and on first
glance it seems possible to do it. And from a cryptographic point
of view, it actually is possible.
there. The idea is compelling and on first glance it seems possible
to do it. And from a cryptographic point of view, it actually is
possible.
So, does it work in practice? No, unfortunately. The reasoning used
by its proponents is fundamentally flawed in several ways and the
@@ -1645,7 +1622,7 @@ A. Contributors
with random data, nothing in there"? I do not see any reason.
Second, there are two types of situations: Either they cannot force
you to give them the key (then you simply do not) or they can. In the
you to give them the key (then you simply do not) or the can. In the
second case, they can always do bad things to you, because they
cannot prove that you have the key in the first place! This means
they do not have to prove you have the key, or that this random
@@ -1676,8 +1653,8 @@ A. Contributors
places were they can demand encryption keys.
Here is an additional reference for some problems with plausible
deniability: http://www.schneier.com/paper-truecrypt-dfs.pdf
I strongly suggest you read it.
deniability: http://www.schneier.com/paper-truecrypt-dfs.pdf I
strongly suggest you read it.
So, no, I will not provide any instructions on how to do it with
plain dm-crypt or LUKS. If you insist on shooting yourself in the
@@ -2475,7 +2452,7 @@ offset length name data type description
More details:
Cipher, mode and password hash (or no hash):
Cipher, mode and pasword hash (or no hash):
-e cipher [-N] => -c cipher-cbc-plain -H plain [-s 256]
-e cipher => -c cipher-cbc-plain -H ripemd160 [-s 256]
@@ -2583,170 +2560,7 @@ offset length name data type description
bug was discovered.
9. The Initrd question
* 9.1 My initrd is broken with cryptsetup or does now work as I want it to
That is not nice! However the initrd is supplied by your distribution, not by
the cryptsetup project and hence you should complain to them. We cannot
really do anything about it.
* 9.2 CVE-2016-4484 says cryptsetup is broken!
Not really. It says the initrd in some Debian versions have a behavior that
under some very special and unusual conditions may be considered
a vulnerability. Incidentally, at this time (1-Jan-17) CVE-2016-4484 still says
absolutely nothing, which means that the reporters could not be bothered
do actually describe the problem so far and hence it cannot be that bad.
If it were, you would expect that they would have a CVE description in
there more than 30 days (!) after reporting the problem to the press.
What happens is that you can trick the initrd to go to a rescue-shell
if you enter the LUKS password wrongly in a specific way. But falling
back to a rescue shell on initrd errors is a sensible default behavior
in the first place. It gives you about as much access as booting
a rescue system from CD or USB-Stick or as removing the disk would
give you. So this only applies when an attacker has physical access,
but cannot boot anything else or remove the disk. These will be rare
circumstances indeed, and if you rely on the default distribution
initrd to keep you safe under these circumstances, than you have
bigger problems than this somewhat expected behavior.
My take is this was much more driven by some big egos that wanted
to make a splash for self-aggrandizement, than by any actual
security concerns. Ignore it.
* 9.3 How do I do my own initrd with cryptsetup?
It depends on the distribution. Below, I give a very simple example
and step-by-step instructions for Debian. With a bit of work, it
should be possible to adapt this to other distributions. Note that
the description is pretty general, so if you want to do other things
with an initrd it provides an useful starting point for that too.
01) Unpacking an existing initrd to use as template
A Linux initrd is in gzip'ed cpio format. To unpack it, use something
like this:
md tmp; cd tmp; cat ../initrd | gunzip | cpio -id
After this, you have the full initrd content in tmp/
02) Inspecting the init-script
The init-script is the only thing the kernel cares about. All activity
starts there. Its traditional location is /sbin/init on disk, but /init
in an initrd. In an initrd unpacked as above it is tmp/init.
While init can be a binary despite usually being called "init script",
in Debian the main init on the root partition is a binary, but the
init in the initrd (and only that one is called by the kernel) is a script
and starts like this:
#!/bin/sh
....
The "sh" used here is in tmp/bin/sh as just unpacked, and in
Debian it currently is a busybox.
03) Creating your own initrd
The two examples below should give you most of what is needed.
Here is a really minimal example. It does nothing but set up some
things and then drop to an interactive shell. It is perfect to try
out things that you want to go into the init-script.
#!/bin/sh
export PATH=/sbin:/bin
[ -d /sys ] || mkdir /sys
[ -d /proc ] || mkdir /proc
[ -d /tmp ] || mkdir /tmp
mount -t sysfs -o nodev,noexec,nosuid sysfs /sys
mount -t proc -o nodev,noexec,nosuid proc /proc
echo "initrd is running, starting BusyBox..."
exec /bin/sh --login
Here is an example that opens the first LUKS-partition it
finds with the hard-coded password "test2" and then
mounts it as root-filesystem. This is intended to be
used on an USB-stick that after boot goes into a safe,
as it contains the LUKS-passphrase in plain text and is
not secure to be left in the system. The script contains
debug-output that should make it easier to see what
is going on. Note that the final hand-over to the
init on the encrypted root-partition is done
by "exec switch_root /mnt/root /sbin/init", after
mounting the decrypted LUKS container
with "mount /dev/mapper/c1 /mnt/root".
The second argument of switch_root is relative to the
first argument, i.e. the init started with this command
is really /mnt/sbin/init before switch_root runs.
#!/bin/sh
export PATH=/sbin:/bin
[ -d /sys ] || mkdir /sys
[ -d /proc ] || mkdir /proc
[ -d /tmp ] || mkdir /tmp
mount -t sysfs -o nodev,noexec,nosuid sysfs /sys
mount -t proc -o nodev,noexec,nosuid proc /proc
echo "detecting LUKS containers in sda1-10, sdb1-10"; sleep 1
for i in a b
do
for j in 1 2 3 4 5 6 7 8 9 10
do
sleep 0.5
d="/dev/sd"$i""$j
echo -n $d
cryptsetup isLuks $d >/dev/null 2>&1
r=$?
echo -n " result: "$r""
# 0 = is LUKS, 1 = is not LUKS, 4 = other error
if expr $r = 0 > /dev/null
then
echo " is LUKS, attempting unlock"
echo -n "test2" | cryptsetup luksOpen --key-file=- $d c1
r=$?
echo " result of unlock attempt: "$r""
sleep 2
if expr $r = 0 > /dev/null
then
echo "*** LUKS partition unlocked, switching root *** (waiting 30 seconds before doing that)"
mount /dev/mapper/c1 /mnt/root
sleep 30
exec switch_root /mnt/root /sbin/init
fi
else
echo " is not LUKS"
fi
done
done
echo "FAIL finding root on LUKS, loading BusyBox..."; sleep 5
exec /bin/sh --login
04) What if I want a binary in the initrd, but libraries are missing?
That is a bit tricky. One option is to compile statically, but that
does not work for everything. Debian puts some libraries into
lib/ and lib64/ which are usually enough. If you need more, you
can add the libraries you need there. That may or may not need a
configuration change for the dynamic linker "ld" as well.
Refer to standard Linux documentation
on how to add a library to a Linux system. A running initrd is
just a running Linux system after all, it is not special in any way.
05) How do I repack the initrd?
Simply repack the changed directory. While in tmp/, do
the following:
find . | cpio --create --format='newc' | gzip > ../new_initrd
Rename "new_initrd" to however you want it called (the name of
the initrd is a kernel-parameter) and move to /boot. That is it.
10. References and Further Reading
9. References and Further Reading
* Purpose of this Section

INSTALL

@@ -44,7 +44,7 @@ The simplest way to compile this package is:
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes a while. While running, it prints some
Running `configure' takes awhile. While running, it prints some
messages telling which features it is checking for.
2. Type `make' to compile the package.

Makefile.am

@@ -1,48 +1,13 @@
EXTRA_DIST = COPYING.LGPL FAQ docs misc
SUBDIRS = po tests
CLEANFILES =
DISTCLEAN_TARGETS =
AM_CPPFLAGS = \
-include config.h \
-I$(top_srcdir)/lib \
-DDATADIR=\""$(datadir)"\" \
-DLOCALEDIR=\""$(datadir)/locale"\" \
-DLIBDIR=\""$(libdir)"\" \
-DPREFIX=\""$(prefix)"\" \
-DSYSCONFDIR=\""$(sysconfdir)"\" \
-DVERSION=\""$(VERSION)"\"
AM_CFLAGS = -Wall
AM_LDFLAGS =
tmpfilesddir = @DEFAULT_TMPFILESDIR@
noinst_LTLIBRARIES =
sbin_PROGRAMS =
man8_MANS =
tmpfilesd_DATA =
include man/Makemodule.am
include scripts/Makemodule.am
if CRYPTO_INTERNAL_ARGON2
include lib/crypto_backend/argon2/Makemodule.am
endif
include lib/crypto_backend/Makemodule.am
include lib/Makemodule.am
include src/Makemodule.am
SUBDIRS = \
lib \
src \
man \
python \
tests \
po
ACLOCAL_AMFLAGS = -I m4
DISTCHECK_CONFIGURE_FLAGS = \
--with-tmpfilesdir=$$dc_install_base/usr/lib/tmpfiles.d \
--enable-internal-argon2 --enable-internal-sse-argon2
distclean-local:
-find . -name \*~ -o -name \*.orig -o -name \*.rej | xargs rm -f
rm -rf autom4te.cache
clean-local:
-rm -rf docs/doxygen_api_docs libargon2.la
-rm -rf docs/doxygen_api_docs

README.md

@@ -9,23 +9,18 @@ These include **plain** **dm-crypt** volumes, **LUKS** volumes, **loop-AES**
and **TrueCrypt** (including **VeraCrypt** extension) format.
Project also includes **veritysetup** utility used to conveniently setup
[DMVerity](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMVerity) block integrity checking kernel module
and, since version 2.0, **integritysetup** to setup
[DMIntegrity](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMIntegrity) block integrity kernel module.
[DMVerity](https://gitlab.com/cryptsetup/cryptsetup/wikis/DMVerity) block integrity checking kernel module.
LUKS Design
-----------
**LUKS** is the standard for Linux hard disk encryption. By providing a standard on-disk-format, it does not
only facilitate compatibility among distributions, but also provides secure management of multiple user passwords.
LUKS stores all necessary setup information in the partition header, enabling to transport or migrate data seamlessly.
Last version of the LUKS format specification is
[available here](https://www.kernel.org/pub/linux/utils/cryptsetup/LUKS_docs/on-disk-format.pdf).
In contrast to existing solution, LUKS stores all setup necessary setup information in the partition header,
enabling the user to transport or migrate his data seamlessly.
Why LUKS?
---------
* compatibility via standardization,
* compatiblity via standardization,
* secure against low entropy attacks,
* support for multiple keys,
* effective passphrase revocation,
@@ -41,58 +36,43 @@ Download
--------
All release tarballs and release notes are hosted on [kernel.org](https://www.kernel.org/pub/linux/utils/cryptsetup/).
**The latest cryptsetup version is 2.0.6**
* [cryptsetup-2.0.6.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.xz)
* Signature [cryptsetup-2.0.6.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.6.tar.sign)
**The latest cryptsetup version is 1.7.2**
* [cryptsetup-1.7.2.tar.xz](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.2.tar.xz)
* Signature [cryptsetup-1.7.2.tar.sign](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.2.tar.sign)
_(You need to decompress file first to check signature.)_
* [Cryptsetup 2.0.6 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.6-ReleaseNotes).
* [Cryptsetup 1.7.2 Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.2-ReleaseNotes).
Previous versions
* [Version 2.0.5](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.5.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.5-ReleaseNotes).
* [Version 2.0.4](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.4.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.4-ReleaseNotes).
* [Version 2.0.3](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.3.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.3.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.3-ReleaseNotes).
* [Version 2.0.2](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.2.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.2.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.2-ReleaseNotes).
* [Version 2.0.1](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.1.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.1.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.1-ReleaseNotes).
* [Version 2.0.0](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.0.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/cryptsetup-2.0.0.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.0-ReleaseNotes).
* [Version 1.7.5](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.5.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.5.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.5-ReleaseNotes).
* [Version 1.7.4](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.4.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.4.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.4-ReleaseNotes).
* [Version 1.7.3](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.3.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.3.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.3-ReleaseNotes).
* [Version 1.7.2](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.2.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.2.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.2-ReleaseNotes).
* [Version 1.7.1](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.1.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.1.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.1-ReleaseNotes).
* [Version 1.7.0](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.0.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/cryptsetup-1.7.0.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.7/v1.7.0-ReleaseNotes).
* [Version 1.6.8](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.8.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.8.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/v1.6.8-ReleaseNotes).
* [Version 1.6.7](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.7.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.7.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/v1.6.7-ReleaseNotes).
* [Version 1.6.6](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.6.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.6.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/v1.6.6-ReleaseNotes).
* [Version 1.6.5](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.5.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.5.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/v1.6.5-ReleaseNotes).
* [Version 1.6.4](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.4.tar.xz) -
[Signature](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/cryptsetup-1.6.4.tar.sign) -
[Release Notes](https://www.kernel.org/pub/linux/utils/cryptsetup/v1.6/v1.6.4-ReleaseNotes).
Source and API docs
-------------------
For development version code, please refer to [source](https://gitlab.com/cryptsetup/cryptsetup/tree/master) page,
mirror on [kernel.org](https://git.kernel.org/cgit/utils/cryptsetup/cryptsetup.git/) or [GitHub](https://github.com/mbroz/cryptsetup).
For libcryptsetup documentation see [libcryptsetup API](https://mbroz.fedorapeople.org/libcryptsetup_API/) page.
For libcryptsetup documentation see [libcryptsetup API](https://gitlab.com/cryptsetup/cryptsetup/wikis/API/index.html) page.
The libcryptsetup API/ABI changes are tracked in [compatibility report](https://abi-laboratory.pro/tracker/timeline/cryptsetup/).
The libcryptsetup API/ABI changes are tracked in [compatibility report](https://gitlab.com/cryptsetup/cryptsetup/wikis/ABI-tracker/timeline/libcryptsetup/index.html).
NLS PO files are maintained by [TranslationProject](http://translationproject.org/domain/cryptsetup.html).
@@ -104,4 +84,4 @@ For cryptsetup and LUKS related questions, please use the dm-crypt mailing list,
If you want to subscribe just send an empty mail to [dm-crypt-subscribe@saout.de](mailto:dm-crypt-subscribe@saout.de).
You can also browse [list archive](http://www.saout.de/pipermail/dm-crypt/) or read it through
[web interface](https://marc.info/?l=dm-crypt).
[web interface](http://news.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt).

9
TODO

@@ -1 +1,8 @@
Please see issues tracked at https://gitlab.com/cryptsetup/cryptsetup/issues.
Version 1.7:
- Export wipe device functions
- Support K/M suffixes for align payload (new switch?).
- TRIM for keyslots
- Do we need crypt_data_path() - path to data device (if differs)?
- Resync ETA time is not accurate, calculate it better (last minute window?).
- Extend existing LUKS header to use another KDF? (https://password-hashing.net/)
- Fix all crazy automake warnings (or switch to Cmake).

autogen.sh

@@ -56,6 +56,13 @@ if test "$DIE" -eq 1; then
exit 1
fi
if test -z "$*"; then
echo
echo "**Warning**: I am going to run 'configure' with no arguments."
echo "If you wish to pass any to it, please specify them on the"
echo \'$0\'" command line."
fi
echo
echo "Generate build-system by:"
echo " autopoint: $(autopoint --version | head -1)"
@@ -74,6 +81,10 @@ autoheader $AH_OPTS
automake --add-missing --copy --gnu $AM_OPTS
autoconf $AC_OPTS
echo
echo "Now type '$srcdir/configure' and 'make' to compile."
echo
if test x$NOCONFIGURE = x; then
echo Running $srcdir/configure $conf_flags "$@" ...
$srcdir/configure $conf_flags "$@" \
&& echo Now type \`make\' to compile $PKG_NAME
else
echo Skipping configure process.
fi

configure.ac

@@ -1,9 +1,9 @@
AC_PREREQ([2.67])
AC_INIT([cryptsetup],[2.1.0])
AC_INIT([cryptsetup],[1.7.3])
dnl library version from <major>.<minor>.<release>[-<suffix>]
LIBCRYPTSETUP_VERSION=$(echo $PACKAGE_VERSION | cut -f1 -d-)
LIBCRYPTSETUP_VERSION_INFO=16:0:4
LIBCRYPTSETUP_VERSION_INFO=11:0:7
AM_SILENT_RULES([yes])
AC_CONFIG_SRCDIR(src/cryptsetup.c)
@@ -15,8 +15,8 @@ AC_CONFIG_HEADERS([config.h:config.h.in])
# http://lists.gnu.org/archive/html/automake/2013-01/msg00060.html
# For old automake use this
#AM_INIT_AUTOMAKE(dist-xz subdir-objects)
AM_INIT_AUTOMAKE([dist-xz 1.12 serial-tests subdir-objects])
#AM_INIT_AUTOMAKE(dist-xz)
AM_INIT_AUTOMAKE([dist-xz 1.12 serial-tests])
if test "x$prefix" = "xNONE"; then
sysconfdir=/etc
@@ -34,69 +34,23 @@ AC_ENABLE_STATIC(no)
LT_INIT
PKG_PROG_PKG_CONFIG
dnl ==========================================================================
dnl define PKG_CHECK_VAR for old pkg-config <= 0.28
m4_ifndef([AS_VAR_COPY],
[m4_define([AS_VAR_COPY],
[AS_LITERAL_IF([$1[]$2], [$1=$$2], [eval $1=\$$2])])
])
m4_ifndef([PKG_CHECK_VAR], [
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])
_PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])
])
])
dnl ==========================================================================
AC_C_RESTRICT
AC_HEADER_DIRENT
AC_HEADER_STDC
AC_CHECK_HEADERS(fcntl.h malloc.h inttypes.h sys/ioctl.h sys/mman.h \
sys/sysmacros.h sys/statvfs.h ctype.h unistd.h locale.h byteswap.h endian.h stdint.h)
sys/sysmacros.h ctype.h unistd.h locale.h byteswap.h endian.h)
AC_CHECK_HEADERS(uuid/uuid.h,,[AC_MSG_ERROR([You need the uuid library.])])
AC_CHECK_HEADER(libdevmapper.h,,[AC_MSG_ERROR([You need the device-mapper library.])])
AC_ARG_ENABLE([keyring],
AS_HELP_STRING([--disable-keyring], [disable kernel keyring support and builtin kernel keyring token]),
[], [enable_keyring=yes])
if test "x$enable_keyring" = "xyes"; then
AC_CHECK_HEADERS(linux/keyctl.h,,[AC_MSG_ERROR([You need Linux kernel headers with kernel keyring service compiled.])])
dnl ==========================================================================
dnl check whether kernel is compiled with kernel keyring service syscalls
AC_CHECK_DECL(__NR_add_key,,[AC_MSG_ERROR([The kernel is missing add_key syscall.])], [#include <syscall.h>])
AC_CHECK_DECL(__NR_keyctl,,[AC_MSG_ERROR([The kernel is missing keyctl syscall.])], [#include <syscall.h>])
AC_CHECK_DECL(__NR_request_key,,[AC_MSG_ERROR([The kernel is missing request_key syscall.])], [#include <syscall.h>])
dnl ==========================================================================
dnl check that key_serial_t hasn't been adopted yet in stdlib
AC_CHECK_TYPES([key_serial_t], [], [], [
AC_INCLUDES_DEFAULT
#ifdef HAVE_LINUX_KEYCTL_H
# include <linux/keyctl.h>
#endif
])
AC_DEFINE(KERNEL_KEYRING, 1, [Enable kernel keyring service support])
fi
AM_CONDITIONAL(KERNEL_KEYRING, test "x$enable_keyring" = "xyes")
saved_LIBS=$LIBS
AC_CHECK_LIB(uuid, uuid_clear, ,[AC_MSG_ERROR([You need the uuid library.])])
AC_SUBST(UUID_LIBS, $LIBS)
LIBS=$saved_LIBS
AC_SEARCH_LIBS([clock_gettime],[rt posix4])
AC_CHECK_FUNCS([posix_memalign clock_gettime posix_fallocate explicit_bzero])
AC_CHECK_FUNCS([posix_memalign clock_gettime])
if test "x$enable_largefile" = "xno"; then
if test "x$enable_largefile" = "xno" ; then
AC_MSG_ERROR([Building with --disable-largefile is not supported, it can cause data corruption.])
fi
@@ -111,7 +65,7 @@ AC_FUNC_STRERROR_R
dnl ==========================================================================
AM_GNU_GETTEXT([external],[need-ngettext])
AM_GNU_GETTEXT_VERSION([0.18.3])
AM_GNU_GETTEXT_VERSION([0.15])
dnl ==========================================================================
@@ -122,10 +76,12 @@ AC_SUBST(POPT_LIBS, $LIBS)
LIBS=$saved_LIBS
dnl ==========================================================================
dnl FIPS extensions
AC_ARG_ENABLE([fips],
AS_HELP_STRING([--enable-fips], [enable FIPS mode restrictions]))
if test "x$enable_fips" = "xyes"; then
dnl FIPS extensions (only for RHEL)
AC_ARG_ENABLE([fips], AS_HELP_STRING([--enable-fips],[enable FIPS mode restrictions]),
[with_fips=$enableval],
[with_fips=no])
if test "x$with_fips" = "xyes"; then
AC_DEFINE(ENABLE_FIPS, 1, [Enable FIPS mode restrictions])
if test "x$enable_static" = "xyes" -o "x$enable_static_cryptsetup" = "xyes" ; then
@@ -134,7 +90,7 @@ if test "x$enable_fips" = "xyes"; then
fi
AC_DEFUN([NO_FIPS], [
if test "x$enable_fips" = "xyes"; then
if test "x$with_fips" = "xyes"; then
AC_MSG_ERROR([This option is not compatible with FIPS.])
fi
])
@@ -142,9 +98,12 @@ AC_DEFUN([NO_FIPS], [
dnl ==========================================================================
dnl pwquality library (cryptsetup CLI only)
AC_ARG_ENABLE([pwquality],
AS_HELP_STRING([--enable-pwquality], [enable password quality checking using pwquality library]))
AS_HELP_STRING([--enable-pwquality],
[enable password quality checking using pwquality library]),
[with_pwquality=$enableval],
[with_pwquality=no])
if test "x$enable_pwquality" = "xyes"; then
if test "x$with_pwquality" = "xyes"; then
AC_DEFINE(ENABLE_PWQUALITY, 1, [Enable password quality checking using pwquality library])
PKG_CHECK_MODULES([PWQUALITY], [pwquality >= 1.0.0],,
AC_MSG_ERROR([You need pwquality library.]))
@@ -156,11 +115,13 @@ fi
dnl ==========================================================================
dnl passwdqc library (cryptsetup CLI only)
AC_ARG_ENABLE([passwdqc],
AS_HELP_STRING([--enable-passwdqc@<:@=CONFIG_PATH@:>@],
[enable password quality checking using passwdqc library (optionally with CONFIG_PATH)]))
AS_HELP_STRING([--enable-passwdqc@<:@=CONFIG_PATH@:>@],
[enable password quality checking using passwdqc library (optionally with CONFIG_PATH)]),
[enable_passwdqc=$enableval],
[enable_passwdqc=no])
case "$enable_passwdqc" in
""|yes|no) use_passwdqc_config="" ;;
yes|no) use_passwdqc_config="" ;;
/*) use_passwdqc_config="$enable_passwdqc"; enable_passwdqc=yes ;;
*) AC_MSG_ERROR([Unrecognized --enable-passwdqc parameter.]) ;;
esac
@@ -172,7 +133,7 @@ if test "x$enable_passwdqc" = "xyes"; then
PASSWDQC_LIBS="-lpasswdqc"
fi
if test "x$enable_pwquality$enable_passwdqc" = "xyesyes"; then
if test "x$with_pwquality$enable_passwdqc" = "xyesyes"; then
AC_MSG_ERROR([--enable-pwquality and --enable-passwdqc are mutually incompatible.])
fi
@@ -180,26 +141,20 @@ dnl ==========================================================================
dnl Crypto backend functions
AC_DEFUN([CONFIGURE_GCRYPT], [
if test "x$enable_fips" = "xyes"; then
if test "x$with_fips" = "xyes"; then
GCRYPT_REQ_VERSION=1.4.5
else
GCRYPT_REQ_VERSION=1.1.42
fi
dnl libgcrypt rejects to use pkgconfig, use AM_PATH_LIBGCRYPT from gcrypt-devel here.
dnl Do not require gcrypt-devel if other crypto backend is used.
m4_ifdef([AM_PATH_LIBGCRYPT],[
AC_ARG_ENABLE([gcrypt-pbkdf2],
dnl Check if we can use gcrypt PBKDF2 (1.6.0 supports empty password)
AS_HELP_STRING([--enable-gcrypt-pbkdf2], [force enable internal gcrypt PBKDF2]),
dnl Check if we can use gcrypt PBKDF2 (1.6.0 supports empty password)
AC_ARG_ENABLE([gcrypt-pbkdf2], AS_HELP_STRING([--enable-gcrypt-pbkdf2],[force enable internal gcrypt PBKDF2]),
if test "x$enableval" = "xyes"; then
[use_internal_pbkdf2=0]
else
[use_internal_pbkdf2=1]
fi,
[AM_PATH_LIBGCRYPT([1.6.1], [use_internal_pbkdf2=0], [use_internal_pbkdf2=1])])
AM_PATH_LIBGCRYPT($GCRYPT_REQ_VERSION,,[AC_MSG_ERROR([You need the gcrypt library.])])],
AC_MSG_ERROR([Missing support for gcrypt: install gcrypt and regenerate configure.]))
AM_PATH_LIBGCRYPT($GCRYPT_REQ_VERSION,,[AC_MSG_ERROR([You need the gcrypt library.])])
AC_MSG_CHECKING([if internal cryptsetup PBKDF2 is compiled-in])
if test $use_internal_pbkdf2 = 0; then
@@ -209,7 +164,7 @@ AC_DEFUN([CONFIGURE_GCRYPT], [
NO_FIPS([])
fi
if test "x$enable_static_cryptsetup" = "xyes"; then
if test x$enable_static_cryptsetup = xyes; then
saved_LIBS=$LIBS
LIBS="$saved_LIBS $LIBGCRYPT_LIBS -static"
AC_CHECK_LIB(gcrypt, gcry_check_version,,
@@ -233,17 +188,18 @@ AC_DEFUN([CONFIGURE_OPENSSL], [
CRYPTO_LIBS=$OPENSSL_LIBS
use_internal_pbkdf2=0
if test "x$enable_static_cryptsetup" = "xyes"; then
if test x$enable_static_cryptsetup = xyes; then
saved_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES([OPENSSL_STATIC], [openssl])
CRYPTO_STATIC_LIBS=$OPENSSL_STATIC_LIBS
PKG_CONFIG=$saved_PKG_CONFIG
fi
NO_FIPS([])
])
AC_DEFUN([CONFIGURE_NSS], [
if test "x$enable_static_cryptsetup" = "xyes"; then
if test x$enable_static_cryptsetup = xyes; then
AC_MSG_ERROR([Static build of cryptsetup is not supported with NSS.])
fi
@@ -276,7 +232,6 @@ AC_DEFUN([CONFIGURE_KERNEL], [
AC_DEFUN([CONFIGURE_NETTLE], [
AC_CHECK_HEADERS(nettle/sha.h,,
[AC_MSG_ERROR([You need Nettle cryptographic library.])])
AC_CHECK_HEADERS(nettle/version.h)
saved_LIBS=$LIBS
AC_CHECK_LIB(nettle, nettle_pbkdf2_hmac_sha256,,
@@ -293,42 +248,33 @@ dnl ==========================================================================
saved_LIBS=$LIBS
AC_ARG_ENABLE([static-cryptsetup],
AS_HELP_STRING([--enable-static-cryptsetup], [enable build of static version of tools]))
if test "x$enable_static_cryptsetup" = "xyes"; then
if test "x$enable_static" = "xno"; then
AS_HELP_STRING([--enable-static-cryptsetup],
[enable build of static cryptsetup binary]))
if test x$enable_static_cryptsetup = xyes; then
if test x$enable_static = xno; then
AC_MSG_WARN([Requested static cryptsetup build, enabling static library.])
enable_static=yes
fi
fi
AM_CONDITIONAL(STATIC_TOOLS, test "x$enable_static_cryptsetup" = "xyes")
AM_CONDITIONAL(STATIC_TOOLS, test x$enable_static_cryptsetup = xyes)
AC_ARG_ENABLE([cryptsetup],
AS_HELP_STRING([--disable-cryptsetup], [disable cryptsetup support]),
[], [enable_cryptsetup=yes])
AM_CONDITIONAL(CRYPTSETUP, test "x$enable_cryptsetup" = "xyes")
AC_ARG_ENABLE([veritysetup],
AS_HELP_STRING([--disable-veritysetup], [disable veritysetup support]),
[], [enable_veritysetup=yes])
AM_CONDITIONAL(VERITYSETUP, test "x$enable_veritysetup" = "xyes")
AC_ARG_ENABLE(veritysetup,
AS_HELP_STRING([--disable-veritysetup],
[disable veritysetup support]),[], [enable_veritysetup=yes])
AM_CONDITIONAL(VERITYSETUP, test x$enable_veritysetup = xyes)
AC_ARG_ENABLE([cryptsetup-reencrypt],
AS_HELP_STRING([--disable-cryptsetup-reencrypt], [disable cryptsetup-reencrypt tool]),
[], [enable_cryptsetup_reencrypt=yes])
AM_CONDITIONAL(REENCRYPT, test "x$enable_cryptsetup_reencrypt" = "xyes")
AS_HELP_STRING([--enable-cryptsetup-reencrypt],
[enable cryptsetup-reencrypt tool]))
AM_CONDITIONAL(REENCRYPT, test x$enable_cryptsetup_reencrypt = xyes)
AC_ARG_ENABLE([integritysetup],
AS_HELP_STRING([--disable-integritysetup], [disable integritysetup support]),
[], [enable_integritysetup=yes])
AM_CONDITIONAL(INTEGRITYSETUP, test "x$enable_integritysetup" = "xyes")
AC_ARG_ENABLE([selinux],
AS_HELP_STRING([--disable-selinux], [disable selinux support [default=auto]]),
[], [enable_selinux=yes])
AC_ARG_ENABLE(selinux,
AS_HELP_STRING([--disable-selinux],
[disable selinux support [default=auto]]),[], [])
AC_ARG_ENABLE([udev],
AS_HELP_STRING([--disable-udev], [disable udev support]),
[], [enable_udev=yes])
AS_HELP_STRING([--disable-udev],
[disable udev support]),[], enable_udev=yes)
dnl Try to use pkg-config for devmapper, but fallback to old detection
PKG_CHECK_MODULES([DEVMAPPER], [devmapper >= 1.02.03],, [
@@ -343,9 +289,6 @@ LIBS=$saved_LIBS
LIBS="$LIBS $DEVMAPPER_LIBS"
AC_CHECK_DECLS([dm_task_secure_data], [], [], [#include <libdevmapper.h>])
AC_CHECK_DECLS([dm_task_retry_remove], [], [], [#include <libdevmapper.h>])
AC_CHECK_DECLS([dm_task_deferred_remove], [], [], [#include <libdevmapper.h>])
AC_CHECK_DECLS([dm_device_has_mounted_fs], [], [], [#include <libdevmapper.h>])
AC_CHECK_DECLS([dm_device_has_holders], [], [], [#include <libdevmapper.h>])
AC_CHECK_DECLS([DM_UDEV_DISABLE_DISK_RULES_FLAG], [have_cookie=yes], [have_cookie=no], [#include <libdevmapper.h>])
if test "x$enable_udev" = xyes; then
if test "x$have_cookie" = xno; then
@@ -356,21 +299,19 @@ if test "x$enable_udev" = xyes; then
fi
LIBS=$saved_LIBS
dnl Check for JSON-C used in LUKS2
PKG_CHECK_MODULES([JSON_C], [json-c])
AC_CHECK_DECLS([json_object_object_add_ex], [], [], [#include <json-c/json.h>])
dnl Crypto backend configuration.
AC_ARG_WITH([crypto_backend],
AS_HELP_STRING([--with-crypto_backend=BACKEND], [crypto backend (gcrypt/openssl/nss/kernel/nettle) [openssl]]),
[], [with_crypto_backend=openssl])
AS_HELP_STRING([--with-crypto_backend=BACKEND], [crypto backend (gcrypt/openssl/nss/kernel/nettle) [gcrypt]]),
[], with_crypto_backend=gcrypt
)
dnl Kernel crypto API backend needed for benchmark and tcrypt
AC_ARG_ENABLE([kernel_crypto],
AS_HELP_STRING([--disable-kernel_crypto], [disable kernel userspace crypto (no benchmark and tcrypt)]),
[], [enable_kernel_crypto=yes])
AC_ARG_ENABLE([kernel_crypto], AS_HELP_STRING([--disable-kernel_crypto],
[disable kernel userspace crypto (no benchmark and tcrypt)]),
[with_kernel_crypto=$enableval],
[with_kernel_crypto=yes])
if test "x$enable_kernel_crypto" = "xyes"; then
if test "x$with_kernel_crypto" = "xyes"; then
AC_CHECK_HEADERS(linux/if_alg.h,,
[AC_MSG_ERROR([You need Linux kernel headers with userspace crypto interface. (Or use --disable-kernel_crypto.)])])
AC_DEFINE(ENABLE_AF_ALG, 1, [Enable using of kernel userspace crypto])
@@ -384,88 +325,17 @@ case $with_crypto_backend in
nettle) CONFIGURE_NETTLE([]) ;;
*) AC_MSG_ERROR([Unknown crypto backend.]) ;;
esac
AM_CONDITIONAL(CRYPTO_BACKEND_GCRYPT, test "$with_crypto_backend" = "gcrypt")
AM_CONDITIONAL(CRYPTO_BACKEND_OPENSSL, test "$with_crypto_backend" = "openssl")
AM_CONDITIONAL(CRYPTO_BACKEND_NSS, test "$with_crypto_backend" = "nss")
AM_CONDITIONAL(CRYPTO_BACKEND_KERNEL, test "$with_crypto_backend" = "kernel")
AM_CONDITIONAL(CRYPTO_BACKEND_NETTLE, test "$with_crypto_backend" = "nettle")
AM_CONDITIONAL(CRYPTO_BACKEND_GCRYPT, test $with_crypto_backend = gcrypt)
AM_CONDITIONAL(CRYPTO_BACKEND_OPENSSL, test $with_crypto_backend = openssl)
AM_CONDITIONAL(CRYPTO_BACKEND_NSS, test $with_crypto_backend = nss)
AM_CONDITIONAL(CRYPTO_BACKEND_KERNEL, test $with_crypto_backend = kernel)
AM_CONDITIONAL(CRYPTO_BACKEND_NETTLE, test $with_crypto_backend = nettle)
AM_CONDITIONAL(CRYPTO_INTERNAL_PBKDF2, test $use_internal_pbkdf2 = 1)
AC_DEFINE_UNQUOTED(USE_INTERNAL_PBKDF2, [$use_internal_pbkdf2], [Use internal PBKDF2])
dnl Argon2 implementation
AC_ARG_ENABLE([internal-argon2],
AS_HELP_STRING([--disable-internal-argon2], [disable internal implementation of Argon2 PBKDF]),
[], [enable_internal_argon2=yes])
AC_ARG_ENABLE([libargon2],
AS_HELP_STRING([--enable-libargon2], [enable external libargon2 (PHC) library (disables internal bundled version)]))
if test "x$enable_libargon2" = "xyes" ; then
AC_CHECK_HEADERS(argon2.h,,
[AC_MSG_ERROR([You need libargon2 development library installed.])])
AC_CHECK_DECL(Argon2_id,,[AC_MSG_ERROR([You need more recent Argon2 library with support for Argon2id.])], [#include <argon2.h>])
PKG_CHECK_MODULES([LIBARGON2], [libargon2],,[LIBARGON2_LIBS="-largon2"])
enable_internal_argon2=no
else
AC_MSG_WARN([Argon2 bundled (slow) reference implementation will be used, please consider to use system library with --enable-libargon2.])
AC_ARG_ENABLE([internal-sse-argon2],
AS_HELP_STRING([--enable-internal-sse-argon2], [enable internal SSE implementation of Argon2 PBKDF]))
if test "x$enable_internal_sse_argon2" = "xyes"; then
AC_MSG_CHECKING(if Argon2 SSE optimization can be used)
AC_LINK_IFELSE([AC_LANG_PROGRAM([[
#include <emmintrin.h>
__m128i testfunc(__m128i *a, __m128i *b) {
return _mm_xor_si128(_mm_loadu_si128(a), _mm_loadu_si128(b));
}
]])],,[enable_internal_sse_argon2=no])
AC_MSG_RESULT($enable_internal_sse_argon2)
fi
fi
if test "x$enable_internal_argon2" = "xyes"; then
AC_DEFINE(USE_INTERNAL_ARGON2, 1, [Use internal Argon2])
fi
AM_CONDITIONAL(CRYPTO_INTERNAL_ARGON2, test "x$enable_internal_argon2" = "xyes")
AM_CONDITIONAL(CRYPTO_INTERNAL_SSE_ARGON2, test "x$enable_internal_sse_argon2" = "xyes")
dnl Link with blkid to check for other device types
AC_ARG_ENABLE([blkid],
AS_HELP_STRING([--disable-blkid], [disable use of blkid for device signature detection and wiping]),
[], [enable_blkid=yes])
if test "x$enable_blkid" = "xyes"; then
PKG_CHECK_MODULES([BLKID], [blkid],[AC_DEFINE([HAVE_BLKID], 1, [Define to 1 to use blkid for detection of disk signatures.])],[LIBBLKID_LIBS="-lblkid"])
AC_CHECK_HEADERS(blkid/blkid.h,,[AC_MSG_ERROR([You need blkid development library installed.])])
AC_CHECK_DECL([blkid_do_wipe],
[ AC_DEFINE([HAVE_BLKID_WIPE], 1, [Define to 1 to use blkid_do_wipe.])
enable_blkid_wipe=yes
],,
[#include <blkid/blkid.h>])
AC_CHECK_DECL([blkid_probe_step_back],
[ AC_DEFINE([HAVE_BLKID_STEP_BACK], 1, [Define to 1 to use blkid_probe_step_back.])
enable_blkid_step_back=yes
],,
[#include <blkid/blkid.h>])
AC_CHECK_DECLS([ blkid_reset_probe,
blkid_probe_set_device,
blkid_probe_filter_superblocks_type,
blkid_do_safeprobe,
blkid_do_probe,
blkid_probe_lookup_value
],,
[AC_MSG_ERROR([Can not compile with blkid support, disable it by --disable-blkid.])],
[#include <blkid/blkid.h>])
fi
AM_CONDITIONAL(HAVE_BLKID, test "x$enable_blkid" = "xyes")
AM_CONDITIONAL(HAVE_BLKID_WIPE, test "x$enable_blkid_wipe" = "xyes")
AM_CONDITIONAL(HAVE_BLKID_STEP_BACK, test "x$enable_blkid_step_back" = "xyes")
dnl Magic for cryptsetup.static build.
if test "x$enable_static_cryptsetup" = "xyes"; then
if test x$enable_static_cryptsetup = xyes; then
saved_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
@@ -477,7 +347,7 @@ if test "x$enable_static_cryptsetup" = "xyes"; then
LIBS="$saved_LIBS -static"
PKG_CHECK_MODULES([DEVMAPPER_STATIC], [devmapper >= 1.02.27],,[
DEVMAPPER_STATIC_LIBS=$DEVMAPPER_LIBS
if test "x$enable_selinux" = "xyes"; then
if test "x$enable_selinux" != xno; then
AC_CHECK_LIB(sepol, sepol_bool_set)
AC_CHECK_LIB(selinux, is_selinux_enabled)
DEVMAPPER_STATIC_LIBS="$DEVMAPPER_STATIC_LIBS $LIBS"
@@ -496,10 +366,6 @@ if test "x$enable_static_cryptsetup" = "xyes"; then
PKG_CONFIG=$saved_PKG_CONFIG
fi
AC_MSG_CHECKING([for systemd tmpfiles config directory])
PKG_CHECK_VAR([systemd_tmpfilesdir], [systemd], [tmpfilesdir], [], [systemd_tmpfilesdir=no])
AC_MSG_RESULT([$systemd_tmpfilesdir])
AC_SUBST([DEVMAPPER_LIBS])
AC_SUBST([DEVMAPPER_STATIC_LIBS])
@@ -512,21 +378,13 @@ AC_SUBST([CRYPTO_CFLAGS])
AC_SUBST([CRYPTO_LIBS])
AC_SUBST([CRYPTO_STATIC_LIBS])
AC_SUBST([JSON_C_LIBS])
AC_SUBST([LIBARGON2_LIBS])
AC_SUBST([BLKID_LIBS])
AC_SUBST([LIBCRYPTSETUP_VERSION])
AC_SUBST([LIBCRYPTSETUP_VERSION_INFO])
dnl ==========================================================================
AC_ARG_ENABLE([dev-random],
AS_HELP_STRING([--enable-dev-random], [use /dev/random by default for key generation (otherwise use /dev/urandom)]))
if test "x$enable_dev_random" = "xyes"; then
default_rng=/dev/random
else
default_rng=/dev/urandom
fi
AC_DEFINE_UNQUOTED(DEFAULT_RNG, ["$default_rng"], [default RNG type for key generator])
dnl ==========================================================================
@@ -546,12 +404,35 @@ AC_DEFUN([CS_NUM_WITH], [AC_ARG_WITH([$1],
[CS_DEFINE([$1], [$3], [$2])]
)])
AC_DEFUN([CS_ABSPATH], [
case "$1" in
/*) ;;
*) AC_MSG_ERROR([$2 argument must be an absolute path.]);;
esac
])
dnl ==========================================================================
dnl Python bindings
AC_ARG_ENABLE([python], AS_HELP_STRING([--enable-python],[enable Python bindings]),
[with_python=$enableval],
[with_python=no])
AC_ARG_WITH([python_version],
AS_HELP_STRING([--with-python_version=VERSION], [required Python version [2.6]]),
[PYTHON_VERSION=$withval], [PYTHON_VERSION=2.6])
if test "x$with_python" = "xyes"; then
AM_PATH_PYTHON([$PYTHON_VERSION])
AC_PATH_PROGS([PYTHON_CONFIG], [python${PYTHON_VERSION}-config python-config], [no])
if test "${PYTHON_CONFIG}" = "no"; then
AC_MSG_ERROR([cannot find python${PYTHON_VERSION}-config or python-config in PATH])
fi
AC_MSG_CHECKING(for python headers using $PYTHON_CONFIG --includes)
PYTHON_INCLUDES=$($PYTHON_CONFIG --includes)
AC_MSG_RESULT($PYTHON_INCLUDES)
AC_SUBST(PYTHON_INCLUDES)
AC_MSG_CHECKING(for python libraries using $PYTHON_CONFIG --libs)
PYTHON_LIBS=$($PYTHON_CONFIG --libs)
AC_MSG_RESULT($PYTHON_LIBS)
AC_SUBST(PYTHON_LIBS)
fi
AM_CONDITIONAL([PYTHON_CRYPTSETUP], [test "x$with_python" = "xyes"])
dnl ==========================================================================
CS_STR_WITH([plain-hash], [password hashing function for plain mode], [ripemd160])
@@ -563,22 +444,7 @@ CS_STR_WITH([luks1-hash], [hash function for LUKS1 header], [sha256])
CS_STR_WITH([luks1-cipher], [cipher for LUKS1], [aes])
CS_STR_WITH([luks1-mode], [cipher mode for LUKS1], [xts-plain64])
CS_NUM_WITH([luks1-keybits],[key length in bits for LUKS1], [256])
AC_ARG_ENABLE([luks_adjust_xts_keysize], AS_HELP_STRING([--disable-luks-adjust-xts-keysize],
[XTS mode requires two keys, double default LUKS keysize if needed]),
[], [enable_luks_adjust_xts_keysize=yes])
if test "x$enable_luks_adjust_xts_keysize" = "xyes"; then
AC_DEFINE(ENABLE_LUKS_ADJUST_XTS_KEYSIZE, 1, [XTS mode - double default LUKS keysize if needed])
fi
CS_STR_WITH([luks2-pbkdf], [Default PBKDF algorithm (pbkdf2 or argon2i/argon2id) for LUKS2], [argon2i])
CS_NUM_WITH([luks1-iter-time], [PBKDF2 iteration time for LUKS1 (in ms)], [2000])
CS_NUM_WITH([luks2-iter-time], [Argon2 PBKDF iteration time for LUKS2 (in ms)], [2000])
CS_NUM_WITH([luks2-memory-kb], [Argon2 PBKDF memory cost for LUKS2 (in kB)], [1048576])
CS_NUM_WITH([luks2-parallel-threads],[Argon2 PBKDF max parallel cost for LUKS2 (if CPUs available)], [4])
CS_STR_WITH([luks2-keyslot-cipher], [fallback cipher for LUKS2 keyslot (if data encryption is incompatible)], [aes-xts-plain64])
CS_NUM_WITH([luks2-keyslot-keybits],[fallback key size for LUKS2 keyslot (if data encryption is incompatible)], [512])
CS_STR_WITH([loopaes-cipher], [cipher for loop-AES mode], [aes])
CS_NUM_WITH([loopaes-keybits],[key length in bits for loop-AES mode], [256])
@@ -590,46 +456,21 @@ CS_STR_WITH([verity-hash], [hash function for verity mode], [sha256])
CS_NUM_WITH([verity-data-block], [data block size for verity mode], [4096])
CS_NUM_WITH([verity-hash-block], [hash block size for verity mode], [4096])
CS_NUM_WITH([verity-salt-size], [salt size for verity mode], [32])
CS_NUM_WITH([verity-fec-roots], [parity bytes for verity FEC], [2])
CS_STR_WITH([tmpfilesdir], [override default path to directory with systemd temporary files], [])
test -z "$with_tmpfilesdir" && with_tmpfilesdir=$systemd_tmpfilesdir
test "x$with_tmpfilesdir" = "xno" || {
CS_ABSPATH([${with_tmpfilesdir}],[with-tmpfilesdir])
DEFAULT_TMPFILESDIR=$with_tmpfilesdir
AC_SUBST(DEFAULT_TMPFILESDIR)
}
AM_CONDITIONAL(CRYPTSETUP_TMPFILE, test -n "$DEFAULT_TMPFILESDIR")
CS_STR_WITH([luks2-lock-path], [path to directory for LUKSv2 locks], [/run/cryptsetup])
test -z "$with_luks2_lock_path" && with_luks2_lock_path=/run/cryptsetup
CS_ABSPATH([${with_luks2_lock_path}],[with-luks2-lock-path])
DEFAULT_LUKS2_LOCK_PATH=$with_luks2_lock_path
AC_SUBST(DEFAULT_LUKS2_LOCK_PATH)
CS_NUM_WITH([luks2-lock-dir-perms], [default luks2 locking directory permissions], [0700])
test -z "$with_luks2_lock_dir_perms" && with_luks2_lock_dir_perms=0700
DEFAULT_LUKS2_LOCK_DIR_PERMS=$with_luks2_lock_dir_perms
AC_SUBST(DEFAULT_LUKS2_LOCK_DIR_PERMS)
dnl Override default LUKS format version (for cryptsetup or cryptsetup-reencrypt format actions only).
AC_ARG_WITH([default_luks_format],
AS_HELP_STRING([--with-default-luks-format=FORMAT], [default LUKS format version (LUKS1/LUKS2) [LUKS2]]),
[], [with_default_luks_format=LUKS2])
case $with_default_luks_format in
LUKS1) default_luks=CRYPT_LUKS1 ;;
LUKS2) default_luks=CRYPT_LUKS2 ;;
*) AC_MSG_ERROR([Unknown default LUKS format. Use LUKS1 or LUKS2 only.]) ;;
esac
AC_DEFINE_UNQUOTED([DEFAULT_LUKS_FORMAT], [$default_luks], [default LUKS format version])
dnl ==========================================================================
AC_CONFIG_FILES([ Makefile
lib/Makefile
lib/libcryptsetup.pc
lib/crypto_backend/Makefile
lib/luks1/Makefile
lib/loopaes/Makefile
lib/verity/Makefile
lib/tcrypt/Makefile
src/Makefile
po/Makefile.in
scripts/cryptsetup.conf
man/Makefile
tests/Makefile
python/Makefile
])
AC_OUTPUT


@@ -178,7 +178,7 @@
* Document cryptsetup exit codes.
2011-03-18 Milan Broz <mbroz@redhat.com>
* Respect maximum keyfile size parameter.
* Introduce maximum default keyfile size, add configure option.
* Require the whole key read from keyfile in create command (broken in 1.2.0).
* Fix offset option for loopaesOpen.
@@ -195,7 +195,7 @@
2011-03-05 Milan Broz <mbroz@redhat.com>
* Add exception to COPYING for binary distribution linked with OpenSSL library.
* Set secure data flag (wipe all ioctl buffers) if devmapper library supports it.
2011-01-29 Milan Broz <mbroz@redhat.com>
* Fix mapping removal if device disappeared but node still exists.
@@ -334,13 +334,13 @@
* Version 1.1.0.
2010-01-10 Milan Broz <mbroz@redhat.com>
* Fix initialisation of gcrypt during luksFormat.
* Convert hash name to lower case in header (fix sha1 backward compatible header)
* Check for minimum required gcrypt version.
2009-12-30 Milan Broz <mbroz@redhat.com>
* Fix key slot iteration count calculation (small -i value was the same as default).
* The slot and key digest iteration minimum is now 1000.
* The key digest iteration # is calculated from iteration time (approx 1/8 of that).
* Version 1.1.0-rc4.
@@ -395,16 +395,16 @@
* Require device device-mapper to build and do not use backend wrapper for dm calls.
* Move memory locking and dm initialization to command layer.
* Increase priority of process if memory is locked.
* Add log macros and make logging more consistent.
* Move command successful messages to verbose level.
* Introduce --debug parameter.
* Move device utils code and provide context parameter (for log).
* Keyfile now must be provided by path, only stdin file descriptor is used (api only).
* Do not call isatty() on closed keyfile descriptor.
* Run performance check for PBKDF2 from LUKS code, do not mix hash algorithms results.
* Add ability to provide pre-generated master key and UUID in LUKS header format.
* Add LUKS function to verify master key digest.
* Move key slot manipulation function into LUKS specific code.
* Replace global options struct with separate parameters in helper functions.
* Add new libcryptsetup API (documented in libcryptsetup.h).
* Implement old API calls using new functions.
@@ -412,7 +412,7 @@
* Add --master-key-file option for luksFormat and luksAddKey.
2009-08-17 Milan Broz <mbroz@redhat.com>
* Fix PBKDF2 speed calculation for large passphrases.
* Allow using passphrase provided in options struct for LuksOpen.
* Allow restrict keys size in LuksOpen.
@@ -424,7 +424,7 @@
* Switch PBKDF2 from internal SHA1 to libgcrypt, make hash algorithm not hardcoded to SHA1 here.
* Add required parameters for changing hash used in LUKS key setup scheme.
* Do not export simple XOR helper now used only inside AF functions.
* Completely remove internal SHA1 implementation code, not needed anymore.
* Enable hash algorithm selection for LUKS through -h luksFormat option.
2009-07-28 Milan Broz <mbroz@redhat.com>
@@ -636,7 +636,7 @@
2006-03-15 Clemens Fruhwirth <clemens@endorphin.org>
* configure.in: 1.0.3-rc3. Most displease release ever.
* lib/setup.c (__crypt_create_device): More verbose error message.
2006-02-26 Clemens Fruhwirth <clemens@endorphin.org>
@@ -705,7 +705,7 @@
2005-12-06 Clemens Fruhwirth <clemens@endorphin.org>
* man/cryptsetup.8: Correct "seconds" to "microseconds" in the explanation for -i.
2005-11-09 Clemens Fruhwirth <clemens@endorphin.org>
@@ -726,7 +726,7 @@
2005-09-08 Clemens Fruhwirth <clemens@endorphin.org>
* lib/setup.c (get_key): Fixed another incompatibility with
original cryptsetup.
2005-08-20 Clemens Fruhwirth <clemens@endorphin.org>
@@ -816,7 +816,7 @@
* man/cryptsetup.1: Add man page.
* lib/setup.c: Remove unnecessary LUKS_write_phdr call, so the
phdr is written after passphrase reading, so the user can change
his mind, and not have a partially written LUKS header on the disk.


@@ -1,56 +0,0 @@
Integration with kernel keyring service
---------------------------------------
We have two different use cases for kernel keyring service:
I) Volume keys
Since upstream kernel 4.10 the dm-crypt device-mapper target allows loading the
volume key (VK) into the kernel keyring service. A key offloaded to the kernel
keyring service is only referenced (by key description) in the dm-crypt target,
so the VK is no longer stored directly in the dm-crypt target. Starting with
cryptsetup 2.0 we load the VK into the kernel keyring by default for LUKS2
devices (when dm-crypt with this feature is available).
Currently cryptsetup loads the VK as a 'logon' type kernel key, so the VK is passed
into the kernel and cannot be read back from userspace afterward. Cryptsetup also
links the VK into the thread keyring (before passing the reference to the dm-crypt
target), so the key lifetime is directly bound to the process that performs the
dm-crypt setup. When the cryptsetup process exits (for whatever reason), the key is
unlinked by the kernel automatically. In summary, the key description visible in the
dm-crypt table line is a reference to a VK that usually no longer exists in the
kernel keyring service if you used cryptsetup for device activation.
With this feature, dm-crypt no longer maintains a direct copy of the key (but there
is always at least one copy in the kernel crypto layer).
II) Keyslot passphrase
The second use case for the kernel keyring is to allow cryptsetup to read the keyslot
passphrase from the kernel keyring instead. The user may load a passphrase into the
kernel keyring and tell cryptsetup to read it from there later. Currently, the
cryptsetup CLI supports kernel keyring passphrases only via the LUKS2 internal token
(luks2-keyring). The library also provides a general method for device activation by
reading the passphrase from the keyring: crypt_activate_by_keyring(). The key type
for use case II) must always be 'user', since we need to read the actual key
data from userspace, unlike with the VK in I). The ability to read a keyslot passphrase
from the kernel keyring also makes it easy to auto-activate LUKS2 devices.
A simple example of how to use the kernel keyring for a keyslot passphrase:
1) create LUKS2 keyring token for keyslot 0 (in LUKS2 device/image)
cryptsetup token add --key-description my:key -S 0 /dev/device
2) Load keyslot passphrase in user keyring
read -s -p "Keyslot passphrase: "; echo -n $REPLY | keyctl padd user my:key @u
3) Activate device using passphrase stored in kernel keyring
cryptsetup open /dev/device my_unlocked_device
4a) unlink the key when no longer needed by
keyctl unlink %user:my:key @u
4b) or revoke it immediately by
keyctl revoke %user:my:key
If cryptsetup asks for a passphrase in step 3), something went wrong with the
keyring activation; check the --debug output in that case.
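For completeness, the same use case is available through the library API. A minimal
sketch (assuming libcryptsetup 2.x and libkeyutils; the "my:key" description matches
the CLI example above, error handling is shortened):

#include <string.h>
#include <keyutils.h>
#include <libcryptsetup.h>

int activate_via_keyring(const char *device, const char *name, const char *passphrase)
{
    struct crypt_device *cd;
    int r;

    /* Step 2) done in C: store the keyslot passphrase as a 'user' type key. */
    if (add_key("user", "my:key", passphrase, strlen(passphrase),
                KEY_SPEC_USER_KEYRING) < 0)
        return -1;

    if (crypt_init(&cd, device) < 0)
        return -1;

    r = crypt_load(cd, CRYPT_LUKS2, NULL);
    if (r >= 0)
        /* Reads the passphrase from the keyring by its description. */
        r = crypt_activate_by_keyring(cd, name, "my:key", CRYPT_ANY_SLOT, 0);

    crypt_free(cd);
    return r;
}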


@@ -1,61 +0,0 @@
LUKS2 device locking overview
=============================
Why
~~~
The LUKS2 format keeps two identical copies of metadata stored consecutively
at the head of the metadata device (file or block device). The metadata
area (both copies) must be updated in a single atomic operation to avoid
header corruption during a concurrent write.
With LUKS1, users usually know when a LUKS header is being updated (written to)
and when it is only being read, so the need for locking with the legacy format
was not as obvious as it is with the LUKS2 format.
With LUKS2 the boundary between read-only and read-write is blurry, and what
used to be an exclusively read-only operation (e.g., the cryptsetup open command)
may silently become a read-update operation without the user's knowledge.
A major feature of the LUKS2 format is resilience against accidental
corruption of metadata (e.g., a partial header overwrite by parted or cfdisk
while creating a partition on the wrong block device).
Such header corruption is detected early during header read and an auto-recovery
procedure takes place (a corrupted header with a checksum mismatch is
replaced by the secondary copy if that one is intact).
On current Linux systems a header load operation may be triggered without direct
user intervention, for example by a udev rule or from a systemd service.
Such a clash between a header read and the auto-recovery procedure could have severe
consequences, in the worst case leaving the LUKS2 device inaccessible or
broken beyond repair.
The locking of LUKS2 device headers splits into two categories, depending
on what backend the header is stored on:
I) block device
~~~~~~~~~~~~~~~
We perform flock() on file descriptors of lock files stored in a private
directory (by default /run/lock/cryptsetup). The file name is derived
from the major:minor pair of the affected block device. Note that access
to the private locking directory is supposed to be limited
to the superuser only. For this method to work, the distribution needs
to install the locking directory with appropriate access rights.
II) regular files
~~~~~~~~~~~~~~~~~
The first notable difference between headers stored in a file
and headers stored on a block device is that headers in a file may be
manipulated by a regular user, unlike headers on block devices. Therefore
we perform flock() protection directly on the file containing the LUKS2 header.
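The serialization described above is plain flock() on a lock-file descriptor. A
minimal illustrative sketch of the idea for case I) follows (this is not the
library's actual code; the lock-file naming is only an example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <unistd.h>

/* Take an exclusive header lock for a block device given by major:minor. */
static int lock_bdev_header(unsigned int maj, unsigned int mnr)
{
    char path[64];
    int fd;

    /* Lock-file name derived from major:minor (naming here is illustrative). */
    snprintf(path, sizeof(path), "/run/lock/cryptsetup/L_%u:%u", maj, mnr);

    fd = open(path, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return -1;

    /* Blocks until no other process holds the lock; readers would use LOCK_SH. */
    if (flock(fd, LOCK_EX) < 0) {
        close(fd);
        return -1;
    }

    return fd; /* keep open for the duration of the header read/write */
}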
Limitations
~~~~~~~~~~~
a) In general, the locking model provides serialization only of I/Os targeting
the header. It means the header is always written or read as a whole
while locking is enabled.
We do not suppress any other negative effect that two or more concurrent
writers of the same header may cause.
b) The locking is not cluster aware in any way.


@@ -1,4 +1,4 @@
# Doxyfile 1.8.8
#---------------------------------------------------------------------------
# Project related configuration options
@@ -10,7 +10,6 @@ PROJECT_BRIEF = "Public cryptsetup API"
PROJECT_LOGO =
OUTPUT_DIRECTORY = doxygen_api_docs
CREATE_SUBDIRS = NO
ALLOW_UNICODE_NAMES = NO
OUTPUT_LANGUAGE = English
BRIEF_MEMBER_DESC = YES
REPEAT_BRIEF = YES
@@ -28,14 +27,11 @@ INHERIT_DOCS = YES
SEPARATE_MEMBER_PAGES = NO
TAB_SIZE = 8
ALIASES =
TCL_SUBST =
OPTIMIZE_OUTPUT_FOR_C = YES
OPTIMIZE_OUTPUT_JAVA = NO
OPTIMIZE_FOR_FORTRAN = NO
OPTIMIZE_OUTPUT_VHDL = NO
EXTENSION_MAPPING =
MARKDOWN_SUPPORT = YES
AUTOLINK_SUPPORT = YES
BUILTIN_STL_SUPPORT = NO
CPP_CLI_SUPPORT = NO
SIP_SUPPORT = NO
@@ -43,15 +39,13 @@ IDL_PROPERTY_SUPPORT = YES
DISTRIBUTE_GROUP_DOC = NO
SUBGROUPING = YES
INLINE_GROUPED_CLASSES = NO
INLINE_SIMPLE_STRUCTS = NO
TYPEDEF_HIDES_STRUCT = YES
LOOKUP_CACHE_SIZE = 0
SYMBOL_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
EXTRACT_ALL = NO
EXTRACT_PRIVATE = NO
EXTRACT_PACKAGE = NO
EXTRACT_STATIC = NO
EXTRACT_LOCAL_CLASSES = YES
EXTRACT_LOCAL_METHODS = NO
@@ -64,7 +58,6 @@ INTERNAL_DOCS = NO
CASE_SENSE_NAMES = YES
HIDE_SCOPE_NAMES = NO
SHOW_INCLUDE_FILES = YES
SHOW_GROUPED_MEMB_INC = NO
FORCE_LOCAL_INCLUDES = NO
INLINE_INFO = YES
SORT_MEMBER_DOCS = YES
@@ -80,13 +73,13 @@ GENERATE_DEPRECATEDLIST= YES
ENABLED_SECTIONS =
MAX_INITIALIZER_LINES = 30
SHOW_USED_FILES = YES
SHOW_DIRECTORIES = NO
SHOW_FILES = YES
SHOW_NAMESPACES = YES
FILE_VERSION_FILTER =
LAYOUT_FILE =
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------
QUIET = NO
WARNINGS = YES
@@ -96,10 +89,9 @@ WARN_NO_PARAMDOC = NO
WARN_FORMAT = "$file:$line: $text"
WARN_LOGFILE =
#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------
INPUT = "doxygen_index.h" \
"../lib/libcryptsetup.h"
INPUT_ENCODING = UTF-8
FILE_PATTERNS =
RECURSIVE = NO
@@ -115,9 +107,8 @@ INPUT_FILTER =
FILTER_PATTERNS =
FILTER_SOURCE_FILES = NO
FILTER_SOURCE_PATTERNS =
USE_MDFILE_AS_MAINPAGE =
#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------
SOURCE_BROWSER = NO
INLINE_SOURCES = NO
@@ -125,19 +116,16 @@ STRIP_CODE_COMMENTS = YES
REFERENCED_BY_RELATION = NO
REFERENCES_RELATION = NO
REFERENCES_LINK_SOURCE = YES
SOURCE_TOOLTIPS = YES
USE_HTAGS = NO
VERBATIM_HEADERS = YES
CLANG_ASSISTED_PARSING = NO
CLANG_OPTIONS =
#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------
ALPHABETICAL_INDEX = YES
COLS_IN_ALPHA_INDEX = 5
IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------
GENERATE_HTML = YES
HTML_OUTPUT = html
@@ -145,14 +133,13 @@ HTML_FILE_EXTENSION = .html
HTML_HEADER =
HTML_FOOTER =
HTML_STYLESHEET =
HTML_EXTRA_STYLESHEET =
HTML_EXTRA_FILES =
HTML_COLORSTYLE_HUE = 220
HTML_COLORSTYLE_SAT = 100
HTML_COLORSTYLE_GAMMA = 80
HTML_TIMESTAMP = YES
HTML_ALIGN_MEMBERS = YES
HTML_DYNAMIC_SECTIONS = NO
HTML_INDEX_NUM_ENTRIES = 100
GENERATE_DOCSET = NO
DOCSET_FEEDNAME = "Doxygen generated docs"
DOCSET_BUNDLE_ID = org.doxygen.Project
@@ -176,26 +163,19 @@ QHG_LOCATION =
GENERATE_ECLIPSEHELP = NO
ECLIPSE_DOC_ID = org.doxygen.Project
DISABLE_INDEX = NO
GENERATE_TREEVIEW = NO
ENUM_VALUES_PER_LINE = 4
USE_INLINE_TREES = NO
TREEVIEW_WIDTH = 250
EXT_LINKS_IN_WINDOW = NO
FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
USE_MATHJAX = NO
MATHJAX_FORMAT = HTML-CSS
MATHJAX_RELPATH = http://www.mathjax.org/mathjax
MATHJAX_EXTENSIONS =
MATHJAX_CODEFILE =
SEARCHENGINE = YES
SERVER_BASED_SEARCH = NO
EXTERNAL_SEARCH = NO
SEARCHENGINE_URL =
SEARCHDATA_FILE = searchdata.xml
EXTERNAL_SEARCH_ID =
EXTRA_SEARCH_MAPPINGS =
#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------
GENERATE_LATEX = YES
LATEX_OUTPUT = latex
@@ -206,15 +186,13 @@ PAPER_TYPE = a4
EXTRA_PACKAGES =
LATEX_HEADER =
LATEX_FOOTER =
LATEX_EXTRA_FILES =
PDF_HYPERLINKS = YES
USE_PDFLATEX = YES
LATEX_BATCHMODE = NO
LATEX_HIDE_INDICES = NO
LATEX_SOURCE_CODE = NO
LATEX_BIB_STYLE = plain
#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------
GENERATE_RTF = NO
RTF_OUTPUT = rtf
@@ -223,31 +201,26 @@ RTF_HYPERLINKS = NO
RTF_STYLESHEET_FILE =
RTF_EXTENSIONS_FILE =
#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------
GENERATE_MAN = NO
MAN_OUTPUT = man
MAN_EXTENSION = .3
MAN_SUBDIR =
MAN_LINKS = NO
#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------
GENERATE_XML = NO
XML_OUTPUT = xml
XML_SCHEMA =
XML_DTD =
XML_PROGRAMLISTING = YES
#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------
GENERATE_DOCBOOK = NO
DOCBOOK_OUTPUT = docbook
DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------
GENERATE_AUTOGEN_DEF = NO
#---------------------------------------------------------------------------
# Configuration options related to the Perl module output
#---------------------------------------------------------------------------
GENERATE_PERLMOD = NO
PERLMOD_LATEX = NO
@@ -266,20 +239,18 @@ PREDEFINED =
EXPAND_AS_DEFINED =
SKIP_FUNCTION_MACROS = YES
#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------
TAGFILES =
GENERATE_TAGFILE =
ALLEXTERNALS = NO
EXTERNAL_GROUPS = YES
EXTERNAL_PAGES = YES
PERL_PATH =
#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------
CLASS_DIAGRAMS = YES
MSCGEN_PATH =
DIA_PATH =
HIDE_UNDOC_RELATIONS = YES
HAVE_DOT = NO
DOT_NUM_THREADS = 0
@@ -290,7 +261,6 @@ CLASS_GRAPH = YES
COLLABORATION_GRAPH = YES
GROUP_GRAPHS = YES
UML_LOOK = NO
UML_LIMIT_NUM_FIELDS = 10
TEMPLATE_RELATIONS = NO
INCLUDE_GRAPH = YES
INCLUDED_BY_GRAPH = YES
@@ -299,12 +269,9 @@ CALLER_GRAPH = NO
GRAPHICAL_HIERARCHY = YES
DIRECTORY_GRAPH = YES
DOT_IMAGE_FORMAT = png
INTERACTIVE_SVG = NO
DOT_PATH =
DOTFILE_DIRS =
MSCFILE_DIRS =
DIAFILE_DIRS =
PLANTUML_JAR_PATH =
DOT_GRAPH_MAX_NODES = 50
MAX_DOT_GRAPH_DEPTH = 0
DOT_TRANSPARENT = NO


@@ -1,9 +1,10 @@
/*! \mainpage Cryptsetup API
*
* <b>The</b> documentation covers public parts of cryptsetup API. In the following sections you'll find
* the examples that describe some features of cryptsetup API.
* For more info about libcryptsetup API versions see
* <a href="https://gitlab.com/cryptsetup/cryptsetup/wikis/ABI-tracker/timeline/libcryptsetup/index.html">API Tracker</a>.
*
* <OL type="A">
* <LI>@ref cexamples "Cryptsetup API examples"</LI>
@@ -31,16 +32,18 @@
* @section cexamples Cryptsetup API examples
* @section cluks crypt_luks_usage - cryptsetup LUKS device type usage
* @subsection cinit crypt_init()
*
* Every time you need to do something with cryptsetup or dmcrypt device
* you need a valid context. The first step to start your work is
* @ref crypt_init call. You can call it either with path
* to the block device or path to the regular file. If you don't supply the path,
* empty context is initialized.
*
* @subsection cformat crypt_format() - header and payload on mutual device
*
* This section covers basic use cases for formatting LUKS devices. Format operation
* sets device type in context and in case of LUKS header is written at the beginning
* of block device. In the example below we use the scenario where LUKS header and data
* are both stored on the same device. There's also a possibility to store header and
* data separately.
*
@@ -48,6 +51,7 @@
* overwrites part of the backing block device.
*
* @subsection ckeys Keyslot operations examples
*
* After successful @ref crypt_format of LUKS device, volume key is not stored
* in a persistent way on the device. Keyslot area is an array beyond LUKS header, where
* volume key is stored in the encrypted form using user input passphrase. For more info about
@@ -56,27 +60,33 @@
* There are two basic methods to create a new keyslot:
*
* @subsection ckeyslot_vol crypt_keyslot_add_by_volume_key()
*
* Creates a new keyslot directly by encrypting volume_key stored in the device
* context. Passphrase should be supplied or user is prompted if passphrase param is
* NULL.
*
* @subsection ckeyslot_pass crypt_keyslot_add_by_passphrase()
*
* Creates a new keyslot for the volume key by opening existing active keyslot,
* extracting volume key from it and storing it into a new keyslot
* protected by a new passphrase
*
* @subsection cload crypt_load()
*
* Function loads header from backing block device into device context.
*
* @subsection cactivate crypt_activate_by_passphrase()
*
* Activates crypt device by user supplied password for keyslot containing the volume_key.
* If <I>keyslot</I> parameter is set to <I>CRYPT_ANY_SLOT</I> then all active keyslots
* are tried one by one until the volume key is found.
*
* @subsection cactive_pars crypt_get_active_device()
*
* This call returns structure containing runtime attributes of active device.
*
* @subsection cinit_by_name crypt_init_by_name()
*
* In case you need to do operations with active device (device which already
* has its corresponding mapping) and you miss valid device context stored in
* *crypt_device reference, you should use this call. Function tries to
@@ -84,18 +94,22 @@
* header.
*
* @subsection cdeactivate crypt_deactivate()
*
* Deactivates crypt device (removes DM mapping and safely erases volume key from kernel).
*
* @subsection cluks_ex crypt_luks_usage.c - Complex example
*
* To compile and run use following commands in examples directory:
*
* @code
* make
* ./crypt_luks_usage _path_to_[block_device]_file
* @endcode
*
* Note that you need to have the cryptsetup library compiled. @include crypt_luks_usage.c
*
* @section clog crypt_log_usage - cryptsetup logging API example
*
* Example describes basic use case for cryptsetup logging. To compile and run
* use following commands in examples directory:
*
@@ -103,6 +117,7 @@
* make
* ./crypt_log_usage
* @endcode
*
* Note that you need to have the cryptsetup library compiled. @include crypt_log_usage.c
*
* @example crypt_luks_usage.c
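The flow described in the sections above (context init, format, keyslot creation,
activation) can be condensed into a short sketch like the one below (illustrative
only, with error handling and secure key handling shortened; it follows the public
API documented on this page):

#include <string.h>
#include <libcryptsetup.h>

static int format_and_activate(const char *device, const char *name,
                               const char *passphrase)
{
    struct crypt_device *cd;
    int r;

    if (crypt_init(&cd, device) < 0)
        return -1;

    /* Write a LUKS header; NULL volume key means generate a random 256-bit key. */
    r = crypt_format(cd, CRYPT_LUKS1, "aes", "xts-plain64", NULL, NULL, 32, NULL);

    /* Protect the volume key in a new keyslot with the passphrase. */
    if (r >= 0)
        r = crypt_keyslot_add_by_volume_key(cd, CRYPT_ANY_SLOT, NULL, 0,
                                            passphrase, strlen(passphrase));

    /* Create the /dev/mapper/<name> mapping. */
    if (r >= 0)
        r = crypt_activate_by_passphrase(cd, name, CRYPT_ANY_SLOT,
                                         passphrase, strlen(passphrase), 0);

    crypt_free(cd);
    return r < 0 ? r : 0;
}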


@@ -1,7 +1,7 @@
/*
* An example of using logging through libcryptsetup API
*
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,7 +1,7 @@
/*
* An example of using LUKS device through libcryptsetup API
*
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -15,7 +15,7 @@ Important changes
* NSS (because of missing ripemd160 it cannot provide full backward compatibility)
* kernel userspace API (provided by kernel 2.6.38 and above)
(Note that kernel userspace backend is very slow for this type of operation.
But it can be useful for embedded systems, because you can avoid userspace
crypto library completely.)
Backend is selected during configure time, using --with-crypto_backend option.


@@ -89,7 +89,7 @@ WARNING: This release removes old deprecated API from libcryptsetup
(It can be used to simulate trivial hidden disk concepts.)
libcryptsetup API changes:
* Added options to support detached metadata device
crypt_init_by_name_and_header()
crypt_set_data_device()
* Add crypt_last_error() API call.


@@ -13,8 +13,12 @@ Changes since version 1.7.2
If the measurement function returns exactly 500 ms, the iteration calculation loop
doubled the iteration count but, instead of repeating the measurement, used this value directly.
* Verify passphrase in cryptsetup-reencrypt when encrypting a new drive.
* OpenSSL backend: fix memory leak if hash context was repeatedly reused.
* OpenSSL backend: add support for OpenSSL 1.1.0.
* Fix several minor spelling errors.
* Properly check maximal buffer size when parsing UUID from /dev/disk/.


@@ -1,22 +0,0 @@
Cryptsetup 1.7.4 Release Notes
==============================
Changes since version 1.7.3
* Allow specifying the LUKS1 hash algorithm in the Python luksFormat wrapper.
* Use LUKS1 compiled-in defaults also in Python wrapper.
* OpenSSL backend: Fix OpenSSL 1.1.0 support without backward compatible API.
* OpenSSL backend: Fix LibreSSL compatibility.
* Check for data device and hash device area overlap in veritysetup.
* Fix a possible race while allocating a free loop device.
* Fix possible file descriptor leaks if libcryptsetup is run from a forked process.
* Fix missing same_cpu_crypt flag in status command.
* Various updates to FAQ and man pages.


@@ -1,22 +0,0 @@
Cryptsetup 1.7.5 Release Notes
==============================
Changes since version 1.7.4
* Fixes to luksFormat to properly support recent kernel running in FIPS mode.
Cryptsetup must never use a weak key even if it is just used for testing
of algorithm availability. In FIPS mode, weak keys are always rejected.
A weak key is, for example, detected if the XTS encryption mode uses
the same key for the tweak and the encryption part.
* Fixes accesses to unaligned hidden legacy TrueCrypt header.
On a native 4k-sector device the old hidden TrueCrypt header is not
aligned with the hw sector size (this problem was fixed in later TrueCrypt
on-disk format versions).
Cryptsetup now properly aligns the read so it does not fail.
* Fixes to optional dracut ramdisk scripts for offline re-encryption on initial boot.


@@ -1,605 +0,0 @@
Cryptsetup 2.0.0 Release Notes
==============================
Stable release with experimental features.
This version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
NOTE: This version changes soname of libcryptsetup library and increases
major version for all public symbols.
Most of the old functions are fully backward compatible, so only
recompilation of programs should be needed.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
To provide all security features of authenticated encryption we need
better nonce-reuse resistant algorithm in kernel (see note below).
For now, please use authenticated encryption as experimental feature.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.0-RC1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Limit KDF requested (for format) memory by available physical memory.
On some systems too high requested amount of memory causes OOM killer
to kill the process (instead of returning ENOMEM).
We never try to use more than half of available physical memory.
* Ignore device alignment if it is not a multiple of minimal-io.
Some USB enclosures seem to report bogus topology info that
prevents using a LUKS detached header.
Changes since version 2.0.0-RC0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Enable to use system libargon2 instead of bundled version.
Renames --disable-argon2 to --disable-internal-argon2 option
and adds --enable-libargon2 flag to allow system libargon2.
* Changes in build system (Automake)
- The build system now uses non-recursive automake (except for tests).
(Tools binaries are now located in buildroot directory.)
- New --disable-cryptsetup option to disable build of cryptsetup tool.
- Enable build of cryptsetup-reencrypt by default.
* Install tmpfiles.d configuration for LUKS2 locking directory.
You can overwrite this using --with-tmpfilesdir configure option.
If your distro does not support tmpfiles.d directory, you have
to create locking directory (/run/lock/cryptsetup) in cryptsetup
package (or init scripts).
* Adds limited support for offline reencryption of LUKS2 format.
* Decrease size of testing images (and the whole release archive).
* Fixes for several memory leaks found by Valgrind and Coverity tools.
* Fixes for several typos in man pages and error messages.
* LUKS header file in luksFormat is now automatically created
if it does not exist.
* Do not allow resize if device size is not aligned to sector size.
Cryptsetup 2.0.0 RC0 Release Notes
==================================
Important features
~~~~~~~~~~~~~~~~~~
* New command integritysetup: support for the new dm-integrity kernel target.
The dm-integrity is a new kernel device-mapper target that introduces
software emulation of per-sector integrity fields on the disk sector level.
It is available since Linux kernel version 4.12.
The provided per-sector metadata fields can be used for storing a data
integrity checksum (for example CRC32).
The dm-integrity implements data journal that enforces atomic update
of a sector and its integrity metadata.
Integritysetup is a CLI utility that can setup standalone dm-integrity
devices (that internally check integrity of data).
Integritysetup is intended to be used for settings that require
non-cryptographic data integrity protection with no data encryption.
For setting up integrity-protected encrypted devices, see disk authenticated
encryption below.
Note that after formatting the checksums need to be initialized;
otherwise device reads will fail because of integrity errors.
Integritysetup by default tries to wipe the device with zero blocks
to avoid this problem. The device wipe can be time-consuming; you can skip
this step by specifying the --no-wipe option.
(But note that not wiping the device can cause some operations to fail
if a write is not a multiple of the page size and the kernel page cache tries
to read sectors with not yet initialized checksums.)
The default setting is tag size 4 bytes per-sector and CRC32C protection.
To format device with these defaults:
$ integritysetup format <device>
$ integritysetup open <device> <name>
Note that the used algorithm (unlike the tag size) is NOT stored in the device
superblock, and if you use a different algorithm, you MUST specify
it in every open command, for example:
$ integritysetup format <device> --tag-size 32 --integrity sha256
$ integritysetup open <device> <name> --integrity sha256
For more info, see integrity man page.
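The same defaults are reachable through the library; a minimal sketch assuming the
libcryptsetup 2.x CRYPT_INTEGRITY format type (the parameters mirror the CLI
defaults above, error handling is shortened):

#include <libcryptsetup.h>

int format_integrity(const char *device)
{
    struct crypt_device *cd;
    /* The CLI defaults described above: 4-byte tag per 512-byte sector, crc32c. */
    struct crypt_params_integrity params = {
        .tag_size = 4,
        .sector_size = 512,
        .integrity = "crc32c",
    };
    int r;

    if (crypt_init(&cd, device) < 0)
        return -1;

    r = crypt_format(cd, CRYPT_INTEGRITY, NULL, NULL, NULL, NULL, 0, &params);

    crypt_free(cd);
    return r;
}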
* Veritysetup command can now format and activate dm-verity devices
that contain Forward Error Correction (FEC) (Reed-Solomon code is used).
This feature is already used on most Android devices (available since
Linux kernel 4.5).
There are new options --fec-device and --fec-offset to specify the data area
with the correction code, and --fec-roots to set the Reed-Solomon generator roots.
These settings can be used with the format command (veritysetup will calculate
and store RS codes) or the open command (veritysetup configures kernel
dm-verity to use RS codes).
For more info see veritysetup man page.
* Support for larger sector sizes for crypt devices.
LUKS2 and plain crypt devices can now be configured with a larger encryption
sector (typically 4096 bytes; the sector size must be a power of two,
and the maximal sector size is 4096 bytes for portability).
Large sector size can decrease encryption overhead and can also help
with some specific crypto hardware accelerators that perform very
badly with 512 bytes sectors.
Note that if you configure a larger sector for a device that uses a smaller
physical sector, there is a possibility of data corruption during a
power failure (partial sector writes).
WARNING: If you use a different sector size for a plain device after data has been
stored, decryption will produce garbage.
For LUKS2, the sector size is stored in metadata and cannot be changed later.
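Through the library, the sector size is passed in the LUKS2 format parameters; a
minimal sketch (assuming the libcryptsetup 2.x crypt_params_luks2 structure, with
other parameters left at their defaults):

#include <libcryptsetup.h>

int format_luks2_4k(const char *device)
{
    struct crypt_device *cd;
    struct crypt_params_luks2 params = {
        .sector_size = 4096, /* stored in LUKS2 metadata, cannot be changed later */
    };
    int r;

    if (crypt_init(&cd, device) < 0)
        return -1;

    /* NULL volume key: generate a random 512-bit key for aes-xts-plain64. */
    r = crypt_format(cd, CRYPT_LUKS2, "aes", "xts-plain64", NULL, NULL, 64, &params);

    crypt_free(cd);
    return r;
}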
LUKS2 format and features
~~~~~~~~~~~~~~~~~~~~~~~~~
The LUKS2 is an on-disk storage format designed to provide simple key
management, primarily intended for Full Disk Encryption based on dm-crypt.
The LUKS2 is inspired by LUKS1 format and in some specific situations (most
of the default configurations) can be converted in-place from LUKS1.
The LUKS2 format is designed to allow future updates of various
parts without the need to modify binary structures and internally
uses JSON text format for metadata. Compilation now requires the json-c library
that is used for JSON data processing.
On-disk format provides redundancy of metadata, detection
of metadata corruption and automatic repair from metadata copy.
NOTE: For security reasons, there is no redundancy in keyslots binary data
(encrypted keys) but the format allows adding such a feature in future.
NOTE: to operate correctly, LUKS2 requires locking of metadata.
Locking is performed by using flock() system call for images in file
and for block device by using a specific lock file in /run/lock/cryptsetup.
This directory must be created by distribution (do not rely on internal
fallback). For systemd-based distribution, you can simply install
scripts/cryptsetup.conf into tmpfiles.d directory.
For more details see LUKS2-format.txt and LUKS2-locking.txt in the docs
directory. (Please note this is just overview, there will be more formal
documentation later.)
LUKS2 use
~~~~~~~~~
LUKS2 allows using all possible configurations as LUKS1.
To format device as LUKS2, you have to add "--type luks2" during format:
$ cryptsetup luksFormat --type luks2 <device>
All commands issued later will recognize the new format automatically.
The newly added features in LUKS2 include:
* Authenticated disk (sector) encryption (EXPERIMENTAL)
Legacy Full disk encryption (FDE), for example, LUKS1, is a length-preserving
encryption (plaintext is the same size as a ciphertext).
Such FDE can provide data confidentiality, but cannot provide sound data
integrity protection.
Full disk authenticated encryption is a way how to provide both
confidentiality and data integrity protection. Integrity protection here means
not only detection of random data corruption (silent data corruption) but also
prevention of an unauthorized intentional change of disk sector content.
NOTE: Integrity protection of this type cannot prevent a replay attack.
An attacker can replace the device or its part of the old content, and it
cannot be detected.
If you need such protection, better use integrity protection on a higher layer.
For data integrity protection on the sector level, we need additional
per-sector metadata space. In LUKS2 this space is provided by a new
device-mapper dm-integrity target (available since kernel 4.12).
Here the integrity target provides only reliable per-sector metadata store,
and the whole authenticated encryption is performed inside dm-crypt stacked
over the dm-integrity device.
For encryption, Authenticated Encryption with Additional Data (AEAD) is used.
Every sector is processed as an encryption request of this format:
|----- AAD -------|------ DATA -------|-- AUTH TAG --|
| (authenticated) | (auth+encryption) | |
| sector_LE | IV | sector in/out | tag in/out |
AEAD encrypts the whole sector and also authenticates sector number
(to detect sector relocation) and also authenticates Initialization Vector.
AEAD encryption produces encrypted data and authentication tag.
The authenticated tag is then stored in per-sector metadata space provided
by dm-integrity.
Most current AEAD algorithms require the IV to be a nonce, a value that is
never reused. Because the sector number cannot be used as an IV in this
environment, we use a new random IV (a random value generated by the system
RNG on every write). This random IV is then stored in the per-sector metadata
as well.
Because the authentication tag (and IV) requires additional space, the device
provided for a user has less capacity. Also, the data journalling means that
writes are performed twice, decreasing throughput.
This integrity protection works better with SSDs. If you want to ignore
dm-integrity data journal (because journalling is performed on some higher
layer or you just want to trade-off performance to safe recovery), you can
switch journal off with --integrity-no-journal option.
(This flag can be stored persistently as well.)
Note that (similar to integritysetup) the device read will fail if
authentication tag is not initialized (no previous write).
By default cryptsetup runs a wipe of the device (writing zeroes) to initialize
authentication tags. This operation can be very time-consuming.
You can skip device wipe using --integrity-no-wipe option.
To format LUKS2 device with integrity protection, use new --integrity option.
For now, there are very few AEAD algorithms that can be used, and some
of them are known to be problematic. In this release we support only
a few AEAD algorithms (options are for now hard-coded); later this
extension will be completely algorithm-agnostic.
For testing of authenticated encryption, these algorithms work for now:
1) aes-xts-plain64 with hmac-sha256 or hmac-sha512 as the authentication tag.
(Common FDE mode + independent authentication tag. Authentication key
for HMAC is independently generated. This mode is very slow.)
$ cryptsetup luksFormat --type luks2 <device> --cipher aes-xts-plain64 --integrity hmac-sha256
2) aes-gcm-random (native AEAD mode)
DO NOT USE in production! The GCM mode uses only 96-bit nonce,
and possible collision means fatal security problem.
GCM mode has very good hardware support through AES-NI, so it is useful
for performance testing.
$ cryptsetup luksFormat --type luks2 <device> --cipher aes-gcm-random --integrity aead
3) ChaCha20 with Poly1305 authenticator (according to RFC7539)
$ cryptsetup luksFormat --type luks2 <device> --cipher chacha20-random --integrity poly1305
To specify AES128/AES256, just specify the proper key size (without the possible
authentication key). Other symmetric ciphers, like Serpent or Twofish,
should work as well. Modes 1) and 2) should be compatible with the IEEE 1619.1
standard recommendation.
More suitable authenticated modes will be available soon.
For now we are just preparing the framework to enable them (and hopefully improve the security of FDE).
FDE authenticated encryption is not a replacement for filesystem layer
authenticated encryption. The goal is to provide at least something because
data integrity protection is often completely ignored in today systems.
* New memory-hard PBKDF
LUKS1 introduced Password-Based Key Derivation Function v2 as a tool to
increase attacker cost for a dictionary and brute force attacks.
The PBKDF2 uses iteration count to increase time of key derivation.
Unfortunately, with modern GPUs, the PBKDF2 calculations can be run
in parallel and PBKDF2 can no longer provide the best available protection.
Increasing iteration count just cannot prevent massive parallel dictionary
password attacks in long-term.
To solve this problem, a new PBKDF, based on so-called memory-hard functions
can be used. Key derivation with memory-hard function requires a certain
amount of memory to compute its output. The memory requirement is very
costly for GPUs and prevents these systems to operate effectively,
increasing cost for attackers.
LUKS2 introduces support for Argon2i and Argon2id as a PBKDF.
Argon2 is the winner of Password Hashing Competition and is currently
in final RFC draft specification.
For now, libcryptsetup contains the embedded copy of reference implementation
of Argon2 (that is easily portable to all architectures).
Later, once this function is available in common crypto libraries, it will
switch to external implementation. (This happened for LUKS1 and PBKDF2
as well years ago.)
With using reference implementation (that is not optimized for speed), there
is some performance penalty. However, using memory-hard PBKDF should still
significantly complicate GPU-optimized dictionary and brute force attacks.
The Argon2 uses three costs: memory, time (number of iterations) and parallel
(number of threads).
Note that time and memory costs highly influence each other (accessing a lot
of memory takes more time).
There is a new benchmark that tries to calculate costs in a similar way as
in LUKS1 (where the iteration count is measured to take 1-2 seconds on the user's system).
Because there are now more cost variables, it prefers the time cost (iterations)
and tries to find a memory requirement that fits. (In other words, the required memory
cost can be lower if the benchmark is not able to find the required parameters.)
The benchmark cannot run too long, so it tries to approximate the next
benchmarking step.
For now, the default LUKS2 PBKDF algorithm is Argon2i (the data-independent variant)
with the memory cost set to 128MB, time to 800ms and parallel threads according
to available CPU cores, but no more than 4.
All default parameters can be set during compile time and also set on
the command line by using --pbkdf, --pbkdf-memory, --pbkdf-parallel and
--iter-time options.
(Or without benchmark directly by using --pbkdf-force-iterations, see below.)
You can still use PBKDF2 even for LUKS2 by specifying --pbkdf pbkdf2 option.
(Then only iteration count is applied.)
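Through the library, the same costs are set with crypt_set_pbkdf_type() before
formatting or adding keyslots; a minimal sketch (assuming the libcryptsetup 2.x
crypt_pbkdf_type interface; the cost values are examples only):

#include <libcryptsetup.h>

int set_pbkdf_costs(struct crypt_device *cd)
{
    struct crypt_pbkdf_type pbkdf = {
        .type = CRYPT_KDF_ARGON2ID,
        .hash = "sha256",            /* hash algorithm used alongside the KDF */
        .time_ms = 800,
        .max_memory_kb = 128 * 1024,
        .parallel_threads = 4,
    };

    /* Call before crypt_format() or crypt_keyslot_add_*() so the costs take effect. */
    return crypt_set_pbkdf_type(cd, &pbkdf);
}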
* Use of kernel keyring
Kernel keyring is a storage for sensitive material (like cryptographic keys)
inside Linux kernel.
LUKS2 uses keyring for two major functions:
- To store volume key for dm-crypt where it avoids sending volume key in
every device-mapper ioctl structure. Volume key is also no longer directly
visible in a dm-crypt mapping table. The key is not available for the user
after dm-crypt configuration (obviously except direct memory scan).
Use of kernel keyring can be disabled in runtime by --disable-keyring option.
- As a tool to automatically unlock LUKS device if a passphrase is put into
kernel keyring and proper keyring token is configured.
This allows storing a secret (passphrase) to kernel per-user keyring by
some external tool (for example some TPM handler) and LUKS2, if configured,
will automatically search in the keyring and unlock the system.
For more info see Tokens section below.
* Persistent flags
The activation flags (like allow-discards) can be stored in metadata and used
automatically by all later activations (even without using crypttab).
To store activation flags permanently, use activation command with required
flags and add --persistent option.
For example, to mark device to always activate with TRIM enabled,
use (for LUKS2 type):
$ cryptsetup open <device> <name> --allow-discards --persistent
You can check persistent flags in dump command output:
$ cryptsetup luksDump <device>
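The library equivalent is crypt_persistent_flags_set(); a minimal sketch (assuming
the libcryptsetup 2.x API):

#include <libcryptsetup.h>

/* Store the allow-discards activation flag in LUKS2 metadata permanently. */
int persist_discards(struct crypt_device *cd)
{
    return crypt_persistent_flags_set(cd, CRYPT_FLAGS_ACTIVATION,
                                      CRYPT_ACTIVATE_ALLOW_DISCARDS);
}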
* Tokens and auto-activation
A LUKS2 token is an object that can be described "how to get passphrase or key"
to unlock particular keyslot.
(Also it can be used to store any additional metadata, and with
the libcryptsetup interface it can be used to define user token types.)
Cryptsetup internally implements keyring token. Cryptsetup tries to use
available tokens before asking for the passphrase. For keyring token,
it means that if the passphrase is available under specified identifier
inside kernel keyring, the device is automatically activated using this
stored passphrase.
Example of using LUKS2 keyring token:
# Adding token to metadata with "my_token" identifier (by default it applies to all keyslots).
$ cryptsetup token add --key-description "my_token" <device>
# Storing passphrase to user keyring (this can be done by an external application)
$ echo -n <passphrase> | keyctl padd user my_token @u
# Now cryptsetup activates automatically if it finds correct passphrase
$ cryptsetup open <device> <name>
The main reason to use tokens this way is to separate possible hardware
handlers from cryptsetup code.
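Through the library, the keyring token is created with crypt_token_luks2_keyring_set()
and used via crypt_activate_by_token(); a minimal sketch (assuming the libcryptsetup
2.x token API; "my_token" matches the CLI example above):

#include <libcryptsetup.h>

int add_and_use_keyring_token(struct crypt_device *cd, const char *name)
{
    const struct crypt_token_params_luks2_keyring params = {
        .key_description = "my_token",
    };
    int token;

    /* Create a luks2-keyring token (CRYPT_ANY_TOKEN picks a free token slot). */
    token = crypt_token_luks2_keyring_set(cd, CRYPT_ANY_TOKEN, &params);
    if (token < 0)
        return token;

    /* Apply the token to all keyslots (as the CLI does by default). */
    crypt_token_assign_keyslot(cd, token, CRYPT_ANY_SLOT);

    /* Activation tries tokens and reads the passphrase from the kernel keyring. */
    return crypt_activate_by_token(cd, name, CRYPT_ANY_TOKEN, NULL, 0);
}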
* Keyslot priorities
A LUKS2 keyslot can have a new priority attribute.
The default is "normal". The "prefer" priority tells the keyslot to be tried
before other keyslots. Priority "ignore" means that the keyslot will never be
used if not specified explicitly (it can be used for backup administrator
passwords that are used only in situations when a user forgets their own passphrase).
The priority of a keyslot can be set with the new config command, for example:
$ cryptsetup config <device> --key-slot 1 --priority prefer
Setting the priority to normal will reset the slot to the normal state.
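The corresponding library call is crypt_keyslot_set_priority(); a minimal sketch
(assuming the libcryptsetup 2.x API):

#include <libcryptsetup.h>

/* Prefer keyslot 1; CRYPT_SLOT_PRIORITY_NORMAL would reset it back to default. */
int prefer_keyslot(struct crypt_device *cd)
{
    return crypt_keyslot_set_priority(cd, 1, CRYPT_SLOT_PRIORITY_PREFER);
}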
* LUKS2 label and subsystem
The header now contains additional fields for a label and a subsystem (an additional
label). These fields can be used similarly to a filesystem label and will be
visible in udev rules for possible filtering. (Note that blkid does not yet
contain the LUKS scanning code.)
By default both labels are empty. Label and subsystem are always set together
(no option means clear the label) with the config command:
$ cryptsetup config <device> --label my_device --subsystem ""
* In-place conversion from LUKS1
To allow easy testing and transition to the new LUKS2 format, there is a new
convert command that allows in-place conversion from the LUKS1 format and,
if there are no incompatible options, also conversion back from LUKS2
to LUKS1 format.
Note this command can be used only on some LUKS1 devices (some device header
sizes are not supported).
This command is dangerous, never run it without header backup!
If something fails in the middle of conversion (IO error), the header
is destroyed. (Note that conversion requires move of keyslot data area to
a different offset.)
To convert header in-place to LUKS2 format, use
$ cryptsetup convert <device> --type luks2
To convert it back to LUKS1 format, use
$ cryptsetup convert <device> --type luks1
You can verify LUKS version with luksDump command.
$ cryptsetup luksDump <device>
Note that some LUKS2 features will make header incompatible with LUKS1 and
conversion will be rejected (for example using new Argon2 PBKDF or integrity
extensions). Some minor attributes can be lost in conversion.
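Through the library, the conversion is exposed as crypt_convert() (listed among the
new API calls below); a minimal sketch, with the same warning about header backups:

#include <libcryptsetup.h>

int convert_to_luks2(const char *device)
{
    struct crypt_device *cd;
    int r;

    if (crypt_init(&cd, device) < 0)
        return -1;

    r = crypt_load(cd, CRYPT_LUKS1, NULL);
    if (r >= 0)
        r = crypt_convert(cd, CRYPT_LUKS2, NULL); /* or CRYPT_LUKS1 to go back */

    crypt_free(cd);
    return r;
}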
Other changes
~~~~~~~~~~~~~
* Explicit KDF iterations count setting
With new PBKDF interface, there is also the possibility to setup PBKDF costs
directly, avoiding benchmarks. This can be useful if device is formatted to be
primarily used on a different system.
The option --pbkdf-force-iterations is available for both LUKS1 and LUKS2
format. Using this option can cause the device to have either very low or very
high PBKDF costs.
In the first case it means bad protection against dictionary attacks; in the second
case, it can mean extremely high unlocking time or memory requirements.
Use it only if you are sure what you are doing!
Note that this setting also affects the iteration count for the key digest.
For LUKS1 iteration count for digest will be approximately 1/8 of requested
value, for LUKS2 and "pbkdf2" digest minimal PBKDF2 iteration count (1000)
will be used. You cannot set lower iteration count than the internal minimum
(1000 for PBKDF2).
To format LUKS1 device with forced iteration count (and no benchmarking), use
$ cryptsetup luksFormat <device> --pbkdf-force-iterations 22222
For LUKS2 it is always better to specify full settings (do not rely on default
cost values).
For example, we can set to use Argon2id with iteration cost 5, memory 128000
and parallel set 1:
$ cryptsetup luksFormat --type luks2 <device> \
--pbkdf argon2id --pbkdf-force-iterations 5 --pbkdf-memory 128000 --pbkdf-parallel 1
* VeraCrypt PIM
Cryptsetup can now also open a VeraCrypt device that uses a Personal Iteration
Multiplier (PIM). The PIM is an integer value that the user must remember in addition
to the passphrase; it influences the PBKDF2 iteration count (without it, VeraCrypt uses
a fixed number of iterations).
To open VeraCrypt device with PIM settings, use --veracrypt-pim (to specify
PIM on the command line) or --veracrypt-query-pim to query PIM interactively.
* Support for plain64be IV
The plain64be is big-endian variant of plain64 Initialization Vector. It is
used in some images of hardware-based disk encryption systems. Supporting this
variant allows using dm-crypt to map such images through cryptsetup.
* Deferral removal
Cryptsetup now can mark device for deferred removal by using a new option
--deferred. This means that close command will not fail if the device is still
in use, but will instruct the kernel to remove the device automatically after
use count drops to zero (for example, once the filesystem is unmounted).
* A lot of updates to man pages and many minor changes that would make this
release notes too long ;-)
Libcryptsetup API changes
~~~~~~~~~~~~~~~~~~~~~~~~~
These API functions were removed, libcryptsetup no longer handles password
retries from terminal (application should handle terminal operations itself):
crypt_set_password_callback;
crypt_set_timeout;
crypt_set_password_retry;
crypt_set_password_verify;
This call is removed (no need to keep typo backward compatibility,
the proper function is crypt_set_iteration_time :-)
crypt_set_iterarion_time;
These calls were removed because are not safe, use per-context
error callbacks instead:
crypt_last_error;
crypt_get_error;
The PBKDF benchmark was replaced by a new function that uses new KDF structure
crypt_benchmark_kdf; (removed)
crypt_benchmark_pbkdf; (new API call)
These new calls are now exported, for details see libcryptsetup.h:
crypt_keyslot_add_by_key;
crypt_keyslot_set_priority;
crypt_keyslot_get_priority;
crypt_token_json_get;
crypt_token_json_set;
crypt_token_status;
crypt_token_luks2_keyring_get;
crypt_token_luks2_keyring_set;
crypt_token_assign_keyslot;
crypt_token_unassign_keyslot;
crypt_token_register;
crypt_activate_by_token;
crypt_activate_by_keyring;
crypt_deactivate_by_name;
crypt_metadata_locking;
crypt_volume_key_keyring;
crypt_get_integrity_info;
crypt_get_sector_size;
crypt_persistent_flags_set;
crypt_persistent_flags_get;
crypt_set_pbkdf_type;
crypt_get_pbkdf_type;
crypt_convert;
crypt_keyfile_read;
crypt_wipe;
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* There will be better documentation and examples.
* There will be some more formal definition of the threat model for integrity
protection. (And a link to some papers discussing integrity protection,
once it is, hopefully, accepted and published.)
* The offline re-encryption tool currently has only limited LUKS2 support.
There will be an online LUKS2 re-encryption tool in the future.
* Authenticated encryption will use new algorithms from the CAESAR competition
(https://competitions.cr.yp.to/caesar.html) once these algorithms are available
in the kernel (more on this later).
NOTE: Currently available authenticated modes (GCM, Chacha20-poly1305)
in the kernel have too small 96-bit nonces that are problematic with
randomly generated IVs (the collision probability is not negligible).
For GCM, a nonce collision is a fatal problem.
* Authenticated encryption does not set encryption for the dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* Some utilities (blkid, systemd-cryptsetup) already have support for LUKS2,
but it is not yet in released versions (support in crypttab etc.).
* There are some examples of user-defined tokens inside the
misc/luks2_keyslot_example directory (like a simple external program that uses
libssh to unlock LUKS2 using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be deprecated soon in favor
of python bindings to the libblockdev library (which can already handle LUKS1
devices).

View File

@@ -1,109 +0,0 @@
Cryptsetup 2.0.1 Release Notes
==============================
Stable and bug-fix release with experimental features.
This version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
To provide all security features of authenticated encryption we need
a better nonce-reuse resistant algorithm in the kernel (see note below).
For now, please use authenticated encryption as an experimental feature.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* To store the volume key into the kernel keyring, kernel 4.15 with dm-crypt
1.18.1 is required. If a volume key is stored in the keyring (LUKS2 only),
dm-crypt versions 1.15.0 through 1.18.0 contain a serious bug that may cause
data corruption for ciphers with ESSIV.
(The key for ESSIV is zeroed because of code misplacement.)
This bug is not present for LUKS1 or any other IVs used in LUKS modes.
This change is not visible to the user (except in dmsetup output).
* Increase the maximum allowed PBKDF memory-cost limit to 4 GiB.
The Argon2 PBKDF uses 1 GiB by default; this is also limited by the amount
of physical memory available (the maximum is half of the physical memory).
* Use /run/cryptsetup as default for cryptsetup locking dir.
There were problems with sharing /run/lock with lockdev, and in the early
boot, the directory was missing.
The directory can be changed with --with-luks2-lock-path and
--with-luks2-lock-dir-perms configure switches.
* Introduce new 64-bit byte-offset *keyfile_device_offset functions.
The keyfile interface was designed, well, for keyfiles. Unfortunately,
there are use cases where a keyfile can be placed on a device, and
a size_t offset can overflow on 32-bit systems.
A new set of functions that allows 64-bit offsets even on 32-bit systems
is now available:
- crypt_resume_by_keyfile_device_offset
- crypt_keyslot_add_by_keyfile_device_offset
- crypt_activate_by_keyfile_device_offset
- crypt_keyfile_device_read
The new functions have _device_ added to their names.
The old functions are just internal wrappers around these.
Also, the cryptsetup --keyfile-offset and --new-keyfile-offset options now
accept 64-bit offsets as parameters.
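As a rough sketch only (device paths, mapping name, key length and the 4 GiB
offset are made-up example values; error handling is reduced), the new
activation call could be used like this:

#include <libcryptsetup.h>

/* Sketch: unlock a LUKS2 device using 512 bytes of key material stored
 * at a 4 GiB offset on another raw device used as a "keyfile". */
static int open_with_device_keyfile(const char *luks_dev, const char *key_dev)
{
        struct crypt_device *cd;
        int r = crypt_init(&cd, luks_dev);

        if (r < 0)
                return r;
        r = crypt_load(cd, CRYPT_LUKS2, NULL);
        if (r >= 0)
                r = crypt_activate_by_keyfile_device_offset(cd, "secret",
                        CRYPT_ANY_SLOT, key_dev, 512 /* bytes of key material */,
                        4ULL * 1024 * 1024 * 1024 /* byte offset */, 0);
        crypt_free(cd);
        return r;
}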
* Add an error hint for wrongly formatted cipher strings in LUKS1 and
properly fail in luksFormat if the cipher format is missing the required IV.
Previously, the crypto API quietly used a cipher without an IV if a cipher
algorithm without an IV specification was used (e.g., aes-xts).
This caused a failure later during activation.
* Configure check for a recent Argon2 lib to support mandatory Argon2id.
* Fix for the cryptsetup-reencrypt static build if pwquality is enabled.
* Update LUKS1 standard doc (https links in the bibliography).
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* There will be better documentation and examples.
* There will be some more formal definition of the threat model for integrity
protection. (And a link to some papers discussing integrity protection,
once it is, hopefully, accepted and published.)
* The offline re-encryption tool currently has only limited LUKS2 support.
There will be an online LUKS2 re-encryption tool in the future.
* Authenticated encryption will use new algorithms from CAESAR competition
(https://competitions.cr.yp.to/caesar.html) once these algorithms are
available in the kernel (more on this later).
NOTE: Currently available authenticated modes (GCM, Chacha20-poly1305)
in the kernel have too small 96-bit nonces that are problematic with
randomly generated IVs (the collision probability is not negligible).
For the GCM, nonce collision is a fatal problem.
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be deprecated soon in favor
of python bindings to the libblockdev library (that can already handle LUKS1
devices).

View File

@@ -1,93 +0,0 @@
Cryptsetup 2.0.2 Release Notes
==============================
Stable and bug-fix release with experimental features.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
To provide all security features of authenticated encryption, we need
a better nonce-reuse resistant algorithm in the kernel (see note below).
For now, please use authenticated encryption as an experimental feature.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.1
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Fix a regression in early detection of inactive keyslots for luksKillSlot.
It tried to ask for a passphrase even for an already erased keyslot.
* Fix a regression in loopaesOpen processing for a keyfile on standard input.
Use of the "-" argument was not working properly.
* Add LUKS2-specific options for cryptsetup-reencrypt.
Tokens and persistent flags are now transferred during reencryption;
changing PBKDF keyslot parameters is now supported and allows setting
precalculated values (no benchmarking).
* Do not allow LUKS2 --persistent and --test-passphrase cryptsetup flags
combination. Persistent flags are now stored only if the device was
successfully activated with the specified flags.
* Fix integritysetup format after recent Linux kernel changes that
require setting up a key for HMAC in all cases.
Previously, integritysetup allowed HMAC with a zero key, which behaves
like a plain hash.
* Fix VeraCrypt PIM handling that modified internal iteration counts
even for subsequent activations. The PIM count is no longer printed
in the debug log as it is sensitive information.
Also, the code now skips legacy TrueCrypt algorithms if a PIM
is specified (they cannot be used with PIM anyway).
* PBKDF values cannot be set (even with force parameters) below
hardcoded minimums. For PBKDF2 it is 1000 iterations; for Argon2
it is 4 iterations and 32 KiB of memory cost.
* Introduce a new crypt_token_is_assigned() API function for reporting
the binding between a token and keyslots.
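A rough usage sketch (cd is assumed to be an initialized and loaded LUKS2
device; the return-value convention of 0 meaning "assigned" is my reading of
libcryptsetup.h, so treat it as an assumption):

#include <stdio.h>
#include <libcryptsetup.h>

/* Sketch: print the keyslots that a given token is bound to. */
static void print_token_bindings(struct crypt_device *cd, int token)
{
        int ks;

        for (ks = 0; ks < crypt_keyslot_max(CRYPT_LUKS2); ks++)
                if (crypt_token_is_assigned(cd, token, ks) == 0)
                        printf("token %d is bound to keyslot %d\n", token, ks);
}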
* Allow crypt_token_json_set() API function to create internal token types.
Do not allow unknown fields in internal token objects.
* Print a message in cryptsetup that the operation was aborted if a user
did not answer YES in a query.
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* There will be better documentation and examples.
* There will be some more formal definition of the threat model for integrity
protection. (And a link to some papers discussing integrity protection,
once it is, hopefully, accepted and published.)
* Authenticated encryption will use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
We plan to use AEGIS and MORUS, as CAESAR finalists.
NOTE: Currently available authenticated modes (GCM, Chacha20-poly1305)
in the kernel have too small 96-bit nonces that are problematic with
randomly generated IVs (the collision probability is not negligible).
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be deprecated in version 2.1
in favor of python bindings to the libblockdev library.

View File

@@ -1,121 +0,0 @@
Cryptsetup 2.0.3 Release Notes
==============================
Stable bug-fix release with new features.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
To provide all security features of authenticated encryption, we need
a better nonce-reuse resistant algorithm in the kernel (see note below).
For now, please use authenticated encryption as an experimental feature.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.2
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Expose an interface for unbound LUKS2 keyslots.
An unbound LUKS2 keyslot allows storing key material that is independent
of the master volume key (it is not bound to the encrypted data segment).
* New API extensions for unbound keyslots (LUKS2 only):
crypt_keyslot_get_key_size() and crypt_volume_key_get().
These functions allow getting the key and key size for unbound keyslots.
* New enum value CRYPT_SLOT_UNBOUND for keyslot status (LUKS2 only).
* Add --unbound keyslot option to the cryptsetup luksAddKey command.
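For example, to add an unbound keyslot with a 256-bit key (the key size here
is just an illustration):
$ cryptsetup luksAddKey --unbound --key-size 256 <device>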
* Add crypt_get_active_integrity_failures() call to get integrity
failure count for dm-integrity devices.
* Add crypt_get_pbkdf_default() function to get per-type PBKDF default
setting.
* Add a new flag to crypt_keyslot_add_by_key() to force an update of the
device volume key. This call is mainly intended for a wrapped key change.
* Allow storing the volume key in a file with cryptsetup.
The --dump-master-key option together with --master-key-file allows cryptsetup
to store the binary volume key to a file instead of standard output.
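For example (vk.bin is just an example output file name):
$ cryptsetup luksDump <device> --dump-master-key --master-key-file vk.bin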
* Add support for a detached header to the cryptsetup-reencrypt command.
* Fix VeraCrypt PIM handling - use the proper iteration count formula
for PBKDF2-SHA512 and PBKDF2-Whirlpool used in system volumes.
* Fix cryptsetup tcryptDump for VeraCrypt PIM (support --veracrypt-pim).
* Add --with-default-luks-format configure time option.
(Option to override default LUKS format version.)
* Fix LUKS version conversion for detached (and trimmed) LUKS headers.
* Add the luksConvertKey cryptsetup command that converts a specific keyslot
from one PBKDF to another.
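For example, keyslot 1 could be switched to Argon2id with something like
(the keyslot number is illustrative):
$ cryptsetup luksConvertKey <device> --key-slot 1 --pbkdf argon2id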
* Do not allow conversion to LUKS2 if LUKSMETA (external tool metadata)
header is detected.
* More cleanup and hardening of LUKS2 keyslot-specific validation options.
Add more checks for cipher validity before writing metadata on disk.
* Do not allow LUKS1 version downconversion if the header contains tokens.
* Add "paes" family ciphers (AES wrapped key scheme for mainframes)
to allowed ciphers.
Specific wrapped ley configuration logic must be done by 3rd party tool,
LUKS2 stores only keyslot material and allow activation of the device.
* Add support for the --check-at-most-once option (kernel 4.17) to veritysetup.
This flag can be dangerous; if the underlying device can be controlled
(its content changed after it was verified), the flag no longer prevents
reading tampered data, and it also does not prevent silent data corruption
that appears after a block was read once.
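For example (device names and root hash are placeholders):
$ veritysetup open <data_device> <name> <hash_device> <root_hash> --check-at-most-once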
* Fix return code (EPERM instead of EINVAL) and retry count for bad
passphrase on non-tty input.
* Enable support for FEC decoding in veritysetup to check dm-verity devices
with additional Reed-Solomon code in userspace (verify command).
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* There will be better documentation and examples (planned for 2.0.4).
* There will be some more formal definition of the threat model for integrity
protection. (And a link to some papers discussing integrity protection,
once it is, hopefully, accepted and published.)
* Authenticated encryption will use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
We plan to use AEGIS and MORUS, as CAESAR finalists.
NOTE: Currently available authenticated modes (GCM, Chacha20-poly1305)
in the kernel have too small 96-bit nonces that are problematic with
randomly generated IVs (the collision probability is not negligible).
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be REMOVED in version 2.1
in favor of python bindings to the libblockdev library.
See https://github.com/storaged-project/libblockdev/releases/tag/2.17-1 that
already supports LUKS2 and VeraCrypt devices handling through libcryptsetup.

View File

@@ -1,119 +0,0 @@
Cryptsetup 2.0.4 Release Notes
==============================
Stable bug-fix release with new features.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
To provide all security features of authenticated encryption, we need
a better nonce-reuse resistant algorithm in the kernel (see note below).
For now, please use authenticated encryption as an experimental feature.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.3
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Use the libblkid (blockid) library to detect foreign signatures
on a device before LUKS format and LUKS2 auto-recovery.
This change fixes an unexpected recovery using the secondary
LUKS2 header after a device was already overwritten with
another format (filesystem or LVM physical volume).
LUKS2 will not recreate a primary header if it detects a valid
foreign signature. In this situation, a user must always
use the cryptsetup repair command for the recovery.
Note that libcryptsetup and utilities are now linked to libblkid
as a new dependency.
To compile the code without blockid support (strongly discouraged),
use the --disable-blkid configure switch.
* Add prompt for format and repair actions in cryptsetup and
integritysetup if foreign signatures are detected on the device
through the blockid library.
After the confirmation, all known signatures are then wiped as
part of the format or repair procedure.
* Print consistent verbose message about keyslot and token numbers.
For keyslot actions: Key slot <number> unlocked/created/removed.
For token actions: Token <number> created/removed.
* Print an error if removal of a non-existent token is attempted.
* Add support for LUKS2 token definition export and import.
The token command can now export/import a customized token JSON file
directly from the command line. See the man page for more details.
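For illustration (the exact option names are from my recollection of the man
page; the token number and file name are placeholders):
$ cryptsetup token export --token-id 0 <device> > token.json
$ cryptsetup token import --json-file token.json <device>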
* Add support for new dm-integrity superblock version 2.
* Add an error message when nothing was read from a key file.
* Update cryptsetup man pages, including --type option usage.
* Add a snapshot of LUKS2 format specification to documentation
and accordingly fix supported secondary header offsets.
* Add bundled optimized Argon2 SSE (X86_64 platform) code.
If the bundled Argon2 code is used, the new configure switch
--enable-internal-sse-argon2 is present, and the compiler flags
support the required optimization, the code will try to use the
optimized, faster variant.
Always use the shared library (--enable-libargon2) if possible.
This option was added because an enterprise distribution
declined to support the shared Argon2 library and native support
in generic cryptographic libraries is not ready yet.
* Fix compilation with crypto backend for LibreSSL >= 2.7.0.
LibreSSL introduced OpenSSL 1.1.x API functions, so the compatibility
wrapper must be commented out.
* Fix on-disk header size calculation for the LUKS2 format if a specific
data alignment is requested. Until now, the code used a default size
that could be wrong for converted devices.
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Authenticated encryption will use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
We plan to use AEGIS and MORUS (in kernel 4.18), as CAESAR finalists.
NOTE: Currently available authenticated modes (GCM, Chacha20-poly1305)
in the kernel have too small 96-bit nonces that are problematic with
randomly generated IVs (the collision probability is not negligible).
For more info about LUKS2 authenticated encryption, please see our paper
https://arxiv.org/abs/1807.00309
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be REMOVED in version 2.1
in favor of python bindings to the libblockdev library.
See https://github.com/storaged-project/libblockdev/releases that
already supports LUKS2 and VeraCrypt devices handling through libcryptsetup.

View File

@@ -1,102 +0,0 @@
Cryptsetup 2.0.5 Release Notes
==============================
Stable bug-fix release with new features.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.4
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Wipe full header areas (including unused) during LUKS format.
Since this version, the whole area up to the data offset is zeroed,
and subsequently, all keyslot areas are wiped with random data.
This ensures that no old data remains in the LUKS header
areas, but it could slow down the format operation on some devices.
Previously, only the first 4k (or 32k for LUKS2) and the used keyslot
were overwritten in the format operation.
* Several fixes to error messages that were unintentionally replaced
in previous versions with a silent exit code.
More descriptive error messages were added, including error
messages if
- a device is unusable (not a block device, no access, etc.),
- a LUKS device is not detected,
- LUKS header load code detects unsupported version,
- a keyslot decryption fails (also happens in the cipher check),
- converting an inactive keyslot.
* Device activation fails if data area overlaps with LUKS header.
* The code now uses explicit_bzero to wipe memory if available
(instead of its own implementation).
* Additional VeraCrypt modes are now supported, including Camellia
and Kuznyechik symmetric ciphers (and cipher chains) and Streebog
hash function. These were introduced in a recent VeraCrypt upstream.
Note that Kuznyechik requires an out-of-tree kernel module and the
Streebog hash function is available only with the gcrypt cryptographic
backend for now.
* Fixes static build for integritysetup if the pwquality library is used.
* Allows passphrase change for unbound keyslots.
* Fixes removed keyslot number in verbose message for luksKillSlot,
luksRemoveKey and erase command.
* Adds a blkid scan when attempting to open a plain device and warns the user
about existing device signatures in a ciphertext device.
* Remove LUKS header signature if luksFormat fails to add the first keyslot.
* Remove O_SYNC from device open and use fsync() to speed up
wipe operation considerably.
* Create --master-key-file in luksDump and fail if the file already exists.
* Fixes a bug when LUKS2 authenticated encryption with a detached header
wiped the header device instead of dm-integrity data device area (causing
unnecessary LUKS2 header auto recovery).
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Authenticated encryption should use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
AEGIS and MORUS are already available in kernel 4.18.
For more info about LUKS2 authenticated encryption, please see our paper
https://arxiv.org/abs/1807.00309
Please note that authenticated encryption is still an experimental feature
and can have performance problems for high-speed devices and devices
with larger IO blocks (like RAID).
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be REMOVED in version 2.1
in favor of python bindings to the libblockdev library.
See https://github.com/storaged-project/libblockdev/releases that
already supports LUKS2 and VeraCrypt devices handling through libcryptsetup.

View File

@@ -1,97 +0,0 @@
Cryptsetup 2.0.6 Release Notes
==============================
Stable bug-fix release.
All users of cryptsetup 2.0.x should upgrade to this version.
Cryptsetup 2.x version introduces a new on-disk LUKS2 format.
The legacy LUKS (referenced as LUKS1) will be fully supported
forever as well as a traditional and fully backward compatible format.
Please note that authenticated disk encryption, non-cryptographic
data integrity protection (dm-integrity), use of Argon2 Password-Based
Key Derivation Function and the LUKS2 on-disk format itself are new
features and can contain some bugs.
Please do not use LUKS2 without properly configured backup or in
production systems that need to be compatible with older systems.
Changes since version 2.0.5
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Fix support of larger metadata areas in LUKS2 header.
This release properly supports all specified metadata areas, as documented
in LUKS2 format description (see docs/on-disk-format-luks2.pdf in archive).
Currently, only default metadata area size is used (in format or convert).
Later cryptsetup versions will allow increasing this metadata area size.
* If AEAD (authenticated encryption) is used, cryptsetup now tries to check
if the requested AEAD algorithm with specified key size is available
in kernel crypto API.
This change avoids formatting a device that cannot be later activated.
For this function, the kernel must be compiled with the
CONFIG_CRYPTO_USER_API_AEAD option enabled.
Note that kernel user crypto API options (CONFIG_CRYPTO_USER_API and
CONFIG_CRYPTO_USER_API_SKCIPHER) are already mandatory for LUKS2.
* Fix setting of integrity no-journal flag.
Now you can store this flag to metadata using --persistent option.
* Fix cryptsetup-reencrypt to not keep temporary reencryption headers
if interrupted during initial password prompt.
* Adds an early check to plain and LUKS2 formats to disallow device format
if the device size is not aligned to the requested sector size.
Previously this was possible, and the kernel rejected activation of the
device later.
* Fix checking of hash algorithm availability for PBKDF early.
Previously, the LUKS2 format allowed a non-existent hash algorithm with
an invalid keyslot, preventing the device from activation.
* Allow Adiantum cipher construction (a non-authenticated length-preserving
fast encryption scheme), so it can be used both for data encryption and
keyslot encryption in LUKS1/2 devices.
For benchmark, use:
# cryptsetup benchmark -c xchacha12,aes-adiantum
# cryptsetup benchmark -c xchacha20,aes-adiantum
For LUKS format:
# cryptsetup luksFormat -c xchacha20,aes-adiantum-plain64 -s 256 <device>
The support for Adiantum will be merged in Linux kernel 4.21.
For more info see the paper https://eprint.iacr.org/2018/720.
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Authenticated encryption should use new algorithms from CAESAR competition
https://competitions.cr.yp.to/caesar-submissions.html.
AEGIS and MORUS are already available in kernel 4.18.
For more info about LUKS2 authenticated encryption, please see our paper
https://arxiv.org/abs/1807.00309
Please note that authenticated encryption is still an experimental feature
and can have performance problems for high-speed devices and devices
with larger IO blocks (like RAID).
* Authenticated encryption does not set encryption for a dm-integrity journal.
While this does not influence data confidentiality or integrity protection,
an attacker can get some more information from the data journal or cause the
system to corrupt sectors after a journal replay. (That corruption will be
detected though.)
* There are examples of user-defined tokens inside the misc/luks2_keyslot_example
directory (like a simple external program that uses libssh to unlock LUKS2
using a remote keyfile).
* The python binding (pycryptsetup) contains only basic functionality for LUKS1
(it is not updated for new features) and will be REMOVED in version 2.1
in favor of python bindings to the libblockdev library.
See https://github.com/storaged-project/libblockdev/releases that
already supports LUKS2 and VeraCrypt devices handling through libcryptsetup.

View File

@@ -1,210 +0,0 @@
Cryptsetup 2.1.0 Release Notes
==============================
Stable release with new features and bug fixes.
Cryptsetup 2.1 version uses a new on-disk LUKS2 format as the default
LUKS format and increases default LUKS2 header size.
The legacy LUKS (referenced as LUKS1) will be fully supported forever
as well as a traditional and fully backward compatible format.
When upgrading a stable distribution, please use configure option
--with-default-luks-format=LUKS1 to maintain backward compatibility.
This release also switches to OpenSSL as a default cryptographic
backend for LUKS header processing. Use --with-crypto_backend=gcrypt
configure option if you need to preserve legacy libgcrypt backend.
Please do not use LUKS2 without properly configured backup or
in production systems that need to be compatible with older systems.
Changes since version 2.0.6
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* The default for cryptsetup LUKS format action is now LUKS2.
You can use LUKS1 with cryptsetup option --type luks1.
* The default size of the LUKS2 header is increased to 16 MB.
It includes metadata and the area used for binary keyslots;
it means that a LUKS header backup is now 16 MB in size.
Note that the used keyslot area is much smaller, but this increase
of reserved space allows the implementation of later extensions
(like online reencryption).
It is fully compatible with older cryptsetup 2.0.x versions.
If you need to create a LUKS2 header with the same size as
in the 2.0.x versions, use the --offset 8192 option for luksFormat
(units are in 512-byte sectors; see notes below).
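For example:
$ cryptsetup luksFormat --type luks2 --offset 8192 <device>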
* Cryptsetup now doubles the default LUKS key size if XTS mode is used
(XTS mode uses two internal keys). This does not apply if the key size
is explicitly specified on the command line, and it does not apply
to plain mode.
This fixes confusion with AES and a 256-bit key in XTS mode, where the
code used AES-128 and not AES-256 as often expected.
Also, the default keyslot encryption algorithm (if it cannot be derived
from the data encryption algorithm) is now available as the configure
options --with-luks2-keyslot-cipher and --with-luks2-keyslot-keybits.
The default is aes-xts-plain64 with a 2 * 256-bit key.
* Default cryptographic backend used for LUKS header processing is now
OpenSSL. For years, OpenSSL provided better performance for PBKDF.
NOTE: Cryptsetup/libcryptsetup supports several cryptographic
library backends. The fully supported are libgcrypt, OpenSSL and
kernel crypto API. FIPS mode extensions are maintained only for
libgcrypt and OpenSSL. Nettle and NSS are usable only for some
subset of algorithms and cannot provide full backward compatibility.
You can always switch to other backends by using a configure switch,
for libgcrypt (compatibility for older distributions) use:
--with-crypto_backend=gcrypt
* The Python bindings are no longer supported and the code was removed
from cryptsetup distribution. Please use the libblockdev project
that already covers most of the libcryptsetup functionality
including LUKS2.
* Cryptsetup now allows using the --offset option also for luksFormat.
It means that the specified value is used for the data offset.
LUKS2 header areas are automatically adjusted according to this value.
(Note that units are in 512-byte sectors due to the previous definition
of this option in plain mode.)
This option can replace --align-payload with an absolute alignment value.
* Cryptsetup now supports a new refresh action (an alias for
"open --refresh").
It allows changing parameters of an active device (like a root
device mapping); for example, it can enable or disable TRIM support
on the fly.
It is supported for LUKS1, LUKS2, plain and loop-AES devices.
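For example, to enable discards on an already active mapping (the name is a
placeholder; since refresh is an alias for open, the passphrase is requested
again):
$ cryptsetup refresh <name> --allow-discards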
* Integritysetup now supports a mode with a detached data device through
the new --data-device option.
Since kernel 4.18 it is possible to specify an external data
device for dm-integrity that stores all integrity tags.
* Integritysetup now supports automatic integrity recalculation
through the new --integrity-recalculate option.
Since version 4.18, the Linux kernel supports automatic background
recalculation of integrity tags for dm-integrity.
Other changes and fixes
~~~~~~~~~~~~~~~~~~~~~~~
* Fix for the crypt_wipe call to allocate space if the header is backed
by a file. This means that if you use a detached header file, it will
now always have the full size after luksFormat, even if only
a few keyslots are used.
* Fixes to offline cryptsetup-reencrypt to preserve LUKS2 keyslots
area sizes after reencryption and fixes for some other issues when
creating temporary reencryption headers.
* Added some FIPS mode workarounds. We cannot (yet) use Argon2 in
FIPS mode; libcryptsetup now falls back to PBKDF2 in FIPS mode.
* Rejects conversion to LUKS1 if PBKDF2 hash algorithms
in keyslots differ.
* The hash setting on the command line now also applies to the LUKS2 PBKDF2
digest. In previous versions, the LUKS2 key digest used PBKDF2-SHA256
(except for converted headers).
* Allow the LUKS2 keyslots area to increase if the data offset allows it.
Cryptsetup can fine-tune LUKS2 metadata area sizes through
--luks2-metadata-size=BYTES and --luks2-keyslots-size=BYTES.
Please DO NOT use these low-level options unless you need them for
some very specific additional feature.
Also, the code now prints these LUKS2 header area sizes in the dump
command.
* For LUKS2, a keyslot can use different encryption than the data with the
new options --keyslot-key-size=BITS and --keyslot-cipher=STRING
in all commands that create a new LUKS keyslot.
Please DO NOT use these low-level options unless you need them for
some very specific additional feature.
* Code now avoids data flush when reading device status through
device-mapper.
* The Nettle crypto backend and the userspace kernel crypto API
backend were enhanced to allow more available hash functions
(like SHA3 variants).
* The upstream code now does not require libgcrypt-devel
for autoconfigure, because OpenSSL is the default.
libgcrypt does not use standard pkgconfig detection and
requires a specific macro (part of the libgcrypt development files)
to always be present during autoconfigure.
With other crypto backends, like OpenSSL, this makes no sense,
so this part of autoconfigure is now optional.
* Cryptsetup now understands new --debug-json option that allows
an additional dump of some JSON information. These are no longer
present in standard debug output because it could contain some
specific LUKS header parameters.
* The luksDump output now contains the hash algorithm used in the
Anti-Forensic function.
* All debug messages are now sent through the configured log callback
functions, so an application can easily use its own debug message
handling. (In previous versions, debug messages were printed directly
to standard output.)
Libcryptsetup API additions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
These new calls are now exported, for details see libcryptsetup.h:
* crypt_init_data_device
* crypt_get_metadata_device_name
functions to initialize devices with separate metadata and data
devices before a format function is called.
* crypt_set_data_offset
sets the data offset for LUKS to the specified value
in 512-byte sectors.
It should replace the alignment calculation in LUKS param structures.
* crypt_get_metadata_size
* crypt_set_metadata_size
allow setting/getting area sizes in the LUKS header
(according to the specification).
* crypt_get_default_type
gets the default compiled-in LUKS type (version).
* crypt_get_pbkdf_type_params
allows getting compiled-in PBKDF parameters.
* crypt_keyslot_set_encryption
* crypt_keyslot_get_encryption
allow setting/getting the per-keyslot encryption algorithm for LUKS2.
* crypt_keyslot_get_pbkdf
allows getting PBKDF parameters per keyslot.
and these new defines:
* CRYPT_LOG_DEBUG_JSON (message type for JSON debug)
* CRYPT_DEBUG_JSON (log level for JSON debug)
* CRYPT_ACTIVATE_RECALCULATE (dm-integrity recalculate flag)
* CRYPT_ACTIVATE_REFRESH (new open with refresh flag)
All existing API calls should remain backward compatible.
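As a minimal sketch of the new data-offset call (the device path is a
placeholder; cipher, key size and the 16 MiB offset are example assumptions;
error handling is reduced):

#include <libcryptsetup.h>

/* Sketch: format a device with the data segment starting at 16 MiB,
 * i.e. 32768 sectors of 512 bytes, using the compiled-in default type. */
static int format_with_data_offset(const char *device)
{
        struct crypt_device *cd;
        int r = crypt_init(&cd, device);

        if (r < 0)
                return r;
        r = crypt_set_data_offset(cd, 32768);
        if (r >= 0)
                r = crypt_format(cd, crypt_get_default_type(), "aes",
                                 "xts-plain64", NULL, NULL, 64, NULL);
        crypt_free(cd);
        return r;
}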
Unfinished things & TODO for next releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Optional authenticated encryption is still an experimental feature
and can have performance problems for high-speed devices and devices
with larger IO blocks (like RAID).
* Authenticated encryption does not use encryption for a dm-integrity
journal. While this does not influence data confidentiality or
integrity protection, an attacker can get some more information
from the data journal or cause the system to corrupt sectors after
a journal replay. (That corruption will be detected though.)
* The LUKS2 metadata area increase is mainly needed for the new online
reencryption as the major feature for the next release.

70
lib/Makefile.am Normal file
View File

@@ -0,0 +1,70 @@
SUBDIRS = crypto_backend luks1 loopaes verity tcrypt
moduledir = $(libdir)/cryptsetup
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = libcryptsetup.pc
AM_CPPFLAGS = -include config.h \
-I$(top_srcdir) \
-I$(top_srcdir)/lib/crypto_backend \
-I$(top_srcdir)/lib/luks1 \
-I$(top_srcdir)/lib/loopaes \
-I$(top_srcdir)/lib/verity \
-I$(top_srcdir)/lib/tcrypt \
-DDATADIR=\""$(datadir)"\" \
-DLIBDIR=\""$(libdir)"\" \
-DPREFIX=\""$(prefix)"\" \
-DSYSCONFDIR=\""$(sysconfdir)"\" \
-DVERSION=\""$(VERSION)"\"
lib_LTLIBRARIES = libcryptsetup.la
common_ldadd = \
crypto_backend/libcrypto_backend.la \
luks1/libluks1.la \
loopaes/libloopaes.la \
verity/libverity.la \
tcrypt/libtcrypt.la
libcryptsetup_la_DEPENDENCIES = $(common_ldadd) libcryptsetup.sym
libcryptsetup_la_LDFLAGS = $(AM_LDFLAGS) -no-undefined \
-Wl,--version-script=$(top_srcdir)/lib/libcryptsetup.sym \
-version-info @LIBCRYPTSETUP_VERSION_INFO@
libcryptsetup_la_CFLAGS = -Wall $(AM_CFLAGS) @CRYPTO_CFLAGS@
libcryptsetup_la_LIBADD = \
@UUID_LIBS@ \
@DEVMAPPER_LIBS@ \
@CRYPTO_LIBS@ \
$(common_ldadd)
libcryptsetup_la_SOURCES = \
setup.c \
internal.h \
bitops.h \
nls.h \
libcryptsetup.h \
utils.c \
utils_benchmark.c \
utils_crypt.c \
utils_crypt.h \
utils_loop.c \
utils_loop.h \
utils_devpath.c \
utils_wipe.c \
utils_fips.c \
utils_fips.h \
utils_device.c \
libdevmapper.c \
utils_dm.h \
volumekey.c \
random.c \
crypt_plain.c
include_HEADERS = libcryptsetup.h
EXTRA_DIST = libcryptsetup.pc.in libcryptsetup.sym

View File

@@ -1,105 +0,0 @@
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = lib/libcryptsetup.pc
lib_LTLIBRARIES = libcryptsetup.la
noinst_LTLIBRARIES += libutils_io.la
include_HEADERS = lib/libcryptsetup.h
EXTRA_DIST += lib/libcryptsetup.pc.in lib/libcryptsetup.sym
libutils_io_la_CFLAGS = $(AM_CFLAGS)
libutils_io_la_SOURCES = \
lib/utils_io.c \
lib/utils_io.h
libcryptsetup_la_CPPFLAGS = $(AM_CPPFLAGS) \
-I $(top_srcdir)/lib/crypto_backend \
-I $(top_srcdir)/lib/luks1 \
-I $(top_srcdir)/lib/luks2 \
-I $(top_srcdir)/lib/loopaes \
-I $(top_srcdir)/lib/verity \
-I $(top_srcdir)/lib/tcrypt \
-I $(top_srcdir)/lib/integrity
libcryptsetup_la_DEPENDENCIES = libutils_io.la libcrypto_backend.la lib/libcryptsetup.sym
libcryptsetup_la_LDFLAGS = $(AM_LDFLAGS) -no-undefined \
-Wl,--version-script=$(top_srcdir)/lib/libcryptsetup.sym \
-version-info @LIBCRYPTSETUP_VERSION_INFO@
libcryptsetup_la_CFLAGS = $(AM_CFLAGS) @CRYPTO_CFLAGS@
libcryptsetup_la_LIBADD = \
@UUID_LIBS@ \
@DEVMAPPER_LIBS@ \
@CRYPTO_LIBS@ \
@LIBARGON2_LIBS@ \
@JSON_C_LIBS@ \
@BLKID_LIBS@ \
libcrypto_backend.la \
libutils_io.la
libcryptsetup_la_SOURCES = \
lib/setup.c \
lib/internal.h \
lib/bitops.h \
lib/nls.h \
lib/libcryptsetup.h \
lib/utils.c \
lib/utils_benchmark.c \
lib/utils_crypt.c \
lib/utils_crypt.h \
lib/utils_loop.c \
lib/utils_loop.h \
lib/utils_devpath.c \
lib/utils_wipe.c \
lib/utils_fips.c \
lib/utils_fips.h \
lib/utils_device.c \
lib/utils_keyring.c \
lib/utils_keyring.h \
lib/utils_device_locking.c \
lib/utils_device_locking.h \
lib/utils_pbkdf.c \
lib/libdevmapper.c \
lib/utils_dm.h \
lib/volumekey.c \
lib/random.c \
lib/crypt_plain.c \
lib/base64.h \
lib/base64.c \
lib/integrity/integrity.h \
lib/integrity/integrity.c \
lib/loopaes/loopaes.h \
lib/loopaes/loopaes.c \
lib/tcrypt/tcrypt.h \
lib/tcrypt/tcrypt.c \
lib/luks1/af.h \
lib/luks1/af.c \
lib/luks1/keyencryption.c \
lib/luks1/keymanage.c \
lib/luks1/luks.h \
lib/verity/verity_hash.c \
lib/verity/verity_fec.c \
lib/verity/verity.c \
lib/verity/verity.h \
lib/verity/rs_encode_char.c \
lib/verity/rs_decode_char.c \
lib/verity/rs.h \
lib/luks2/luks2_disk_metadata.c \
lib/luks2/luks2_json_format.c \
lib/luks2/luks2_json_metadata.c \
lib/luks2/luks2_luks1_convert.c \
lib/luks2/luks2_digest.c \
lib/luks2/luks2_digest_pbkdf2.c \
lib/luks2/luks2_keyslot.c \
lib/luks2/luks2_keyslot_luks2.c \
lib/luks2/luks2_token_keyring.c \
lib/luks2/luks2_token.c \
lib/luks2/luks2_internal.h \
lib/luks2/luks2.h \
lib/utils_blkid.c \
lib/utils_blkid.h

View File

@@ -1,605 +0,0 @@
/* base64.c -- Encode binary data using printable characters.
Copyright (C) 1999-2001, 2004-2006, 2009-2018 Free Software Foundation, Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, see <https://www.gnu.org/licenses/>. */
/* Written by Simon Josefsson. Partially adapted from GNU MailUtils
* (mailbox/filter_trans.c, as of 2004-11-28). Improved by review
* from Paul Eggert, Bruno Haible, and Stepan Kasal.
*
* See also RFC 4648 <https://www.ietf.org/rfc/rfc4648.txt>.
*
* Be careful with error checking. Here is how you would typically
* use these functions:
*
* bool ok = base64_decode_alloc (in, inlen, &out, &outlen);
* if (!ok)
* FAIL: input was not valid base64
* if (out == NULL)
* FAIL: memory allocation error
* OK: data in OUT/OUTLEN
*
* size_t outlen = base64_encode_alloc (in, inlen, &out);
* if (out == NULL && outlen == 0 && inlen != 0)
* FAIL: input too long
* if (out == NULL)
* FAIL: memory allocation error
* OK: data in OUT/OUTLEN.
*
*/
#include <config.h>
/* Get prototype. */
#include "base64.h"
/* Get malloc. */
#include <stdlib.h>
/* Get UCHAR_MAX. */
#include <limits.h>
#include <string.h>
/* C89 compliant way to cast 'char' to 'unsigned char'. */
static unsigned char
to_uchar (char ch)
{
return ch;
}
static const char b64c[64] =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
/* Base64 encode IN array of size INLEN into OUT array. OUT needs
to be of length >= BASE64_LENGTH(INLEN), and INLEN needs to be
a multiple of 3. */
static void
base64_encode_fast (const char *restrict in, size_t inlen, char *restrict out)
{
while (inlen)
{
*out++ = b64c[to_uchar (in[0]) >> 2];
*out++ = b64c[((to_uchar (in[0]) << 4) + (to_uchar (in[1]) >> 4)) & 0x3f];
*out++ = b64c[((to_uchar (in[1]) << 2) + (to_uchar (in[2]) >> 6)) & 0x3f];
*out++ = b64c[to_uchar (in[2]) & 0x3f];
inlen -= 3;
in += 3;
}
}
/* Base64 encode IN array of size INLEN into OUT array of size OUTLEN.
If OUTLEN is less than BASE64_LENGTH(INLEN), write as many bytes as
possible. If OUTLEN is larger than BASE64_LENGTH(INLEN), also zero
terminate the output buffer. */
void
base64_encode (const char *restrict in, size_t inlen,
char *restrict out, size_t outlen)
{
/* Note this outlen constraint can be enforced at compile time.
I.E. that the output buffer is exactly large enough to hold
the encoded inlen bytes. The inlen constraints (of corresponding
to outlen, and being a multiple of 3) can change at runtime
at the end of input. However the common case when reading
large inputs is to have both constraints satisfied, so we depend
on both in base_encode_fast(). */
if (outlen % 4 == 0 && inlen == outlen / 4 * 3)
{
base64_encode_fast (in, inlen, out);
return;
}
while (inlen && outlen)
{
*out++ = b64c[to_uchar (in[0]) >> 2];
if (!--outlen)
break;
*out++ = b64c[((to_uchar (in[0]) << 4)
+ (--inlen ? to_uchar (in[1]) >> 4 : 0))
& 0x3f];
if (!--outlen)
break;
*out++ =
(inlen
? b64c[((to_uchar (in[1]) << 2)
+ (--inlen ? to_uchar (in[2]) >> 6 : 0))
& 0x3f]
: '=');
if (!--outlen)
break;
*out++ = inlen ? b64c[to_uchar (in[2]) & 0x3f] : '=';
if (!--outlen)
break;
if (inlen)
inlen--;
if (inlen)
in += 3;
}
if (outlen)
*out = '\0';
}
/* Allocate a buffer and store zero terminated base64 encoded data
from array IN of size INLEN, returning BASE64_LENGTH(INLEN), i.e.,
the length of the encoded data, excluding the terminating zero. On
return, the OUT variable will hold a pointer to newly allocated
memory that must be deallocated by the caller. If output string
length would overflow, 0 is returned and OUT is set to NULL. If
memory allocation failed, OUT is set to NULL, and the return value
indicates length of the requested memory block, i.e.,
BASE64_LENGTH(inlen) + 1. */
size_t
base64_encode_alloc (const char *in, size_t inlen, char **out)
{
size_t outlen = 1 + BASE64_LENGTH (inlen);
/* Check for overflow in outlen computation.
*
* If there is no overflow, outlen >= inlen.
*
* If the operation (inlen + 2) overflows then it yields at most +1, so
* outlen is 0.
*
* If the multiplication overflows, we lose at least half of the
* correct value, so the result is < ((inlen + 2) / 3) * 2, which is
* less than (inlen + 2) * 0.66667, which is less than inlen as soon as
* (inlen > 4).
*/
if (inlen > outlen)
{
*out = NULL;
return 0;
}
*out = malloc (outlen);
if (!*out)
return outlen;
base64_encode (in, inlen, *out, outlen);
return outlen - 1;
}
/* With this approach this file works independent of the charset used
(think EBCDIC). However, it does assume that the characters in the
Base64 alphabet (A-Za-z0-9+/) are encoded in 0..255. POSIX
1003.1-2001 require that char and unsigned char are 8-bit
quantities, though, taking care of that problem. But this may be a
potential problem on non-POSIX C99 platforms.
IBM C V6 for AIX mishandles "#define B64(x) ...'x'...", so use "_"
as the formal parameter rather than "x". */
#define B64(_) \
((_) == 'A' ? 0 \
: (_) == 'B' ? 1 \
: (_) == 'C' ? 2 \
: (_) == 'D' ? 3 \
: (_) == 'E' ? 4 \
: (_) == 'F' ? 5 \
: (_) == 'G' ? 6 \
: (_) == 'H' ? 7 \
: (_) == 'I' ? 8 \
: (_) == 'J' ? 9 \
: (_) == 'K' ? 10 \
: (_) == 'L' ? 11 \
: (_) == 'M' ? 12 \
: (_) == 'N' ? 13 \
: (_) == 'O' ? 14 \
: (_) == 'P' ? 15 \
: (_) == 'Q' ? 16 \
: (_) == 'R' ? 17 \
: (_) == 'S' ? 18 \
: (_) == 'T' ? 19 \
: (_) == 'U' ? 20 \
: (_) == 'V' ? 21 \
: (_) == 'W' ? 22 \
: (_) == 'X' ? 23 \
: (_) == 'Y' ? 24 \
: (_) == 'Z' ? 25 \
: (_) == 'a' ? 26 \
: (_) == 'b' ? 27 \
: (_) == 'c' ? 28 \
: (_) == 'd' ? 29 \
: (_) == 'e' ? 30 \
: (_) == 'f' ? 31 \
: (_) == 'g' ? 32 \
: (_) == 'h' ? 33 \
: (_) == 'i' ? 34 \
: (_) == 'j' ? 35 \
: (_) == 'k' ? 36 \
: (_) == 'l' ? 37 \
: (_) == 'm' ? 38 \
: (_) == 'n' ? 39 \
: (_) == 'o' ? 40 \
: (_) == 'p' ? 41 \
: (_) == 'q' ? 42 \
: (_) == 'r' ? 43 \
: (_) == 's' ? 44 \
: (_) == 't' ? 45 \
: (_) == 'u' ? 46 \
: (_) == 'v' ? 47 \
: (_) == 'w' ? 48 \
: (_) == 'x' ? 49 \
: (_) == 'y' ? 50 \
: (_) == 'z' ? 51 \
: (_) == '0' ? 52 \
: (_) == '1' ? 53 \
: (_) == '2' ? 54 \
: (_) == '3' ? 55 \
: (_) == '4' ? 56 \
: (_) == '5' ? 57 \
: (_) == '6' ? 58 \
: (_) == '7' ? 59 \
: (_) == '8' ? 60 \
: (_) == '9' ? 61 \
: (_) == '+' ? 62 \
: (_) == '/' ? 63 \
: -1)
static const signed char b64[0x100] = {
B64 (0), B64 (1), B64 (2), B64 (3),
B64 (4), B64 (5), B64 (6), B64 (7),
B64 (8), B64 (9), B64 (10), B64 (11),
B64 (12), B64 (13), B64 (14), B64 (15),
B64 (16), B64 (17), B64 (18), B64 (19),
B64 (20), B64 (21), B64 (22), B64 (23),
B64 (24), B64 (25), B64 (26), B64 (27),
B64 (28), B64 (29), B64 (30), B64 (31),
B64 (32), B64 (33), B64 (34), B64 (35),
B64 (36), B64 (37), B64 (38), B64 (39),
B64 (40), B64 (41), B64 (42), B64 (43),
B64 (44), B64 (45), B64 (46), B64 (47),
B64 (48), B64 (49), B64 (50), B64 (51),
B64 (52), B64 (53), B64 (54), B64 (55),
B64 (56), B64 (57), B64 (58), B64 (59),
B64 (60), B64 (61), B64 (62), B64 (63),
B64 (64), B64 (65), B64 (66), B64 (67),
B64 (68), B64 (69), B64 (70), B64 (71),
B64 (72), B64 (73), B64 (74), B64 (75),
B64 (76), B64 (77), B64 (78), B64 (79),
B64 (80), B64 (81), B64 (82), B64 (83),
B64 (84), B64 (85), B64 (86), B64 (87),
B64 (88), B64 (89), B64 (90), B64 (91),
B64 (92), B64 (93), B64 (94), B64 (95),
B64 (96), B64 (97), B64 (98), B64 (99),
B64 (100), B64 (101), B64 (102), B64 (103),
B64 (104), B64 (105), B64 (106), B64 (107),
B64 (108), B64 (109), B64 (110), B64 (111),
B64 (112), B64 (113), B64 (114), B64 (115),
B64 (116), B64 (117), B64 (118), B64 (119),
B64 (120), B64 (121), B64 (122), B64 (123),
B64 (124), B64 (125), B64 (126), B64 (127),
B64 (128), B64 (129), B64 (130), B64 (131),
B64 (132), B64 (133), B64 (134), B64 (135),
B64 (136), B64 (137), B64 (138), B64 (139),
B64 (140), B64 (141), B64 (142), B64 (143),
B64 (144), B64 (145), B64 (146), B64 (147),
B64 (148), B64 (149), B64 (150), B64 (151),
B64 (152), B64 (153), B64 (154), B64 (155),
B64 (156), B64 (157), B64 (158), B64 (159),
B64 (160), B64 (161), B64 (162), B64 (163),
B64 (164), B64 (165), B64 (166), B64 (167),
B64 (168), B64 (169), B64 (170), B64 (171),
B64 (172), B64 (173), B64 (174), B64 (175),
B64 (176), B64 (177), B64 (178), B64 (179),
B64 (180), B64 (181), B64 (182), B64 (183),
B64 (184), B64 (185), B64 (186), B64 (187),
B64 (188), B64 (189), B64 (190), B64 (191),
B64 (192), B64 (193), B64 (194), B64 (195),
B64 (196), B64 (197), B64 (198), B64 (199),
B64 (200), B64 (201), B64 (202), B64 (203),
B64 (204), B64 (205), B64 (206), B64 (207),
B64 (208), B64 (209), B64 (210), B64 (211),
B64 (212), B64 (213), B64 (214), B64 (215),
B64 (216), B64 (217), B64 (218), B64 (219),
B64 (220), B64 (221), B64 (222), B64 (223),
B64 (224), B64 (225), B64 (226), B64 (227),
B64 (228), B64 (229), B64 (230), B64 (231),
B64 (232), B64 (233), B64 (234), B64 (235),
B64 (236), B64 (237), B64 (238), B64 (239),
B64 (240), B64 (241), B64 (242), B64 (243),
B64 (244), B64 (245), B64 (246), B64 (247),
B64 (248), B64 (249), B64 (250), B64 (251),
B64 (252), B64 (253), B64 (254), B64 (255)
};
#if UCHAR_MAX == 255
# define uchar_in_range(c) true
#else
# define uchar_in_range(c) ((c) <= 255)
#endif
/* Return true if CH is a character from the Base64 alphabet, and
false otherwise. Note that '=' is padding and not considered to be
part of the alphabet. */
bool
isbase64 (char ch)
{
return uchar_in_range (to_uchar (ch)) && 0 <= b64[to_uchar (ch)];
}
/* Initialize decode-context buffer, CTX. */
void
base64_decode_ctx_init (struct base64_decode_context *ctx)
{
ctx->i = 0;
}
/* If CTX->i is 0 or 4, there are four or more bytes in [*IN..IN_END), and
none of those four is a newline, then return *IN. Otherwise, copy up to
4 - CTX->i non-newline bytes from that range into CTX->buf, starting at
index CTX->i and setting CTX->i to reflect the number of bytes copied,
and return CTX->buf. In either case, advance *IN to point to the byte
after the last one processed, and set *N_NON_NEWLINE to the number of
verified non-newline bytes accessible through the returned pointer. */
static const char *
get_4 (struct base64_decode_context *ctx,
char const *restrict *in, char const *restrict in_end,
size_t *n_non_newline)
{
if (ctx->i == 4)
ctx->i = 0;
if (ctx->i == 0)
{
char const *t = *in;
if (4 <= in_end - *in && memchr (t, '\n', 4) == NULL)
{
/* This is the common case: no newline. */
*in += 4;
*n_non_newline = 4;
return (const char *) t;
}
}
{
/* Copy non-newline bytes into BUF. */
char const *p = *in;
while (p < in_end)
{
char c = *p++;
if (c != '\n')
{
ctx->buf[ctx->i++] = c;
if (ctx->i == 4)
break;
}
}
*in = p;
*n_non_newline = ctx->i;
return ctx->buf;
}
}
#define return_false \
do \
{ \
*outp = out; \
return false; \
} \
while (false)
/* Decode up to four bytes of base64-encoded data, IN, of length INLEN
into the output buffer, *OUT, of size *OUTLEN bytes. Return true if
decoding is successful, false otherwise. If *OUTLEN is too small,
as many bytes as possible are written to *OUT. On return, advance
*OUT to point to the byte after the last one written, and decrement
*OUTLEN to reflect the number of bytes remaining in *OUT. */
static bool
decode_4 (char const *restrict in, size_t inlen,
char *restrict *outp, size_t *outleft)
{
char *out = *outp;
if (inlen < 2)
return false;
if (!isbase64 (in[0]) || !isbase64 (in[1]))
return false;
if (*outleft)
{
*out++ = ((b64[to_uchar (in[0])] << 2)
| (b64[to_uchar (in[1])] >> 4));
--*outleft;
}
if (inlen == 2)
return_false;
if (in[2] == '=')
{
if (inlen != 4)
return_false;
if (in[3] != '=')
return_false;
}
else
{
if (!isbase64 (in[2]))
return_false;
if (*outleft)
{
*out++ = (((b64[to_uchar (in[1])] << 4) & 0xf0)
| (b64[to_uchar (in[2])] >> 2));
--*outleft;
}
if (inlen == 3)
return_false;
if (in[3] == '=')
{
if (inlen != 4)
return_false;
}
else
{
if (!isbase64 (in[3]))
return_false;
if (*outleft)
{
*out++ = (((b64[to_uchar (in[2])] << 6) & 0xc0)
| b64[to_uchar (in[3])]);
--*outleft;
}
}
}
*outp = out;
return true;
}
/* Decode base64-encoded input array IN of length INLEN to output array
OUT that can hold *OUTLEN bytes. The input data may be interspersed
with newlines. Return true if decoding was successful, i.e. if the
input was valid base64 data, false otherwise. If *OUTLEN is too
small, as many bytes as possible will be written to OUT. On return,
*OUTLEN holds the length of decoded bytes in OUT. Note that as soon
as any non-alphabet, non-newline character is encountered, decoding
is stopped and false is returned. If INLEN is zero, then process
only whatever data is stored in CTX.
Initially, CTX must have been initialized via base64_decode_ctx_init.
Subsequent calls to this function must reuse whatever state is recorded
in that buffer. It is necessary for when a quadruple of base64 input
bytes spans two input buffers.
If CTX is NULL then newlines are treated as garbage and the input
buffer is processed as a unit. */
bool
base64_decode_ctx (struct base64_decode_context *ctx,
const char *restrict in, size_t inlen,
char *restrict out, size_t *outlen)
{
size_t outleft = *outlen;
bool ignore_newlines = ctx != NULL;
bool flush_ctx = false;
unsigned int ctx_i = 0;
if (ignore_newlines)
{
ctx_i = ctx->i;
flush_ctx = inlen == 0;
}
while (true)
{
size_t outleft_save = outleft;
if (ctx_i == 0 && !flush_ctx)
{
while (true)
{
/* Save a copy of outleft, in case we need to re-parse this
block of four bytes. */
outleft_save = outleft;
if (!decode_4 (in, inlen, &out, &outleft))
break;
in += 4;
inlen -= 4;
}
}
if (inlen == 0 && !flush_ctx)
break;
/* Handle the common case of 72-byte wrapped lines.
This also handles any other multiple-of-4-byte wrapping. */
if (inlen && *in == '\n' && ignore_newlines)
{
++in;
--inlen;
continue;
}
/* Restore OUT and OUTLEFT. */
out -= outleft_save - outleft;
outleft = outleft_save;
{
char const *in_end = in + inlen;
char const *non_nl;
if (ignore_newlines)
non_nl = get_4 (ctx, &in, in_end, &inlen);
else
non_nl = in; /* Might have nl in this case. */
/* If the input is empty or consists solely of newlines (0 non-newlines),
then we're done. Likewise if there are fewer than 4 bytes when not
flushing context and not treating newlines as garbage. */
if (inlen == 0 || (inlen < 4 && !flush_ctx && ignore_newlines))
{
inlen = 0;
break;
}
if (!decode_4 (non_nl, inlen, &out, &outleft))
break;
inlen = in_end - in;
}
}
*outlen -= outleft;
return inlen == 0;
}
/* Allocate an output buffer in *OUT, and decode the base64 encoded
data stored in IN of size INLEN to the *OUT buffer. On return, the
size of the decoded data is stored in *OUTLEN. OUTLEN may be NULL,
if the caller is not interested in the decoded length. *OUT may be
NULL to indicate an out of memory error, in which case *OUTLEN
contains the size of the memory block needed. The function returns
true on successful decoding and memory allocation errors. (Use the
*OUT and *OUTLEN parameters to differentiate between successful
decoding and memory error.) The function returns false if the
input was invalid, in which case *OUT is NULL and *OUTLEN is
undefined. */
bool
base64_decode_alloc_ctx (struct base64_decode_context *ctx,
const char *in, size_t inlen, char **out,
size_t *outlen)
{
/* This may allocate a few bytes too many, depending on input,
but it's not worth the extra CPU time to compute the exact size.
The exact size is 3 * (inlen + (ctx ? ctx->i : 0)) / 4, minus 1 if the
input ends with "=" and minus another 1 if the input ends with "==".
Dividing before multiplying avoids the possibility of overflow. */
size_t needlen = 3 * (inlen / 4) + 3;
*out = malloc (needlen);
if (!*out)
return true;
if (!base64_decode_ctx (ctx, in, inlen, *out, &needlen))
{
free (*out);
*out = NULL;
return false;
}
if (outlen)
*outlen = needlen;
return true;
}
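As a quick orientation for the API documented above, here is a minimal, illustrative caller (not part of the original sources) using the base64_decode_alloc convenience macro; it distinguishes invalid input from an allocation failure exactly as the comments describe:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "base64.h"
int main (void)
{
  const char *encoded = "aGVsbG8=";  /* decodes to "hello" */
  char *decoded = NULL;
  size_t decoded_len = 0;
  if (!base64_decode_alloc (encoded, strlen (encoded), &decoded, &decoded_len))
    {
      fprintf (stderr, "invalid base64 input\n");
      return 1;
    }
  if (!decoded)
    {
      fprintf (stderr, "out of memory\n");
      return 1;
    }
  printf ("%.*s\n", (int) decoded_len, decoded);
  free (decoded);
  return 0;
}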

View File

@@ -1,68 +0,0 @@
/* base64.h -- Encode binary data using printable characters.
Copyright (C) 2004-2006, 2009-2018 Free Software Foundation, Inc.
Written by Simon Josefsson.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, see <https://www.gnu.org/licenses/>. */
#ifndef BASE64_H
# define BASE64_H
/* Get size_t. */
# include <stddef.h>
/* Get bool. */
# include <stdbool.h>
# ifdef __cplusplus
extern "C" {
# endif
/* This uses that the expression (n+(k-1))/k means the smallest
integer >= n/k, i.e., the ceiling of n/k. */
# define BASE64_LENGTH(inlen) ((((inlen) + 2) / 3) * 4)
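/* Illustrative example: BASE64_LENGTH(5) = ((5 + 2) / 3) * 4 = 8, i.e. five
   input bytes occupy two padded 4-character groups of output. */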
struct base64_decode_context
{
unsigned int i;
char buf[4];
};
extern bool isbase64 (char ch) __attribute__ ((__const__));
extern void base64_encode (const char *restrict in, size_t inlen,
char *restrict out, size_t outlen);
extern size_t base64_encode_alloc (const char *in, size_t inlen, char **out);
extern void base64_decode_ctx_init (struct base64_decode_context *ctx);
extern bool base64_decode_ctx (struct base64_decode_context *ctx,
const char *restrict in, size_t inlen,
char *restrict out, size_t *outlen);
extern bool base64_decode_alloc_ctx (struct base64_decode_context *ctx,
const char *in, size_t inlen,
char **out, size_t *outlen);
#define base64_decode(in, inlen, out, outlen) \
base64_decode_ctx (NULL, in, inlen, out, outlen)
#define base64_decode_alloc(in, inlen, out, outlen) \
base64_decode_alloc_ctx (NULL, in, inlen, out, outlen)
# ifdef __cplusplus
}
# endif
#endif /* BASE64_H */

View File

@@ -1,9 +1,9 @@
/*
* cryptsetup plain device helper functions
*
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2010-2012 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2012, Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -64,7 +64,7 @@ static int hash(const char *hash_name, size_t key_size, char *key,
#define PLAIN_HASH_LEN_MAX 256
int crypt_plain_hash(struct crypt_device *cd,
int crypt_plain_hash(struct crypt_device *ctx __attribute__((unused)),
const char *hash_name,
char *key, size_t key_size,
const char *passphrase, size_t passphrase_size)
@@ -73,7 +73,7 @@ int crypt_plain_hash(struct crypt_device *cd,
size_t hash_size, pad_size;
int r;
log_dbg(cd, "Plain: hashing passphrase using %s.", hash_name);
log_dbg("Plain: hashing passphrase using %s.", hash_name);
if (strlen(hash_name) >= PLAIN_HASH_LEN_MAX)
return -EINVAL;
@@ -85,11 +85,11 @@ int crypt_plain_hash(struct crypt_device *cd,
*s = '\0';
s++;
if (!*s || sscanf(s, "%zd", &hash_size) != 1) {
log_dbg(cd, "Hash length is not a number");
log_dbg("Hash length is not a number");
return -EINVAL;
}
if (hash_size > key_size) {
log_dbg(cd, "Hash length %zd > key length %zd",
log_dbg("Hash length %zd > key length %zd",
hash_size, key_size);
return -EINVAL;
}
@@ -102,7 +102,7 @@ int crypt_plain_hash(struct crypt_device *cd,
/* No hash, copy passphrase directly */
if (!strcmp(hash_name_buf, "plain")) {
if (passphrase_size < hash_size) {
log_dbg(cd, "Too short plain passphrase.");
log_dbg("Too short plain passphrase.");
return -EINVAL;
}
memcpy(key, passphrase, hash_size);

View File

@@ -0,0 +1,30 @@
moduledir = $(libdir)/cryptsetup
noinst_LTLIBRARIES = libcrypto_backend.la
libcrypto_backend_la_CFLAGS = $(AM_CFLAGS) -Wall @CRYPTO_CFLAGS@
libcrypto_backend_la_SOURCES = crypto_backend.h \
crypto_cipher_kernel.c crypto_storage.c pbkdf_check.c crc32.c
if CRYPTO_BACKEND_GCRYPT
libcrypto_backend_la_SOURCES += crypto_gcrypt.c
endif
if CRYPTO_BACKEND_OPENSSL
libcrypto_backend_la_SOURCES += crypto_openssl.c
endif
if CRYPTO_BACKEND_NSS
libcrypto_backend_la_SOURCES += crypto_nss.c
endif
if CRYPTO_BACKEND_KERNEL
libcrypto_backend_la_SOURCES += crypto_kernel.c
endif
if CRYPTO_BACKEND_NETTLE
libcrypto_backend_la_SOURCES += crypto_nettle.c
endif
if CRYPTO_INTERNAL_PBKDF2
libcrypto_backend_la_SOURCES += pbkdf2_generic.c
endif
AM_CPPFLAGS = -include config.h -I$(top_srcdir)/lib

View File

@@ -1,37 +0,0 @@
noinst_LTLIBRARIES += libcrypto_backend.la
libcrypto_backend_la_CFLAGS = $(AM_CFLAGS) @CRYPTO_CFLAGS@
libcrypto_backend_la_SOURCES = \
lib/crypto_backend/crypto_backend.h \
lib/crypto_backend/crypto_cipher_kernel.c \
lib/crypto_backend/crypto_storage.c \
lib/crypto_backend/pbkdf_check.c \
lib/crypto_backend/crc32.c \
lib/crypto_backend/argon2_generic.c \
lib/crypto_backend/cipher_generic.c
if CRYPTO_BACKEND_GCRYPT
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_gcrypt.c
endif
if CRYPTO_BACKEND_OPENSSL
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_openssl.c
endif
if CRYPTO_BACKEND_NSS
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_nss.c
endif
if CRYPTO_BACKEND_KERNEL
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_kernel.c
endif
if CRYPTO_BACKEND_NETTLE
libcrypto_backend_la_SOURCES += lib/crypto_backend/crypto_nettle.c
endif
if CRYPTO_INTERNAL_PBKDF2
libcrypto_backend_la_SOURCES += lib/crypto_backend/pbkdf2_generic.c
endif
if CRYPTO_INTERNAL_ARGON2
libcrypto_backend_la_DEPENDENCIES = libargon2.la
libcrypto_backend_la_LIBADD = libargon2.la
endif

View File

@@ -1,30 +0,0 @@
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following:
the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
moral rights retained by the original author(s) and/or performer(s);
publicity and privacy rights pertaining to a person's image or likeness depicted in a Work;
rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
rights protecting the extraction, dissemination, use and reuse of data in a Work;
database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose.
4. Limitations and Disclaimers.
No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.
Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.

View File

@@ -1,30 +0,0 @@
noinst_LTLIBRARIES += libargon2.la
libargon2_la_CFLAGS = $(AM_CFLAGS) -std=c89 -pthread -O3
libargon2_la_CPPFLAGS = $(AM_CPPFLAGS) \
-I lib/crypto_backend/argon2 \
-I lib/crypto_backend/argon2/blake2
libargon2_la_SOURCES = \
lib/crypto_backend/argon2/blake2/blake2b.c \
lib/crypto_backend/argon2/blake2/blake2.h \
lib/crypto_backend/argon2/blake2/blake2-impl.h \
lib/crypto_backend/argon2/argon2.c \
lib/crypto_backend/argon2/argon2.h \
lib/crypto_backend/argon2/core.c \
lib/crypto_backend/argon2/core.h \
lib/crypto_backend/argon2/encoding.c \
lib/crypto_backend/argon2/encoding.h \
lib/crypto_backend/argon2/thread.c \
lib/crypto_backend/argon2/thread.h
if CRYPTO_INTERNAL_SSE_ARGON2
libargon2_la_SOURCES += lib/crypto_backend/argon2/blake2/blamka-round-opt.h \
lib/crypto_backend/argon2/opt.c
else
libargon2_la_SOURCES += lib/crypto_backend/argon2/blake2/blamka-round-ref.h \
lib/crypto_backend/argon2/ref.c
endif
EXTRA_DIST += lib/crypto_backend/argon2/LICENSE
EXTRA_DIST += lib/crypto_backend/argon2/README

View File

@@ -1,5 +0,0 @@
This is the bundled Argon2 algorithm library, copied from
https://github.com/P-H-C/phc-winner-argon2
For more info see Password Hashing Competition site:
https://password-hashing.net/

View File

@@ -1,456 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include "argon2.h"
#include "encoding.h"
#include "core.h"
/* to silence gcc -Wcast-qual for const cast */
#define CONST_CAST(x) (x)(uintptr_t)
const char *argon2_type2string(argon2_type type, int uppercase) {
switch (type) {
case Argon2_d:
return uppercase ? "Argon2d" : "argon2d";
case Argon2_i:
return uppercase ? "Argon2i" : "argon2i";
case Argon2_id:
return uppercase ? "Argon2id" : "argon2id";
}
return NULL;
}
int argon2_ctx(argon2_context *context, argon2_type type) {
/* 1. Validate all inputs */
int result = validate_inputs(context);
uint32_t memory_blocks, segment_length;
argon2_instance_t instance;
if (ARGON2_OK != result) {
return result;
}
if (Argon2_d != type && Argon2_i != type && Argon2_id != type) {
return ARGON2_INCORRECT_TYPE;
}
/* 2. Align memory size */
/* Minimum memory_blocks = 8L blocks, where L is the number of lanes */
memory_blocks = context->m_cost;
if (memory_blocks < 2 * ARGON2_SYNC_POINTS * context->lanes) {
memory_blocks = 2 * ARGON2_SYNC_POINTS * context->lanes;
}
segment_length = memory_blocks / (context->lanes * ARGON2_SYNC_POINTS);
/* Ensure that all segments have equal length */
memory_blocks = segment_length * (context->lanes * ARGON2_SYNC_POINTS);
instance.version = context->version;
instance.memory = NULL;
instance.passes = context->t_cost;
instance.memory_blocks = memory_blocks;
instance.segment_length = segment_length;
instance.lane_length = segment_length * ARGON2_SYNC_POINTS;
instance.lanes = context->lanes;
instance.threads = context->threads;
instance.type = type;
if (instance.threads > instance.lanes) {
instance.threads = instance.lanes;
}
/* 3. Initialization: Hashing inputs, allocating memory, filling first
* blocks
*/
result = initialize(&instance, context);
if (ARGON2_OK != result) {
return result;
}
/* 4. Filling memory */
result = fill_memory_blocks(&instance);
if (ARGON2_OK != result) {
return result;
}
/* 5. Finalization */
finalize(context, &instance);
return ARGON2_OK;
}
int argon2_hash(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt, const size_t saltlen,
void *hash, const size_t hashlen, char *encoded,
const size_t encodedlen, argon2_type type,
const uint32_t version){
argon2_context context;
int result;
uint8_t *out;
if (pwdlen > ARGON2_MAX_PWD_LENGTH) {
return ARGON2_PWD_TOO_LONG;
}
if (saltlen > ARGON2_MAX_SALT_LENGTH) {
return ARGON2_SALT_TOO_LONG;
}
if (hashlen > ARGON2_MAX_OUTLEN) {
return ARGON2_OUTPUT_TOO_LONG;
}
if (hashlen < ARGON2_MIN_OUTLEN) {
return ARGON2_OUTPUT_TOO_SHORT;
}
out = malloc(hashlen);
if (!out) {
return ARGON2_MEMORY_ALLOCATION_ERROR;
}
context.out = (uint8_t *)out;
context.outlen = (uint32_t)hashlen;
context.pwd = CONST_CAST(uint8_t *)pwd;
context.pwdlen = (uint32_t)pwdlen;
context.salt = CONST_CAST(uint8_t *)salt;
context.saltlen = (uint32_t)saltlen;
context.secret = NULL;
context.secretlen = 0;
context.ad = NULL;
context.adlen = 0;
context.t_cost = t_cost;
context.m_cost = m_cost;
context.lanes = parallelism;
context.threads = parallelism;
context.allocate_cbk = NULL;
context.free_cbk = NULL;
context.flags = ARGON2_DEFAULT_FLAGS;
context.version = version;
result = argon2_ctx(&context, type);
if (result != ARGON2_OK) {
clear_internal_memory(out, hashlen);
free(out);
return result;
}
/* if raw hash requested, write it */
if (hash) {
memcpy(hash, out, hashlen);
}
/* if encoding requested, write it */
if (encoded && encodedlen) {
if (encode_string(encoded, encodedlen, &context, type) != ARGON2_OK) {
clear_internal_memory(out, hashlen); /* wipe buffers if error */
clear_internal_memory(encoded, encodedlen);
free(out);
return ARGON2_ENCODING_FAIL;
}
}
clear_internal_memory(out, hashlen);
free(out);
return ARGON2_OK;
}
int argon2i_hash_encoded(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, const size_t hashlen,
char *encoded, const size_t encodedlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
NULL, hashlen, encoded, encodedlen, Argon2_i,
ARGON2_VERSION_NUMBER);
}
int argon2i_hash_raw(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash, const size_t hashlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
hash, hashlen, NULL, 0, Argon2_i, ARGON2_VERSION_NUMBER);
}
int argon2d_hash_encoded(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, const size_t hashlen,
char *encoded, const size_t encodedlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
NULL, hashlen, encoded, encodedlen, Argon2_d,
ARGON2_VERSION_NUMBER);
}
int argon2d_hash_raw(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash, const size_t hashlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
hash, hashlen, NULL, 0, Argon2_d, ARGON2_VERSION_NUMBER);
}
int argon2id_hash_encoded(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, const size_t hashlen,
char *encoded, const size_t encodedlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
NULL, hashlen, encoded, encodedlen, Argon2_id,
ARGON2_VERSION_NUMBER);
}
int argon2id_hash_raw(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash, const size_t hashlen) {
return argon2_hash(t_cost, m_cost, parallelism, pwd, pwdlen, salt, saltlen,
hash, hashlen, NULL, 0, Argon2_id,
ARGON2_VERSION_NUMBER);
}
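/* Added note: constant-time comparison. d accumulates the XOR of every byte
 * pair, so d == 0 exactly when the buffers match; on the usual arithmetic-shift
 * platforms the final expression maps d == 0 to 0 and any other d to -1,
 * without a data-dependent branch. */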
static int argon2_compare(const uint8_t *b1, const uint8_t *b2, size_t len) {
size_t i;
uint8_t d = 0U;
for (i = 0U; i < len; i++) {
d |= b1[i] ^ b2[i];
}
return (int)((1 & ((d - 1) >> 8)) - 1);
}
int argon2_verify(const char *encoded, const void *pwd, const size_t pwdlen,
argon2_type type) {
argon2_context ctx;
uint8_t *desired_result = NULL;
int ret = ARGON2_OK;
size_t encoded_len;
uint32_t max_field_len;
if (pwdlen > ARGON2_MAX_PWD_LENGTH) {
return ARGON2_PWD_TOO_LONG;
}
if (encoded == NULL) {
return ARGON2_DECODING_FAIL;
}
encoded_len = strlen(encoded);
if (encoded_len > UINT32_MAX) {
return ARGON2_DECODING_FAIL;
}
/* No field can be longer than the encoded length */
/* coverity[strlen_assign] */
max_field_len = (uint32_t)encoded_len;
ctx.saltlen = max_field_len;
ctx.outlen = max_field_len;
ctx.salt = malloc(ctx.saltlen);
ctx.out = malloc(ctx.outlen);
if (!ctx.salt || !ctx.out) {
ret = ARGON2_MEMORY_ALLOCATION_ERROR;
goto fail;
}
ctx.pwd = CONST_CAST(uint8_t *)pwd;
ctx.pwdlen = (uint32_t)pwdlen;
ret = decode_string(&ctx, encoded, type);
if (ret != ARGON2_OK) {
goto fail;
}
/* Set aside the desired result, and get a new buffer. */
desired_result = ctx.out;
ctx.out = malloc(ctx.outlen);
if (!ctx.out) {
ret = ARGON2_MEMORY_ALLOCATION_ERROR;
goto fail;
}
ret = argon2_verify_ctx(&ctx, (char *)desired_result, type);
if (ret != ARGON2_OK) {
goto fail;
}
fail:
free(ctx.salt);
free(ctx.out);
free(desired_result);
return ret;
}
int argon2i_verify(const char *encoded, const void *pwd, const size_t pwdlen) {
return argon2_verify(encoded, pwd, pwdlen, Argon2_i);
}
int argon2d_verify(const char *encoded, const void *pwd, const size_t pwdlen) {
return argon2_verify(encoded, pwd, pwdlen, Argon2_d);
}
int argon2id_verify(const char *encoded, const void *pwd, const size_t pwdlen) {
return argon2_verify(encoded, pwd, pwdlen, Argon2_id);
}
int argon2d_ctx(argon2_context *context) {
return argon2_ctx(context, Argon2_d);
}
int argon2i_ctx(argon2_context *context) {
return argon2_ctx(context, Argon2_i);
}
int argon2id_ctx(argon2_context *context) {
return argon2_ctx(context, Argon2_id);
}
int argon2_verify_ctx(argon2_context *context, const char *hash,
argon2_type type) {
int ret = argon2_ctx(context, type);
if (ret != ARGON2_OK) {
return ret;
}
if (argon2_compare(CONST_CAST(uint8_t *)hash, context->out, context->outlen)) {
return ARGON2_VERIFY_MISMATCH;
}
return ARGON2_OK;
}
int argon2d_verify_ctx(argon2_context *context, const char *hash) {
return argon2_verify_ctx(context, hash, Argon2_d);
}
int argon2i_verify_ctx(argon2_context *context, const char *hash) {
return argon2_verify_ctx(context, hash, Argon2_i);
}
int argon2id_verify_ctx(argon2_context *context, const char *hash) {
return argon2_verify_ctx(context, hash, Argon2_id);
}
const char *argon2_error_message(int error_code) {
switch (error_code) {
case ARGON2_OK:
return "OK";
case ARGON2_OUTPUT_PTR_NULL:
return "Output pointer is NULL";
case ARGON2_OUTPUT_TOO_SHORT:
return "Output is too short";
case ARGON2_OUTPUT_TOO_LONG:
return "Output is too long";
case ARGON2_PWD_TOO_SHORT:
return "Password is too short";
case ARGON2_PWD_TOO_LONG:
return "Password is too long";
case ARGON2_SALT_TOO_SHORT:
return "Salt is too short";
case ARGON2_SALT_TOO_LONG:
return "Salt is too long";
case ARGON2_AD_TOO_SHORT:
return "Associated data is too short";
case ARGON2_AD_TOO_LONG:
return "Associated data is too long";
case ARGON2_SECRET_TOO_SHORT:
return "Secret is too short";
case ARGON2_SECRET_TOO_LONG:
return "Secret is too long";
case ARGON2_TIME_TOO_SMALL:
return "Time cost is too small";
case ARGON2_TIME_TOO_LARGE:
return "Time cost is too large";
case ARGON2_MEMORY_TOO_LITTLE:
return "Memory cost is too small";
case ARGON2_MEMORY_TOO_MUCH:
return "Memory cost is too large";
case ARGON2_LANES_TOO_FEW:
return "Too few lanes";
case ARGON2_LANES_TOO_MANY:
return "Too many lanes";
case ARGON2_PWD_PTR_MISMATCH:
return "Password pointer is NULL, but password length is not 0";
case ARGON2_SALT_PTR_MISMATCH:
return "Salt pointer is NULL, but salt length is not 0";
case ARGON2_SECRET_PTR_MISMATCH:
return "Secret pointer is NULL, but secret length is not 0";
case ARGON2_AD_PTR_MISMATCH:
return "Associated data pointer is NULL, but ad length is not 0";
case ARGON2_MEMORY_ALLOCATION_ERROR:
return "Memory allocation error";
case ARGON2_FREE_MEMORY_CBK_NULL:
return "The free memory callback is NULL";
case ARGON2_ALLOCATE_MEMORY_CBK_NULL:
return "The allocate memory callback is NULL";
case ARGON2_INCORRECT_PARAMETER:
return "Argon2_Context context is NULL";
case ARGON2_INCORRECT_TYPE:
return "There is no such version of Argon2";
case ARGON2_OUT_PTR_MISMATCH:
return "Output pointer mismatch";
case ARGON2_THREADS_TOO_FEW:
return "Not enough threads";
case ARGON2_THREADS_TOO_MANY:
return "Too many threads";
case ARGON2_MISSING_ARGS:
return "Missing arguments";
case ARGON2_ENCODING_FAIL:
return "Encoding failed";
case ARGON2_DECODING_FAIL:
return "Decoding failed";
case ARGON2_THREAD_FAIL:
return "Threading failure";
case ARGON2_DECODING_LENGTH_FAIL:
return "Some of encoded parameters are too long or too short";
case ARGON2_VERIFY_MISMATCH:
return "The password does not match the supplied hash";
default:
return "Unknown error code";
}
}
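/* Added note: the encoded form measured below is, illustratively,
 * "$argon2id$v=19$m=65536,t=2,p=1$<base64 salt>$<base64 hash>"; the
 * "$$v=$m=,t=,p=$$" skeleton supplies the fixed punctuation, and the type
 * string, the numbers and the base64 fields account for the rest. */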
size_t argon2_encodedlen(uint32_t t_cost, uint32_t m_cost, uint32_t parallelism,
uint32_t saltlen, uint32_t hashlen, argon2_type type) {
return strlen("$$v=$m=,t=,p=$$") + strlen(argon2_type2string(type, 0)) +
numlen(t_cost) + numlen(m_cost) + numlen(parallelism) +
b64len(saltlen) + b64len(hashlen) + numlen(ARGON2_VERSION_NUMBER) + 1;
}

View File

@@ -1,437 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef ARGON2_H
#define ARGON2_H
#include <stdint.h>
#include <stddef.h>
#include <limits.h>
#if defined(__cplusplus)
extern "C" {
#endif
/* Symbols visibility control */
#ifdef A2_VISCTL
#define ARGON2_PUBLIC __attribute__((visibility("default")))
#define ARGON2_LOCAL __attribute__ ((visibility ("hidden")))
#elif _MSC_VER
#define ARGON2_PUBLIC __declspec(dllexport)
#define ARGON2_LOCAL
#else
#define ARGON2_PUBLIC
#define ARGON2_LOCAL
#endif
/*
* Argon2 input parameter restrictions
*/
/* Minimum and maximum number of lanes (degree of parallelism) */
#define ARGON2_MIN_LANES UINT32_C(1)
#define ARGON2_MAX_LANES UINT32_C(0xFFFFFF)
/* Minimum and maximum number of threads */
#define ARGON2_MIN_THREADS UINT32_C(1)
#define ARGON2_MAX_THREADS UINT32_C(0xFFFFFF)
/* Number of synchronization points between lanes per pass */
#define ARGON2_SYNC_POINTS UINT32_C(4)
/* Minimum and maximum digest size in bytes */
#define ARGON2_MIN_OUTLEN UINT32_C(4)
#define ARGON2_MAX_OUTLEN UINT32_C(0xFFFFFFFF)
/* Minimum and maximum number of memory blocks (each of BLOCK_SIZE bytes) */
#define ARGON2_MIN_MEMORY (2 * ARGON2_SYNC_POINTS) /* 2 blocks per slice */
#define ARGON2_MIN(a, b) ((a) < (b) ? (a) : (b))
/* Max memory size is addressing-space/2, topping at 2^32 blocks (4 TB) */
#define ARGON2_MAX_MEMORY_BITS \
ARGON2_MIN(UINT32_C(32), (sizeof(void *) * CHAR_BIT - 10 - 1))
#define ARGON2_MAX_MEMORY \
ARGON2_MIN(UINT32_C(0xFFFFFFFF), UINT64_C(1) << ARGON2_MAX_MEMORY_BITS)
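/* Added worked example: on a 64-bit build sizeof(void *) * CHAR_BIT - 10 - 1
   is 53, so the UINT32_C(32) bound applies and roughly 2^32 blocks (4 TB) are
   addressable; on a 32-bit build the expression yields 21, capping memory at
   2^21 one-KiB blocks (2 GiB). */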
/* Minimum and maximum number of passes */
#define ARGON2_MIN_TIME UINT32_C(1)
#define ARGON2_MAX_TIME UINT32_C(0xFFFFFFFF)
/* Minimum and maximum password length in bytes */
#define ARGON2_MIN_PWD_LENGTH UINT32_C(0)
#define ARGON2_MAX_PWD_LENGTH UINT32_C(0xFFFFFFFF)
/* Minimum and maximum associated data length in bytes */
#define ARGON2_MIN_AD_LENGTH UINT32_C(0)
#define ARGON2_MAX_AD_LENGTH UINT32_C(0xFFFFFFFF)
/* Minimum and maximum salt length in bytes */
#define ARGON2_MIN_SALT_LENGTH UINT32_C(8)
#define ARGON2_MAX_SALT_LENGTH UINT32_C(0xFFFFFFFF)
/* Minimum and maximum key length in bytes */
#define ARGON2_MIN_SECRET UINT32_C(0)
#define ARGON2_MAX_SECRET UINT32_C(0xFFFFFFFF)
/* Flags to determine which fields are securely wiped (default = no wipe). */
#define ARGON2_DEFAULT_FLAGS UINT32_C(0)
#define ARGON2_FLAG_CLEAR_PASSWORD (UINT32_C(1) << 0)
#define ARGON2_FLAG_CLEAR_SECRET (UINT32_C(1) << 1)
/* Global flag to determine if we are wiping internal memory buffers. This flag
* is defined in core.c and defaults to 1 (wipe internal memory). */
extern int FLAG_clear_internal_memory;
/* Error codes */
typedef enum Argon2_ErrorCodes {
ARGON2_OK = 0,
ARGON2_OUTPUT_PTR_NULL = -1,
ARGON2_OUTPUT_TOO_SHORT = -2,
ARGON2_OUTPUT_TOO_LONG = -3,
ARGON2_PWD_TOO_SHORT = -4,
ARGON2_PWD_TOO_LONG = -5,
ARGON2_SALT_TOO_SHORT = -6,
ARGON2_SALT_TOO_LONG = -7,
ARGON2_AD_TOO_SHORT = -8,
ARGON2_AD_TOO_LONG = -9,
ARGON2_SECRET_TOO_SHORT = -10,
ARGON2_SECRET_TOO_LONG = -11,
ARGON2_TIME_TOO_SMALL = -12,
ARGON2_TIME_TOO_LARGE = -13,
ARGON2_MEMORY_TOO_LITTLE = -14,
ARGON2_MEMORY_TOO_MUCH = -15,
ARGON2_LANES_TOO_FEW = -16,
ARGON2_LANES_TOO_MANY = -17,
ARGON2_PWD_PTR_MISMATCH = -18, /* NULL ptr with non-zero length */
ARGON2_SALT_PTR_MISMATCH = -19, /* NULL ptr with non-zero length */
ARGON2_SECRET_PTR_MISMATCH = -20, /* NULL ptr with non-zero length */
ARGON2_AD_PTR_MISMATCH = -21, /* NULL ptr with non-zero length */
ARGON2_MEMORY_ALLOCATION_ERROR = -22,
ARGON2_FREE_MEMORY_CBK_NULL = -23,
ARGON2_ALLOCATE_MEMORY_CBK_NULL = -24,
ARGON2_INCORRECT_PARAMETER = -25,
ARGON2_INCORRECT_TYPE = -26,
ARGON2_OUT_PTR_MISMATCH = -27,
ARGON2_THREADS_TOO_FEW = -28,
ARGON2_THREADS_TOO_MANY = -29,
ARGON2_MISSING_ARGS = -30,
ARGON2_ENCODING_FAIL = -31,
ARGON2_DECODING_FAIL = -32,
ARGON2_THREAD_FAIL = -33,
ARGON2_DECODING_LENGTH_FAIL = -34,
ARGON2_VERIFY_MISMATCH = -35
} argon2_error_codes;
/* Memory allocator types --- for external allocation */
typedef int (*allocate_fptr)(uint8_t **memory, size_t bytes_to_allocate);
typedef void (*deallocate_fptr)(uint8_t *memory, size_t bytes_to_allocate);
/* Argon2 external data structures */
/*
*****
* Context: structure to hold Argon2 inputs:
* output array and its length,
* password and its length,
* salt and its length,
* secret and its length,
* associated data and its length,
* number of passes, amount of used memory (in KBytes, can be rounded up a bit)
* number of parallel threads that will be run.
* All the parameters above affect the output hash value.
* Additionally, two function pointers can be provided to allocate and
* deallocate the memory (if NULL, memory will be allocated internally).
* Also, three flags indicate whether to erase password, secret as soon as they
* are pre-hashed (and thus not needed anymore), and the entire memory
*****
* Simplest situation: you have output array out[8], password is stored in
* pwd[32], salt is stored in salt[16], you do not have keys nor associated
* data. You need to spend 1 GB of RAM and you run 5 passes of Argon2d with
* 4 parallel lanes.
* You want to erase the password, but you're OK with last pass not being
* erased. You want to use the default memory allocator.
* Then you initialize:
Argon2_Context(out,8,pwd,32,salt,16,NULL,0,NULL,0,5,1<<20,4,4,NULL,NULL,true,false,false,false)
*/
typedef struct Argon2_Context {
uint8_t *out; /* output array */
uint32_t outlen; /* digest length */
uint8_t *pwd; /* password array */
uint32_t pwdlen; /* password length */
uint8_t *salt; /* salt array */
uint32_t saltlen; /* salt length */
uint8_t *secret; /* key array */
uint32_t secretlen; /* key length */
uint8_t *ad; /* associated data array */
uint32_t adlen; /* associated data length */
uint32_t t_cost; /* number of passes */
uint32_t m_cost; /* amount of memory requested (KB) */
uint32_t lanes; /* number of lanes */
uint32_t threads; /* maximum number of threads */
uint32_t version; /* version number */
allocate_fptr allocate_cbk; /* pointer to memory allocator */
deallocate_fptr free_cbk; /* pointer to memory deallocator */
uint32_t flags; /* array of bool options */
} argon2_context;
/* Argon2 primitive type */
typedef enum Argon2_type {
Argon2_d = 0,
Argon2_i = 1,
Argon2_id = 2
} argon2_type;
/* Version of the algorithm */
typedef enum Argon2_version {
ARGON2_VERSION_10 = 0x10,
ARGON2_VERSION_13 = 0x13,
ARGON2_VERSION_NUMBER = ARGON2_VERSION_13
} argon2_version;
/*
* Function that gives the string representation of an argon2_type.
* @param type The argon2_type that we want the string for
* @param uppercase Whether the string should have the first letter uppercase
* @return NULL if invalid type, otherwise the string representation.
*/
ARGON2_PUBLIC const char *argon2_type2string(argon2_type type, int uppercase);
/*
* Function that performs memory-hard hashing with certain degree of parallelism
* @param context Pointer to the Argon2 internal structure
* @return Error code if something is wrong, ARGON2_OK otherwise
*/
ARGON2_PUBLIC int argon2_ctx(argon2_context *context, argon2_type type);
/**
* Hashes a password with Argon2i, producing an encoded hash
* @param t_cost Number of iterations
* @param m_cost Sets memory usage to m_cost kibibytes
* @param parallelism Number of threads and compute lanes
* @param pwd Pointer to password
* @param pwdlen Password size in bytes
* @param salt Pointer to salt
* @param saltlen Salt size in bytes
* @param hashlen Desired length of the hash in bytes
* @param encoded Buffer where to write the encoded hash
* @param encodedlen Size of the buffer (thus max size of the encoded hash)
* @pre Different parallelism levels will give different results
* @pre Returns ARGON2_OK if successful
*/
ARGON2_PUBLIC int argon2i_hash_encoded(const uint32_t t_cost,
const uint32_t m_cost,
const uint32_t parallelism,
const void *pwd, const size_t pwdlen,
const void *salt, const size_t saltlen,
const size_t hashlen, char *encoded,
const size_t encodedlen);
/**
* Hashes a password with Argon2i, producing a raw hash at @hash
* @param t_cost Number of iterations
* @param m_cost Sets memory usage to m_cost kibibytes
* @param parallelism Number of threads and compute lanes
* @param pwd Pointer to password
* @param pwdlen Password size in bytes
* @param salt Pointer to salt
* @param saltlen Salt size in bytes
* @param hash Buffer where to write the raw hash - updated by the function
* @param hashlen Desired length of the hash in bytes
* @pre Different parallelism levels will give different results
* @pre Returns ARGON2_OK if successful
*/
ARGON2_PUBLIC int argon2i_hash_raw(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash,
const size_t hashlen);
ARGON2_PUBLIC int argon2d_hash_encoded(const uint32_t t_cost,
const uint32_t m_cost,
const uint32_t parallelism,
const void *pwd, const size_t pwdlen,
const void *salt, const size_t saltlen,
const size_t hashlen, char *encoded,
const size_t encodedlen);
ARGON2_PUBLIC int argon2d_hash_raw(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash,
const size_t hashlen);
ARGON2_PUBLIC int argon2id_hash_encoded(const uint32_t t_cost,
const uint32_t m_cost,
const uint32_t parallelism,
const void *pwd, const size_t pwdlen,
const void *salt, const size_t saltlen,
const size_t hashlen, char *encoded,
const size_t encodedlen);
ARGON2_PUBLIC int argon2id_hash_raw(const uint32_t t_cost,
const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash,
const size_t hashlen);
/* generic function underlying the above ones */
ARGON2_PUBLIC int argon2_hash(const uint32_t t_cost, const uint32_t m_cost,
const uint32_t parallelism, const void *pwd,
const size_t pwdlen, const void *salt,
const size_t saltlen, void *hash,
const size_t hashlen, char *encoded,
const size_t encodedlen, argon2_type type,
const uint32_t version);
/**
* Verifies a password against an encoded string
* Encoded string is restricted as in validate_inputs()
* @param encoded String encoding parameters, salt, hash
* @param pwd Pointer to password
* @pre Returns ARGON2_OK if successful
*/
ARGON2_PUBLIC int argon2i_verify(const char *encoded, const void *pwd,
const size_t pwdlen);
ARGON2_PUBLIC int argon2d_verify(const char *encoded, const void *pwd,
const size_t pwdlen);
ARGON2_PUBLIC int argon2id_verify(const char *encoded, const void *pwd,
const size_t pwdlen);
/* generic function underlying the above ones */
ARGON2_PUBLIC int argon2_verify(const char *encoded, const void *pwd,
const size_t pwdlen, argon2_type type);
/**
* Argon2d: Version of Argon2 that picks memory blocks depending
* on the password and salt. Only for side-channel-free
* environments!
*****
* @param context Pointer to current Argon2 context
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2d_ctx(argon2_context *context);
/**
* Argon2i: Version of Argon2 that picks memory blocks
* independently of the password and salt. Good against side channels,
* but worse w.r.t. tradeoff attacks if only one pass is used.
*****
* @param context Pointer to current Argon2 context
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2i_ctx(argon2_context *context);
/**
* Argon2id: Version of Argon2 where the first half-pass over memory is
* password-independent, the rest are password-dependent (on the password and
* salt). OK against side channels (they reduce to 1/2-pass Argon2i), and
* better w.r.t. tradeoff attacks (similar to Argon2d).
*****
* @param context Pointer to current Argon2 context
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2id_ctx(argon2_context *context);
/**
* Verify if a given password is correct for Argon2d hashing
* @param context Pointer to current Argon2 context
* @param hash The password hash to verify. The length of the hash is
* specified by the context outlen member
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2d_verify_ctx(argon2_context *context, const char *hash);
/**
* Verify if a given password is correct for Argon2i hashing
* @param context Pointer to current Argon2 context
* @param hash The password hash to verify. The length of the hash is
* specified by the context outlen member
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2i_verify_ctx(argon2_context *context, const char *hash);
/**
* Verify if a given password is correct for Argon2id hashing
* @param context Pointer to current Argon2 context
* @param hash The password hash to verify. The length of the hash is
* specified by the context outlen member
* @return Zero if successful, a non zero error code otherwise
*/
ARGON2_PUBLIC int argon2id_verify_ctx(argon2_context *context,
const char *hash);
/* generic function underlying the above ones */
ARGON2_PUBLIC int argon2_verify_ctx(argon2_context *context, const char *hash,
argon2_type type);
/**
* Get the associated error message for given error code
* @return The error message associated with the given error code
*/
ARGON2_PUBLIC const char *argon2_error_message(int error_code);
/**
* Returns the encoded hash length for the given input parameters
* @param t_cost Number of iterations
* @param m_cost Memory usage in kibibytes
* @param parallelism Number of threads; used to compute lanes
* @param saltlen Salt size in bytes
* @param hashlen Hash size in bytes
* @param type The argon2_type that we want the encoded length for
* @return The encoded hash length in bytes
*/
ARGON2_PUBLIC size_t argon2_encodedlen(uint32_t t_cost, uint32_t m_cost,
uint32_t parallelism, uint32_t saltlen,
uint32_t hashlen, argon2_type type);
#if defined(__cplusplus)
}
#endif
#endif
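To make the high-level API above concrete, a minimal, hypothetical caller of the Argon2id convenience functions could look as follows (the cost parameters and salt handling are illustrative only):
#include <stdio.h>
#include <string.h>
#include "argon2.h"
int main(void)
{
    const char *pwd = "correct horse battery staple";
    const uint8_t salt[16] = {0}; /* use a random salt in real code */
    char encoded[256];
    int rc;
    /* 2 passes, 64 MiB (1 << 16 KiB), 1 lane/thread, 32-byte tag. */
    rc = argon2id_hash_encoded(2, 1 << 16, 1, pwd, strlen(pwd),
                               salt, sizeof(salt), 32,
                               encoded, sizeof(encoded));
    if (rc != ARGON2_OK) {
        fprintf(stderr, "hash: %s\n", argon2_error_message(rc));
        return 1;
    }
    rc = argon2id_verify(encoded, pwd, strlen(pwd));
    printf("verify: %s\n", argon2_error_message(rc));
    return rc == ARGON2_OK ? 0 : 1;
}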

View File

@@ -1,154 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef PORTABLE_BLAKE2_IMPL_H
#define PORTABLE_BLAKE2_IMPL_H
#include <stdint.h>
#include <string.h>
#if defined(_MSC_VER)
#define BLAKE2_INLINE __inline
#elif defined(__GNUC__) || defined(__clang__)
#define BLAKE2_INLINE __inline__
#else
#define BLAKE2_INLINE
#endif
/* Argon2 Team - Begin Code */
/*
Not an exhaustive list, but should cover the majority of modern platforms
Additionally, the code will always be correct---this is only a performance
tweak.
*/
#if (defined(__BYTE_ORDER__) && \
(__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)) || \
defined(__LITTLE_ENDIAN__) || defined(__ARMEL__) || defined(__MIPSEL__) || \
defined(__AARCH64EL__) || defined(__amd64__) || defined(__i386__) || \
defined(_M_IX86) || defined(_M_X64) || defined(_M_AMD64) || \
defined(_M_ARM)
#define NATIVE_LITTLE_ENDIAN
#endif
/* Argon2 Team - End Code */
static BLAKE2_INLINE uint32_t load32(const void *src) {
#if defined(NATIVE_LITTLE_ENDIAN)
uint32_t w;
memcpy(&w, src, sizeof w);
return w;
#else
const uint8_t *p = (const uint8_t *)src;
uint32_t w = *p++;
w |= (uint32_t)(*p++) << 8;
w |= (uint32_t)(*p++) << 16;
w |= (uint32_t)(*p++) << 24;
return w;
#endif
}
static BLAKE2_INLINE uint64_t load64(const void *src) {
#if defined(NATIVE_LITTLE_ENDIAN)
uint64_t w;
memcpy(&w, src, sizeof w);
return w;
#else
const uint8_t *p = (const uint8_t *)src;
uint64_t w = *p++;
w |= (uint64_t)(*p++) << 8;
w |= (uint64_t)(*p++) << 16;
w |= (uint64_t)(*p++) << 24;
w |= (uint64_t)(*p++) << 32;
w |= (uint64_t)(*p++) << 40;
w |= (uint64_t)(*p++) << 48;
w |= (uint64_t)(*p++) << 56;
return w;
#endif
}
static BLAKE2_INLINE void store32(void *dst, uint32_t w) {
#if defined(NATIVE_LITTLE_ENDIAN)
memcpy(dst, &w, sizeof w);
#else
uint8_t *p = (uint8_t *)dst;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
#endif
}
static BLAKE2_INLINE void store64(void *dst, uint64_t w) {
#if defined(NATIVE_LITTLE_ENDIAN)
memcpy(dst, &w, sizeof w);
#else
uint8_t *p = (uint8_t *)dst;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
#endif
}
static BLAKE2_INLINE uint64_t load48(const void *src) {
const uint8_t *p = (const uint8_t *)src;
uint64_t w = *p++;
w |= (uint64_t)(*p++) << 8;
w |= (uint64_t)(*p++) << 16;
w |= (uint64_t)(*p++) << 24;
w |= (uint64_t)(*p++) << 32;
w |= (uint64_t)(*p++) << 40;
return w;
}
static BLAKE2_INLINE void store48(void *dst, uint64_t w) {
uint8_t *p = (uint8_t *)dst;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
w >>= 8;
*p++ = (uint8_t)w;
}
static BLAKE2_INLINE uint32_t rotr32(const uint32_t w, const unsigned c) {
return (w >> c) | (w << (32 - c));
}
static BLAKE2_INLINE uint64_t rotr64(const uint64_t w, const unsigned c) {
return (w >> c) | (w << (64 - c));
}
#endif

View File

@@ -1,89 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef PORTABLE_BLAKE2_H
#define PORTABLE_BLAKE2_H
#include "../argon2.h"
#if defined(__cplusplus)
extern "C" {
#endif
enum blake2b_constant {
BLAKE2B_BLOCKBYTES = 128,
BLAKE2B_OUTBYTES = 64,
BLAKE2B_KEYBYTES = 64,
BLAKE2B_SALTBYTES = 16,
BLAKE2B_PERSONALBYTES = 16
};
#pragma pack(push, 1)
typedef struct __blake2b_param {
uint8_t digest_length; /* 1 */
uint8_t key_length; /* 2 */
uint8_t fanout; /* 3 */
uint8_t depth; /* 4 */
uint32_t leaf_length; /* 8 */
uint64_t node_offset; /* 16 */
uint8_t node_depth; /* 17 */
uint8_t inner_length; /* 18 */
uint8_t reserved[14]; /* 32 */
uint8_t salt[BLAKE2B_SALTBYTES]; /* 48 */
uint8_t personal[BLAKE2B_PERSONALBYTES]; /* 64 */
} blake2b_param;
#pragma pack(pop)
typedef struct __blake2b_state {
uint64_t h[8];
uint64_t t[2];
uint64_t f[2];
uint8_t buf[BLAKE2B_BLOCKBYTES];
unsigned buflen;
unsigned outlen;
uint8_t last_node;
} blake2b_state;
/* Ensure param structs have not been wrongly padded */
/* Poor man's static_assert */
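/* Added note: if either condition below is false, !!(...) evaluates to 0 and
   the constant expression 1 / 0 is rejected at compile time, so a wrongly
   padded parameter block breaks the build rather than silently misbehaving. */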
enum {
blake2_size_check_0 = 1 / !!(CHAR_BIT == 8),
blake2_size_check_2 =
1 / !!(sizeof(blake2b_param) == sizeof(uint64_t) * CHAR_BIT)
};
/* Streaming API */
ARGON2_LOCAL int blake2b_init(blake2b_state *S, size_t outlen);
ARGON2_LOCAL int blake2b_init_key(blake2b_state *S, size_t outlen, const void *key,
size_t keylen);
ARGON2_LOCAL int blake2b_init_param(blake2b_state *S, const blake2b_param *P);
ARGON2_LOCAL int blake2b_update(blake2b_state *S, const void *in, size_t inlen);
ARGON2_LOCAL int blake2b_final(blake2b_state *S, void *out, size_t outlen);
/* Simple API */
ARGON2_LOCAL int blake2b(void *out, size_t outlen, const void *in, size_t inlen,
const void *key, size_t keylen);
/* Argon2 Team - Begin Code */
ARGON2_LOCAL int blake2b_long(void *out, size_t outlen, const void *in, size_t inlen);
/* Argon2 Team - End Code */
#if defined(__cplusplus)
}
#endif
#endif
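For orientation, here is a small, illustrative sketch (not part of the original sources) of how the streaming API above is driven; note these functions are ARGON2_LOCAL, i.e. internal to the bundled library:
#include <stdio.h>
#include "blake2.h"
/* Hash a message fed in two chunks and print the 32-byte digest. */
static void blake2b_stream_demo(void)
{
    blake2b_state S;
    uint8_t digest[32];
    size_t i;
    if (blake2b_init(&S, sizeof(digest)) < 0)
        return;
    blake2b_update(&S, "hello ", 6);
    blake2b_update(&S, "world", 5);
    if (blake2b_final(&S, digest, sizeof(digest)) < 0)
        return;
    for (i = 0; i < sizeof(digest); i++)
        printf("%02x", digest[i]);
    printf("\n");
}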

View File

@@ -1,392 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include "blake2.h"
#include "blake2-impl.h"
void clear_internal_memory(void *v, size_t n);
static const uint64_t blake2b_IV[8] = {
UINT64_C(0x6a09e667f3bcc908), UINT64_C(0xbb67ae8584caa73b),
UINT64_C(0x3c6ef372fe94f82b), UINT64_C(0xa54ff53a5f1d36f1),
UINT64_C(0x510e527fade682d1), UINT64_C(0x9b05688c2b3e6c1f),
UINT64_C(0x1f83d9abfb41bd6b), UINT64_C(0x5be0cd19137e2179)};
static const unsigned int blake2b_sigma[12][16] = {
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15},
{14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3},
{11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4},
{7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8},
{9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13},
{2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9},
{12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11},
{13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10},
{6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5},
{10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0},
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15},
{14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3},
};
static BLAKE2_INLINE void blake2b_set_lastnode(blake2b_state *S) {
S->f[1] = (uint64_t)-1;
}
static BLAKE2_INLINE void blake2b_set_lastblock(blake2b_state *S) {
if (S->last_node) {
blake2b_set_lastnode(S);
}
S->f[0] = (uint64_t)-1;
}
static BLAKE2_INLINE void blake2b_increment_counter(blake2b_state *S,
uint64_t inc) {
S->t[0] += inc;
S->t[1] += (S->t[0] < inc);
}
static BLAKE2_INLINE void blake2b_invalidate_state(blake2b_state *S) {
clear_internal_memory(S, sizeof(*S)); /* wipe */
blake2b_set_lastblock(S); /* invalidate for further use */
}
static BLAKE2_INLINE void blake2b_init0(blake2b_state *S) {
memset(S, 0, sizeof(*S));
memcpy(S->h, blake2b_IV, sizeof(S->h));
}
int blake2b_init_param(blake2b_state *S, const blake2b_param *P) {
const unsigned char *p = (const unsigned char *)P;
unsigned int i;
if (NULL == P || NULL == S) {
return -1;
}
blake2b_init0(S);
/* IV XOR Parameter Block */
for (i = 0; i < 8; ++i) {
S->h[i] ^= load64(&p[i * sizeof(S->h[i])]);
}
S->outlen = P->digest_length;
return 0;
}
/* Sequential blake2b initialization */
int blake2b_init(blake2b_state *S, size_t outlen) {
blake2b_param P;
if (S == NULL) {
return -1;
}
if ((outlen == 0) || (outlen > BLAKE2B_OUTBYTES)) {
blake2b_invalidate_state(S);
return -1;
}
/* Setup Parameter Block for unkeyed BLAKE2 */
P.digest_length = (uint8_t)outlen;
P.key_length = 0;
P.fanout = 1;
P.depth = 1;
P.leaf_length = 0;
P.node_offset = 0;
P.node_depth = 0;
P.inner_length = 0;
memset(P.reserved, 0, sizeof(P.reserved));
memset(P.salt, 0, sizeof(P.salt));
memset(P.personal, 0, sizeof(P.personal));
return blake2b_init_param(S, &P);
}
int blake2b_init_key(blake2b_state *S, size_t outlen, const void *key,
size_t keylen) {
blake2b_param P;
if (S == NULL) {
return -1;
}
if ((outlen == 0) || (outlen > BLAKE2B_OUTBYTES)) {
blake2b_invalidate_state(S);
return -1;
}
if ((key == 0) || (keylen == 0) || (keylen > BLAKE2B_KEYBYTES)) {
blake2b_invalidate_state(S);
return -1;
}
/* Setup Parameter Block for keyed BLAKE2 */
P.digest_length = (uint8_t)outlen;
P.key_length = (uint8_t)keylen;
P.fanout = 1;
P.depth = 1;
P.leaf_length = 0;
P.node_offset = 0;
P.node_depth = 0;
P.inner_length = 0;
memset(P.reserved, 0, sizeof(P.reserved));
memset(P.salt, 0, sizeof(P.salt));
memset(P.personal, 0, sizeof(P.personal));
if (blake2b_init_param(S, &P) < 0) {
blake2b_invalidate_state(S);
return -1;
}
{
uint8_t block[BLAKE2B_BLOCKBYTES];
memset(block, 0, BLAKE2B_BLOCKBYTES);
memcpy(block, key, keylen);
blake2b_update(S, block, BLAKE2B_BLOCKBYTES);
/* Burn the key from stack */
clear_internal_memory(block, BLAKE2B_BLOCKBYTES);
}
return 0;
}
static void blake2b_compress(blake2b_state *S, const uint8_t *block) {
uint64_t m[16];
uint64_t v[16];
unsigned int i, r;
for (i = 0; i < 16; ++i) {
m[i] = load64(block + i * sizeof(m[i]));
}
for (i = 0; i < 8; ++i) {
v[i] = S->h[i];
}
v[8] = blake2b_IV[0];
v[9] = blake2b_IV[1];
v[10] = blake2b_IV[2];
v[11] = blake2b_IV[3];
v[12] = blake2b_IV[4] ^ S->t[0];
v[13] = blake2b_IV[5] ^ S->t[1];
v[14] = blake2b_IV[6] ^ S->f[0];
v[15] = blake2b_IV[7] ^ S->f[1];
#define G(r, i, a, b, c, d) \
do { \
a = a + b + m[blake2b_sigma[r][2 * i + 0]]; \
d = rotr64(d ^ a, 32); \
c = c + d; \
b = rotr64(b ^ c, 24); \
a = a + b + m[blake2b_sigma[r][2 * i + 1]]; \
d = rotr64(d ^ a, 16); \
c = c + d; \
b = rotr64(b ^ c, 63); \
} while ((void)0, 0)
#define ROUND(r) \
do { \
G(r, 0, v[0], v[4], v[8], v[12]); \
G(r, 1, v[1], v[5], v[9], v[13]); \
G(r, 2, v[2], v[6], v[10], v[14]); \
G(r, 3, v[3], v[7], v[11], v[15]); \
G(r, 4, v[0], v[5], v[10], v[15]); \
G(r, 5, v[1], v[6], v[11], v[12]); \
G(r, 6, v[2], v[7], v[8], v[13]); \
G(r, 7, v[3], v[4], v[9], v[14]); \
} while ((void)0, 0)
for (r = 0; r < 12; ++r) {
ROUND(r);
}
for (i = 0; i < 8; ++i) {
S->h[i] = S->h[i] ^ v[i] ^ v[i + 8];
}
#undef G
#undef ROUND
}
int blake2b_update(blake2b_state *S, const void *in, size_t inlen) {
const uint8_t *pin = (const uint8_t *)in;
if (inlen == 0) {
return 0;
}
/* Sanity check */
if (S == NULL || in == NULL) {
return -1;
}
/* Is this a reused state? */
if (S->f[0] != 0) {
return -1;
}
if (S->buflen + inlen > BLAKE2B_BLOCKBYTES) {
/* Complete current block */
size_t left = S->buflen;
size_t fill = BLAKE2B_BLOCKBYTES - left;
memcpy(&S->buf[left], pin, fill);
blake2b_increment_counter(S, BLAKE2B_BLOCKBYTES);
blake2b_compress(S, S->buf);
S->buflen = 0;
inlen -= fill;
pin += fill;
/* Avoid buffer copies when possible */
while (inlen > BLAKE2B_BLOCKBYTES) {
blake2b_increment_counter(S, BLAKE2B_BLOCKBYTES);
blake2b_compress(S, pin);
inlen -= BLAKE2B_BLOCKBYTES;
pin += BLAKE2B_BLOCKBYTES;
}
}
memcpy(&S->buf[S->buflen], pin, inlen);
S->buflen += (unsigned int)inlen;
return 0;
}
int blake2b_final(blake2b_state *S, void *out, size_t outlen) {
uint8_t buffer[BLAKE2B_OUTBYTES] = {0};
unsigned int i;
/* Sanity checks */
if (S == NULL || out == NULL || outlen < S->outlen) {
return -1;
}
/* Is this a reused state? */
if (S->f[0] != 0) {
return -1;
}
blake2b_increment_counter(S, S->buflen);
blake2b_set_lastblock(S);
memset(&S->buf[S->buflen], 0, BLAKE2B_BLOCKBYTES - S->buflen); /* Padding */
blake2b_compress(S, S->buf);
for (i = 0; i < 8; ++i) { /* Output full hash to temp buffer */
store64(buffer + sizeof(S->h[i]) * i, S->h[i]);
}
memcpy(out, buffer, S->outlen);
clear_internal_memory(buffer, sizeof(buffer));
clear_internal_memory(S->buf, sizeof(S->buf));
clear_internal_memory(S->h, sizeof(S->h));
return 0;
}
int blake2b(void *out, size_t outlen, const void *in, size_t inlen,
const void *key, size_t keylen) {
blake2b_state S;
int ret = -1;
/* Verify parameters */
if (NULL == in && inlen > 0) {
goto fail;
}
if (NULL == out || outlen == 0 || outlen > BLAKE2B_OUTBYTES) {
goto fail;
}
if ((NULL == key && keylen > 0) || keylen > BLAKE2B_KEYBYTES) {
goto fail;
}
if (keylen > 0) {
if (blake2b_init_key(&S, outlen, key, keylen) < 0) {
goto fail;
}
} else {
if (blake2b_init(&S, outlen) < 0) {
goto fail;
}
}
if (blake2b_update(&S, in, inlen) < 0) {
goto fail;
}
ret = blake2b_final(&S, out, outlen);
fail:
clear_internal_memory(&S, sizeof(S));
return ret;
}
/* Argon2 Team - Begin Code */
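/* Added note: for outputs longer than BLAKE2B_OUTBYTES, blake2b_long first
 * hashes le32(outlen) || in into a 64-byte block, emits its first 32 bytes,
 * then repeatedly hashes the previous block, emitting 32 bytes each time,
 * and finally hashes down to exactly the remaining length; shorter requests
 * are a single plain hash prefixed with the same length field. */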
int blake2b_long(void *pout, size_t outlen, const void *in, size_t inlen) {
uint8_t *out = (uint8_t *)pout;
blake2b_state blake_state;
uint8_t outlen_bytes[sizeof(uint32_t)] = {0};
int ret = -1;
if (outlen > UINT32_MAX) {
goto fail;
}
/* Ensure little-endian byte order! */
store32(outlen_bytes, (uint32_t)outlen);
#define TRY(statement) \
do { \
ret = statement; \
if (ret < 0) { \
goto fail; \
} \
} while ((void)0, 0)
if (outlen <= BLAKE2B_OUTBYTES) {
TRY(blake2b_init(&blake_state, outlen));
TRY(blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes)));
TRY(blake2b_update(&blake_state, in, inlen));
TRY(blake2b_final(&blake_state, out, outlen));
} else {
uint32_t toproduce;
uint8_t out_buffer[BLAKE2B_OUTBYTES];
uint8_t in_buffer[BLAKE2B_OUTBYTES];
TRY(blake2b_init(&blake_state, BLAKE2B_OUTBYTES));
TRY(blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes)));
TRY(blake2b_update(&blake_state, in, inlen));
TRY(blake2b_final(&blake_state, out_buffer, BLAKE2B_OUTBYTES));
memcpy(out, out_buffer, BLAKE2B_OUTBYTES / 2);
out += BLAKE2B_OUTBYTES / 2;
toproduce = (uint32_t)outlen - BLAKE2B_OUTBYTES / 2;
while (toproduce > BLAKE2B_OUTBYTES) {
memcpy(in_buffer, out_buffer, BLAKE2B_OUTBYTES);
TRY(blake2b(out_buffer, BLAKE2B_OUTBYTES, in_buffer,
BLAKE2B_OUTBYTES, NULL, 0));
memcpy(out, out_buffer, BLAKE2B_OUTBYTES / 2);
out += BLAKE2B_OUTBYTES / 2;
toproduce -= BLAKE2B_OUTBYTES / 2;
}
memcpy(in_buffer, out_buffer, BLAKE2B_OUTBYTES);
TRY(blake2b(out_buffer, toproduce, in_buffer, BLAKE2B_OUTBYTES, NULL,
0));
memcpy(out, out_buffer, toproduce);
}
fail:
clear_internal_memory(&blake_state, sizeof(blake_state));
return ret;
#undef TRY
}
/* Argon2 Team - End Code */
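blake2b_long above is Argon2's variable-length hash H': for outlen <= 64 it is a single BLAKE2b over LE32(outlen) || in, while longer outputs are produced 32 bytes at a time from chained 64-byte BLAKE2b calls, with a final call emitting the remaining 33..64 bytes. A small hypothetical helper (not part of the original file) that mirrors the loop and counts those calls:
/* Hypothetical helper, not from the original file: number of BLAKE2b hash
 * computations blake2b_long() performs for a given output length. */
static unsigned blake2b_long_call_count(uint32_t outlen) {
    unsigned calls = 1;                 /* hash over LE32(outlen) || in */
    uint32_t toproduce;
    if (outlen <= BLAKE2B_OUTBYTES)
        return calls;
    toproduce = outlen - BLAKE2B_OUTBYTES / 2;   /* first 32 bytes already emitted */
    while (toproduce > BLAKE2B_OUTBYTES) {       /* each chained call adds 32 bytes */
        calls++;
        toproduce -= BLAKE2B_OUTBYTES / 2;
    }
    return calls + 1;                   /* final call producing 33..64 bytes */
}
/* e.g. blake2b_long_call_count(1024) == 31 for a full 1 KiB Argon2 block. */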


@@ -1,471 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef BLAKE_ROUND_MKA_OPT_H
#define BLAKE_ROUND_MKA_OPT_H
#include "blake2-impl.h"
#include <emmintrin.h>
#if defined(__SSSE3__)
#include <tmmintrin.h> /* for _mm_shuffle_epi8 and _mm_alignr_epi8 */
#endif
#if defined(__XOP__) && (defined(__GNUC__) || defined(__clang__))
#include <x86intrin.h>
#endif
#if !defined(__AVX512F__)
#if !defined(__AVX2__)
#if !defined(__XOP__)
#if defined(__SSSE3__)
#define r16 \
(_mm_setr_epi8(2, 3, 4, 5, 6, 7, 0, 1, 10, 11, 12, 13, 14, 15, 8, 9))
#define r24 \
(_mm_setr_epi8(3, 4, 5, 6, 7, 0, 1, 2, 11, 12, 13, 14, 15, 8, 9, 10))
#define _mm_roti_epi64(x, c) \
(-(c) == 32) \
? _mm_shuffle_epi32((x), _MM_SHUFFLE(2, 3, 0, 1)) \
: (-(c) == 24) \
? _mm_shuffle_epi8((x), r24) \
: (-(c) == 16) \
? _mm_shuffle_epi8((x), r16) \
: (-(c) == 63) \
? _mm_xor_si128(_mm_srli_epi64((x), -(c)), \
_mm_add_epi64((x), (x))) \
: _mm_xor_si128(_mm_srli_epi64((x), -(c)), \
_mm_slli_epi64((x), 64 - (-(c))))
#else /* defined(__SSE2__) */
#define _mm_roti_epi64(r, c) \
_mm_xor_si128(_mm_srli_epi64((r), -(c)), _mm_slli_epi64((r), 64 - (-(c))))
#endif
#else
#endif
static BLAKE2_INLINE __m128i fBlaMka(__m128i x, __m128i y) {
const __m128i z = _mm_mul_epu32(x, y);
return _mm_add_epi64(_mm_add_epi64(x, y), _mm_add_epi64(z, z));
}
#define G1(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
A0 = fBlaMka(A0, B0); \
A1 = fBlaMka(A1, B1); \
\
D0 = _mm_xor_si128(D0, A0); \
D1 = _mm_xor_si128(D1, A1); \
\
D0 = _mm_roti_epi64(D0, -32); \
D1 = _mm_roti_epi64(D1, -32); \
\
C0 = fBlaMka(C0, D0); \
C1 = fBlaMka(C1, D1); \
\
B0 = _mm_xor_si128(B0, C0); \
B1 = _mm_xor_si128(B1, C1); \
\
B0 = _mm_roti_epi64(B0, -24); \
B1 = _mm_roti_epi64(B1, -24); \
} while ((void)0, 0)
#define G2(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
A0 = fBlaMka(A0, B0); \
A1 = fBlaMka(A1, B1); \
\
D0 = _mm_xor_si128(D0, A0); \
D1 = _mm_xor_si128(D1, A1); \
\
D0 = _mm_roti_epi64(D0, -16); \
D1 = _mm_roti_epi64(D1, -16); \
\
C0 = fBlaMka(C0, D0); \
C1 = fBlaMka(C1, D1); \
\
B0 = _mm_xor_si128(B0, C0); \
B1 = _mm_xor_si128(B1, C1); \
\
B0 = _mm_roti_epi64(B0, -63); \
B1 = _mm_roti_epi64(B1, -63); \
} while ((void)0, 0)
#if defined(__SSSE3__)
#define DIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
__m128i t0 = _mm_alignr_epi8(B1, B0, 8); \
__m128i t1 = _mm_alignr_epi8(B0, B1, 8); \
B0 = t0; \
B1 = t1; \
\
t0 = C0; \
C0 = C1; \
C1 = t0; \
\
t0 = _mm_alignr_epi8(D1, D0, 8); \
t1 = _mm_alignr_epi8(D0, D1, 8); \
D0 = t1; \
D1 = t0; \
} while ((void)0, 0)
#define UNDIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
__m128i t0 = _mm_alignr_epi8(B0, B1, 8); \
__m128i t1 = _mm_alignr_epi8(B1, B0, 8); \
B0 = t0; \
B1 = t1; \
\
t0 = C0; \
C0 = C1; \
C1 = t0; \
\
t0 = _mm_alignr_epi8(D0, D1, 8); \
t1 = _mm_alignr_epi8(D1, D0, 8); \
D0 = t1; \
D1 = t0; \
} while ((void)0, 0)
#else /* SSE2 */
#define DIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
__m128i t0 = D0; \
__m128i t1 = B0; \
D0 = C0; \
C0 = C1; \
C1 = D0; \
D0 = _mm_unpackhi_epi64(D1, _mm_unpacklo_epi64(t0, t0)); \
D1 = _mm_unpackhi_epi64(t0, _mm_unpacklo_epi64(D1, D1)); \
B0 = _mm_unpackhi_epi64(B0, _mm_unpacklo_epi64(B1, B1)); \
B1 = _mm_unpackhi_epi64(B1, _mm_unpacklo_epi64(t1, t1)); \
} while ((void)0, 0)
#define UNDIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
__m128i t0, t1; \
t0 = C0; \
C0 = C1; \
C1 = t0; \
t0 = B0; \
t1 = D0; \
B0 = _mm_unpackhi_epi64(B1, _mm_unpacklo_epi64(B0, B0)); \
B1 = _mm_unpackhi_epi64(t0, _mm_unpacklo_epi64(B1, B1)); \
D0 = _mm_unpackhi_epi64(D0, _mm_unpacklo_epi64(D1, D1)); \
D1 = _mm_unpackhi_epi64(D1, _mm_unpacklo_epi64(t1, t1)); \
} while ((void)0, 0)
#endif
#define BLAKE2_ROUND(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
G1(A0, B0, C0, D0, A1, B1, C1, D1); \
G2(A0, B0, C0, D0, A1, B1, C1, D1); \
\
DIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1); \
\
G1(A0, B0, C0, D0, A1, B1, C1, D1); \
G2(A0, B0, C0, D0, A1, B1, C1, D1); \
\
UNDIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1); \
} while ((void)0, 0)
#else /* __AVX2__ */
#include <immintrin.h>
#define rotr32(x) _mm256_shuffle_epi32(x, _MM_SHUFFLE(2, 3, 0, 1))
#define rotr24(x) _mm256_shuffle_epi8(x, _mm256_setr_epi8(3, 4, 5, 6, 7, 0, 1, 2, 11, 12, 13, 14, 15, 8, 9, 10, 3, 4, 5, 6, 7, 0, 1, 2, 11, 12, 13, 14, 15, 8, 9, 10))
#define rotr16(x) _mm256_shuffle_epi8(x, _mm256_setr_epi8(2, 3, 4, 5, 6, 7, 0, 1, 10, 11, 12, 13, 14, 15, 8, 9, 2, 3, 4, 5, 6, 7, 0, 1, 10, 11, 12, 13, 14, 15, 8, 9))
#define rotr63(x) _mm256_xor_si256(_mm256_srli_epi64((x), 63), _mm256_add_epi64((x), (x)))
#define G1_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
__m256i ml = _mm256_mul_epu32(A0, B0); \
ml = _mm256_add_epi64(ml, ml); \
A0 = _mm256_add_epi64(A0, _mm256_add_epi64(B0, ml)); \
D0 = _mm256_xor_si256(D0, A0); \
D0 = rotr32(D0); \
\
ml = _mm256_mul_epu32(C0, D0); \
ml = _mm256_add_epi64(ml, ml); \
C0 = _mm256_add_epi64(C0, _mm256_add_epi64(D0, ml)); \
\
B0 = _mm256_xor_si256(B0, C0); \
B0 = rotr24(B0); \
\
ml = _mm256_mul_epu32(A1, B1); \
ml = _mm256_add_epi64(ml, ml); \
A1 = _mm256_add_epi64(A1, _mm256_add_epi64(B1, ml)); \
D1 = _mm256_xor_si256(D1, A1); \
D1 = rotr32(D1); \
\
ml = _mm256_mul_epu32(C1, D1); \
ml = _mm256_add_epi64(ml, ml); \
C1 = _mm256_add_epi64(C1, _mm256_add_epi64(D1, ml)); \
\
B1 = _mm256_xor_si256(B1, C1); \
B1 = rotr24(B1); \
} while((void)0, 0);
#define G2_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
__m256i ml = _mm256_mul_epu32(A0, B0); \
ml = _mm256_add_epi64(ml, ml); \
A0 = _mm256_add_epi64(A0, _mm256_add_epi64(B0, ml)); \
D0 = _mm256_xor_si256(D0, A0); \
D0 = rotr16(D0); \
\
ml = _mm256_mul_epu32(C0, D0); \
ml = _mm256_add_epi64(ml, ml); \
C0 = _mm256_add_epi64(C0, _mm256_add_epi64(D0, ml)); \
B0 = _mm256_xor_si256(B0, C0); \
B0 = rotr63(B0); \
\
ml = _mm256_mul_epu32(A1, B1); \
ml = _mm256_add_epi64(ml, ml); \
A1 = _mm256_add_epi64(A1, _mm256_add_epi64(B1, ml)); \
D1 = _mm256_xor_si256(D1, A1); \
D1 = rotr16(D1); \
\
ml = _mm256_mul_epu32(C1, D1); \
ml = _mm256_add_epi64(ml, ml); \
C1 = _mm256_add_epi64(C1, _mm256_add_epi64(D1, ml)); \
B1 = _mm256_xor_si256(B1, C1); \
B1 = rotr63(B1); \
} while((void)0, 0);
#define DIAGONALIZE_1(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
B0 = _mm256_permute4x64_epi64(B0, _MM_SHUFFLE(0, 3, 2, 1)); \
C0 = _mm256_permute4x64_epi64(C0, _MM_SHUFFLE(1, 0, 3, 2)); \
D0 = _mm256_permute4x64_epi64(D0, _MM_SHUFFLE(2, 1, 0, 3)); \
\
B1 = _mm256_permute4x64_epi64(B1, _MM_SHUFFLE(0, 3, 2, 1)); \
C1 = _mm256_permute4x64_epi64(C1, _MM_SHUFFLE(1, 0, 3, 2)); \
D1 = _mm256_permute4x64_epi64(D1, _MM_SHUFFLE(2, 1, 0, 3)); \
} while((void)0, 0);
#define DIAGONALIZE_2(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
__m256i tmp1 = _mm256_blend_epi32(B0, B1, 0xCC); \
__m256i tmp2 = _mm256_blend_epi32(B0, B1, 0x33); \
B1 = _mm256_permute4x64_epi64(tmp1, _MM_SHUFFLE(2,3,0,1)); \
B0 = _mm256_permute4x64_epi64(tmp2, _MM_SHUFFLE(2,3,0,1)); \
\
tmp1 = C0; \
C0 = C1; \
C1 = tmp1; \
\
tmp1 = _mm256_blend_epi32(D0, D1, 0xCC); \
tmp2 = _mm256_blend_epi32(D0, D1, 0x33); \
D0 = _mm256_permute4x64_epi64(tmp1, _MM_SHUFFLE(2,3,0,1)); \
D1 = _mm256_permute4x64_epi64(tmp2, _MM_SHUFFLE(2,3,0,1)); \
} while(0);
#define UNDIAGONALIZE_1(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
B0 = _mm256_permute4x64_epi64(B0, _MM_SHUFFLE(2, 1, 0, 3)); \
C0 = _mm256_permute4x64_epi64(C0, _MM_SHUFFLE(1, 0, 3, 2)); \
D0 = _mm256_permute4x64_epi64(D0, _MM_SHUFFLE(0, 3, 2, 1)); \
\
B1 = _mm256_permute4x64_epi64(B1, _MM_SHUFFLE(2, 1, 0, 3)); \
C1 = _mm256_permute4x64_epi64(C1, _MM_SHUFFLE(1, 0, 3, 2)); \
D1 = _mm256_permute4x64_epi64(D1, _MM_SHUFFLE(0, 3, 2, 1)); \
} while((void)0, 0);
#define UNDIAGONALIZE_2(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
__m256i tmp1 = _mm256_blend_epi32(B0, B1, 0xCC); \
__m256i tmp2 = _mm256_blend_epi32(B0, B1, 0x33); \
B0 = _mm256_permute4x64_epi64(tmp1, _MM_SHUFFLE(2,3,0,1)); \
B1 = _mm256_permute4x64_epi64(tmp2, _MM_SHUFFLE(2,3,0,1)); \
\
tmp1 = C0; \
C0 = C1; \
C1 = tmp1; \
\
tmp1 = _mm256_blend_epi32(D0, D1, 0x33); \
tmp2 = _mm256_blend_epi32(D0, D1, 0xCC); \
D0 = _mm256_permute4x64_epi64(tmp1, _MM_SHUFFLE(2,3,0,1)); \
D1 = _mm256_permute4x64_epi64(tmp2, _MM_SHUFFLE(2,3,0,1)); \
} while((void)0, 0);
#define BLAKE2_ROUND_1(A0, A1, B0, B1, C0, C1, D0, D1) \
do{ \
G1_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
G2_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
\
DIAGONALIZE_1(A0, B0, C0, D0, A1, B1, C1, D1) \
\
G1_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
G2_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
\
UNDIAGONALIZE_1(A0, B0, C0, D0, A1, B1, C1, D1) \
} while((void)0, 0);
#define BLAKE2_ROUND_2(A0, A1, B0, B1, C0, C1, D0, D1) \
do{ \
G1_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
G2_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
\
DIAGONALIZE_2(A0, A1, B0, B1, C0, C1, D0, D1) \
\
G1_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
G2_AVX2(A0, A1, B0, B1, C0, C1, D0, D1) \
\
UNDIAGONALIZE_2(A0, A1, B0, B1, C0, C1, D0, D1) \
} while((void)0, 0);
#endif /* __AVX2__ */
#else /* __AVX512F__ */
#include <immintrin.h>
#define ror64(x, n) _mm512_ror_epi64((x), (n))
static __m512i muladd(__m512i x, __m512i y)
{
__m512i z = _mm512_mul_epu32(x, y);
return _mm512_add_epi64(_mm512_add_epi64(x, y), _mm512_add_epi64(z, z));
}
#define G1(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
A0 = muladd(A0, B0); \
A1 = muladd(A1, B1); \
\
D0 = _mm512_xor_si512(D0, A0); \
D1 = _mm512_xor_si512(D1, A1); \
\
D0 = ror64(D0, 32); \
D1 = ror64(D1, 32); \
\
C0 = muladd(C0, D0); \
C1 = muladd(C1, D1); \
\
B0 = _mm512_xor_si512(B0, C0); \
B1 = _mm512_xor_si512(B1, C1); \
\
B0 = ror64(B0, 24); \
B1 = ror64(B1, 24); \
} while ((void)0, 0)
#define G2(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
A0 = muladd(A0, B0); \
A1 = muladd(A1, B1); \
\
D0 = _mm512_xor_si512(D0, A0); \
D1 = _mm512_xor_si512(D1, A1); \
\
D0 = ror64(D0, 16); \
D1 = ror64(D1, 16); \
\
C0 = muladd(C0, D0); \
C1 = muladd(C1, D1); \
\
B0 = _mm512_xor_si512(B0, C0); \
B1 = _mm512_xor_si512(B1, C1); \
\
B0 = ror64(B0, 63); \
B1 = ror64(B1, 63); \
} while ((void)0, 0)
#define DIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
B0 = _mm512_permutex_epi64(B0, _MM_SHUFFLE(0, 3, 2, 1)); \
B1 = _mm512_permutex_epi64(B1, _MM_SHUFFLE(0, 3, 2, 1)); \
\
C0 = _mm512_permutex_epi64(C0, _MM_SHUFFLE(1, 0, 3, 2)); \
C1 = _mm512_permutex_epi64(C1, _MM_SHUFFLE(1, 0, 3, 2)); \
\
D0 = _mm512_permutex_epi64(D0, _MM_SHUFFLE(2, 1, 0, 3)); \
D1 = _mm512_permutex_epi64(D1, _MM_SHUFFLE(2, 1, 0, 3)); \
} while ((void)0, 0)
#define UNDIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
B0 = _mm512_permutex_epi64(B0, _MM_SHUFFLE(2, 1, 0, 3)); \
B1 = _mm512_permutex_epi64(B1, _MM_SHUFFLE(2, 1, 0, 3)); \
\
C0 = _mm512_permutex_epi64(C0, _MM_SHUFFLE(1, 0, 3, 2)); \
C1 = _mm512_permutex_epi64(C1, _MM_SHUFFLE(1, 0, 3, 2)); \
\
D0 = _mm512_permutex_epi64(D0, _MM_SHUFFLE(0, 3, 2, 1)); \
D1 = _mm512_permutex_epi64(D1, _MM_SHUFFLE(0, 3, 2, 1)); \
} while ((void)0, 0)
#define BLAKE2_ROUND(A0, B0, C0, D0, A1, B1, C1, D1) \
do { \
G1(A0, B0, C0, D0, A1, B1, C1, D1); \
G2(A0, B0, C0, D0, A1, B1, C1, D1); \
\
DIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1); \
\
G1(A0, B0, C0, D0, A1, B1, C1, D1); \
G2(A0, B0, C0, D0, A1, B1, C1, D1); \
\
UNDIAGONALIZE(A0, B0, C0, D0, A1, B1, C1, D1); \
} while ((void)0, 0)
#define SWAP_HALVES(A0, A1) \
do { \
__m512i t0, t1; \
t0 = _mm512_shuffle_i64x2(A0, A1, _MM_SHUFFLE(1, 0, 1, 0)); \
t1 = _mm512_shuffle_i64x2(A0, A1, _MM_SHUFFLE(3, 2, 3, 2)); \
A0 = t0; \
A1 = t1; \
} while((void)0, 0)
#define SWAP_QUARTERS(A0, A1) \
do { \
SWAP_HALVES(A0, A1); \
A0 = _mm512_permutexvar_epi64(_mm512_setr_epi64(0, 1, 4, 5, 2, 3, 6, 7), A0); \
A1 = _mm512_permutexvar_epi64(_mm512_setr_epi64(0, 1, 4, 5, 2, 3, 6, 7), A1); \
} while((void)0, 0)
#define UNSWAP_QUARTERS(A0, A1) \
do { \
A0 = _mm512_permutexvar_epi64(_mm512_setr_epi64(0, 1, 4, 5, 2, 3, 6, 7), A0); \
A1 = _mm512_permutexvar_epi64(_mm512_setr_epi64(0, 1, 4, 5, 2, 3, 6, 7), A1); \
SWAP_HALVES(A0, A1); \
} while((void)0, 0)
#define BLAKE2_ROUND_1(A0, C0, B0, D0, A1, C1, B1, D1) \
do { \
SWAP_HALVES(A0, B0); \
SWAP_HALVES(C0, D0); \
SWAP_HALVES(A1, B1); \
SWAP_HALVES(C1, D1); \
BLAKE2_ROUND(A0, B0, C0, D0, A1, B1, C1, D1); \
SWAP_HALVES(A0, B0); \
SWAP_HALVES(C0, D0); \
SWAP_HALVES(A1, B1); \
SWAP_HALVES(C1, D1); \
} while ((void)0, 0)
#define BLAKE2_ROUND_2(A0, A1, B0, B1, C0, C1, D0, D1) \
do { \
SWAP_QUARTERS(A0, A1); \
SWAP_QUARTERS(B0, B1); \
SWAP_QUARTERS(C0, C1); \
SWAP_QUARTERS(D0, D1); \
BLAKE2_ROUND(A0, B0, C0, D0, A1, B1, C1, D1); \
UNSWAP_QUARTERS(A0, A1); \
UNSWAP_QUARTERS(B0, B1); \
UNSWAP_QUARTERS(C0, C1); \
UNSWAP_QUARTERS(D0, D1); \
} while ((void)0, 0)
#endif /* __AVX512F__ */
#endif /* BLAKE_ROUND_MKA_OPT_H */


@@ -1,56 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef BLAKE_ROUND_MKA_H
#define BLAKE_ROUND_MKA_H
#include "blake2.h"
#include "blake2-impl.h"
/* designed by the Lyra PHC team */
static BLAKE2_INLINE uint64_t fBlaMka(uint64_t x, uint64_t y) {
const uint64_t m = UINT64_C(0xFFFFFFFF);
const uint64_t xy = (x & m) * (y & m);
return x + y + 2 * xy;
}
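A quick sanity check of the BlaMka primitive above (its SSE/AVX counterparts in blamka-round-opt.h compute the same thing per 64-bit lane): the result is x + y plus twice the product of the two low 32-bit halves. The spot-check function below is hypothetical and not part of the original header.
/* Hypothetical spot check, not from the original header. */
static int fBlaMka_spot_check(void) {
    /* low halves: 2 * 3 = 6, doubled = 12, plus x + y = 5 -> 17 */
    int ok = (fBlaMka(2, 3) == 17);
    /* low halves are zero here, so only x + y survives */
    ok = ok && (fBlaMka(1ULL << 32, 1ULL << 32) == (2ULL << 32));
    return ok;
}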
#define G(a, b, c, d) \
do { \
a = fBlaMka(a, b); \
d = rotr64(d ^ a, 32); \
c = fBlaMka(c, d); \
b = rotr64(b ^ c, 24); \
a = fBlaMka(a, b); \
d = rotr64(d ^ a, 16); \
c = fBlaMka(c, d); \
b = rotr64(b ^ c, 63); \
} while ((void)0, 0)
#define BLAKE2_ROUND_NOMSG(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, \
v12, v13, v14, v15) \
do { \
G(v0, v4, v8, v12); \
G(v1, v5, v9, v13); \
G(v2, v6, v10, v14); \
G(v3, v7, v11, v15); \
G(v0, v5, v10, v15); \
G(v1, v6, v11, v12); \
G(v2, v7, v8, v13); \
G(v3, v4, v9, v14); \
} while ((void)0, 0)
#endif


@@ -1,638 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
/*For memory wiping*/
#ifdef _MSC_VER
#include <windows.h>
#include <winbase.h> /* For SecureZeroMemory */
#endif
#if defined __STDC_LIB_EXT1__
#define __STDC_WANT_LIB_EXT1__ 1
#endif
#define VC_GE_2005(version) (version >= 1400)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "core.h"
#include "thread.h"
#include "blake2/blake2.h"
#include "blake2/blake2-impl.h"
#ifdef GENKAT
#include "genkat.h"
#endif
#if defined(__clang__)
#if __has_attribute(optnone)
#define NOT_OPTIMIZED __attribute__((optnone))
#endif
#elif defined(__GNUC__)
#define GCC_VERSION \
(__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
#if GCC_VERSION >= 40400
#define NOT_OPTIMIZED __attribute__((optimize("O0")))
#endif
#endif
#ifndef NOT_OPTIMIZED
#define NOT_OPTIMIZED
#endif
/***************Instance and Position constructors**********/
void init_block_value(block *b, uint8_t in) { memset(b->v, in, sizeof(b->v)); }
void copy_block(block *dst, const block *src) {
memcpy(dst->v, src->v, sizeof(uint64_t) * ARGON2_QWORDS_IN_BLOCK);
}
void xor_block(block *dst, const block *src) {
int i;
for (i = 0; i < ARGON2_QWORDS_IN_BLOCK; ++i) {
dst->v[i] ^= src->v[i];
}
}
static void load_block(block *dst, const void *input) {
unsigned i;
for (i = 0; i < ARGON2_QWORDS_IN_BLOCK; ++i) {
dst->v[i] = load64((const uint8_t *)input + i * sizeof(dst->v[i]));
}
}
static void store_block(void *output, const block *src) {
unsigned i;
for (i = 0; i < ARGON2_QWORDS_IN_BLOCK; ++i) {
store64((uint8_t *)output + i * sizeof(src->v[i]), src->v[i]);
}
}
/***************Memory functions*****************/
int allocate_memory(const argon2_context *context, uint8_t **memory,
size_t num, size_t size) {
size_t memory_size = num*size;
if (memory == NULL) {
return ARGON2_MEMORY_ALLOCATION_ERROR;
}
/* 1. Check for multiplication overflow */
if (size != 0 && memory_size / size != num) {
return ARGON2_MEMORY_ALLOCATION_ERROR;
}
/* 2. Try to allocate with appropriate allocator */
if (context->allocate_cbk) {
(context->allocate_cbk)(memory, memory_size);
} else {
*memory = malloc(memory_size);
}
if (*memory == NULL) {
return ARGON2_MEMORY_ALLOCATION_ERROR;
}
return ARGON2_OK;
}
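The multiplication check above matters because num * size wraps silently for size_t operands, so a huge element count could otherwise turn into a tiny allocation. A hypothetical illustration of the wrap the guard catches (names not taken from the original file; assumes SIZE_MAX is available via <stdint.h>):
/* Hypothetical illustration, not from the original file. */
static int would_overflow(size_t num, size_t size) {
    size_t total = num * size;                 /* may wrap around */
    return size != 0 && total / size != num;   /* same test as allocate_memory() */
}
/* e.g. would_overflow(SIZE_MAX / 1024 + 1, 1024) is non-zero: the wrapped
 * product is tiny, so a naive malloc(num * size) would appear to succeed. */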
void free_memory(const argon2_context *context, uint8_t *memory,
size_t num, size_t size) {
size_t memory_size = num*size;
clear_internal_memory(memory, memory_size);
if (context->free_cbk) {
(context->free_cbk)(memory, memory_size);
} else {
free(memory);
}
}
void NOT_OPTIMIZED secure_wipe_memory(void *v, size_t n) {
#if defined(_MSC_VER) && VC_GE_2005(_MSC_VER)
SecureZeroMemory(v, n);
#elif defined memset_s
memset_s(v, n, 0, n);
#elif defined(__OpenBSD__)
explicit_bzero(v, n);
#else
static void *(*const volatile memset_sec)(void *, int, size_t) = &memset;
memset_sec(v, 0, n);
#endif
}
/* Memory clear flag defaults to true. */
int FLAG_clear_internal_memory = 1;
void clear_internal_memory(void *v, size_t n) {
if (FLAG_clear_internal_memory && v) {
secure_wipe_memory(v, n);
}
}
void finalize(const argon2_context *context, argon2_instance_t *instance) {
if (context != NULL && instance != NULL) {
block blockhash;
uint32_t l;
copy_block(&blockhash, instance->memory + instance->lane_length - 1);
/* XOR the last blocks */
for (l = 1; l < instance->lanes; ++l) {
uint32_t last_block_in_lane =
l * instance->lane_length + (instance->lane_length - 1);
xor_block(&blockhash, instance->memory + last_block_in_lane);
}
/* Hash the result */
{
uint8_t blockhash_bytes[ARGON2_BLOCK_SIZE];
store_block(blockhash_bytes, &blockhash);
blake2b_long(context->out, context->outlen, blockhash_bytes,
ARGON2_BLOCK_SIZE);
/* clear blockhash and blockhash_bytes */
clear_internal_memory(blockhash.v, ARGON2_BLOCK_SIZE);
clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
}
#ifdef GENKAT
print_tag(context->out, context->outlen);
#endif
free_memory(context, (uint8_t *)instance->memory,
instance->memory_blocks, sizeof(block));
}
}
uint32_t index_alpha(const argon2_instance_t *instance,
const argon2_position_t *position, uint32_t pseudo_rand,
int same_lane) {
/*
* Pass 0:
* This lane : all already finished segments plus already constructed
* blocks in this segment
* Other lanes : all already finished segments
* Pass 1+:
* This lane : (SYNC_POINTS - 1) last segments plus already constructed
* blocks in this segment
* Other lanes : (SYNC_POINTS - 1) last segments
*/
uint32_t reference_area_size;
uint64_t relative_position;
uint32_t start_position, absolute_position;
if (0 == position->pass) {
/* First pass */
if (0 == position->slice) {
/* First slice */
reference_area_size =
position->index - 1; /* all but the previous */
} else {
if (same_lane) {
/* The same lane => add current segment */
reference_area_size =
position->slice * instance->segment_length +
position->index - 1;
} else {
reference_area_size =
position->slice * instance->segment_length +
((position->index == 0) ? (-1) : 0);
}
}
} else {
/* Second pass */
if (same_lane) {
reference_area_size = instance->lane_length -
instance->segment_length + position->index -
1;
} else {
reference_area_size = instance->lane_length -
instance->segment_length +
((position->index == 0) ? (-1) : 0);
}
}
/* 1.2.4. Mapping pseudo_rand to 0..<reference_area_size-1> and producing the
* relative position */
relative_position = pseudo_rand;
relative_position = relative_position * relative_position >> 32;
relative_position = reference_area_size - 1 -
(reference_area_size * relative_position >> 32);
/* 1.2.5 Computing starting position */
start_position = 0;
if (0 != position->pass) {
start_position = (position->slice == ARGON2_SYNC_POINTS - 1)
? 0
: (position->slice + 1) * instance->segment_length;
}
/* 1.2.6. Computing absolute position */
absolute_position = (start_position + relative_position) %
instance->lane_length; /* absolute position */
return absolute_position;
}
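The mapping in step 1.2.4 squares the 32-bit pseudo-random value and rescales it, which skews references toward the most recently written blocks. A hypothetical standalone version of just that arithmetic (not part of the original file), with a few worked values:
/* Hypothetical illustration, not from the original file: the quadratic skew
 * of step 1.2.4, isolated from the lane/segment bookkeeping. */
static uint32_t map_alpha(uint32_t reference_area_size, uint32_t j1) {
    uint64_t x = ((uint64_t)j1 * j1) >> 32;                        /* j1^2 / 2^32 */
    return reference_area_size - 1 -
           (uint32_t)(((uint64_t)reference_area_size * x) >> 32);  /* area * x / 2^32 */
}
/* With reference_area_size == 1000: j1 == 0 maps to 999 (the newest block),
 * j1 == UINT32_MAX maps to 0 (the oldest), and j1 == 2^31 maps to 749, i.e.
 * half of the j1 range lands in the newest quarter of the area. */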
/* Single-threaded version for p=1 case */
static int fill_memory_blocks_st(argon2_instance_t *instance) {
uint32_t r, s, l;
for (r = 0; r < instance->passes; ++r) {
for (s = 0; s < ARGON2_SYNC_POINTS; ++s) {
for (l = 0; l < instance->lanes; ++l) {
argon2_position_t position = {r, l, (uint8_t)s, 0};
fill_segment(instance, position);
}
}
#ifdef GENKAT
internal_kat(instance, r); /* Print all memory blocks */
#endif
}
return ARGON2_OK;
}
#if !defined(ARGON2_NO_THREADS)
#ifdef _WIN32
static unsigned __stdcall fill_segment_thr(void *thread_data)
#else
static void *fill_segment_thr(void *thread_data)
#endif
{
argon2_thread_data *my_data = thread_data;
fill_segment(my_data->instance_ptr, my_data->pos);
argon2_thread_exit();
return 0;
}
/* Multi-threaded version for p > 1 case */
static int fill_memory_blocks_mt(argon2_instance_t *instance) {
uint32_t r, s;
argon2_thread_handle_t *thread = NULL;
argon2_thread_data *thr_data = NULL;
int rc = ARGON2_OK;
/* 1. Allocating space for threads */
thread = calloc(instance->lanes, sizeof(argon2_thread_handle_t));
if (thread == NULL) {
rc = ARGON2_MEMORY_ALLOCATION_ERROR;
goto fail;
}
thr_data = calloc(instance->lanes, sizeof(argon2_thread_data));
if (thr_data == NULL) {
rc = ARGON2_MEMORY_ALLOCATION_ERROR;
goto fail;
}
for (r = 0; r < instance->passes; ++r) {
for (s = 0; s < ARGON2_SYNC_POINTS; ++s) {
uint32_t l;
/* 2. Calling threads */
for (l = 0; l < instance->lanes; ++l) {
argon2_position_t position;
/* 2.1 Join a thread if limit is exceeded */
if (l >= instance->threads) {
if (argon2_thread_join(thread[l - instance->threads])) {
rc = ARGON2_THREAD_FAIL;
goto fail;
}
}
/* 2.2 Create thread */
position.pass = r;
position.lane = l;
position.slice = (uint8_t)s;
position.index = 0;
thr_data[l].instance_ptr =
instance; /* preparing the thread input */
memcpy(&(thr_data[l].pos), &position,
sizeof(argon2_position_t));
if (argon2_thread_create(&thread[l], &fill_segment_thr,
(void *)&thr_data[l])) {
rc = ARGON2_THREAD_FAIL;
goto fail;
}
/* fill_segment(instance, position); */
/*Non-thread equivalent of the lines above */
}
/* 3. Joining remaining threads */
for (l = instance->lanes - instance->threads; l < instance->lanes;
++l) {
if (argon2_thread_join(thread[l])) {
rc = ARGON2_THREAD_FAIL;
goto fail;
}
}
}
#ifdef GENKAT
internal_kat(instance, r); /* Print all memory blocks */
#endif
}
fail:
if (thread != NULL) {
free(thread);
}
if (thr_data != NULL) {
free(thr_data);
}
return rc;
}
#endif /* ARGON2_NO_THREADS */
int fill_memory_blocks(argon2_instance_t *instance) {
if (instance == NULL || instance->lanes == 0) {
return ARGON2_INCORRECT_PARAMETER;
}
#if defined(ARGON2_NO_THREADS)
return fill_memory_blocks_st(instance);
#else
return instance->threads == 1 ?
fill_memory_blocks_st(instance) : fill_memory_blocks_mt(instance);
#endif
}
int validate_inputs(const argon2_context *context) {
if (NULL == context) {
return ARGON2_INCORRECT_PARAMETER;
}
if (NULL == context->out) {
return ARGON2_OUTPUT_PTR_NULL;
}
/* Validate output length */
if (ARGON2_MIN_OUTLEN > context->outlen) {
return ARGON2_OUTPUT_TOO_SHORT;
}
if (ARGON2_MAX_OUTLEN < context->outlen) {
return ARGON2_OUTPUT_TOO_LONG;
}
/* Validate password (required param) */
if (NULL == context->pwd) {
if (0 != context->pwdlen) {
return ARGON2_PWD_PTR_MISMATCH;
}
}
#if ARGON2_MIN_PWD_LENGTH > 0 /* cryptsetup: fix gcc warning */
if (ARGON2_MIN_PWD_LENGTH > context->pwdlen) {
return ARGON2_PWD_TOO_SHORT;
}
#endif
if (ARGON2_MAX_PWD_LENGTH < context->pwdlen) {
return ARGON2_PWD_TOO_LONG;
}
/* Validate salt (required param) */
if (NULL == context->salt) {
if (0 != context->saltlen) {
return ARGON2_SALT_PTR_MISMATCH;
}
}
if (ARGON2_MIN_SALT_LENGTH > context->saltlen) {
return ARGON2_SALT_TOO_SHORT;
}
if (ARGON2_MAX_SALT_LENGTH < context->saltlen) {
return ARGON2_SALT_TOO_LONG;
}
/* Validate secret (optional param) */
if (NULL == context->secret) {
if (0 != context->secretlen) {
return ARGON2_SECRET_PTR_MISMATCH;
}
} else {
#if ARGON2_MIN_SECRET > 0 /* cryptsetup: fix gcc warning */
if (ARGON2_MIN_SECRET > context->secretlen) {
return ARGON2_SECRET_TOO_SHORT;
}
#endif
if (ARGON2_MAX_SECRET < context->secretlen) {
return ARGON2_SECRET_TOO_LONG;
}
}
/* Validate associated data (optional param) */
if (NULL == context->ad) {
if (0 != context->adlen) {
return ARGON2_AD_PTR_MISMATCH;
}
} else {
#if ARGON2_MIN_AD_LENGTH > 0 /* cryptsetup: fix gcc warning */
if (ARGON2_MIN_AD_LENGTH > context->adlen) {
return ARGON2_AD_TOO_SHORT;
}
#endif
if (ARGON2_MAX_AD_LENGTH < context->adlen) {
return ARGON2_AD_TOO_LONG;
}
}
/* Validate memory cost */
if (ARGON2_MIN_MEMORY > context->m_cost) {
return ARGON2_MEMORY_TOO_LITTLE;
}
#if 0 /* UINT32_MAX, cryptsetup: fix gcc warning */
if (ARGON2_MAX_MEMORY < context->m_cost) {
return ARGON2_MEMORY_TOO_MUCH;
}
#endif
if (context->m_cost < 8 * context->lanes) {
return ARGON2_MEMORY_TOO_LITTLE;
}
/* Validate time cost */
if (ARGON2_MIN_TIME > context->t_cost) {
return ARGON2_TIME_TOO_SMALL;
}
if (ARGON2_MAX_TIME < context->t_cost) {
return ARGON2_TIME_TOO_LARGE;
}
/* Validate lanes */
if (ARGON2_MIN_LANES > context->lanes) {
return ARGON2_LANES_TOO_FEW;
}
if (ARGON2_MAX_LANES < context->lanes) {
return ARGON2_LANES_TOO_MANY;
}
/* Validate threads */
if (ARGON2_MIN_THREADS > context->threads) {
return ARGON2_THREADS_TOO_FEW;
}
if (ARGON2_MAX_THREADS < context->threads) {
return ARGON2_THREADS_TOO_MANY;
}
if (NULL != context->allocate_cbk && NULL == context->free_cbk) {
return ARGON2_FREE_MEMORY_CBK_NULL;
}
if (NULL == context->allocate_cbk && NULL != context->free_cbk) {
return ARGON2_ALLOCATE_MEMORY_CBK_NULL;
}
return ARGON2_OK;
}
void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance) {
uint32_t l;
/* Compute the first and second block in each lane as G(H0||0||i) and
G(H0||1||i) respectively */
uint8_t blockhash_bytes[ARGON2_BLOCK_SIZE];
for (l = 0; l < instance->lanes; ++l) {
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH, 0);
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH + 4, l);
blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash,
ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 0],
blockhash_bytes);
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH, 1);
blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash,
ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 1],
blockhash_bytes);
}
clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
}
void initial_hash(uint8_t *blockhash, argon2_context *context,
argon2_type type) {
blake2b_state BlakeHash;
uint8_t value[sizeof(uint32_t)];
if (NULL == context || NULL == blockhash) {
return;
}
blake2b_init(&BlakeHash, ARGON2_PREHASH_DIGEST_LENGTH);
store32(&value, context->lanes);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->outlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->m_cost);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->t_cost);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->version);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, (uint32_t)type);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->pwdlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->pwd != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->pwd,
context->pwdlen);
if (context->flags & ARGON2_FLAG_CLEAR_PASSWORD) {
secure_wipe_memory(context->pwd, context->pwdlen);
context->pwdlen = 0;
}
}
store32(&value, context->saltlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->salt != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->salt,
context->saltlen);
}
store32(&value, context->secretlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->secret != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->secret,
context->secretlen);
if (context->flags & ARGON2_FLAG_CLEAR_SECRET) {
secure_wipe_memory(context->secret, context->secretlen);
context->secretlen = 0;
}
}
store32(&value, context->adlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->ad != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->ad,
context->adlen);
}
blake2b_final(&BlakeHash, blockhash, ARGON2_PREHASH_DIGEST_LENGTH);
}
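Reading the sequence of blake2b_update calls above, the pre-hashing digest is a 64-byte (ARGON2_PREHASH_DIGEST_LENGTH) BLAKE2b over the parameters and length-prefixed inputs in a fixed order, roughly:
H0 = BLAKE2b-512( LE32(lanes) || LE32(outlen) || LE32(m_cost) || LE32(t_cost) ||
                  LE32(version) || LE32(type) || LE32(pwdlen) || pwd ||
                  LE32(saltlen) || salt || LE32(secretlen) || secret ||
                  LE32(adlen) || ad )
where each optional buffer (pwd, salt, secret, ad) is hashed only when non-NULL, while its length field is always included.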
int initialize(argon2_instance_t *instance, argon2_context *context) {
uint8_t blockhash[ARGON2_PREHASH_SEED_LENGTH];
int result = ARGON2_OK;
if (instance == NULL || context == NULL)
return ARGON2_INCORRECT_PARAMETER;
instance->context_ptr = context;
/* 1. Memory allocation */
result = allocate_memory(context, (uint8_t **)&(instance->memory),
instance->memory_blocks, sizeof(block));
if (result != ARGON2_OK) {
return result;
}
/* 2. Initial hashing */
/* H_0 + 8 extra bytes to produce the first blocks */
/* uint8_t blockhash[ARGON2_PREHASH_SEED_LENGTH]; */
/* Hashing all inputs */
initial_hash(blockhash, context, instance->type);
/* Zeroing 8 extra bytes */
clear_internal_memory(blockhash + ARGON2_PREHASH_DIGEST_LENGTH,
ARGON2_PREHASH_SEED_LENGTH -
ARGON2_PREHASH_DIGEST_LENGTH);
#ifdef GENKAT
initial_kat(blockhash, context, instance->type);
#endif
/* 3. Creating first blocks, we always have at least two blocks in a slice
*/
fill_first_blocks(blockhash, instance);
/* Clearing the hash */
clear_internal_memory(blockhash, ARGON2_PREHASH_SEED_LENGTH);
return ARGON2_OK;
}


@@ -1,228 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef ARGON2_CORE_H
#define ARGON2_CORE_H
#include "argon2.h"
#define CONST_CAST(x) (x)(uintptr_t)
/**********************Argon2 internal constants*******************************/
enum argon2_core_constants {
/* Memory block size in bytes */
ARGON2_BLOCK_SIZE = 1024,
ARGON2_QWORDS_IN_BLOCK = ARGON2_BLOCK_SIZE / 8,
ARGON2_OWORDS_IN_BLOCK = ARGON2_BLOCK_SIZE / 16,
ARGON2_HWORDS_IN_BLOCK = ARGON2_BLOCK_SIZE / 32,
ARGON2_512BIT_WORDS_IN_BLOCK = ARGON2_BLOCK_SIZE / 64,
/* Number of pseudo-random values generated by one call to Blake in Argon2i
to
generate reference block positions */
ARGON2_ADDRESSES_IN_BLOCK = 128,
/* Pre-hashing digest length and its extension*/
ARGON2_PREHASH_DIGEST_LENGTH = 64,
ARGON2_PREHASH_SEED_LENGTH = 72
};
/*************************Argon2 internal data types***********************/
/*
* Structure for the (1KB) memory block implemented as 128 64-bit words.
* Memory blocks can be copied, XORed. Internal words can be accessed by [] (no
* bounds checking).
*/
typedef struct block_ { uint64_t v[ARGON2_QWORDS_IN_BLOCK]; } block;
/*****************Functions that work with the block******************/
/* Initialize each byte of the block with @in */
void init_block_value(block *b, uint8_t in);
/* Copy block @src to block @dst */
void copy_block(block *dst, const block *src);
/* XOR @src onto @dst bytewise */
void xor_block(block *dst, const block *src);
/*
* Argon2 instance: memory pointer, number of passes, amount of memory, type,
* and derived values.
* Used to evaluate the number and location of blocks to construct in each
* thread
*/
typedef struct Argon2_instance_t {
block *memory; /* Memory pointer */
uint32_t version;
uint32_t passes; /* Number of passes */
uint32_t memory_blocks; /* Number of blocks in memory */
uint32_t segment_length;
uint32_t lane_length;
uint32_t lanes;
uint32_t threads;
argon2_type type;
int print_internals; /* whether to print the memory blocks */
argon2_context *context_ptr; /* points back to original context */
} argon2_instance_t;
/*
* Argon2 position: where we construct the block right now. Used to distribute
* work between threads.
*/
typedef struct Argon2_position_t {
uint32_t pass;
uint32_t lane;
uint8_t slice;
uint32_t index;
} argon2_position_t;
/*Struct that holds the inputs for thread handling FillSegment*/
typedef struct Argon2_thread_data {
argon2_instance_t *instance_ptr;
argon2_position_t pos;
} argon2_thread_data;
/*************************Argon2 core functions********************************/
/* Allocates memory to the given pointer, uses the appropriate allocator as
* specified in the context. Total allocated memory is num*size.
* @param context argon2_context which specifies the allocator
* @param memory pointer to the pointer to the memory
* @param size the size in bytes for each element to be allocated
* @param num the number of elements to be allocated
* @return ARGON2_OK if @memory is a valid pointer and memory is allocated
*/
int allocate_memory(const argon2_context *context, uint8_t **memory,
size_t num, size_t size);
/*
* Frees memory at the given pointer, uses the appropriate deallocator as
* specified in the context. Also cleans the memory using clear_internal_memory.
* @param context argon2_context which specifies the deallocator
* @param memory pointer to buffer to be freed
* @param size the size in bytes for each element to be deallocated
* @param num the number of elements to be deallocated
*/
void free_memory(const argon2_context *context, uint8_t *memory,
size_t num, size_t size);
/* Function that securely cleans the memory. This ignores any flags set
* regarding clearing memory. Usually one just calls clear_internal_memory.
* @param v Pointer to the memory
* @param n Memory size in bytes
*/
void secure_wipe_memory(void *v, size_t n);
/* Function that securely clears the memory if FLAG_clear_internal_memory is
* set. If the flag isn't set, this function does nothing.
* @param v Pointer to the memory
* @param n Memory size in bytes
*/
void clear_internal_memory(void *v, size_t n);
/*
* Computes absolute position of reference block in the lane following a skewed
* distribution and using a pseudo-random value as input
* @param instance Pointer to the current instance
* @param position Pointer to the current position
* @param pseudo_rand 32-bit pseudo-random value used to determine the position
* @param same_lane Indicates if the block will be taken from the current lane.
* If so we can reference the current segment
* @pre All pointers must be valid
*/
uint32_t index_alpha(const argon2_instance_t *instance,
const argon2_position_t *position, uint32_t pseudo_rand,
int same_lane);
/*
* Function that validates all inputs against predefined restrictions and returns
* an error code
* @param context Pointer to current Argon2 context
* @return ARGON2_OK if everything is all right, otherwise one of the error codes
* (all defined in <argon2.h>)
*/
int validate_inputs(const argon2_context *context);
/*
* Hashes all the inputs into @a blockhash[PREHASH_DIGEST_LENGTH], clears
* password and secret if needed
* @param context Pointer to the Argon2 internal structure containing memory
* pointer, and parameters for time and space requirements.
* @param blockhash Buffer for pre-hashing digest
* @param type Argon2 type
* @pre @a blockhash must have at least @a PREHASH_DIGEST_LENGTH bytes
* allocated
*/
void initial_hash(uint8_t *blockhash, argon2_context *context,
argon2_type type);
/*
* Function creates first 2 blocks per lane
* @param instance Pointer to the current instance
* @param blockhash Pointer to the pre-hashing digest
* @pre blockhash must point to @a PREHASH_SEED_LENGTH allocated values
*/
void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance);
/*
* Function allocates memory, hashes the inputs with Blake, and creates first
* two blocks. On success the main memory is allocated and the first 2 blocks
* of each lane are initialized
* @param context Pointer to the Argon2 internal structure containing memory
* pointer, and parameters for time and space requirements.
* @param instance Current Argon2 instance
* @return Zero if successful, -1 if memory failed to allocate. @context->state
* will be modified if successful.
*/
int initialize(argon2_instance_t *instance, argon2_context *context);
/*
* XORing the last block of each lane, hashing it, making the tag. Deallocates
* the memory.
* @param context Pointer to current Argon2 context (use only the out parameters
* from it)
* @param instance Pointer to current instance of Argon2
* @pre instance->state must point to necessary amount of memory
* @pre context->out must point to outlen bytes of memory
* @pre if context->free_cbk is not NULL, it should point to a function that
* deallocates memory
*/
void finalize(const argon2_context *context, argon2_instance_t *instance);
/*
* Function that fills the segment using previous segments also from other
* threads
* @param context current context
* @param instance Pointer to the current instance
* @param position Current position
* @pre all block pointers must be valid
*/
void fill_segment(const argon2_instance_t *instance,
argon2_position_t position);
/*
* Function that fills the entire memory t_cost times based on the first two
* blocks in each lane
* @param instance Pointer to the current instance
* @return ARGON2_OK if successful, an error code otherwise
*/
int fill_memory_blocks(argon2_instance_t *instance);
#endif


@@ -1,462 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include "encoding.h"
#include "core.h"
/*
* Example code for a decoder and encoder of "hash strings", with Argon2
* parameters.
*
* This code comprises two sections:
*
* -- The first section contains generic Base64 encoding and decoding
* functions. It is conceptually applicable to any hash function
* implementation that uses Base64 to encode and decode parameters,
* salts and outputs. It could be made into a library, provided that
* the relevant functions are made public (non-static) and given
* reasonable names to avoid collisions with other functions.
*
* -- The second section is specific to Argon2. It encodes and decodes
* the parameters, salts and outputs. It does not compute the hash
* itself.
*
* The code was originally written by Thomas Pornin <pornin@bolet.org>,
* to whom comments and remarks may be sent. It is released under what
* should amount to Public Domain or its closest equivalent; the
* following mantra is supposed to incarnate that fact with all the
* proper legal rituals:
*
* ---------------------------------------------------------------------
* This file is provided under the terms of Creative Commons CC0 1.0
* Public Domain Dedication. To the extent possible under law, the
* author (Thomas Pornin) has waived all copyright and related or
* neighboring rights to this file. This work is published from: Canada.
* ---------------------------------------------------------------------
*
* Copyright (c) 2015 Thomas Pornin
*/
/* ==================================================================== */
/*
* Common code; could be shared between different hash functions.
*
* Note: the Base64 functions below assume that uppercase letters (resp.
* lowercase letters) have consecutive numerical codes, that fit on 8
* bits. All modern systems use ASCII-compatible charsets, where these
* properties are true. If you are stuck with a dinosaur of a system
* that still defaults to EBCDIC then you already have much bigger
* interoperability issues to deal with.
*/
/*
* Some macros for constant-time comparisons. These work over values in
* the 0..255 range. Returned value is 0x00 on "false", 0xFF on "true".
*/
#define EQ(x, y) ((((0U - ((unsigned)(x) ^ (unsigned)(y))) >> 8) & 0xFF) ^ 0xFF)
#define GT(x, y) ((((unsigned)(y) - (unsigned)(x)) >> 8) & 0xFF)
#define GE(x, y) (GT(y, x) ^ 0xFF)
#define LT(x, y) GT(y, x)
#define LE(x, y) GE(y, x)
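These masks work by letting an unsigned subtraction borrow into bit 8 and above: for operands in 0..255, ((y - x) >> 8) & 0xFF is 0xFF exactly when x > y and 0x00 otherwise, with no data-dependent branches. A hypothetical spot check (not part of the original file):
/* Hypothetical spot check, not from the original file: the masks come from
 * the borrow of an unsigned subtraction landing in bit 8 and above. */
static int b64_mask_spot_check(void) {
    int ok = (EQ('A', 'A') == 0xFF) && (EQ('A', 'B') == 0x00);
    ok = ok && (GT(5, 3) == 0xFF) && (GT(3, 5) == 0x00) && (GT(3, 3) == 0x00);
    /* GE/LT/LE are derived from GT, e.g. GE(x, y) == GT(y, x) ^ 0xFF. */
    return ok && (GE(3, 3) == 0xFF) && (LT(3, 5) == 0xFF) && (LE(5, 3) == 0x00);
}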
/*
* Convert value x (0..63) to corresponding Base64 character.
*/
static int b64_byte_to_char(unsigned x) {
return (LT(x, 26) & (x + 'A')) |
(GE(x, 26) & LT(x, 52) & (x + ('a' - 26))) |
(GE(x, 52) & LT(x, 62) & (x + ('0' - 52))) | (EQ(x, 62) & '+') |
(EQ(x, 63) & '/');
}
/*
* Convert character c to the corresponding 6-bit value. If character c
* is not a Base64 character, then 0xFF (255) is returned.
*/
static unsigned b64_char_to_byte(int c) {
unsigned x;
x = (GE(c, 'A') & LE(c, 'Z') & (c - 'A')) |
(GE(c, 'a') & LE(c, 'z') & (c - ('a' - 26))) |
(GE(c, '0') & LE(c, '9') & (c - ('0' - 52))) | (EQ(c, '+') & 62) |
(EQ(c, '/') & 63);
return x | (EQ(x, 0) & (EQ(c, 'A') ^ 0xFF));
}
/*
* Convert some bytes to Base64. 'dst_len' is the length (in characters)
* of the output buffer 'dst'; if that buffer is not large enough to
* receive the result (including the terminating 0), then (size_t)-1
* is returned. Otherwise, the zero-terminated Base64 string is written
* in the buffer, and the output length (counted WITHOUT the terminating
* zero) is returned.
*/
static size_t to_base64(char *dst, size_t dst_len, const void *src,
size_t src_len) {
size_t olen;
const unsigned char *buf;
unsigned acc, acc_len;
olen = (src_len / 3) << 2;
switch (src_len % 3) {
case 2:
olen++;
/* fall through */
case 1:
olen += 2;
break;
}
if (dst_len <= olen) {
return (size_t)-1;
}
acc = 0;
acc_len = 0;
buf = (const unsigned char *)src;
while (src_len-- > 0) {
acc = (acc << 8) + (*buf++);
acc_len += 8;
while (acc_len >= 6) {
acc_len -= 6;
*dst++ = (char)b64_byte_to_char((acc >> acc_len) & 0x3F);
}
}
if (acc_len > 0) {
*dst++ = (char)b64_byte_to_char((acc << (6 - acc_len)) & 0x3F);
}
*dst++ = 0;
return olen;
}
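The olen computation is simply the unpadded Base64 length ceil(4 * src_len / 3): every full 3-byte group yields 4 characters, a 2-byte remainder adds 3 (the ++ plus the fall-through += 2) and a 1-byte remainder adds 2. A few hypothetical worked values, not taken from the original file:
/* src_len = 3  -> olen = 4
 * src_len = 5  -> olen = 4 + 1 + 2 = 7
 * src_len = 16 -> olen = (5 << 2) + 2 = 22   (a typical salt length)
 * to_base64() additionally requires dst_len > olen for the terminating '\0'. */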
/*
* Decode Base64 chars into bytes. The '*dst_len' value must initially
* contain the length of the output buffer '*dst'; when the decoding
* ends, the actual number of decoded bytes is written back in
* '*dst_len'.
*
* Decoding stops when a non-Base64 character is encountered, or when
* the output buffer capacity is exceeded. If an error occurred (output
* buffer is too small, invalid last characters leading to unprocessed
* buffered bits), then NULL is returned; otherwise, the returned value
* points to the first non-Base64 character in the source stream, which
* may be the terminating zero.
*/
static const char *from_base64(void *dst, size_t *dst_len, const char *src) {
size_t len;
unsigned char *buf;
unsigned acc, acc_len;
buf = (unsigned char *)dst;
len = 0;
acc = 0;
acc_len = 0;
for (;;) {
unsigned d;
d = b64_char_to_byte(*src);
if (d == 0xFF) {
break;
}
src++;
acc = (acc << 6) + d;
acc_len += 6;
if (acc_len >= 8) {
acc_len -= 8;
if ((len++) >= *dst_len) {
return NULL;
}
*buf++ = (acc >> acc_len) & 0xFF;
}
}
/*
* If the input length is equal to 1 modulo 4 (which is
* invalid), then there will remain 6 unprocessed bits;
* otherwise, only 0, 2 or 4 bits are buffered. The buffered
* bits must also all be zero.
*/
if (acc_len > 4 || (acc & (((unsigned)1 << acc_len) - 1)) != 0) {
return NULL;
}
*dst_len = len;
return src;
}
/*
* Decode decimal integer from 'str'; the value is written in '*v'.
* Returned value is a pointer to the next non-decimal character in the
* string. If there is no digit at all, or the value encoding is not
* minimal (extra leading zeros), or the value does not fit in an
* 'unsigned long', then NULL is returned.
*/
static const char *decode_decimal(const char *str, unsigned long *v) {
const char *orig;
unsigned long acc;
acc = 0;
for (orig = str;; str++) {
int c;
c = *str;
if (c < '0' || c > '9') {
break;
}
c -= '0';
if (acc > (ULONG_MAX / 10)) {
return NULL;
}
acc *= 10;
if ((unsigned long)c > (ULONG_MAX - acc)) {
return NULL;
}
acc += (unsigned long)c;
}
if (str == orig || (*orig == '0' && str != (orig + 1))) {
return NULL;
}
*v = acc;
return str;
}
/* ==================================================================== */
/*
* Code specific to Argon2.
*
* The code below applies the following format:
*
* $argon2<T>[$v=<num>]$m=<num>,t=<num>,p=<num>$<bin>$<bin>
*
* where <T> is either 'd', 'id', or 'i', <num> is a decimal integer (positive,
* fits in an 'unsigned long'), and <bin> is Base64-encoded data (no '=' padding
* characters, no newline or whitespace).
*
* The last two binary chunks (encoded in Base64) are, in that order,
* the salt and the output. Both are required. The binary salt length and the
* output length must be in the allowed ranges defined in argon2.h.
*
* The ctx struct must contain buffers large enough to hold the salt and pwd
* when it is fed into decode_string.
*/
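For concreteness, a string in this format looks like the sketch below; the parameter values are hypothetical and the Base64 fields are placeholders, not data from the original file.
/* $argon2id$v=19$m=65536,t=3,p=4$<salt-base64>$<tag-base64>
 *
 * type = argon2id, version 0x13 (decimal 19), m_cost = 65536 (KiB of memory),
 * t_cost = 3 passes, p = 4 lanes, followed by the Base64-encoded salt and
 * output, both without '=' padding. */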
int decode_string(argon2_context *ctx, const char *str, argon2_type type) {
/* check for prefix */
#define CC(prefix) \
do { \
size_t cc_len = strlen(prefix); \
if (strncmp(str, prefix, cc_len) != 0) { \
return ARGON2_DECODING_FAIL; \
} \
str += cc_len; \
} while ((void)0, 0)
/* optional prefix checking with supplied code */
#define CC_opt(prefix, code) \
do { \
size_t cc_len = strlen(prefix); \
if (strncmp(str, prefix, cc_len) == 0) { \
str += cc_len; \
{ code; } \
} \
} while ((void)0, 0)
/* Decoding prefix into decimal */
#define DECIMAL(x) \
do { \
unsigned long dec_x; \
str = decode_decimal(str, &dec_x); \
if (str == NULL) { \
return ARGON2_DECODING_FAIL; \
} \
(x) = dec_x; \
} while ((void)0, 0)
/* Decoding prefix into uint32_t decimal */
#define DECIMAL_U32(x) \
do { \
unsigned long dec_x; \
str = decode_decimal(str, &dec_x); \
if (str == NULL || dec_x > UINT32_MAX) { \
return ARGON2_DECODING_FAIL; \
} \
(x) = (uint32_t)dec_x; \
} while ((void)0, 0)
/* Decoding base64 into a binary buffer */
#define BIN(buf, max_len, len) \
do { \
size_t bin_len = (max_len); \
str = from_base64(buf, &bin_len, str); \
if (str == NULL || bin_len > UINT32_MAX) { \
return ARGON2_DECODING_FAIL; \
} \
(len) = (uint32_t)bin_len; \
} while ((void)0, 0)
size_t maxsaltlen = ctx->saltlen;
size_t maxoutlen = ctx->outlen;
int validation_result;
const char* type_string;
/* We should start with the argon2_type we are using */
type_string = argon2_type2string(type, 0);
if (!type_string) {
return ARGON2_INCORRECT_TYPE;
}
CC("$");
CC(type_string);
/* Reading the version number if the default is suppressed */
ctx->version = ARGON2_VERSION_10;
CC_opt("$v=", DECIMAL_U32(ctx->version));
CC("$m=");
DECIMAL_U32(ctx->m_cost);
CC(",t=");
DECIMAL_U32(ctx->t_cost);
CC(",p=");
DECIMAL_U32(ctx->lanes);
ctx->threads = ctx->lanes;
CC("$");
BIN(ctx->salt, maxsaltlen, ctx->saltlen);
CC("$");
BIN(ctx->out, maxoutlen, ctx->outlen);
/* The rest of the fields get the default values */
ctx->secret = NULL;
ctx->secretlen = 0;
ctx->ad = NULL;
ctx->adlen = 0;
ctx->allocate_cbk = NULL;
ctx->free_cbk = NULL;
ctx->flags = ARGON2_DEFAULT_FLAGS;
/* On return, must have valid context */
validation_result = validate_inputs(ctx);
if (validation_result != ARGON2_OK) {
return validation_result;
}
/* Can't have any additional characters */
if (*str == 0) {
return ARGON2_OK;
} else {
return ARGON2_DECODING_FAIL;
}
#undef CC
#undef CC_opt
#undef DECIMAL
#undef BIN
}
int encode_string(char *dst, size_t dst_len, argon2_context *ctx,
argon2_type type) {
#define SS(str) \
do { \
size_t pp_len = strlen(str); \
if (pp_len >= dst_len) { \
return ARGON2_ENCODING_FAIL; \
} \
memcpy(dst, str, pp_len + 1); \
dst += pp_len; \
dst_len -= pp_len; \
} while ((void)0, 0)
#define SX(x) \
do { \
char tmp[30]; \
sprintf(tmp, "%lu", (unsigned long)(x)); \
SS(tmp); \
} while ((void)0, 0)
#define SB(buf, len) \
do { \
size_t sb_len = to_base64(dst, dst_len, buf, len); \
if (sb_len == (size_t)-1) { \
return ARGON2_ENCODING_FAIL; \
} \
dst += sb_len; \
dst_len -= sb_len; \
} while ((void)0, 0)
const char* type_string = argon2_type2string(type, 0);
int validation_result = validate_inputs(ctx);
if (!type_string) {
return ARGON2_ENCODING_FAIL;
}
if (validation_result != ARGON2_OK) {
return validation_result;
}
SS("$");
SS(type_string);
SS("$v=");
SX(ctx->version);
SS("$m=");
SX(ctx->m_cost);
SS(",t=");
SX(ctx->t_cost);
SS(",p=");
SX(ctx->lanes);
SS("$");
SB(ctx->salt, ctx->saltlen);
SS("$");
SB(ctx->out, ctx->outlen);
return ARGON2_OK;
#undef SS
#undef SX
#undef SB
}
size_t b64len(uint32_t len) {
size_t olen = ((size_t)len / 3) << 2;
switch (len % 3) {
case 2:
olen++;
/* fall through */
case 1:
olen += 2;
break;
}
return olen;
}
size_t numlen(uint32_t num) {
size_t len = 1;
while (num >= 10) {
++len;
num = num / 10;
}
return len;
}


@@ -1,57 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef ENCODING_H
#define ENCODING_H
#include "argon2.h"
#define ARGON2_MAX_DECODED_LANES UINT32_C(255)
#define ARGON2_MIN_DECODED_SALT_LEN UINT32_C(8)
#define ARGON2_MIN_DECODED_OUT_LEN UINT32_C(12)
/*
* encode an Argon2 hash string into the provided buffer. 'dst_len'
* contains the size, in characters, of the 'dst' buffer; if 'dst_len'
* is less than the number of required characters (including the
* terminating 0), then this function returns ARGON2_ENCODING_ERROR.
*
* on success, ARGON2_OK is returned.
*/
int encode_string(char *dst, size_t dst_len, argon2_context *ctx,
argon2_type type);
/*
* Decodes an Argon2 hash string into the provided structure 'ctx'.
* The only fields that must be set prior to this call are ctx.saltlen and
* ctx.outlen (which must be the maximal salt and out length values that are
* allowed), ctx.salt and ctx.out (which must be buffers of the specified
* length), and ctx.pwd and ctx.pwdlen which must hold a valid password.
*
* Invalid input string causes an error. On success, the ctx is valid and all
* fields have been initialized.
*
* Returned value is ARGON2_OK on success, other ARGON2_ codes on error.
*/
int decode_string(argon2_context *ctx, const char *str, argon2_type type);
/* Returns the length of the encoded byte stream with length len */
size_t b64len(uint32_t len);
/* Returns the length of the encoded number num */
size_t numlen(uint32_t num);
#endif


@@ -1,283 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include "argon2.h"
#include "core.h"
#include "blake2/blake2.h"
#include "blake2/blamka-round-opt.h"
/*
* Function fills a new memory block and optionally XORs the old block over the new one.
* Memory must be initialized.
* @param state Pointer to the just produced block. Content will be updated(!)
* @param ref_block Pointer to the reference block
* @param next_block Pointer to the block to be XORed over. May coincide with @ref_block
* @param with_xor Whether to XOR into the new block (1) or just overwrite (0)
* @pre all block pointers must be valid
*/
#if defined(__AVX512F__)
static void fill_block(__m512i *state, const block *ref_block,
block *next_block, int with_xor) {
__m512i block_XY[ARGON2_512BIT_WORDS_IN_BLOCK];
unsigned int i;
if (with_xor) {
for (i = 0; i < ARGON2_512BIT_WORDS_IN_BLOCK; i++) {
state[i] = _mm512_xor_si512(
state[i], _mm512_loadu_si512((const __m512i *)ref_block->v + i));
block_XY[i] = _mm512_xor_si512(
state[i], _mm512_loadu_si512((const __m512i *)next_block->v + i));
}
} else {
for (i = 0; i < ARGON2_512BIT_WORDS_IN_BLOCK; i++) {
block_XY[i] = state[i] = _mm512_xor_si512(
state[i], _mm512_loadu_si512((const __m512i *)ref_block->v + i));
}
}
for (i = 0; i < 2; ++i) {
BLAKE2_ROUND_1(
state[8 * i + 0], state[8 * i + 1], state[8 * i + 2], state[8 * i + 3],
state[8 * i + 4], state[8 * i + 5], state[8 * i + 6], state[8 * i + 7]);
}
for (i = 0; i < 2; ++i) {
BLAKE2_ROUND_2(
state[2 * 0 + i], state[2 * 1 + i], state[2 * 2 + i], state[2 * 3 + i],
state[2 * 4 + i], state[2 * 5 + i], state[2 * 6 + i], state[2 * 7 + i]);
}
for (i = 0; i < ARGON2_512BIT_WORDS_IN_BLOCK; i++) {
state[i] = _mm512_xor_si512(state[i], block_XY[i]);
_mm512_storeu_si512((__m512i *)next_block->v + i, state[i]);
}
}
#elif defined(__AVX2__)
static void fill_block(__m256i *state, const block *ref_block,
block *next_block, int with_xor) {
__m256i block_XY[ARGON2_HWORDS_IN_BLOCK];
unsigned int i;
if (with_xor) {
for (i = 0; i < ARGON2_HWORDS_IN_BLOCK; i++) {
state[i] = _mm256_xor_si256(
state[i], _mm256_loadu_si256((const __m256i *)ref_block->v + i));
block_XY[i] = _mm256_xor_si256(
state[i], _mm256_loadu_si256((const __m256i *)next_block->v + i));
}
} else {
for (i = 0; i < ARGON2_HWORDS_IN_BLOCK; i++) {
block_XY[i] = state[i] = _mm256_xor_si256(
state[i], _mm256_loadu_si256((const __m256i *)ref_block->v + i));
}
}
for (i = 0; i < 4; ++i) {
BLAKE2_ROUND_1(state[8 * i + 0], state[8 * i + 4], state[8 * i + 1], state[8 * i + 5],
state[8 * i + 2], state[8 * i + 6], state[8 * i + 3], state[8 * i + 7]);
}
for (i = 0; i < 4; ++i) {
BLAKE2_ROUND_2(state[ 0 + i], state[ 4 + i], state[ 8 + i], state[12 + i],
state[16 + i], state[20 + i], state[24 + i], state[28 + i]);
}
for (i = 0; i < ARGON2_HWORDS_IN_BLOCK; i++) {
state[i] = _mm256_xor_si256(state[i], block_XY[i]);
_mm256_storeu_si256((__m256i *)next_block->v + i, state[i]);
}
}
#else
static void fill_block(__m128i *state, const block *ref_block,
block *next_block, int with_xor) {
__m128i block_XY[ARGON2_OWORDS_IN_BLOCK];
unsigned int i;
if (with_xor) {
for (i = 0; i < ARGON2_OWORDS_IN_BLOCK; i++) {
state[i] = _mm_xor_si128(
state[i], _mm_loadu_si128((const __m128i *)ref_block->v + i));
block_XY[i] = _mm_xor_si128(
state[i], _mm_loadu_si128((const __m128i *)next_block->v + i));
}
} else {
for (i = 0; i < ARGON2_OWORDS_IN_BLOCK; i++) {
block_XY[i] = state[i] = _mm_xor_si128(
state[i], _mm_loadu_si128((const __m128i *)ref_block->v + i));
}
}
for (i = 0; i < 8; ++i) {
BLAKE2_ROUND(state[8 * i + 0], state[8 * i + 1], state[8 * i + 2],
state[8 * i + 3], state[8 * i + 4], state[8 * i + 5],
state[8 * i + 6], state[8 * i + 7]);
}
for (i = 0; i < 8; ++i) {
BLAKE2_ROUND(state[8 * 0 + i], state[8 * 1 + i], state[8 * 2 + i],
state[8 * 3 + i], state[8 * 4 + i], state[8 * 5 + i],
state[8 * 6 + i], state[8 * 7 + i]);
}
for (i = 0; i < ARGON2_OWORDS_IN_BLOCK; i++) {
state[i] = _mm_xor_si128(state[i], block_XY[i]);
_mm_storeu_si128((__m128i *)next_block->v + i, state[i]);
}
}
#endif
static void next_addresses(block *address_block, block *input_block) {
/*Temporary zero-initialized blocks*/
#if defined(__AVX512F__)
__m512i zero_block[ARGON2_512BIT_WORDS_IN_BLOCK];
__m512i zero2_block[ARGON2_512BIT_WORDS_IN_BLOCK];
#elif defined(__AVX2__)
__m256i zero_block[ARGON2_HWORDS_IN_BLOCK];
__m256i zero2_block[ARGON2_HWORDS_IN_BLOCK];
#else
__m128i zero_block[ARGON2_OWORDS_IN_BLOCK];
__m128i zero2_block[ARGON2_OWORDS_IN_BLOCK];
#endif
memset(zero_block, 0, sizeof(zero_block));
memset(zero2_block, 0, sizeof(zero2_block));
/*Increasing index counter*/
input_block->v[6]++;
/*First iteration of G*/
fill_block(zero_block, input_block, address_block, 0);
/*Second iteration of G*/
fill_block(zero2_block, address_block, address_block, 0);
}
void fill_segment(const argon2_instance_t *instance,
argon2_position_t position) {
block *ref_block = NULL, *curr_block = NULL;
block address_block, input_block;
uint64_t pseudo_rand, ref_index, ref_lane;
uint32_t prev_offset, curr_offset;
uint32_t starting_index, i;
#if defined(__AVX512F__)
__m512i state[ARGON2_512BIT_WORDS_IN_BLOCK];
#elif defined(__AVX2__)
__m256i state[ARGON2_HWORDS_IN_BLOCK];
#else
__m128i state[ARGON2_OWORDS_IN_BLOCK];
#endif
int data_independent_addressing;
if (instance == NULL) {
return;
}
data_independent_addressing =
(instance->type == Argon2_i) ||
(instance->type == Argon2_id && (position.pass == 0) &&
(position.slice < ARGON2_SYNC_POINTS / 2));
if (data_independent_addressing) {
init_block_value(&input_block, 0);
input_block.v[0] = position.pass;
input_block.v[1] = position.lane;
input_block.v[2] = position.slice;
input_block.v[3] = instance->memory_blocks;
input_block.v[4] = instance->passes;
input_block.v[5] = instance->type;
}
starting_index = 0;
if ((0 == position.pass) && (0 == position.slice)) {
starting_index = 2; /* we have already generated the first two blocks */
/* Don't forget to generate the first block of addresses: */
if (data_independent_addressing) {
next_addresses(&address_block, &input_block);
}
}
/* Offset of the current block */
curr_offset = position.lane * instance->lane_length +
position.slice * instance->segment_length + starting_index;
if (0 == curr_offset % instance->lane_length) {
/* Last block in this lane */
prev_offset = curr_offset + instance->lane_length - 1;
} else {
/* Previous block */
prev_offset = curr_offset - 1;
}
memcpy(state, ((instance->memory + prev_offset)->v), ARGON2_BLOCK_SIZE);
for (i = starting_index; i < instance->segment_length;
++i, ++curr_offset, ++prev_offset) {
/*1.1 Rotating prev_offset if needed */
if (curr_offset % instance->lane_length == 1) {
prev_offset = curr_offset - 1;
}
/* 1.2 Computing the index of the reference block */
/* 1.2.1 Taking pseudo-random value from the previous block */
if (data_independent_addressing) {
if (i % ARGON2_ADDRESSES_IN_BLOCK == 0) {
next_addresses(&address_block, &input_block);
}
pseudo_rand = address_block.v[i % ARGON2_ADDRESSES_IN_BLOCK];
} else {
pseudo_rand = instance->memory[prev_offset].v[0];
}
/* 1.2.2 Computing the lane of the reference block */
ref_lane = ((pseudo_rand >> 32)) % instance->lanes;
if ((position.pass == 0) && (position.slice == 0)) {
/* Can not reference other lanes yet */
ref_lane = position.lane;
}
/* 1.2.3 Computing the number of possible reference block within the
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =
instance->memory + instance->lane_length * ref_lane + ref_index;
curr_block = instance->memory + curr_offset;
if (ARGON2_VERSION_10 == instance->version) {
/* version 1.2.1 and earlier: overwrite, not XOR */
fill_block(state, ref_block, curr_block, 0);
} else {
if(0 == position.pass) {
fill_block(state, ref_block, curr_block, 0);
} else {
fill_block(state, ref_block, curr_block, 1);
}
}
}
}
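For orientation, the loop above splits each 64-bit pseudo-random word so that the high half selects the reference lane and the low half is fed to index_alpha(). A hypothetical, standalone sketch of that split (the constant and lane count below are made up for illustration):

/* Standalone demo of the pseudo_rand split used in fill_segment() above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t pseudo_rand = 0xDEADBEEF01234567ULL; /* would come from the previous or address block */
    uint32_t lanes = 4;

    uint32_t ref_lane = (uint32_t)((pseudo_rand >> 32) % lanes); /* high 32 bits: lane */
    uint32_t j1       = (uint32_t)(pseudo_rand & 0xFFFFFFFF);    /* low 32 bits: input to index_alpha() */

    printf("ref_lane=%u j1=%u\n", ref_lane, j1);
    return 0;
}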

@@ -1,194 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include "argon2.h"
#include "core.h"
#include "blake2/blamka-round-ref.h"
#include "blake2/blake2-impl.h"
#include "blake2/blake2.h"
/*
* Function fills a new memory block and optionally XORs the old block over the new one.
* @next_block must be initialized.
* @param prev_block Pointer to the previous block
* @param ref_block Pointer to the reference block
* @param next_block Pointer to the block to be constructed
* @param with_xor Whether to XOR into the new block (1) or just overwrite (0)
* @pre all block pointers must be valid
*/
static void fill_block(const block *prev_block, const block *ref_block,
block *next_block, int with_xor) {
block blockR, block_tmp;
unsigned i;
copy_block(&blockR, ref_block);
xor_block(&blockR, prev_block);
copy_block(&block_tmp, &blockR);
/* Now blockR = ref_block + prev_block and block_tmp = ref_block + prev_block */
if (with_xor) {
/* Saving the next block contents for XOR over: */
xor_block(&block_tmp, next_block);
/* Now blockR = ref_block + prev_block and
block_tmp = ref_block + prev_block + next_block */
}
/* Apply Blake2 on columns of 64-bit words: (0,1,...,15) , then
(16,17,..31)... finally (112,113,...127) */
for (i = 0; i < 8; ++i) {
BLAKE2_ROUND_NOMSG(
blockR.v[16 * i], blockR.v[16 * i + 1], blockR.v[16 * i + 2],
blockR.v[16 * i + 3], blockR.v[16 * i + 4], blockR.v[16 * i + 5],
blockR.v[16 * i + 6], blockR.v[16 * i + 7], blockR.v[16 * i + 8],
blockR.v[16 * i + 9], blockR.v[16 * i + 10], blockR.v[16 * i + 11],
blockR.v[16 * i + 12], blockR.v[16 * i + 13], blockR.v[16 * i + 14],
blockR.v[16 * i + 15]);
}
/* Apply Blake2 on rows of 64-bit words: (0,1,16,17,...112,113), then
(2,3,18,19,...,114,115).. finally (14,15,30,31,...,126,127) */
for (i = 0; i < 8; i++) {
BLAKE2_ROUND_NOMSG(
blockR.v[2 * i], blockR.v[2 * i + 1], blockR.v[2 * i + 16],
blockR.v[2 * i + 17], blockR.v[2 * i + 32], blockR.v[2 * i + 33],
blockR.v[2 * i + 48], blockR.v[2 * i + 49], blockR.v[2 * i + 64],
blockR.v[2 * i + 65], blockR.v[2 * i + 80], blockR.v[2 * i + 81],
blockR.v[2 * i + 96], blockR.v[2 * i + 97], blockR.v[2 * i + 112],
blockR.v[2 * i + 113]);
}
copy_block(next_block, &block_tmp);
xor_block(next_block, &blockR);
}
static void next_addresses(block *address_block, block *input_block,
const block *zero_block) {
input_block->v[6]++;
fill_block(zero_block, input_block, address_block, 0);
fill_block(zero_block, address_block, address_block, 0);
}
void fill_segment(const argon2_instance_t *instance,
argon2_position_t position) {
block *ref_block = NULL, *curr_block = NULL;
block address_block, input_block, zero_block;
uint64_t pseudo_rand, ref_index, ref_lane;
uint32_t prev_offset, curr_offset;
uint32_t starting_index;
uint32_t i;
int data_independent_addressing;
if (instance == NULL) {
return;
}
data_independent_addressing =
(instance->type == Argon2_i) ||
(instance->type == Argon2_id && (position.pass == 0) &&
(position.slice < ARGON2_SYNC_POINTS / 2));
if (data_independent_addressing) {
init_block_value(&zero_block, 0);
init_block_value(&input_block, 0);
input_block.v[0] = position.pass;
input_block.v[1] = position.lane;
input_block.v[2] = position.slice;
input_block.v[3] = instance->memory_blocks;
input_block.v[4] = instance->passes;
input_block.v[5] = instance->type;
}
starting_index = 0;
if ((0 == position.pass) && (0 == position.slice)) {
starting_index = 2; /* we have already generated the first two blocks */
/* Don't forget to generate the first block of addresses: */
if (data_independent_addressing) {
next_addresses(&address_block, &input_block, &zero_block);
}
}
/* Offset of the current block */
curr_offset = position.lane * instance->lane_length +
position.slice * instance->segment_length + starting_index;
if (0 == curr_offset % instance->lane_length) {
/* Last block in this lane */
prev_offset = curr_offset + instance->lane_length - 1;
} else {
/* Previous block */
prev_offset = curr_offset - 1;
}
for (i = starting_index; i < instance->segment_length;
++i, ++curr_offset, ++prev_offset) {
/*1.1 Rotating prev_offset if needed */
if (curr_offset % instance->lane_length == 1) {
prev_offset = curr_offset - 1;
}
/* 1.2 Computing the index of the reference block */
/* 1.2.1 Taking pseudo-random value from the previous block */
if (data_independent_addressing) {
if (i % ARGON2_ADDRESSES_IN_BLOCK == 0) {
next_addresses(&address_block, &input_block, &zero_block);
}
pseudo_rand = address_block.v[i % ARGON2_ADDRESSES_IN_BLOCK];
} else {
pseudo_rand = instance->memory[prev_offset].v[0];
}
/* 1.2.2 Computing the lane of the reference block */
ref_lane = ((pseudo_rand >> 32)) % instance->lanes;
if ((position.pass == 0) && (position.slice == 0)) {
/* Can not reference other lanes yet */
ref_lane = position.lane;
}
/* 1.2.3 Computing the number of possible reference block within the
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =
instance->memory + instance->lane_length * ref_lane + ref_index;
curr_block = instance->memory + curr_offset;
if (ARGON2_VERSION_10 == instance->version) {
/* version 1.2.1 and earlier: overwrite, not XOR */
fill_block(instance->memory + prev_offset, ref_block, curr_block, 0);
} else {
if(0 == position.pass) {
fill_block(instance->memory + prev_offset, ref_block,
curr_block, 0);
} else {
fill_block(instance->memory + prev_offset, ref_block,
curr_block, 1);
}
}
}
}
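Read as a formula, the reference fill_block() above computes, with R = ref_block XOR prev_block and P standing for the two BLAKE2_ROUND_NOMSG passes (columns, then rows):

    next_block = P(R) XOR R                       (with_xor == 0, overwrite)
    next_block = P(R) XOR R XOR old next_block    (with_xor == 1, later passes)

which is the Argon2 compression G(prev_block, ref_block), XORed over the previous block contents when the v1.3 rules apply.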

@@ -1,57 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#if !defined(ARGON2_NO_THREADS)
#include "thread.h"
#if defined(_WIN32)
#include <windows.h>
#endif
int argon2_thread_create(argon2_thread_handle_t *handle,
argon2_thread_func_t func, void *args) {
if (NULL == handle || func == NULL) {
return -1;
}
#if defined(_WIN32)
*handle = _beginthreadex(NULL, 0, func, args, 0, NULL);
return *handle != 0 ? 0 : -1;
#else
return pthread_create(handle, NULL, func, args);
#endif
}
int argon2_thread_join(argon2_thread_handle_t handle) {
#if defined(_WIN32)
if (WaitForSingleObject((HANDLE)handle, INFINITE) == WAIT_OBJECT_0) {
return CloseHandle((HANDLE)handle) != 0 ? 0 : -1;
}
return -1;
#else
return pthread_join(handle, NULL);
#endif
}
void argon2_thread_exit(void) {
#if defined(_WIN32)
_endthreadex(0);
#else
pthread_exit(NULL);
#endif
}
#endif /* ARGON2_NO_THREADS */

@@ -1,67 +0,0 @@
/*
* Argon2 reference source code package - reference C implementations
*
* Copyright 2015
* Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
*
* You may use this work under the terms of a Creative Commons CC0 1.0
* License/Waiver or the Apache Public License 2.0, at your option. The terms of
* these licenses can be found at:
*
* - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
* - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
*
* You should have received a copy of both of these licenses along with this
* software. If not, they may be obtained at the above URLs.
*/
#ifndef ARGON2_THREAD_H
#define ARGON2_THREAD_H
#if !defined(ARGON2_NO_THREADS)
/*
Here we implement an abstraction layer for the simple requirements
of the Argon2 code. We only require 3 primitives---thread creation,
joining, and termination---so full emulation of the pthreads API
is unwarranted. Currently we wrap pthreads and Win32 threads.
The API defines 2 types: the function pointer type,
argon2_thread_func_t,
and the type of the thread handle---argon2_thread_handle_t.
*/
#if defined(_WIN32)
#include <process.h>
typedef unsigned(__stdcall *argon2_thread_func_t)(void *);
typedef uintptr_t argon2_thread_handle_t;
#else
#include <pthread.h>
typedef void *(*argon2_thread_func_t)(void *);
typedef pthread_t argon2_thread_handle_t;
#endif
/* Creates a thread
* @param handle pointer to a thread handle, which is the output of this
* function. Must not be NULL.
* @param func A function pointer for the thread's entry point. Must not be
* NULL.
* @param args Pointer that is passed as an argument to @func. May be NULL.
* @return 0 if @handle and @func are valid pointers and a thread is successfully
* created.
*/
int argon2_thread_create(argon2_thread_handle_t *handle,
argon2_thread_func_t func, void *args);
/* Waits for a thread to terminate
* @param handle Handle to a thread created with argon2_thread_create.
* @return 0 if @handle is a valid handle, and joining completed successfully.
*/
int argon2_thread_join(argon2_thread_handle_t handle);
/* Terminate the current thread. Must be run inside a thread created by
* argon2_thread_create.
*/
void argon2_thread_exit(void);
#endif /* ARGON2_NO_THREADS */
#endif
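A minimal usage sketch of this wrapper (hypothetical worker; assumes a pthreads build linked against thread.c with -lpthread):

#include <stdio.h>
#include "thread.h"

static void *worker(void *arg)
{
    printf("lane %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    argon2_thread_handle_t t;
    int lane = 0;

    if (argon2_thread_create(&t, worker, &lane))
        return 1;                 /* creation failed */
    return argon2_thread_join(t); /* 0 on success */
}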

@@ -1,193 +0,0 @@
/*
* Argon2 PBKDF2 library wrapper
*
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <errno.h>
#include "crypto_backend.h"
#if HAVE_ARGON2_H
#include <argon2.h>
#else
#include "argon2/argon2.h"
#endif
#define CONST_CAST(x) (x)(uintptr_t)
int argon2(const char *type, const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
{
#if !USE_INTERNAL_ARGON2 && !HAVE_ARGON2_H
return -EINVAL;
#else
argon2_type atype;
argon2_context context = {
.flags = ARGON2_DEFAULT_FLAGS,
.version = ARGON2_VERSION_NUMBER,
.t_cost = (uint32_t)iterations,
.m_cost = (uint32_t)memory,
.lanes = (uint32_t)parallel,
.threads = (uint32_t)parallel,
.out = (uint8_t *)key,
.outlen = (uint32_t)key_length,
.pwd = CONST_CAST(uint8_t *)password,
.pwdlen = (uint32_t)password_length,
.salt = CONST_CAST(uint8_t *)salt,
.saltlen = (uint32_t)salt_length,
};
int r;
if (!strcmp(type, "argon2i"))
atype = Argon2_i;
else if(!strcmp(type, "argon2id"))
atype = Argon2_id;
else
return -EINVAL;
switch (argon2_ctx(&context, atype)) {
case ARGON2_OK:
r = 0;
break;
case ARGON2_MEMORY_ALLOCATION_ERROR:
case ARGON2_FREE_MEMORY_CBK_NULL:
case ARGON2_ALLOCATE_MEMORY_CBK_NULL:
r = -ENOMEM;
break;
default:
r = -EINVAL;
}
return r;
#endif
}
#if 0
#include <stdio.h>
struct test_vector {
argon2_type type;
unsigned int memory;
unsigned int iterations;
unsigned int parallelism;
const char *password;
unsigned int password_length;
const char *salt;
unsigned int salt_length;
const char *key;
unsigned int key_length;
const char *ad;
unsigned int ad_length;
const char *output;
unsigned int output_length;
};
struct test_vector test_vectors[] = {
/* Argon2 RFC */
{
Argon2_i, 32, 3, 4,
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01", 32,
"\x02\x02\x02\x02\x02\x02\x02\x02"
"\x02\x02\x02\x02\x02\x02\x02\x02", 16,
"\x03\x03\x03\x03\x03\x03\x03\x03", 8,
"\x04\x04\x04\x04\x04\x04\x04\x04"
"\x04\x04\x04\x04", 12,
"\xc8\x14\xd9\xd1\xdc\x7f\x37\xaa"
"\x13\xf0\xd7\x7f\x24\x94\xbd\xa1"
"\xc8\xde\x6b\x01\x6d\xd3\x88\xd2"
"\x99\x52\xa4\xc4\x67\x2b\x6c\xe8", 32
},
{
Argon2_id, 32, 3, 4,
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01"
"\x01\x01\x01\x01\x01\x01\x01\x01", 32,
"\x02\x02\x02\x02\x02\x02\x02\x02"
"\x02\x02\x02\x02\x02\x02\x02\x02", 16,
"\x03\x03\x03\x03\x03\x03\x03\x03", 8,
"\x04\x04\x04\x04\x04\x04\x04\x04"
"\x04\x04\x04\x04", 12,
"\x0d\x64\x0d\xf5\x8d\x78\x76\x6c"
"\x08\xc0\x37\xa3\x4a\x8b\x53\xc9"
"\xd0\x1e\xf0\x45\x2d\x75\xb6\x5e"
"\xb5\x25\x20\xe9\x6b\x01\xe6\x59", 32
}
};
static void printhex(const char *s, const char *buf, size_t len)
{
size_t i;
printf("%s: ", s);
for (i = 0; i < len; i++)
printf("\\x%02x", (unsigned char)buf[i]);
printf("\n");
fflush(stdout);
}
static int argon2_test_vectors(void)
{
char result[64];
int i, r;
struct test_vector *vec;
argon2_context context;
printf("Argon2 running test vectors\n");
for (i = 0; i < (sizeof(test_vectors) / sizeof(*test_vectors)); i++) {
vec = &test_vectors[i];
memset(result, 0, sizeof(result));
memset(&context, 0, sizeof(context));
context.flags = ARGON2_DEFAULT_FLAGS;
context.version = ARGON2_VERSION_NUMBER;
context.out = (uint8_t *)result;
context.outlen = (uint32_t)vec->output_length;
context.pwd = (uint8_t *)vec->password;
context.pwdlen = (uint32_t)vec->password_length;
context.salt = (uint8_t *)vec->salt;
context.saltlen = (uint32_t)vec->salt_length;
context.secret = (uint8_t *)vec->key;
context.secretlen = (uint32_t)vec->key_length;
context.ad = (uint8_t *)vec->ad;
context.adlen = (uint32_t)vec->ad_length;
context.t_cost = vec->iterations;
context.m_cost = vec->memory;
context.lanes = vec->parallelism;
context.threads = vec->parallelism;
r = argon2_ctx(&context, vec->type);
if (r != ARGON2_OK) {
printf("Argon2 failed %i, vector %d\n", r, i);
return -EINVAL;
}
if (memcmp(result, vec->output, vec->output_length) != 0) {
printf("vector %u\n", i);
printhex(" got", result, vec->output_length);
printhex("want", vec->output, vec->output_length);
return -EINVAL;
}
}
return 0;
}
#endif
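A hypothetical caller of the argon2() wrapper above (declared in crypto_backend.h); the memory cost is passed straight through as libargon2's m_cost, i.e. in KiB, and the sample passphrase/salt values are illustrative only:

#include <stdio.h>
#include "crypto_backend.h"

int main(void)
{
    char key[32];
    int r = argon2("argon2id",
                   "passphrase", 10,          /* password, length          */
                   "0123456789abcdef", 16,    /* salt, length (>= 8 bytes) */
                   key, sizeof(key),          /* derived key               */
                   4, 1024, 1);               /* t_cost, m_cost (KiB), lanes */

    printf("argon2() returned %d\n", r);      /* 0, -EINVAL or -ENOMEM */
    return r ? 1 : 0;
}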

@@ -1,83 +0,0 @@
/*
* Linux kernel cipher generic utilities
*
* Copyright (C) 2018-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2018-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <string.h>
#include <stdbool.h>
#include <errno.h>
#include "crypto_backend.h"
struct cipher_alg {
const char *name;
const char *mode;
int blocksize;
bool wrapped_key;
};
/* FIXME: Getting block size should be dynamic from cipher backend. */
static const struct cipher_alg cipher_algs[] = {
{ "cipher_null", NULL, 16, false },
{ "aes", NULL, 16, false },
{ "serpent", NULL, 16, false },
{ "twofish", NULL, 16, false },
{ "anubis", NULL, 16, false },
{ "blowfish", NULL, 8, false },
{ "camellia", NULL, 16, false },
{ "cast5", NULL, 8, false },
{ "cast6", NULL, 16, false },
{ "des", NULL, 8, false },
{ "des3_ede", NULL, 8, false },
{ "khazad", NULL, 8, false },
{ "seed", NULL, 16, false },
{ "tea", NULL, 8, false },
{ "xtea", NULL, 8, false },
{ "paes", NULL, 16, true }, /* protected AES, s390 wrapped key scheme */
{ "xchacha12,aes", "adiantum", 32, false },
{ "xchacha20,aes", "adiantum", 32, false },
{ NULL, NULL, 0, false }
};
static const struct cipher_alg *_get_alg(const char *name, const char *mode)
{
int i = 0;
while (name && cipher_algs[i].name) {
if (!strcasecmp(name, cipher_algs[i].name))
if (!mode || !cipher_algs[i].mode ||
!strncasecmp(mode, cipher_algs[i].mode, strlen(cipher_algs[i].mode)))
return &cipher_algs[i];
i++;
}
return NULL;
}
int crypt_cipher_ivsize(const char *name, const char *mode)
{
const struct cipher_alg *ca = _get_alg(name, mode);
return ca ? ca->blocksize : -EINVAL;
}
int crypt_cipher_wrapped_key(const char *name, const char *mode)
{
const struct cipher_alg *ca = _get_alg(name, mode);
return ca ? (int)ca->wrapped_key : 0;
}
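A short, hypothetical lookup against the table above (crypto_backend.h provides the prototypes; the printed values follow from the table entries shown):

#include <stdio.h>
#include "crypto_backend.h"

int main(void)
{
    printf("aes ivsize: %d\n", crypt_cipher_ivsize("aes", "xts-plain64")); /* 16 */
    printf("paes wrapped: %d\n", crypt_cipher_wrapped_key("paes", NULL));  /* 1  */
    printf("unknown: %d\n", crypt_cipher_ivsize("nosuchcipher", NULL));    /* negative -EINVAL */
    return 0;
}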

@@ -1,8 +1,8 @@
/*
* crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2010-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -22,7 +22,6 @@
#define _CRYPTO_BACKEND_H
#include <stdint.h>
#include <stddef.h>
#include <string.h>
struct crypt_device;
@@ -32,7 +31,6 @@ struct crypt_cipher;
struct crypt_storage;
int crypt_backend_init(struct crypt_device *ctx);
void crypt_backend_destroy(void);
#define CRYPT_BACKEND_KERNEL (1 << 0) /* Crypto uses kernel part, for benchmark */
@@ -44,40 +42,30 @@ int crypt_hash_size(const char *name);
int crypt_hash_init(struct crypt_hash **ctx, const char *name);
int crypt_hash_write(struct crypt_hash *ctx, const char *buffer, size_t length);
int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length);
void crypt_hash_destroy(struct crypt_hash *ctx);
int crypt_hash_destroy(struct crypt_hash *ctx);
/* HMAC */
int crypt_hmac_size(const char *name);
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length);
const void *buffer, size_t length);
int crypt_hmac_write(struct crypt_hmac *ctx, const char *buffer, size_t length);
int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length);
void crypt_hmac_destroy(struct crypt_hmac *ctx);
int crypt_hmac_destroy(struct crypt_hmac *ctx);
/* RNG (if fips parameter set, must provide FIPS compliance) */
/* RNG (if fips paramater set, must provide FIPS compliance) */
enum { CRYPT_RND_NORMAL = 0, CRYPT_RND_KEY = 1, CRYPT_RND_SALT = 2 };
int crypt_backend_rng(char *buffer, size_t length, int quality, int fips);
struct crypt_pbkdf_limits {
uint32_t min_iterations, max_iterations;
uint32_t min_memory, max_memory;
uint32_t min_parallel, max_parallel;
};
int crypt_pbkdf_get_limits(const char *kdf, struct crypt_pbkdf_limits *l);
/* PBKDF*/
int crypt_pbkdf_check(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
size_t key_length, uint64_t *iter_secs);
int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel);
int crypt_pbkdf_perf(const char *kdf, const char *hash,
const char *password, size_t password_size,
const char *salt, size_t salt_size,
size_t volume_key_size, uint32_t time_ms,
uint32_t max_memory_kb, uint32_t parallel_threads,
uint32_t *iterations_out, uint32_t *memory_out,
int (*progress)(uint32_t time_ms, void *usrptr), void *usrptr);
unsigned int iterations);
#if USE_INTERNAL_PBKDF2
/* internal PBKDF2 implementation */
@@ -89,21 +77,14 @@ int pkcs5_pbkdf2(const char *hash,
unsigned int hash_block_size);
#endif
/* Argon2 implementation wrapper */
int argon2(const char *type, const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel);
/* CRC32 */
uint32_t crypt_crc32(uint32_t seed, const unsigned char *buf, size_t len);
/* ciphers */
int crypt_cipher_ivsize(const char *name, const char *mode);
int crypt_cipher_wrapped_key(const char *name, const char *mode);
int crypt_cipher_blocksize(const char *name);
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length);
void crypt_cipher_destroy(struct crypt_cipher *ctx);
const char *mode, const void *buffer, size_t length);
int crypt_cipher_destroy(struct crypt_cipher *ctx);
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
@@ -111,15 +92,11 @@ int crypt_cipher_decrypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
const char *iv, size_t iv_length);
/* Check availability of a cipher */
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length);
/* storage encryption wrappers */
int crypt_storage_init(struct crypt_storage **ctx, uint64_t sector_start,
const char *cipher, const char *cipher_mode,
const void *key, size_t key_length);
void crypt_storage_destroy(struct crypt_storage *ctx);
char *key, size_t key_length);
int crypt_storage_destroy(struct crypt_storage *ctx);
int crypt_storage_decrypt(struct crypt_storage *ctx, uint64_t sector,
size_t count, char *buffer);
int crypt_storage_encrypt(struct crypt_storage *ctx, uint64_t sector,
@@ -128,12 +105,8 @@ int crypt_storage_encrypt(struct crypt_storage *ctx, uint64_t sector,
/* Memzero helper (memset on stack can be optimized out) */
static inline void crypt_backend_memzero(void *s, size_t n)
{
#ifdef HAVE_EXPLICIT_BZERO
explicit_bzero(s, n);
#else
volatile uint8_t *p = (volatile uint8_t *)s;
while(n--) *p++ = 0;
#endif
}
#endif /* _CRYPTO_BACKEND_H */
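A hypothetical use of the helper above: wipe key material before the buffer goes out of scope, where a plain memset() could legally be optimized away by the compiler.

#include <string.h>
#include "crypto_backend.h"

int derive_and_forget(void)
{
    char key[64];

    memset(key, 0xA5, sizeof(key));          /* stand-in for real key derivation */
    /* ... use key ... */
    crypt_backend_memzero(key, sizeof(key)); /* guaranteed wipe, unlike a final memset() */
    return 0;
}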

@@ -1,8 +1,8 @@
/*
* Linux kernel userspace API crypto backend implementation (skcipher)
*
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
* Copyright (C) 2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2016, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -22,7 +22,6 @@
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
@@ -45,23 +44,73 @@ struct crypt_cipher {
int opfd;
};
struct cipher_alg {
const char *name;
int blocksize;
};
/* FIXME: Getting block size should be dynamic from cipher backend. */
static struct cipher_alg cipher_algs[] = {
{ "cipher_null", 16 },
{ "aes", 16 },
{ "serpent", 16 },
{ "twofish", 16 },
{ "anubis", 16 },
{ "blowfish", 8 },
{ "camellia", 16 },
{ "cast5", 8 },
{ "cast6", 16 },
{ "des", 8 },
{ "des3_ede", 8 },
{ "khazad", 8 },
{ "seed", 16 },
{ "tea", 8 },
{ "xtea", 8 },
{ NULL, 0 }
};
static struct cipher_alg *_get_alg(const char *name)
{
int i = 0;
while (name && cipher_algs[i].name) {
if (!strcasecmp(name, cipher_algs[i].name))
return &cipher_algs[i];
i++;
}
return NULL;
}
int crypt_cipher_blocksize(const char *name)
{
struct cipher_alg *ca = _get_alg(name);
return ca ? ca->blocksize : -EINVAL;
}
/*
* ciphers
*
* ENOENT - algorithm not available
* ENOTSUP - AF_ALG family not available
* (but cannot check specifically for skcipher API)
* (but cannot check specificaly for skcipher API)
*/
static int _crypt_cipher_init(struct crypt_cipher **ctx,
const void *key, size_t key_length,
struct sockaddr_alg *sa)
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *buffer, size_t length)
{
struct crypt_cipher *h;
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "skcipher",
};
h = malloc(sizeof(*h));
if (!h)
return -ENOMEM;
snprintf((char *)sa.salg_name, sizeof(sa.salg_name),
"%s(%s)", mode, name);
h->opfd = -1;
h->tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
if (h->tfmfd < 0) {
@@ -69,12 +118,15 @@ static int _crypt_cipher_init(struct crypt_cipher **ctx,
return -ENOTSUP;
}
if (bind(h->tfmfd, (struct sockaddr *)sa, sizeof(*sa)) < 0) {
if (bind(h->tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
crypt_cipher_destroy(h);
return -ENOENT;
}
if (setsockopt(h->tfmfd, SOL_ALG, ALG_SET_KEY, key, key_length) < 0) {
if (!strcmp(name, "cipher_null"))
length = 0;
if (setsockopt(h->tfmfd, SOL_ALG, ALG_SET_KEY, buffer, length) < 0) {
crypt_cipher_destroy(h);
return -EINVAL;
}
@@ -89,22 +141,6 @@ static int _crypt_cipher_init(struct crypt_cipher **ctx,
return 0;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *key, size_t key_length)
{
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "skcipher",
};
if (!strcmp(name, "cipher_null"))
key_length = 0;
snprintf((char *)sa.salg_name, sizeof(sa.salg_name), "%s(%s)", mode, name);
return _crypt_cipher_init(ctx, key, key_length, &sa);
}
/* The in/out should be aligned to page boundary */
static int crypt_cipher_crypt(struct crypt_cipher *ctx,
const char *in, char *out, size_t length,
@@ -189,7 +225,7 @@ int crypt_cipher_decrypt(struct crypt_cipher *ctx,
iv, iv_length, ALG_OP_DECRYPT);
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
int crypt_cipher_destroy(struct crypt_cipher *ctx)
{
if (ctx->tfmfd >= 0)
close(ctx->tfmfd);
@@ -197,78 +233,25 @@ void crypt_cipher_destroy(struct crypt_cipher *ctx)
close(ctx->opfd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
struct crypt_cipher *c = NULL;
char mode_name[64], tmp_salg_name[180], *real_mode = NULL, *cipher_iv = NULL, *key;
const char *salg_type;
bool aead;
int r;
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
};
aead = integrity && strcmp(integrity, "none");
/* Remove IV if present */
if (mode) {
strncpy(mode_name, mode, sizeof(mode_name));
mode_name[sizeof(mode_name) - 1] = 0;
cipher_iv = strchr(mode_name, '-');
if (cipher_iv) {
*cipher_iv = '\0';
real_mode = mode_name;
}
}
salg_type = aead ? "aead" : "skcipher";
snprintf((char *)sa.salg_type, sizeof(sa.salg_type), "%s", salg_type);
memset(tmp_salg_name, 0, sizeof(tmp_salg_name));
/* FIXME: this is duplicating a part of devmapper backend */
if (aead && !strcmp(integrity, "poly1305"))
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "rfc7539(%s,%s)", name, integrity);
else if (!real_mode)
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "%s", name);
else if (aead && !strcmp(real_mode, "ccm"))
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "rfc4309(%s(%s))", real_mode, name);
else
r = snprintf(tmp_salg_name, sizeof(tmp_salg_name), "%s(%s)", real_mode, name);
if (r <= 0 || r > (int)(sizeof(sa.salg_name) - 1))
return -EINVAL;
memcpy(sa.salg_name, tmp_salg_name, sizeof(sa.salg_name));
key = malloc(key_length);
if (!key)
return -ENOMEM;
/* We cannot use RNG yet, any key works here, tweak the first part if it is split key (XTS). */
memset(key, 0xab, key_length);
*key = 0xef;
r = _crypt_cipher_init(&c, key, key_length, &sa);
if (c)
crypt_cipher_destroy(c);
free(key);
return r;
return 0;
}
#else /* ENABLE_AF_ALG */
int crypt_cipher_blocksize(const char *name)
{
return -EINVAL;
}
int crypt_cipher_init(struct crypt_cipher **ctx, const char *name,
const char *mode, const void *buffer, size_t length)
{
return -ENOTSUP;
}
void crypt_cipher_destroy(struct crypt_cipher *ctx)
int crypt_cipher_destroy(struct crypt_cipher *ctx)
{
return;
return 0;
}
int crypt_cipher_encrypt(struct crypt_cipher *ctx,
@@ -283,9 +266,4 @@ int crypt_cipher_decrypt(struct crypt_cipher *ctx,
{
return -EINVAL;
}
int crypt_cipher_check(const char *name, const char *mode,
const char *integrity, size_t key_length)
{
return 0;
}
#endif
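For reference, the crypt_cipher_check() path above binds AF_ALG sockets to kernel algorithm names such as the following; this is a standalone, hypothetical illustration of its snprintf branches, not code from the tree:

#include <stdio.h>

int main(void)
{
    char s[180];

    /* aes-xts-plain64, no integrity: "%s(%s)" after stripping the IV suffix */
    snprintf(s, sizeof(s), "%s(%s)", "xts", "aes");
    puts(s);                                                  /* xts(aes) */
    /* aes-ccm-* with an AEAD integrity tag */
    snprintf(s, sizeof(s), "rfc4309(%s(%s))", "ccm", "aes");
    puts(s);                                                  /* rfc4309(ccm(aes)) */
    /* chacha20 with poly1305 */
    snprintf(s, sizeof(s), "rfc7539(%s,%s)", "chacha20", "poly1305");
    puts(s);                                                  /* rfc7539(chacha20,poly1305) */
    return 0;
}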

@@ -1,8 +1,8 @@
/*
* GCRYPT crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2010-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -121,14 +121,6 @@ int crypt_backend_init(struct crypt_device *ctx)
return 0;
}
void crypt_backend_destroy(void)
{
if (crypto_backend_initialised)
gcry_control(GCRYCTL_TERM_SECMEM);
crypto_backend_initialised = 0;
}
const char *crypt_backend_version(void)
{
return crypto_backend_initialised ? version : "";
@@ -225,11 +217,12 @@ int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hash_destroy(struct crypt_hash *ctx)
int crypt_hash_destroy(struct crypt_hash *ctx)
{
gcry_md_close(ctx->hd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* HMAC */
@@ -239,7 +232,7 @@ int crypt_hmac_size(const char *name)
}
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length)
const void *buffer, size_t length)
{
struct crypt_hmac *h;
unsigned int flags = GCRY_MD_FLAG_HMAC;
@@ -261,7 +254,7 @@ int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
return -EINVAL;
}
if (gcry_md_setkey(h->hd, key, key_length)) {
if (gcry_md_setkey(h->hd, buffer, length)) {
gcry_md_close(h->hd);
free(h);
return -EINVAL;
@@ -300,11 +293,12 @@ int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hmac_destroy(struct crypt_hmac *ctx)
int crypt_hmac_destroy(struct crypt_hmac *ctx)
{
gcry_md_close(ctx->hd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* RNG */
@@ -323,46 +317,38 @@ int crypt_backend_rng(char *buffer, size_t length, int quality, int fips)
return 0;
}
static int pbkdf2(const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations)
/* PBKDF */
int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
unsigned int iterations)
{
const char *hash_name = crypt_hash_compat_name(hash, NULL);
#if USE_INTERNAL_PBKDF2
if (!kdf || strncmp(kdf, "pbkdf2", 6))
return -EINVAL;
return pkcs5_pbkdf2(hash_name, password, password_length, salt, salt_length,
iterations, key_length, key, 0);
#else /* USE_INTERNAL_PBKDF2 */
int hash_id = gcry_md_map_name(hash_name);
int kdf_id;
if (!hash_id)
return -EINVAL;
if (gcry_kdf_derive(password, password_length, GCRY_KDF_PBKDF2, hash_id,
if (kdf && !strncmp(kdf, "pbkdf2", 6))
kdf_id = GCRY_KDF_PBKDF2;
else
return -EINVAL;
if (gcry_kdf_derive(password, password_length, kdf_id, hash_id,
salt, salt_length, iterations, key_length, key))
return -EINVAL;
return 0;
#endif /* USE_INTERNAL_PBKDF2 */
}
/* PBKDF */
int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
{
if (!kdf)
return -EINVAL;
if (!strcmp(kdf, "pbkdf2"))
return pbkdf2(hash, password, password_length, salt, salt_length,
key, key_length, iterations);
else if (!strncmp(kdf, "argon2", 6))
return argon2(kdf, password, password_length, salt, salt_length,
key, key_length, iterations, memory, parallel);
return -EINVAL;
}
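Both branches of the dispatcher above are reached through the same entry point; a hedged usage sketch, assuming the new crypto_backend.h signature shown earlier in this diff and that crypt_backend_init() has already been called by the library:

#include <stdio.h>
#include "crypto_backend.h"

int main(void)
{
    char key[32];

    /* PBKDF2-SHA256: memory and parallel are ignored for this KDF */
    int r1 = crypt_pbkdf("pbkdf2", "sha256", "pass", 4, "saltsalt", 8,
                         key, sizeof(key), 1000, 0, 0);
    /* Argon2id: hash is unused, memory is in KiB, parallel = lanes/threads */
    int r2 = crypt_pbkdf("argon2id", NULL, "pass", 4, "saltsalt", 8,
                         key, sizeof(key), 4, 1024, 1);

    printf("pbkdf2: %d, argon2id: %d\n", r1, r2);
    return (r1 || r2) ? 1 : 0;
}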

@@ -1,8 +1,8 @@
/*
* Linux kernel userspace API crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2010-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2016, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -38,7 +38,7 @@
#endif
static int crypto_backend_initialised = 0;
static char version[256];
static char version[64];
struct hash_alg {
const char *name;
@@ -48,21 +48,12 @@ struct hash_alg {
};
static struct hash_alg hash_algs[] = {
{ "sha1", "sha1", 20, 64 },
{ "sha224", "sha224", 28, 64 },
{ "sha256", "sha256", 32, 64 },
{ "sha384", "sha384", 48, 128 },
{ "sha512", "sha512", 64, 128 },
{ "ripemd160", "rmd160", 20, 64 },
{ "whirlpool", "wp512", 64, 64 },
{ "sha3-224", "sha3-224", 28, 144 },
{ "sha3-256", "sha3-256", 32, 136 },
{ "sha3-384", "sha3-384", 48, 104 },
{ "sha3-512", "sha3-512", 64, 72 },
{ "stribog256","streebog256", 32, 64 },
{ "stribog512","streebog512", 64, 64 },
{ "sm3", "sm3", 32, 64 },
{ NULL, NULL, 0, 0 }
{ "sha1", "sha1", 20, 64 },
{ "sha256", "sha256", 32, 64 },
{ "sha512", "sha512", 64, 128 },
{ "ripemd160", "rmd160", 20, 64 },
{ "whirlpool", "wp512", 64, 64 },
{ NULL, NULL, 0, 0 }
};
struct crypt_hash {
@@ -135,11 +126,6 @@ int crypt_backend_init(struct crypt_device *ctx)
return 0;
}
void crypt_backend_destroy(void)
{
crypto_backend_initialised = 0;
}
uint32_t crypt_backend_flags(void)
{
return CRYPT_BACKEND_KERNEL;
@@ -190,7 +176,7 @@ int crypt_hash_init(struct crypt_hash **ctx, const char *name)
}
h->hash_len = ha->length;
strncpy((char *)sa.salg_name, ha->kernel_name, sizeof(sa.salg_name)-1);
strncpy((char *)sa.salg_name, ha->kernel_name, sizeof(sa.salg_name));
if (crypt_kernel_socket_init(&sa, &h->tfmfd, &h->opfd, NULL, 0) < 0) {
free(h);
@@ -226,7 +212,7 @@ int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hash_destroy(struct crypt_hash *ctx)
int crypt_hash_destroy(struct crypt_hash *ctx)
{
if (ctx->tfmfd >= 0)
close(ctx->tfmfd);
@@ -234,6 +220,7 @@ void crypt_hash_destroy(struct crypt_hash *ctx)
close(ctx->opfd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* HMAC */
@@ -243,7 +230,7 @@ int crypt_hmac_size(const char *name)
}
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length)
const void *buffer, size_t length)
{
struct crypt_hmac *h;
struct hash_alg *ha;
@@ -266,7 +253,7 @@ int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
snprintf((char *)sa.salg_name, sizeof(sa.salg_name),
"hmac(%s)", ha->kernel_name);
if (crypt_kernel_socket_init(&sa, &h->tfmfd, &h->opfd, key, key_length) < 0) {
if (crypt_kernel_socket_init(&sa, &h->tfmfd, &h->opfd, buffer, length) < 0) {
free(h);
return -EINVAL;
}
@@ -300,7 +287,7 @@ int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hmac_destroy(struct crypt_hmac *ctx)
int crypt_hmac_destroy(struct crypt_hmac *ctx)
{
if (ctx->tfmfd >= 0)
close(ctx->tfmfd);
@@ -308,6 +295,7 @@ void crypt_hmac_destroy(struct crypt_hmac *ctx)
close(ctx->opfd);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* RNG - N/A */
@@ -321,24 +309,13 @@ int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
unsigned int iterations)
{
struct hash_alg *ha;
struct hash_alg *ha = _get_alg(hash);
if (!kdf)
if (!ha || !kdf || strncmp(kdf, "pbkdf2", 6))
return -EINVAL;
if (!strcmp(kdf, "pbkdf2")) {
ha = _get_alg(hash);
if (!ha)
return -EINVAL;
return pkcs5_pbkdf2(hash, password, password_length, salt, salt_length,
iterations, key_length, key, ha->block_length);
} else if (!strncmp(kdf, "argon2", 6)) {
return argon2(kdf, password, password_length, salt, salt_length,
key, key_length, iterations, memory, parallel);
}
return -EINVAL;
return pkcs5_pbkdf2(hash, password, password_length, salt, salt_length,
iterations, key_length, key, ha->block_length);
}

@@ -1,8 +1,8 @@
/*
* Nettle crypto backend implementation
*
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
* Copyright (C) 2011-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2016, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -23,19 +23,11 @@
#include <string.h>
#include <errno.h>
#include <nettle/sha.h>
#include <nettle/sha3.h>
#include <nettle/hmac.h>
#include <nettle/pbkdf2.h>
#include "crypto_backend.h"
#if HAVE_NETTLE_VERSION_H
#include <nettle/version.h>
#define VSTR(s) STR(s)
#define STR(s) #s
static const char *version = "Nettle "VSTR(NETTLE_VERSION_MAJOR)"."VSTR(NETTLE_VERSION_MINOR);
#else
static const char *version = "Nettle";
#endif
static char *version = "Nettle";
typedef void (*init_func) (void *);
typedef void (*update_func) (void *, size_t, const uint8_t *);
@@ -53,24 +45,6 @@ struct hash_alg {
set_key_func hmac_set_key;
};
/* Missing HMAC wrappers in Nettle */
#define HMAC_FCE(xxx) \
struct xhmac_##xxx##_ctx HMAC_CTX(struct xxx##_ctx); \
static void xhmac_##xxx##_set_key(struct xhmac_##xxx##_ctx *ctx, \
size_t key_length, const uint8_t *key) \
{HMAC_SET_KEY(ctx, &nettle_##xxx, key_length, key);} \
static void xhmac_##xxx##_update(struct xhmac_##xxx##_ctx *ctx, \
size_t length, const uint8_t *data) \
{xxx##_update(&ctx->state, length, data);} \
static void xhmac_##xxx##_digest(struct xhmac_##xxx##_ctx *ctx, \
size_t length, uint8_t *digest) \
{HMAC_DIGEST(ctx, &nettle_##xxx, length, digest);}
HMAC_FCE(sha3_224);
HMAC_FCE(sha3_256);
HMAC_FCE(sha3_384);
HMAC_FCE(sha3_512);
static struct hash_alg hash_algs[] = {
{ "sha1", SHA1_DIGEST_SIZE,
(init_func) sha1_init,
@@ -120,41 +94,6 @@ static struct hash_alg hash_algs[] = {
(digest_func) hmac_ripemd160_digest,
(set_key_func) hmac_ripemd160_set_key,
},
/* Nettle prior to version 3.2 has incompatible SHA3 implementation */
#if NETTLE_SHA3_FIPS202
{ "sha3-224", SHA3_224_DIGEST_SIZE,
(init_func) sha3_224_init,
(update_func) sha3_224_update,
(digest_func) sha3_224_digest,
(update_func) xhmac_sha3_224_update,
(digest_func) xhmac_sha3_224_digest,
(set_key_func) xhmac_sha3_224_set_key,
},
{ "sha3-256", SHA3_256_DIGEST_SIZE,
(init_func) sha3_256_init,
(update_func) sha3_256_update,
(digest_func) sha3_256_digest,
(update_func) xhmac_sha3_256_update,
(digest_func) xhmac_sha3_256_digest,
(set_key_func) xhmac_sha3_256_set_key,
},
{ "sha3-384", SHA3_384_DIGEST_SIZE,
(init_func) sha3_384_init,
(update_func) sha3_384_update,
(digest_func) sha3_384_digest,
(update_func) xhmac_sha3_384_update,
(digest_func) xhmac_sha3_384_digest,
(set_key_func) xhmac_sha3_384_set_key,
},
{ "sha3-512", SHA3_512_DIGEST_SIZE,
(init_func) sha3_512_init,
(update_func) sha3_512_update,
(digest_func) sha3_512_digest,
(update_func) xhmac_sha3_512_update,
(digest_func) xhmac_sha3_512_digest,
(set_key_func) xhmac_sha3_512_set_key,
},
#endif
{ NULL, 0, NULL, NULL, NULL, NULL, NULL, NULL, }
};
@@ -166,11 +105,6 @@ struct crypt_hash {
struct sha256_ctx sha256;
struct sha384_ctx sha384;
struct sha512_ctx sha512;
struct ripemd160_ctx ripemd160;
struct sha3_224_ctx sha3_224;
struct sha3_256_ctx sha3_256;
struct sha3_384_ctx sha3_384;
struct sha3_512_ctx sha3_512;
} nettle_ctx;
};
@@ -182,11 +116,6 @@ struct crypt_hmac {
struct hmac_sha256_ctx sha256;
struct hmac_sha384_ctx sha384;
struct hmac_sha512_ctx sha512;
struct hmac_ripemd160_ctx ripemd160;
struct xhmac_sha3_224_ctx sha3_224;
struct xhmac_sha3_256_ctx sha3_256;
struct xhmac_sha3_384_ctx sha3_384;
struct xhmac_sha3_512_ctx sha3_512;
} nettle_ctx;
size_t key_length;
uint8_t *key;
@@ -214,11 +143,6 @@ int crypt_backend_init(struct crypt_device *ctx)
return 0;
}
void crypt_backend_destroy(void)
{
return;
}
const char *crypt_backend_version(void)
{
return version;
@@ -273,10 +197,11 @@ int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hash_destroy(struct crypt_hash *ctx)
int crypt_hash_destroy(struct crypt_hash *ctx)
{
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* HMAC */
@@ -286,7 +211,7 @@ int crypt_hmac_size(const char *name)
}
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length)
const void *buffer, size_t length)
{
struct crypt_hmac *h;
@@ -300,12 +225,12 @@ int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
if (!h->hash)
goto bad;
h->key = malloc(key_length);
h->key = malloc(length);
if (!h->key)
goto bad;
memcpy(h->key, key, key_length);
h->key_length = key_length;
memcpy(h->key, buffer, length);
h->key_length = length;
h->hash->init(&h->nettle_ctx);
h->hash->hmac_set_key(&h->nettle_ctx, h->key_length, h->key);
@@ -338,12 +263,13 @@ int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hmac_destroy(struct crypt_hmac *ctx)
int crypt_hmac_destroy(struct crypt_hmac *ctx)
{
memset(ctx->key, 0, ctx->key_length);
free(ctx->key);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* RNG - N/A */
@@ -357,29 +283,23 @@ int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
unsigned int iterations)
{
struct crypt_hmac *h;
int r;
if (!kdf)
if (!kdf || strncmp(kdf, "pbkdf2", 6))
return -EINVAL;
if (!strcmp(kdf, "pbkdf2")) {
r = crypt_hmac_init(&h, hash, password, password_length);
if (r < 0)
return r;
r = crypt_hmac_init(&h, hash, password, password_length);
if (r < 0)
return r;
nettle_pbkdf2(&h->nettle_ctx, h->hash->hmac_update,
h->hash->hmac_digest, h->hash->length, iterations,
salt_length, (const uint8_t *)salt, key_length,
(uint8_t *)key);
crypt_hmac_destroy(h);
return 0;
} else if (!strncmp(kdf, "argon2", 6)) {
return argon2(kdf, password, password_length, salt, salt_length,
key, key_length, iterations, memory, parallel);
}
nettle_pbkdf2(&h->nettle_ctx, h->hash->nettle_hmac_update,
h->hash->nettle_hmac_digest, h->hash->length, iterations,
salt_length, (const uint8_t *)salt, key_length,
(uint8_t *)key);
crypt_hmac_destroy(h);
return -EINVAL;
return 0;
}

@@ -1,8 +1,8 @@
/*
* NSS crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2010-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -88,11 +88,6 @@ int crypt_backend_init(struct crypt_device *ctx)
return 0;
}
void crypt_backend_destroy(void)
{
crypto_backend_initialised = 0;
}
uint32_t crypt_backend_flags(void)
{
return 0;
@@ -180,11 +175,12 @@ int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hash_destroy(struct crypt_hash *ctx)
int crypt_hash_destroy(struct crypt_hash *ctx)
{
PK11_DestroyContext(ctx->md, PR_TRUE);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* HMAC */
@@ -194,15 +190,15 @@ int crypt_hmac_size(const char *name)
}
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length)
const void *buffer, size_t length)
{
struct crypt_hmac *h;
SECItem keyItem;
SECItem noParams;
keyItem.type = siBuffer;
keyItem.data = CONST_CAST(unsigned char *)key;
keyItem.len = (int)key_length;
keyItem.data = CONST_CAST(unsigned char *)buffer;
keyItem.len = (int)length;
noParams.type = siBuffer;
noParams.data = 0;
@@ -281,7 +277,7 @@ int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hmac_destroy(struct crypt_hmac *ctx)
int crypt_hmac_destroy(struct crypt_hmac *ctx)
{
if (ctx->key)
PK11_FreeSymKey(ctx->key);
@@ -291,6 +287,7 @@ void crypt_hmac_destroy(struct crypt_hmac *ctx)
PK11_DestroyContext(ctx->md, PR_TRUE);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* RNG */
@@ -310,24 +307,13 @@ int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
unsigned int iterations)
{
struct hash_alg *ha;
struct hash_alg *ha = _get_alg(hash);
if (!kdf)
if (!ha || !kdf || strncmp(kdf, "pbkdf2", 6))
return -EINVAL;
if (!strcmp(kdf, "pbkdf2")) {
ha = _get_alg(hash);
if (!ha)
return -EINVAL;
return pkcs5_pbkdf2(hash, password, password_length, salt, salt_length,
iterations, key_length, key, ha->block_length);
} else if (!strncmp(kdf, "argon2", 6)) {
return argon2(kdf, password, password_length, salt, salt_length,
key, key_length, iterations, memory, parallel);
}
return -EINVAL;
return pkcs5_pbkdf2(hash, password, password_length, salt, salt_length,
iterations, key_length, key, ha->block_length);
}

@@ -1,8 +1,8 @@
/*
* OPENSSL crypto backend implementation
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2019 Milan Broz
* Copyright (C) 2010-2016, Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2016, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -49,22 +49,31 @@ struct crypt_hmac {
int hash_len;
};
/*
* Compatible wrappers for OpenSSL < 1.1.0 and LibreSSL < 2.7.0
*/
#if OPENSSL_VERSION_NUMBER < 0x10100000L || \
(defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER < 0x2070000fL)
static void openssl_backend_init(void)
int crypt_backend_init(struct crypt_device *ctx)
{
if (crypto_backend_initialised)
return 0;
OpenSSL_add_all_algorithms();
crypto_backend_initialised = 1;
return 0;
}
static const char *openssl_backend_version(void)
uint32_t crypt_backend_flags(void)
{
return 0;
}
const char *crypt_backend_version(void)
{
return SSLeay_version(SSLEAY_VERSION);
}
/*
* Compatible wrappers for OpenSSL < 1.1.0
*/
#if OPENSSL_VERSION_NUMBER < 0x10100000L
static EVP_MD_CTX *EVP_MD_CTX_new(void)
{
EVP_MD_CTX *md = malloc(sizeof(*md));
@@ -96,43 +105,8 @@ static void HMAC_CTX_free(HMAC_CTX *md)
HMAC_CTX_cleanup(md);
free(md);
}
#else
static void openssl_backend_init(void)
{
}
static const char *openssl_backend_version(void)
{
return OpenSSL_version(OPENSSL_VERSION);
}
#endif
int crypt_backend_init(struct crypt_device *ctx)
{
if (crypto_backend_initialised)
return 0;
openssl_backend_init();
crypto_backend_initialised = 1;
return 0;
}
void crypt_backend_destroy(void)
{
crypto_backend_initialised = 0;
}
uint32_t crypt_backend_flags(void)
{
return 0;
}
const char *crypt_backend_version(void)
{
return openssl_backend_version();
}
/* HASH */
int crypt_hash_size(const char *name)
{
@@ -215,11 +189,12 @@ int crypt_hash_final(struct crypt_hash *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hash_destroy(struct crypt_hash *ctx)
int crypt_hash_destroy(struct crypt_hash *ctx)
{
EVP_MD_CTX_free(ctx->md);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* HMAC */
@@ -229,7 +204,7 @@ int crypt_hmac_size(const char *name)
}
int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
const void *key, size_t key_length)
const void *buffer, size_t length)
{
struct crypt_hmac *h;
@@ -250,7 +225,7 @@ int crypt_hmac_init(struct crypt_hmac **ctx, const char *name,
return -EINVAL;
}
HMAC_Init_ex(h->md, key, key_length, h->hash_id, NULL);
HMAC_Init_ex(h->md, buffer, length, h->hash_id, NULL);
h->hash_len = EVP_MD_size(h->hash_id);
*ctx = h;
@@ -289,16 +264,20 @@ int crypt_hmac_final(struct crypt_hmac *ctx, char *buffer, size_t length)
return 0;
}
void crypt_hmac_destroy(struct crypt_hmac *ctx)
int crypt_hmac_destroy(struct crypt_hmac *ctx)
{
HMAC_CTX_free(ctx->md);
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
/* RNG */
int crypt_backend_rng(char *buffer, size_t length, int quality, int fips)
{
if (fips)
return -EINVAL;
if (RAND_bytes((unsigned char *)buffer, length) != 1)
return -EINVAL;
@@ -310,28 +289,21 @@ int crypt_pbkdf(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t iterations, uint32_t memory, uint32_t parallel)
unsigned int iterations)
{
const EVP_MD *hash_id;
if (!kdf)
if (!kdf || strncmp(kdf, "pbkdf2", 6))
return -EINVAL;
if (!strcmp(kdf, "pbkdf2")) {
hash_id = EVP_get_digestbyname(hash);
if (!hash_id)
return -EINVAL;
hash_id = EVP_get_digestbyname(hash);
if (!hash_id)
return -EINVAL;
if (!PKCS5_PBKDF2_HMAC(password, (int)password_length,
(const unsigned char *)salt, (int)salt_length,
(int)iterations, hash_id, (int)key_length, (unsigned char *)key))
return -EINVAL;
return 0;
} else if (!strncmp(kdf, "argon2", 6)) {
return argon2(kdf, password, password_length, salt, salt_length,
key, key_length, iterations, memory, parallel);
}
if (!PKCS5_PBKDF2_HMAC(password, (int)password_length,
(const unsigned char *)salt, (int)salt_length,
(int)iterations, hash_id, (int)key_length, (unsigned char *)key))
return -EINVAL;
return -EINVAL;
return 0;
}

@@ -2,7 +2,7 @@
* Generic wrapper for storage encryption modes and Initial Vectors
* (reimplementation of some functions from Linux dm-crypt kernel)
*
* Copyright (C) 2014-2019 Milan Broz
* Copyright (C) 2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -32,7 +32,7 @@
* IV documentation: https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt
*/
struct crypt_sector_iv {
enum { IV_NONE, IV_NULL, IV_PLAIN, IV_PLAIN64, IV_ESSIV, IV_BENBI, IV_PLAIN64BE } type;
enum { IV_NONE, IV_NULL, IV_PLAIN, IV_PLAIN64, IV_ESSIV, IV_BENBI } type;
int iv_size;
char *iv;
struct crypt_cipher *essiv_cipher;
@@ -56,29 +56,24 @@ static int int_log2(unsigned int x)
static int crypt_sector_iv_init(struct crypt_sector_iv *ctx,
const char *cipher_name, const char *mode_name,
const char *iv_name, const void *key, size_t key_length)
const char *iv_name, char *key, size_t key_length)
{
memset(ctx, 0, sizeof(*ctx));
ctx->iv_size = crypt_cipher_ivsize(cipher_name, mode_name);
if (ctx->iv_size < 8)
ctx->iv_size = crypt_cipher_blocksize(cipher_name);
if (ctx->iv_size < 0)
return -ENOENT;
if (!strcmp(cipher_name, "cipher_null") ||
if (!iv_name ||
!strcmp(cipher_name, "cipher_null") ||
!strcmp(mode_name, "ecb")) {
if (iv_name)
return -EINVAL;
ctx->type = IV_NONE;
ctx->iv_size = 0;
return 0;
} else if (!iv_name) {
return -EINVAL;
} else if (!strcasecmp(iv_name, "null")) {
ctx->type = IV_NULL;
} else if (!strcasecmp(iv_name, "plain64")) {
ctx->type = IV_PLAIN64;
} else if (!strcasecmp(iv_name, "plain64be")) {
ctx->type = IV_PLAIN64BE;
} else if (!strcasecmp(iv_name, "plain")) {
ctx->type = IV_PLAIN;
} else if (!strncasecmp(iv_name, "essiv:", 6)) {
@@ -156,10 +151,6 @@ static int crypt_sector_iv_generate(struct crypt_sector_iv *ctx, uint64_t sector
memset(ctx->iv, 0, ctx->iv_size);
*(uint64_t *)ctx->iv = cpu_to_le64(sector);
break;
case IV_PLAIN64BE:
memset(ctx->iv, 0, ctx->iv_size);
*(uint64_t *)&ctx->iv[ctx->iv_size - sizeof(uint64_t)] = cpu_to_be64(sector);
break;
case IV_ESSIV:
memset(ctx->iv, 0, ctx->iv_size);
*(uint64_t *)ctx->iv = cpu_to_le64(sector);
@@ -178,7 +169,7 @@ static int crypt_sector_iv_generate(struct crypt_sector_iv *ctx, uint64_t sector
return 0;
}
static void crypt_sector_iv_destroy(struct crypt_sector_iv *ctx)
static int crypt_sector_iv_destroy(struct crypt_sector_iv *ctx)
{
if (ctx->type == IV_ESSIV)
crypt_cipher_destroy(ctx->essiv_cipher);
@@ -189,6 +180,7 @@ static void crypt_sector_iv_destroy(struct crypt_sector_iv *ctx)
}
memset(ctx, 0, sizeof(*ctx));
return 0;
}
/* Block encryption storage wrappers */
@@ -197,7 +189,7 @@ int crypt_storage_init(struct crypt_storage **ctx,
uint64_t sector_start,
const char *cipher,
const char *cipher_mode,
const void *key, size_t key_length)
char *key, size_t key_length)
{
struct crypt_storage *s;
char mode_name[64];
@@ -284,10 +276,10 @@ int crypt_storage_encrypt(struct crypt_storage *ctx,
return r;
}
void crypt_storage_destroy(struct crypt_storage *ctx)
int crypt_storage_destroy(struct crypt_storage *ctx)
{
if (!ctx)
return;
return 0;
crypt_sector_iv_destroy(&ctx->cipher_iv);
@@ -296,4 +288,6 @@ void crypt_storage_destroy(struct crypt_storage *ctx)
memset(ctx, 0, sizeof(*ctx));
free(ctx);
return 0;
}
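The plain64 case above stores the 64-bit sector number little-endian in the first bytes of a zeroed IV. A standalone restatement with a hypothetical helper name, written byte-wise so it is endian-independent:

#include <stdint.h>
#include <string.h>

static void plain64_iv(unsigned char *iv, size_t iv_size, uint64_t sector)
{
    size_t i;

    memset(iv, 0, iv_size);
    for (i = 0; i < 8 && i < iv_size; i++)
        iv[i] = (unsigned char)(sector >> (8 * i)); /* little-endian store */
}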

@@ -4,8 +4,8 @@
* Copyright (C) 2004 Free Software Foundation
*
* cryptsetup related changes
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
* Copyright (C) 2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public


@@ -1,8 +1,7 @@
/*
* PBKDF performance check
* Copyright (C) 2012-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
* Copyright (C) 2016-2019 Ondrej Mosnacek
* Copyright (C) 2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2014, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -21,46 +20,10 @@
#include <stdlib.h>
#include <errno.h>
#include <limits.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include "crypto_backend.h"
#define BENCH_MIN_MS 250
#define BENCH_MIN_MS_FAST 10
#define BENCH_PERCENT_ATLEAST 95
#define BENCH_PERCENT_ATMOST 110
#define BENCH_SAMPLES_FAST 3
#define BENCH_SAMPLES_SLOW 1
/* These PBKDF2 limits must be never violated */
int crypt_pbkdf_get_limits(const char *kdf, struct crypt_pbkdf_limits *limits)
{
if (!kdf || !limits)
return -EINVAL;
if (!strcmp(kdf, "pbkdf2")) {
limits->min_iterations = 1000; /* recommendation in NIST SP 800-132 */
limits->max_iterations = UINT32_MAX;
limits->min_memory = 0; /* N/A */
limits->max_memory = 0; /* N/A */
limits->min_parallel = 0; /* N/A */
limits->max_parallel = 0; /* N/A */
return 0;
} else if (!strcmp(kdf, "argon2i") || !strcmp(kdf, "argon2id")) {
limits->min_iterations = 4;
limits->max_iterations = UINT32_MAX;
limits->min_memory = 32;
limits->max_memory = 4*1024*1024; /* 4GiB */
limits->min_parallel = 1;
limits->max_parallel = 4;
return 0;
}
return -EINVAL;
}
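
A hedged usage sketch of the limits helper above; the field names follow the initializers in crypt_pbkdf_get_limits(), and the assumption is that the prototype and struct crypt_pbkdf_limits are visible via crypto_backend.h, which this file already includes:

#include <errno.h>
#include <stdint.h>
#include "crypto_backend.h"  /* assumed to declare crypt_pbkdf_get_limits() */

/* Reject an Argon2id cost request outside the hard limits shown above
 * (iterations >= 4, 32 KiB <= memory <= 4 GiB). */
static int check_argon2id_costs(uint32_t t_cost, uint32_t m_cost_kb)
{
    struct crypt_pbkdf_limits limits;

    if (crypt_pbkdf_get_limits("argon2id", &limits) < 0)
        return -EINVAL;

    if (t_cost < limits.min_iterations ||
        m_cost_kb < limits.min_memory ||
        m_cost_kb > limits.max_memory)
        return -EINVAL;

    return 0;
}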
static long time_ms(struct rusage *start, struct rusage *end)
{
int count_kernel_time = 0;
@@ -88,245 +51,17 @@ static long time_ms(struct rusage *start, struct rusage *end)
return ms;
}
static long timespec_ms(struct timespec *start, struct timespec *end)
{
return (end->tv_sec - start->tv_sec) * 1000 +
(end->tv_nsec - start->tv_nsec) / (1000 * 1000);
}
static int measure_argon2(const char *kdf, const char *password, size_t password_length,
const char *salt, size_t salt_length,
char *key, size_t key_length,
uint32_t t_cost, uint32_t m_cost, uint32_t parallel,
size_t samples, long ms_atleast, long *out_ms)
{
long ms, ms_min = LONG_MAX;
int r;
size_t i;
for (i = 0; i < samples; i++) {
struct timespec tstart, tend;
/*
* NOTE: We must use clock_gettime here, because Argon2 can run over
* multiple threads, and thus we care about real time, not CPU time!
*/
if (clock_gettime(CLOCK_MONOTONIC_RAW, &tstart) < 0)
return -EINVAL;
r = crypt_pbkdf(kdf, NULL, password, password_length, salt,
salt_length, key, key_length, t_cost, m_cost, parallel);
if (r < 0)
return r;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &tend) < 0)
return -EINVAL;
ms = timespec_ms(&tstart, &tend);
if (ms < 0)
return -EINVAL;
if (ms < ms_atleast) {
/* early exit */
ms_min = ms;
break;
}
if (ms < ms_min) {
ms_min = ms;
}
}
*out_ms = ms_min;
return 0;
}
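
To make the sampling behaviour above concrete: measure_argon2() keeps the fastest of up to `samples` runs, but stops early as soon as one run is quicker than ms_atleast. With samples = 3 and ms_atleast = 250, measured times of 310 ms and then 240 ms stop the loop after the second run (240 < 250) and report 240 ms; times of 310, 290 and 305 ms run all three samples and report the minimum, 290 ms.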
#define CONTINUE 0
#define FINAL 1
static int next_argon2_params(uint32_t *t_cost, uint32_t *m_cost,
uint32_t min_t_cost, uint32_t min_m_cost,
uint32_t max_m_cost, long ms, uint32_t target_ms)
{
uint32_t old_t_cost, old_m_cost, new_t_cost, new_m_cost;
uint64_t num, denom;
old_t_cost = *t_cost;
old_m_cost = *m_cost;
if (ms > target_ms) {
/* decreasing, first try to lower t_cost, then m_cost */
num = (uint64_t)*t_cost * (uint64_t)target_ms;
denom = (uint64_t)ms;
new_t_cost = (uint32_t)(num / denom);
if (new_t_cost < min_t_cost) {
num = (uint64_t)*t_cost * (uint64_t)*m_cost *
(uint64_t)target_ms;
denom = (uint64_t)min_t_cost * (uint64_t)ms;
*t_cost = min_t_cost;
*m_cost = (uint32_t)(num / denom);
if (*m_cost < min_m_cost) {
*m_cost = min_m_cost;
return FINAL;
}
} else {
*t_cost = new_t_cost;
}
} else {
/* increasing, first try to increase m_cost, then t_cost */
num = (uint64_t)*m_cost * (uint64_t)target_ms;
denom = (uint64_t)ms;
new_m_cost = (uint32_t)(num / denom);
if (new_m_cost > max_m_cost) {
num = (uint64_t)*t_cost * (uint64_t)*m_cost *
(uint64_t)target_ms;
denom = (uint64_t)max_m_cost * (uint64_t)ms;
*t_cost = (uint32_t)(num / denom);
*m_cost = max_m_cost;
if (*t_cost <= min_t_cost) {
*t_cost = min_t_cost;
return FINAL;
}
} else if (new_m_cost < min_m_cost) {
*m_cost = min_m_cost;
return FINAL;
} else {
*m_cost = new_m_cost;
}
}
/* do not continue if it is the same as in the previous run */
if (old_t_cost == *t_cost && old_m_cost == *m_cost)
return FINAL;
return CONTINUE;
}
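
Two worked passes through the scaling above, with made-up numbers (min_t_cost = 4, min_m_cost = 32, max_m_cost = 4194304):

    Too fast: ms = 500, target_ms = 2000, t_cost = 4, m_cost = 262144
      new_m_cost = 262144 * 2000 / 500 = 1048576  (within limits)
      -> m_cost = 1048576, t_cost unchanged, CONTINUE

    Too slow: ms = 3000, target_ms = 2000, t_cost = 6, m_cost = 1048576
      new_t_cost = 6 * 2000 / 3000 = 4  (not below min_t_cost)
      -> t_cost = 4, m_cost unchanged, CONTINUE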
static int crypt_argon2_check(const char *kdf, const char *password,
size_t password_length, const char *salt,
size_t salt_length, size_t key_length,
uint32_t min_t_cost, uint32_t min_m_cost, uint32_t max_m_cost,
uint32_t parallel, uint32_t target_ms,
uint32_t *out_t_cost, uint32_t *out_m_cost,
int (*progress)(uint32_t time_ms, void *usrptr),
void *usrptr)
{
int r = 0;
char *key = NULL;
uint32_t t_cost, m_cost;
long ms;
long ms_atleast = (long)target_ms * BENCH_PERCENT_ATLEAST / 100;
long ms_atmost = (long)target_ms * BENCH_PERCENT_ATMOST / 100;
if (key_length <= 0 || target_ms <= 0)
return -EINVAL;
if (min_m_cost < (parallel * 8))
min_m_cost = parallel * 8;
if (max_m_cost < min_m_cost)
return -EINVAL;
key = malloc(key_length);
if (!key)
return -ENOMEM;
t_cost = min_t_cost;
m_cost = min_m_cost;
/* 1. Find some small parameters, s. t. ms >= BENCH_MIN_MS: */
while (1) {
r = measure_argon2(kdf, password, password_length, salt, salt_length,
key, key_length, t_cost, m_cost, parallel,
BENCH_SAMPLES_FAST, BENCH_MIN_MS, &ms);
if (!r) {
/* Update parameters to actual measurement */
*out_t_cost = t_cost;
*out_m_cost = m_cost;
if (progress && progress((uint32_t)ms, usrptr))
r = -EINTR;
}
if (r < 0)
goto out;
if (ms >= BENCH_MIN_MS)
break;
if (m_cost == max_m_cost) {
if (ms < BENCH_MIN_MS_FAST)
t_cost *= 16;
else {
uint32_t new = (t_cost * BENCH_MIN_MS) / (uint32_t)ms;
if (new == t_cost)
break;
t_cost = new;
}
} else {
if (ms < BENCH_MIN_MS_FAST)
m_cost *= 16;
else {
uint32_t new = (m_cost * BENCH_MIN_MS) / (uint32_t)ms;
if (new == m_cost)
break;
m_cost = new;
}
if (m_cost > max_m_cost) {
m_cost = max_m_cost;
}
}
}
/*
* 2. Use the params obtained in (1.) to estimate the target params.
* 3. Then repeatedly measure the candidate params and if they fall out of
* the acceptance range (+-5 %), try to improve the estimate:
*/
do {
if (next_argon2_params(&t_cost, &m_cost, min_t_cost, min_m_cost,
max_m_cost, ms, target_ms)) {
/* Update parameters to final computation */
*out_t_cost = t_cost;
*out_m_cost = m_cost;
break;
}
r = measure_argon2(kdf, password, password_length, salt, salt_length,
key, key_length, t_cost, m_cost, parallel,
BENCH_SAMPLES_SLOW, ms_atleast, &ms);
if (!r) {
/* Update parameters to actual measurement */
*out_t_cost = t_cost;
*out_m_cost = m_cost;
if (progress && progress((uint32_t)ms, usrptr))
r = -EINTR;
}
if (r < 0)
break;
} while (ms < ms_atleast || ms > ms_atmost);
out:
if (key) {
crypt_backend_memzero(key, key_length);
free(key);
}
return r;
}
/* This code benchmarks PBKDF and returns iterations/second using specified hash */
static int crypt_pbkdf_check(const char *kdf, const char *hash,
int crypt_pbkdf_check(const char *kdf, const char *hash,
const char *password, size_t password_length,
const char *salt, size_t salt_length,
size_t key_length, uint32_t *iter_secs, uint32_t target_ms,
int (*progress)(uint32_t time_ms, void *usrptr), void *usrptr)
size_t key_length, uint64_t *iter_secs)
{
struct rusage rstart, rend;
int r = 0, step = 0;
long ms = 0;
char *key = NULL;
uint32_t iterations;
double PBKDF2_temp;
unsigned int iterations;
if (!kdf || !hash || key_length <= 0)
return -EINVAL;
@@ -335,7 +70,6 @@ static int crypt_pbkdf_check(const char *kdf, const char *hash,
if (!key)
return -ENOMEM;
*iter_secs = 0;
iterations = 1 << 15;
while (1) {
if (getrusage(RUSAGE_SELF, &rstart) < 0) {
@@ -344,8 +78,7 @@ static int crypt_pbkdf_check(const char *kdf, const char *hash,
}
r = crypt_pbkdf(kdf, hash, password, password_length, salt,
salt_length, key, key_length, iterations, 0, 0);
salt_length, key, key_length, iterations);
if (r < 0)
goto out;
@@ -355,18 +88,6 @@ static int crypt_pbkdf_check(const char *kdf, const char *hash,
}
ms = time_ms(&rstart, &rend);
if (ms) {
PBKDF2_temp = (double)iterations * target_ms / ms;
if (PBKDF2_temp > UINT32_MAX)
return -EINVAL;
*iter_secs = (uint32_t)PBKDF2_temp;
}
if (progress && progress((uint32_t)ms, usrptr)) {
r = -EINTR;
goto out;
}
if (ms > 500)
break;
@@ -384,6 +105,9 @@ static int crypt_pbkdf_check(const char *kdf, const char *hash,
goto out;
}
}
if (iter_secs)
*iter_secs = (iterations * 1000) / ms;
out:
if (key) {
crypt_backend_memzero(key, key_length);
@@ -391,41 +115,3 @@ out:
}
return r;
}
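
Taking the (iterations * 1000) / ms form at the end of the hunk as the reported rate: if 1048576 PBKDF2 iterations consume 620 ms of CPU time, the benchmark reports 1048576 * 1000 / 620, roughly 1691251 iterations per second. The other side of the diff instead scales the measured count directly to the requested target_ms before storing it.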
int crypt_pbkdf_perf(const char *kdf, const char *hash,
const char *password, size_t password_size,
const char *salt, size_t salt_size,
size_t volume_key_size, uint32_t time_ms,
uint32_t max_memory_kb, uint32_t parallel_threads,
uint32_t *iterations_out, uint32_t *memory_out,
int (*progress)(uint32_t time_ms, void *usrptr), void *usrptr)
{
struct crypt_pbkdf_limits pbkdf_limits;
int r = -EINVAL;
if (!kdf || !iterations_out || !memory_out)
return -EINVAL;
/* FIXME: whole limits propagation should be more clear here */
r = crypt_pbkdf_get_limits(kdf, &pbkdf_limits);
if (r < 0)
return r;
*memory_out = 0;
*iterations_out = 0;
if (!strcmp(kdf, "pbkdf2"))
r = crypt_pbkdf_check(kdf, hash, password, password_size,
salt, salt_size, volume_key_size,
iterations_out, time_ms, progress, usrptr);
else if (!strncmp(kdf, "argon2", 6))
r = crypt_argon2_check(kdf, password, password_size,
salt, salt_size, volume_key_size,
pbkdf_limits.min_iterations,
pbkdf_limits.min_memory,
max_memory_kb,
parallel_threads, time_ms, iterations_out,
memory_out, progress, usrptr);
return r;
}
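
A hedged usage sketch of crypt_pbkdf_perf() as declared directly above; the benchmark password, salt and cost caps here are illustrative placeholders, not values cryptsetup itself uses, and the prototype is assumed to come from crypto_backend.h:

#include <stddef.h>
#include <stdint.h>
#include "crypto_backend.h"  /* assumed to declare crypt_pbkdf_perf() */

static int on_progress(uint32_t time_ms, void *usrptr)
{
    (void)time_ms; (void)usrptr;
    return 0;  /* returning non-zero aborts the benchmark with -EINTR */
}

static int benchmark_argon2id(uint32_t *iterations, uint32_t *memory_kb)
{
    return crypt_pbkdf_perf("argon2id", NULL,
                            "benchmark-pw", 12,      /* password */
                            "0123456789abcdef", 16,  /* salt */
                            64,                      /* volume key size */
                            2000,                    /* target time [ms] */
                            1024 * 1024,             /* memory cap [KiB] */
                            4,                       /* parallel threads */
                            iterations, memory_kb,
                            on_progress, NULL);
}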


@@ -1,327 +0,0 @@
/*
* Integrity volume handling
*
* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <uuid/uuid.h>
#include "integrity.h"
#include "internal.h"
static int INTEGRITY_read_superblock(struct crypt_device *cd,
struct device *device,
uint64_t offset, struct superblock *sb)
{
int devfd, r;
devfd = device_open(cd, device, O_RDONLY);
if(devfd < 0) {
return -EINVAL;
}
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), sb, sizeof(*sb), offset) != sizeof(*sb) ||
memcmp(sb->magic, SB_MAGIC, sizeof(sb->magic)) ||
(sb->version != SB_VERSION_1 && sb->version != SB_VERSION_2)) {
log_std(cd, "No integrity superblock detected on %s.\n",
device_path(device));
r = -EINVAL;
} else {
sb->integrity_tag_size = le16toh(sb->integrity_tag_size);
sb->journal_sections = le32toh(sb->journal_sections);
sb->provided_data_sectors = le64toh(sb->provided_data_sectors);
sb->recalc_sector = le64toh(sb->recalc_sector);
sb->flags = le32toh(sb->flags);
r = 0;
}
close(devfd);
return r;
}
int INTEGRITY_read_sb(struct crypt_device *cd, struct crypt_params_integrity *params)
{
struct superblock sb;
int r;
r = INTEGRITY_read_superblock(cd, crypt_metadata_device(cd), 0, &sb);
if (r)
return r;
params->sector_size = SECTOR_SIZE << sb.log2_sectors_per_block;
params->tag_size = sb.integrity_tag_size;
return 0;
}
int INTEGRITY_dump(struct crypt_device *cd, struct device *device, uint64_t offset)
{
struct superblock sb;
int r;
r = INTEGRITY_read_superblock(cd, device, offset, &sb);
if (r)
return r;
log_std(cd, "Info for integrity device %s.\n", device_path(device));
log_std(cd, "superblock_version %d\n", (unsigned)sb.version);
log_std(cd, "log2_interleave_sectors %d\n", sb.log2_interleave_sectors);
log_std(cd, "integrity_tag_size %u\n", sb.integrity_tag_size);
log_std(cd, "journal_sections %u\n", sb.journal_sections);
log_std(cd, "provided_data_sectors %" PRIu64 "\n", sb.provided_data_sectors);
log_std(cd, "sector_size %u\n", SECTOR_SIZE << sb.log2_sectors_per_block);
if (sb.version == SB_VERSION_2 && (sb.flags & SB_FLAG_RECALCULATING))
log_std(cd, "recalc_sector %" PRIu64 "\n", sb.recalc_sector);
log_std(cd, "flags %s%s\n",
sb.flags & SB_FLAG_HAVE_JOURNAL_MAC ? "have_journal_mac " : "",
sb.flags & SB_FLAG_RECALCULATING ? "recalculating " : "");
return 0;
}
int INTEGRITY_data_sectors(struct crypt_device *cd,
struct device *device, uint64_t offset,
uint64_t *data_sectors)
{
struct superblock sb;
int r;
r = INTEGRITY_read_superblock(cd, device, offset, &sb);
if (r)
return r;
*data_sectors = sb.provided_data_sectors;
return 0;
}
int INTEGRITY_key_size(struct crypt_device *cd, const char *integrity)
{
if (!integrity)
return 0;
//FIXME: use crypto backend hash size
if (!strcmp(integrity, "aead"))
return 0;
else if (!strcmp(integrity, "hmac(sha1)"))
return 20;
else if (!strcmp(integrity, "hmac(sha256)"))
return 32;
else if (!strcmp(integrity, "hmac(sha512)"))
return 64;
else if (!strcmp(integrity, "poly1305"))
return 0;
else if (!strcmp(integrity, "none"))
return 0;
return -EINVAL;
}
int INTEGRITY_tag_size(struct crypt_device *cd,
const char *integrity,
const char *cipher,
const char *cipher_mode)
{
int iv_tag_size = 0, auth_tag_size = 0;
if (!cipher_mode)
iv_tag_size = 0;
else if (!strcmp(cipher_mode, "xts-random"))
iv_tag_size = 16;
else if (!strcmp(cipher_mode, "gcm-random"))
iv_tag_size = 12;
else if (!strcmp(cipher_mode, "ccm-random"))
iv_tag_size = 8;
else if (!strcmp(cipher_mode, "ctr-random"))
iv_tag_size = 16;
else if (!strcmp(cipher, "aegis256") && !strcmp(cipher_mode, "random"))
iv_tag_size = 32;
else if (!strcmp(cipher_mode, "random"))
iv_tag_size = 16;
//FIXME: use crypto backend hash size
if (!integrity || !strcmp(integrity, "none"))
auth_tag_size = 0;
else if (!strcmp(integrity, "aead"))
auth_tag_size = 16; //FIXME gcm- mode only
else if (!strcmp(integrity, "cmac(aes)"))
auth_tag_size = 16;
else if (!strcmp(integrity, "hmac(sha1)"))
auth_tag_size = 20;
else if (!strcmp(integrity, "hmac(sha256)"))
auth_tag_size = 32;
else if (!strcmp(integrity, "hmac(sha512)"))
auth_tag_size = 64;
else if (!strcmp(integrity, "poly1305")) {
if (iv_tag_size)
iv_tag_size = 12;
auth_tag_size = 16;
}
return iv_tag_size + auth_tag_size;
}
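
Two worked examples of the per-sector tag size computed above: aes-gcm-random with integrity "aead" needs a 12-byte stored IV plus a 16-byte authentication tag, i.e. 28 bytes of tag per sector; aes-xts-random with "hmac(sha256)" needs 16 + 32 = 48 bytes per sector.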
int INTEGRITY_create_dmd_device(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
struct crypt_dm_active_device *dmd,
uint32_t flags)
{
int r;
if (!dmd)
return -EINVAL;
*dmd = (struct crypt_dm_active_device) {
.flags = flags,
};
r = INTEGRITY_data_sectors(cd, crypt_metadata_device(cd),
crypt_get_data_offset(cd) * SECTOR_SIZE, &dmd->size);
if (r < 0)
return r;
return dm_integrity_target_set(&dmd->segment, 0, dmd->size,
crypt_metadata_device(cd), crypt_data_device(cd),
crypt_get_integrity_tag_size(cd), crypt_get_data_offset(cd),
crypt_get_sector_size(cd), vk, journal_crypt_key,
journal_mac_key, params);
}
int INTEGRITY_activate_dmd_device(struct crypt_device *cd,
const char *name,
struct crypt_dm_active_device *dmd)
{
int r;
uint32_t dmi_flags;
struct dm_target *tgt = &dmd->segment;
if (!single_segment(dmd) || tgt->type != DM_INTEGRITY)
return -EINVAL;
log_dbg(cd, "Trying to activate INTEGRITY device on top of %s, using name %s, tag size %d, provided sectors %" PRIu64".",
device_path(tgt->data_device), name, tgt->u.integrity.tag_size, dmd->size);
r = device_block_adjust(cd, tgt->data_device, DEV_EXCL,
tgt->u.integrity.offset, NULL, &dmd->flags);
if (r)
return r;
if (tgt->u.integrity.meta_device) {
r = device_block_adjust(cd, tgt->u.integrity.meta_device, DEV_EXCL, 0, NULL, NULL);
if (r)
return r;
}
r = dm_create_device(cd, name, "INTEGRITY", dmd);
if (r < 0 && (dm_flags(cd, DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
return -ENOTSUP;
}
return r;
}
int INTEGRITY_activate(struct crypt_device *cd,
const char *name,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
uint32_t flags)
{
struct crypt_dm_active_device dmd = {};
int r = INTEGRITY_create_dmd_device(cd, params, vk, journal_crypt_key, journal_mac_key, &dmd, flags);
if (r < 0)
return r;
r = INTEGRITY_activate_dmd_device(cd, name, &dmd);
dm_targets_free(cd, &dmd);
return r;
}
int INTEGRITY_format(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key)
{
uint32_t dmi_flags;
char tmp_name[64], tmp_uuid[40];
struct crypt_dm_active_device dmdi = {
.size = 8,
.flags = CRYPT_ACTIVATE_PRIVATE, /* We always create journal but it can be unused later */
};
struct dm_target *tgt = &dmdi.segment;
int r;
uuid_t tmp_uuid_bin;
struct volume_key *vk = NULL;
uuid_generate(tmp_uuid_bin);
uuid_unparse(tmp_uuid_bin, tmp_uuid);
snprintf(tmp_name, sizeof(tmp_name), "temporary-cryptsetup-%s", tmp_uuid);
/* There is no data area, we can actually use fake zeroed key */
if (params && params->integrity_key_size)
vk = crypt_alloc_volume_key(params->integrity_key_size, NULL);
r = dm_integrity_target_set(tgt, 0, dmdi.size, crypt_metadata_device(cd),
crypt_data_device(cd), crypt_get_integrity_tag_size(cd),
crypt_get_data_offset(cd), crypt_get_sector_size(cd), vk,
journal_crypt_key, journal_mac_key, params);
if (r < 0) {
crypt_free_volume_key(vk);
return r;
}
log_dbg(cd, "Trying to format INTEGRITY device on top of %s, tmp name %s, tag size %d.",
device_path(tgt->data_device), tmp_name, tgt->u.integrity.tag_size);
r = device_block_adjust(cd, tgt->data_device, DEV_EXCL, tgt->u.integrity.offset, NULL, NULL);
if (r < 0 && (dm_flags(cd, DM_INTEGRITY, &dmi_flags) || !(dmi_flags & DM_INTEGRITY_SUPPORTED))) {
log_err(cd, _("Kernel doesn't support dm-integrity mapping."));
r = -ENOTSUP;
}
if (r) {
dm_targets_free(cd, &dmdi);
return r;
}
if (tgt->u.integrity.meta_device) {
r = device_block_adjust(cd, tgt->u.integrity.meta_device, DEV_EXCL, 0, NULL, NULL);
if (r) {
dm_targets_free(cd, &dmdi);
return r;
}
}
r = dm_create_device(cd, tmp_name, "INTEGRITY", &dmdi);
crypt_free_volume_key(vk);
dm_targets_free(cd, &dmdi);
if (r)
return r;
return dm_remove_device(cd, tmp_name, CRYPT_DEACTIVATE_FORCE);
}


@@ -1,91 +0,0 @@
/*
* Integrity header definition
*
* Copyright (C) 2016-2019 Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef _CRYPTSETUP_INTEGRITY_H
#define _CRYPTSETUP_INTEGRITY_H
#include <stdint.h>
struct crypt_device;
struct device;
struct crypt_params_integrity;
struct volume_key;
struct crypt_dm_active_device;
/* dm-integrity helper */
#define SB_MAGIC "integrt"
#define SB_VERSION_1 1
#define SB_VERSION_2 2
#define SB_FLAG_HAVE_JOURNAL_MAC (1 << 0)
#define SB_FLAG_RECALCULATING (1 << 1) /* V2 only */
struct superblock {
uint8_t magic[8];
uint8_t version;
int8_t log2_interleave_sectors;
uint16_t integrity_tag_size;
uint32_t journal_sections;
uint64_t provided_data_sectors;
uint32_t flags;
uint8_t log2_sectors_per_block;
uint8_t pad[3];
uint64_t recalc_sector; /* V2 only */
} __attribute__ ((packed));
int INTEGRITY_read_sb(struct crypt_device *cd, struct crypt_params_integrity *params);
int INTEGRITY_dump(struct crypt_device *cd, struct device *device, uint64_t offset);
int INTEGRITY_data_sectors(struct crypt_device *cd,
struct device *device, uint64_t offset,
uint64_t *data_sectors);
int INTEGRITY_key_size(struct crypt_device *cd,
const char *integrity);
int INTEGRITY_tag_size(struct crypt_device *cd,
const char *integrity,
const char *cipher,
const char *cipher_mode);
int INTEGRITY_format(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key);
int INTEGRITY_activate(struct crypt_device *cd,
const char *name,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
uint32_t flags);
int INTEGRITY_create_dmd_device(struct crypt_device *cd,
const struct crypt_params_integrity *params,
struct volume_key *vk,
struct volume_key *journal_crypt_key,
struct volume_key *journal_mac_key,
struct crypt_dm_active_device *dmd,
uint32_t flags);
int INTEGRITY_activate_dmd_device(struct crypt_device *cd,
const char *name,
struct crypt_dm_active_device *dmd);
#endif
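
As a cross-check on the packed struct superblock above: the on-disk size works out to 8 + 1 + 1 + 2 + 4 + 8 + 4 + 1 + 3 + 8 = 40 bytes, and the data sector size reported by INTEGRITY_read_sb() is SECTOR_SIZE << log2_sectors_per_block, so log2_sectors_per_block = 3 corresponds to 4096-byte sectors (with SECTOR_SIZE = 512 as defined further down in internal.h).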


@@ -1,10 +1,10 @@
/*
* libcryptsetup - cryptsetup library internal
*
* Copyright (C) 2004 Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2019 Milan Broz
* Copyright (C) 2004, Jana Saout <jana@saout.de>
* Copyright (C) 2004-2007, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2009-2012, Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -26,19 +26,15 @@
#include <stdint.h>
#include <stdarg.h>
#include <stdbool.h>
#include <unistd.h>
#include <inttypes.h>
#include "nls.h"
#include "bitops.h"
#include "utils_blkid.h"
#include "utils_crypt.h"
#include "utils_loop.h"
#include "utils_dm.h"
#include "utils_fips.h"
#include "utils_keyring.h"
#include "utils_io.h"
#include "crypto_backend.h"
#include "libcryptsetup.h"
@@ -46,108 +42,50 @@
/* to silence gcc -Wcast-qual for const cast */
#define CONST_CAST(x) (x)(uintptr_t)
#define SHIFT_4K 12
#define SECTOR_SHIFT 9
#define SECTOR_SIZE (1 << SECTOR_SHIFT)
#define MAX_SECTOR_SIZE 4096 /* min page size among all platforms */
#define DEFAULT_DISK_ALIGNMENT 1048576 /* 1MiB */
#define DEFAULT_MEM_ALIGNMENT 4096
#define LOG_MAX_LEN 4096
#define MAX_ERROR_LENGTH 512
#define at_least(a, b) ({ __typeof__(a) __at_least = (a); (__at_least >= (b))?__at_least:(b); })
#define MISALIGNED(a, b) ((a) & ((b) - 1))
#define MISALIGNED_4K(a) MISALIGNED((a), 1 << SHIFT_4K)
#define MISALIGNED_512(a) MISALIGNED((a), 1 << SECTOR_SHIFT)
#define NOTPOW2(a) MISALIGNED((a), (a))
#ifndef ARRAY_SIZE
# define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#endif
#define MOVE_REF(x, y) \
do { \
typeof (x) *_px = &(x), *_py = &(y); \
*_px = *_py; \
*_py = NULL; \
} while (0)
struct crypt_device;
struct volume_key {
size_t keylength;
const char *key_description;
char key[];
};
struct volume_key *crypt_alloc_volume_key(size_t keylength, const char *key);
struct volume_key *crypt_generate_volume_key(struct crypt_device *cd, size_t keylength);
void crypt_free_volume_key(struct volume_key *vk);
int crypt_volume_key_set_description(struct volume_key *key, const char *key_description);
struct crypt_pbkdf_type *crypt_get_pbkdf(struct crypt_device *cd);
int init_pbkdf_type(struct crypt_device *cd,
const struct crypt_pbkdf_type *pbkdf,
const char *dev_type);
int verify_pbkdf_params(struct crypt_device *cd,
const struct crypt_pbkdf_type *pbkdf);
int crypt_benchmark_pbkdf_internal(struct crypt_device *cd,
struct crypt_pbkdf_type *pbkdf,
size_t volume_key_size);
const char *crypt_get_cipher_spec(struct crypt_device *cd);
/* Device backend */
struct device;
int device_alloc(struct crypt_device *cd, struct device **device, const char *path);
int device_alloc_no_check(struct device **device, const char *path);
void device_free(struct crypt_device *cd, struct device *device);
int device_alloc(struct device **device, const char *path);
void device_free(struct device *device);
const char *device_path(const struct device *device);
const char *device_dm_name(const struct device *device);
const char *device_block_path(const struct device *device);
void device_topology_alignment(struct crypt_device *cd,
struct device *device,
unsigned long *required_alignment, /* bytes */
unsigned long *alignment_offset, /* bytes */
unsigned long default_alignment);
size_t device_block_size(struct crypt_device *cd, struct device *device);
void device_topology_alignment(struct device *device,
unsigned long *required_alignment, /* bytes */
unsigned long *alignment_offset, /* bytes */
unsigned long default_alignment);
int device_block_size(struct device *device);
int device_read_ahead(struct device *device, uint32_t *read_ahead);
int device_size(struct device *device, uint64_t *size);
int device_open(struct crypt_device *cd, struct device *device, int flags);
int device_open(struct device *device, int flags);
void device_disable_direct_io(struct device *device);
int device_is_identical(struct device *device1, struct device *device2);
int device_is_rotational(struct device *device);
size_t device_alignment(struct device *device);
int device_direct_io(const struct device *device);
int device_fallocate(struct device *device, uint64_t size);
void device_sync(struct crypt_device *cd, struct device *device, int devfd);
int device_check_size(struct crypt_device *cd,
struct device *device,
uint64_t req_offset, int falloc);
int device_open_locked(struct crypt_device *cd, struct device *device, int flags);
int device_read_lock(struct crypt_device *cd, struct device *device);
int device_write_lock(struct crypt_device *cd, struct device *device);
void device_read_unlock(struct crypt_device *cd, struct device *device);
void device_write_unlock(struct crypt_device *cd, struct device *device);
enum devcheck { DEV_OK = 0, DEV_EXCL = 1 };
int device_check_access(struct crypt_device *cd,
struct device *device,
enum devcheck device_check);
enum devcheck { DEV_OK = 0, DEV_EXCL = 1, DEV_SHARED = 2 };
int device_block_adjust(struct crypt_device *cd,
struct device *device,
enum devcheck device_check,
uint64_t device_offset,
uint64_t *size,
uint32_t *flags);
size_t size_round_up(size_t size, size_t block);
int create_or_reload_device(struct crypt_device *cd, const char *name,
const char *type, struct crypt_dm_active_device *dmd);
int create_or_reload_device_with_integrity(struct crypt_device *cd, const char *name,
const char *type, struct crypt_dm_active_device *dmd,
struct crypt_dm_active_device *dmdi);
size_t size_round_up(size_t size, unsigned int block);
/* Receive backend devices from context helpers */
struct device *crypt_metadata_device(struct crypt_device *cd);
@@ -161,17 +99,16 @@ int crypt_dev_is_partition(const char *dev_path);
char *crypt_get_partition_device(const char *dev_path, uint64_t offset, uint64_t size);
char *crypt_get_base_device(const char *dev_path);
uint64_t crypt_dev_partition_offset(const char *dev_path);
int lookup_by_disk_id(const char *dm_uuid);
int lookup_by_sysfs_uuid_field(const char *dm_uuid, size_t max_len);
size_t crypt_getpagesize(void);
unsigned crypt_cpusonline(void);
uint64_t crypt_getphysmemory_kb(void);
ssize_t write_blockwise(int fd, int bsize, void *buf, size_t count);
ssize_t read_blockwise(int fd, int bsize, void *_buf, size_t count);
ssize_t write_lseek_blockwise(int fd, int bsize, char *buf, size_t count, off_t offset);
unsigned crypt_getpagesize(void);
int init_crypto(struct crypt_device *ctx);
void logger(struct crypt_device *cd, int level, const char *file, int line, const char *format, ...) __attribute__ ((format (printf, 5, 6)));
#define log_dbg(c, x...) logger(c, CRYPT_LOG_DEBUG, __FILE__, __LINE__, x)
void logger(struct crypt_device *cd, int class, const char *file, int line, const char *format, ...) __attribute__ ((format (printf, 5, 6)));
#define log_dbg(x...) logger(NULL, CRYPT_LOG_DEBUG, __FILE__, __LINE__, x)
#define log_std(c, x...) logger(c, CRYPT_LOG_NORMAL, __FILE__, __LINE__, x)
#define log_verbose(c, x...) logger(c, CRYPT_LOG_VERBOSE, __FILE__, __LINE__, x)
#define log_err(c, x...) logger(c, CRYPT_LOG_ERROR, __FILE__, __LINE__, x)
@@ -181,14 +118,12 @@ int crypt_get_debug_level(void);
int crypt_memlock_inc(struct crypt_device *ctx);
int crypt_memlock_dec(struct crypt_device *ctx);
int crypt_metadata_locking_enabled(void);
int crypt_random_init(struct crypt_device *ctx);
int crypt_random_get(struct crypt_device *ctx, char *buf, size_t len, int quality);
void crypt_random_exit(void);
int crypt_random_default_key_rng(void);
int crypt_plain_hash(struct crypt_device *cd,
int crypt_plain_hash(struct crypt_device *ctx,
const char *hash_name,
char *key, size_t key_size,
const char *passphrase, size_t passphrase_size);
@@ -198,33 +133,23 @@ int PLAIN_activate(struct crypt_device *cd,
uint64_t size,
uint32_t flags);
void *crypt_get_hdr(struct crypt_device *cd, const char *type);
/**
* Different methods used to erase sensitive data concerning
* either encrypted payload area or master key inside keyslot
* area
*/
typedef enum {
CRYPT_WIPE_ZERO, /**< overwrite area using zero blocks */
CRYPT_WIPE_DISK, /**< erase disk (using Gutmann method if it is rotational disk)*/
CRYPT_WIPE_SSD, /**< erase solid state disk (random write) */
CRYPT_WIPE_RANDOM /**< overwrite area using some up to now unspecified
* random algorithm */
} crypt_wipe_type;
int crypt_wipe_device(struct crypt_device *cd,
struct device *device,
crypt_wipe_pattern pattern,
uint64_t offset,
uint64_t length,
size_t wipe_block_size,
int (*progress)(uint64_t size, uint64_t offset, void *usrptr),
void *usrptr);
/* Internal integrity helpers */
const char *crypt_get_integrity(struct crypt_device *cd);
int crypt_get_integrity_key_size(struct crypt_device *cd);
int crypt_get_integrity_tag_size(struct crypt_device *cd);
int crypt_key_in_keyring(struct crypt_device *cd);
void crypt_set_key_in_keyring(struct crypt_device *cd, unsigned key_in_keyring);
int crypt_volume_key_load_in_keyring(struct crypt_device *cd, struct volume_key *vk);
int crypt_use_keyring_for_vk(struct crypt_device *cd);
void crypt_drop_keyring_key(struct crypt_device *cd, const char *key_description);
static inline uint64_t version(uint16_t major, uint16_t minor, uint16_t patch, uint16_t release)
{
return (uint64_t)release | ((uint64_t)patch << 16) | ((uint64_t)minor << 32) | ((uint64_t)major << 48);
}
int kernel_version(uint64_t *kversion);
int crypt_wipe(struct device *device,
uint64_t offset,
uint64_t sectors,
crypt_wipe_type type,
int flags);
#endif /* INTERNAL_H */
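
A worked example of the version() packing near the end of internal.h above: version(2, 0, 6, 0) packs to 0x0002000000060000, with major in bits 48..63, minor in 32..47, patch in 16..31 and release in 0..15. Because each component occupies its own 16-bit field, ordinary integer comparison of two packed values orders versions correctly, which is presumably how the value filled in by kernel_version() right below it is meant to be compared.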

File diff suppressed because it is too large.

@@ -1,21 +1,21 @@
CRYPTSETUP_2.0 {
CRYPTSETUP_1.0 {
global:
crypt_init;
crypt_init_data_device;
crypt_init_by_name;
crypt_init_by_name_and_header;
crypt_set_log_callback;
crypt_set_confirm_callback;
crypt_set_password_callback;
crypt_set_timeout;
crypt_set_password_retry;
crypt_set_iterarion_time;
crypt_set_iteration_time;
crypt_set_password_verify;
crypt_set_uuid;
crypt_set_label;
crypt_set_data_device;
crypt_memory_lock;
crypt_metadata_locking;
crypt_format;
crypt_convert;
crypt_load;
crypt_repair;
crypt_resize;
@@ -23,96 +23,51 @@ CRYPTSETUP_2.0 {
crypt_resume_by_passphrase;
crypt_resume_by_keyfile;
crypt_resume_by_keyfile_offset;
crypt_resume_by_keyfile_device_offset;
crypt_free;
crypt_keyslot_add_by_passphrase;
crypt_keyslot_change_by_passphrase;
crypt_keyslot_add_by_keyfile;
crypt_keyslot_add_by_keyfile_offset;
crypt_keyslot_add_by_keyfile_device_offset;
crypt_keyslot_add_by_volume_key;
crypt_keyslot_add_by_key;
crypt_keyslot_set_priority;
crypt_keyslot_get_priority;
crypt_token_json_get;
crypt_token_json_set;
crypt_token_status;
crypt_token_luks2_keyring_get;
crypt_token_luks2_keyring_set;
crypt_token_assign_keyslot;
crypt_token_unassign_keyslot;
crypt_token_is_assigned;
crypt_token_register;
crypt_activate_by_token;
crypt_keyslot_destroy;
crypt_activate_by_passphrase;
crypt_activate_by_keyfile;
crypt_activate_by_keyfile_offset;
crypt_activate_by_keyfile_device_offset;
crypt_activate_by_volume_key;
crypt_activate_by_keyring;
crypt_deactivate;
crypt_deactivate_by_name;
crypt_volume_key_get;
crypt_volume_key_verify;
crypt_volume_key_keyring;
crypt_status;
crypt_dump;
crypt_benchmark;
crypt_benchmark_pbkdf;
crypt_benchmark_kdf;
crypt_get_cipher;
crypt_get_cipher_mode;
crypt_get_integrity_info;
crypt_get_uuid;
crypt_set_data_offset;
crypt_get_data_offset;
crypt_get_iv_offset;
crypt_get_volume_key_size;
crypt_get_device_name;
crypt_get_metadata_device_name;
crypt_get_metadata_size;
crypt_set_metadata_size;
crypt_get_verity_info;
crypt_get_sector_size;
crypt_get_type;
crypt_get_default_type;
crypt_get_active_device;
crypt_get_active_integrity_failures;
crypt_persistent_flags_set;
crypt_persistent_flags_get;
crypt_set_rng_type;
crypt_get_rng_type;
crypt_set_pbkdf_type;
crypt_get_pbkdf_type;
crypt_get_pbkdf_type_params;
crypt_get_pbkdf_default;
crypt_keyslot_max;
crypt_keyslot_area;
crypt_keyslot_status;
crypt_keyslot_get_key_size;
crypt_keyslot_set_encryption;
crypt_keyslot_get_encryption;
crypt_keyslot_get_pbkdf;
crypt_last_error;
crypt_get_error;
crypt_get_dir;
crypt_set_debug_level;
crypt_log;
crypt_header_backup;
crypt_header_restore;
crypt_keyfile_read;
crypt_keyfile_device_read;
crypt_wipe;
local:
*;
};

File diff suppressed because it is too large.

lib/loopaes/Makefile.am (new file, 14 lines)

@@ -0,0 +1,14 @@
moduledir = $(libdir)/cryptsetup
noinst_LTLIBRARIES = libloopaes.la
libloopaes_la_CFLAGS = -Wall $(AM_CFLAGS) @CRYPTO_CFLAGS@
libloopaes_la_SOURCES = \
loopaes.c \
loopaes.h
AM_CPPFLAGS = -include config.h \
-I$(top_srcdir)/lib \
-I$(top_srcdir)/lib/crypto_backend


@@ -1,8 +1,8 @@
/*
* loop-AES compatible volume handling
*
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
* Copyright (C) 2011-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2013, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -87,12 +87,12 @@ static int hash_keys(struct crypt_device *cd,
tweak = get_tweak(keys_count);
if (!keys_count || !key_len_output || !hash_name || !key_len_input) {
log_err(cd, _("Key processing error (using hash %s)."),
log_err(cd, _("Key processing error (using hash %s).\n"),
hash_name ?: "[none]");
return -EINVAL;
}
*vk = crypt_alloc_volume_key((size_t)key_len_output * keys_count, NULL);
*vk = crypt_alloc_volume_key(key_len_output * keys_count, NULL);
if (!*vk)
return -ENOMEM;
@@ -137,13 +137,13 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
unsigned int key_lengths[LOOPAES_KEYS_MAX];
unsigned int i, key_index, key_len, offset;
log_dbg(cd, "Parsing loop-AES keyfile of size %zu.", buffer_len);
log_dbg("Parsing loop-AES keyfile of size %zu.", buffer_len);
if (!buffer_len)
return -EINVAL;
if (keyfile_is_gpg(buffer, buffer_len)) {
log_err(cd, _("Detected not yet supported GPG encrypted keyfile."));
log_err(cd, _("Detected not yet supported GPG encrypted keyfile.\n"));
log_std(cd, _("Please use gpg --decrypt <KEYFILE> | cryptsetup --keyfile=- ...\n"));
return -EINVAL;
}
@@ -164,8 +164,8 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
key_lengths[key_index]++;
}
if (offset == buffer_len) {
log_dbg(cd, "Unterminated key #%d in keyfile.", key_index);
log_err(cd, _("Incompatible loop-AES keyfile detected."));
log_dbg("Unterminated key #%d in keyfile.", key_index);
log_err(cd, _("Incompatible loop-AES keyfile detected.\n"));
return -EINVAL;
}
while (offset < buffer_len && !buffer[offset])
@@ -177,7 +177,7 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
key_len = key_lengths[0];
for (i = 0; i < key_index; i++)
if (!key_lengths[i] || (key_lengths[i] != key_len)) {
log_dbg(cd, "Unexpected length %d of key #%d (should be %d).",
log_dbg("Unexpected length %d of key #%d (should be %d).",
key_lengths[i], i, key_len);
key_len = 0;
break;
@@ -185,11 +185,11 @@ int LOOPAES_parse_keyfile(struct crypt_device *cd,
if (offset != buffer_len || key_len == 0 ||
(key_index != 1 && key_index !=64 && key_index != 65)) {
log_err(cd, _("Incompatible loop-AES keyfile detected."));
log_err(cd, _("Incompatible loop-AES keyfile detected.\n"));
return -EINVAL;
}
log_dbg(cd, "Keyfile: %d keys of length %d.", key_index, key_len);
log_dbg("Keyfile: %d keys of length %d.", key_index, key_len);
*keys_count = key_index;
return hash_keys(cd, vk, hash, keys, key_index,
@@ -203,15 +203,24 @@ int LOOPAES_activate(struct crypt_device *cd,
struct volume_key *vk,
uint32_t flags)
{
int r;
uint32_t req_flags, dmc_flags;
char *cipher = NULL;
uint32_t req_flags;
int r;
struct crypt_dm_active_device dmd = {
.flags = flags,
.target = DM_CRYPT,
.size = 0,
.flags = flags,
.data_device = crypt_data_device(cd),
.u.crypt = {
.cipher = NULL,
.vk = vk,
.offset = crypt_get_data_offset(cd),
.iv_offset = crypt_get_iv_offset(cd),
}
};
r = device_block_adjust(cd, crypt_data_device(cd), DEV_EXCL,
crypt_get_data_offset(cd), &dmd.size, &dmd.flags);
r = device_block_adjust(cd, dmd.data_device, DEV_EXCL,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
if (r)
return r;
@@ -225,29 +234,17 @@ int LOOPAES_activate(struct crypt_device *cd,
if (r < 0)
return -ENOMEM;
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size, crypt_data_device(cd),
vk, cipher, crypt_get_iv_offset(cd),
crypt_get_data_offset(cd), crypt_get_integrity(cd),
crypt_get_integrity_tag_size(cd), crypt_get_sector_size(cd));
dmd.u.crypt.cipher = cipher;
log_dbg("Trying to activate loop-AES device %s using cipher %s.",
name, dmd.u.crypt.cipher);
if (r) {
free(cipher);
return r;
}
r = dm_create_device(cd, name, CRYPT_LOOPAES, &dmd, 0);
log_dbg(cd, "Trying to activate loop-AES device %s using cipher %s.",
name, cipher);
r = dm_create_device(cd, name, CRYPT_LOOPAES, &dmd);
if (r < 0 && !dm_flags(cd, DM_CRYPT, &dmc_flags) &&
(dmc_flags & req_flags) != req_flags) {
log_err(cd, _("Kernel doesn't support loop-AES compatible mapping."));
if (r < 0 && !(dm_flags() & req_flags)) {
log_err(cd, _("Kernel doesn't support loop-AES compatible mapping.\n"));
r = -ENOTSUP;
}
dm_targets_free(cd, &dmd);
free(cipher);
return r;
}
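
To make the keyfile checks above concrete: a valid plaintext loop-AES keyfile is a series of newline-terminated keys of identical length, and the key count must be exactly 1, 64 or 65. For example, 64 keys of 43 characters parse into key_index = 64 and key_len = 43, after which hash_keys() allocates a single volume key of key_len_output * 64 bytes. GPG-encrypted keyfiles are rejected with the hint already printed by the code: gpg --decrypt <KEYFILE> | cryptsetup --keyfile=- ...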


@@ -1,8 +1,8 @@
/*
* loop-AES compatible volume handling
*
* Copyright (C) 2011-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2019 Milan Broz
* Copyright (C) 2011-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2013, Milan Broz
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -22,7 +22,6 @@
#ifndef _LOOPAES_H
#define _LOOPAES_H
#include <stdint.h>
#include <unistd.h>
struct crypt_device;

lib/luks1/Makefile.am (new file, 17 lines)

@@ -0,0 +1,17 @@
moduledir = $(libdir)/cryptsetup
noinst_LTLIBRARIES = libluks1.la
libluks1_la_CFLAGS = -Wall $(AM_CFLAGS) @CRYPTO_CFLAGS@
libluks1_la_SOURCES = \
af.c \
keymanage.c \
keyencryption.c \
af.h \
luks.h
AM_CPPFLAGS = -include config.h \
-I$(top_srcdir)/lib \
-I$(top_srcdir)/lib/crypto_backend


@@ -1,11 +1,11 @@
/*
* AFsplitter - Anti forensic information splitter
*
* Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2012, Red Hat, Inc. All rights reserved.
*
* AFsplitter diffuses information over a large stripe of data,
* therefore supporting secure data destruction.
* therefor supporting secure data destruction.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -25,6 +25,7 @@
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <netinet/in.h>
#include <errno.h>
#include "internal.h"
#include "af.h"
@@ -33,7 +34,7 @@ static void XORblock(const char *src1, const char *src2, char *dst, size_t n)
{
size_t j;
for (j = 0; j < n; j++)
for(j = 0; j < n; ++j)
dst[j] = src1[j] ^ src2[j];
}
@@ -44,7 +45,7 @@ static int hash_buf(const char *src, char *dst, uint32_t iv,
char *iv_char = (char *)&iv;
int r;
iv = be32_to_cpu(iv);
iv = htonl(iv);
if (crypt_hash_init(&hd, hash_name))
return -EINVAL;
@@ -60,38 +61,34 @@ out:
return r;
}
/*
* diffuse: Information spreading over the whole dataset with
/* diffuse: Information spreading over the whole dataset with
* the help of hash function.
*/
static int diffuse(char *src, char *dst, size_t size, const char *hash_name)
{
int r, hash_size = crypt_hash_size(hash_name);
int hash_size = crypt_hash_size(hash_name);
unsigned int digest_size;
unsigned int i, blocks, padding;
if (hash_size <= 0)
return -EINVAL;
return 1;
digest_size = hash_size;
blocks = size / digest_size;
padding = size % digest_size;
for (i = 0; i < blocks; i++) {
r = hash_buf(src + digest_size * i,
for (i = 0; i < blocks; i++)
if(hash_buf(src + digest_size * i,
dst + digest_size * i,
i, (size_t)digest_size, hash_name);
if (r < 0)
return r;
}
i, (size_t)digest_size, hash_name))
return 1;
if (padding) {
r = hash_buf(src + digest_size * i,
if(padding)
if(hash_buf(src + digest_size * i,
dst + digest_size * i,
i, (size_t)padding, hash_name);
if (r < 0)
return r;
}
i, (size_t)padding, hash_name))
return 1;
return 0;
}
@@ -101,57 +98,53 @@ static int diffuse(char *src, char *dst, size_t size, const char *hash_name)
* blocknumbers. The same blocksize and blocknumbers values
* must be supplied to AF_merge to recover information.
*/
int AF_split(struct crypt_device *ctx, const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash)
int AF_split(char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash)
{
unsigned int i;
char *bufblock;
int r;
int r = -EINVAL;
bufblock = crypt_safe_alloc(blocksize);
if (!bufblock)
return -ENOMEM;
if((bufblock = calloc(blocksize, 1)) == NULL) return -ENOMEM;
/* process everything except the last block */
for (i = 0; i < blocknumbers - 1; i++) {
r = crypt_random_get(ctx, dst + blocksize * i, blocksize, CRYPT_RND_NORMAL);
if (r < 0)
goto out;
for(i=0; i<blocknumbers-1; i++) {
r = crypt_random_get(NULL, dst+(blocksize*i), blocksize, CRYPT_RND_NORMAL);
if(r < 0) goto out;
XORblock(dst + blocksize * i, bufblock, bufblock, blocksize);
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
XORblock(dst+(blocksize*i),bufblock,bufblock,blocksize);
if(diffuse(bufblock, bufblock, blocksize, hash))
goto out;
}
/* the last block is computed */
XORblock(src, bufblock, dst + blocksize * i, blocksize);
XORblock(src,bufblock,dst+(i*blocksize),blocksize);
r = 0;
out:
crypt_safe_free(bufblock);
free(bufblock);
return r;
}
int AF_merge(struct crypt_device *ctx __attribute__((unused)), const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash)
int AF_merge(char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash)
{
unsigned int i;
char *bufblock;
int r;
int r = -EINVAL;
bufblock = crypt_safe_alloc(blocksize);
if (!bufblock)
if((bufblock = calloc(blocksize, 1)) == NULL)
return -ENOMEM;
for(i = 0; i < blocknumbers - 1; i++) {
XORblock(src + blocksize * i, bufblock, bufblock, blocksize);
r = diffuse(bufblock, bufblock, blocksize, hash);
if (r < 0)
memset(bufblock,0,blocksize);
for(i=0; i<blocknumbers-1; i++) {
XORblock(src+(blocksize*i),bufblock,bufblock,blocksize);
if(diffuse(bufblock, bufblock, blocksize, hash))
goto out;
}
XORblock(src + blocksize * i, bufblock, dst, blocksize);
r = 0;
out:
crypt_safe_free(bufblock);
free(bufblock);
return r;
}
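
A condensed sketch of the merge direction of the anti-forensic transform above, with diffuse() kept abstract (in the real code it hashes the buffer block-by-block with the configured hash); this only illustrates the XOR/diffuse accumulation and is not a replacement for AF_merge():

#include <errno.h>
#include <stdlib.h>

static void xor_block(const char *a, const char *b, char *out, size_t n)
{
    for (size_t j = 0; j < n; j++)
        out[j] = a[j] ^ b[j];
}

/* Accumulate buf = diffuse(buf XOR src[i]) for i = 0..blocks-2,
 * then recover dst = src[blocks-1] XOR buf. Assumes blocks >= 1,
 * as the original code does. */
static int af_merge_sketch(const char *src, char *dst, size_t blocksize,
                           unsigned int blocks,
                           int (*diffuse)(char *src, char *dst, size_t size))
{
    char *buf = calloc(blocksize, 1);
    unsigned int i;

    if (!buf)
        return -ENOMEM;
    for (i = 0; i < blocks - 1; i++) {
        xor_block(src + blocksize * i, buf, buf, blocksize);
        if (diffuse(buf, buf, blocksize)) {
            free(buf);
            return -EINVAL;
        }
    }
    xor_block(src + blocksize * i, buf, dst, blocksize);
    free(buf);
    return 0;
}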


@@ -1,11 +1,11 @@
/*
* AFsplitter - Anti forensic information splitter
*
* Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2012, Red Hat, Inc. All rights reserved.
*
* AFsplitter diffuses information over a large stripe of data,
* therefore supporting secure data destruction.
* therefor supporting secure data destruction.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -24,8 +24,6 @@
#ifndef INCLUDED_CRYPTSETUP_LUKS_AF_H
#define INCLUDED_CRYPTSETUP_LUKS_AF_H
#include <stddef.h>
/*
* AF_split operates on src and produces information split data in
* dst. src is assumed to be of the length blocksize. The data stripe
@@ -39,26 +37,8 @@
* On error, both functions return -1, 0 otherwise.
*/
int AF_split(struct crypt_device *ctx, const char *src, char *dst,
size_t blocksize, unsigned int blocknumbers, const char *hash);
int AF_merge(struct crypt_device *ctx, const char *src, char *dst, size_t blocksize,
unsigned int blocknumbers, const char *hash);
int AF_split(char *src, char *dst, size_t blocksize, unsigned int blocknumbers, const char *hash);
int AF_merge(char *src, char *dst, size_t blocksize, unsigned int blocknumbers, const char *hash);
size_t AF_split_sectors(size_t blocksize, unsigned int blocknumbers);
int LUKS_encrypt_to_storage(
char *src, size_t srcLength,
const char *cipher,
const char *cipher_mode,
struct volume_key *vk,
unsigned int sector,
struct crypt_device *ctx);
int LUKS_decrypt_from_storage(
char *dst, size_t dstLength,
const char *cipher,
const char *cipher_mode,
struct volume_key *vk,
unsigned int sector,
struct crypt_device *ctx);
#endif


@@ -1,9 +1,9 @@
/*
* LUKS - Linux Unified Key Setup
*
* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2019 Milan Broz
* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2012, Red Hat, Inc. All rights reserved.
* Copyright (C) 2012-2014, Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -21,60 +21,58 @@
*/
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include "luks.h"
#include "af.h"
#include "internal.h"
static void _error_hint(struct crypt_device *ctx, const char *device,
const char *cipher, const char *mode, size_t keyLength)
{
char *c, cipher_spec[MAX_CIPHER_LEN * 3];
char cipher_spec[MAX_CIPHER_LEN * 3];
if (snprintf(cipher_spec, sizeof(cipher_spec), "%s-%s", cipher, mode) < 0)
return;
log_err(ctx, _("Failed to setup dm-crypt key mapping for device %s.\n"
"Check that kernel supports %s cipher (check syslog for more info)."),
"Check that kernel supports %s cipher (check syslog for more info).\n"),
device, cipher_spec);
if (!strncmp(mode, "xts", 3) && (keyLength != 256 && keyLength != 512))
log_err(ctx, _("Key size in XTS mode must be 256 or 512 bits."));
else if (!(c = strchr(mode, '-')) || strlen(c) < 4)
log_err(ctx, _("Cipher specification should be in [cipher]-[mode]-[iv] format."));
log_err(ctx, _("Key size in XTS mode must be 256 or 512 bits.\n"));
}
static int LUKS_endec_template(char *src, size_t srcLength,
const char *cipher, const char *cipher_mode,
struct volume_key *vk,
unsigned int sector,
ssize_t (*func)(int, size_t, size_t, void *, size_t),
ssize_t (*func)(int, int, void *, size_t),
int mode,
struct crypt_device *ctx)
{
char name[PATH_MAX], path[PATH_MAX];
char cipher_spec[MAX_CIPHER_LEN * 3];
struct crypt_dm_active_device dmd = {
.flags = CRYPT_ACTIVATE_PRIVATE,
.target = DM_CRYPT,
.uuid = NULL,
.flags = CRYPT_ACTIVATE_PRIVATE,
.data_device = crypt_metadata_device(ctx),
.u.crypt = {
.cipher = cipher_spec,
.vk = vk,
.offset = sector,
.iv_offset = 0,
}
};
int r, devfd = -1;
size_t bsize, keyslot_alignment, alignment;
int r, bsize, devfd = -1;
log_dbg(ctx, "Using dmcrypt to access keyslot area.");
log_dbg("Using dmcrypt to access keyslot area.");
bsize = device_block_size(ctx, crypt_metadata_device(ctx));
alignment = device_alignment(crypt_metadata_device(ctx));
if (!bsize || !alignment)
bsize = device_block_size(dmd.data_device);
if (bsize <= 0)
return -EINVAL;
if (bsize > LUKS_ALIGN_KEYSLOTS)
keyslot_alignment = LUKS_ALIGN_KEYSLOTS;
else
keyslot_alignment = bsize;
dmd.size = size_round_up(srcLength, keyslot_alignment) / SECTOR_SIZE;
dmd.size = size_round_up(srcLength, bsize) / SECTOR_SIZE;
if (mode == O_RDONLY)
dmd.flags |= CRYPT_ACTIVATE_READONLY;
@@ -86,53 +84,45 @@ static int LUKS_endec_template(char *src, size_t srcLength,
if (snprintf(cipher_spec, sizeof(cipher_spec), "%s-%s", cipher, cipher_mode) < 0)
return -ENOMEM;
r = device_block_adjust(ctx, crypt_metadata_device(ctx), DEV_OK,
sector, &dmd.size, &dmd.flags);
r = device_block_adjust(ctx, dmd.data_device, DEV_OK,
dmd.u.crypt.offset, &dmd.size, &dmd.flags);
if (r < 0) {
log_err(ctx, _("Device %s doesn't exist or access denied."),
device_path(crypt_metadata_device(ctx)));
log_err(ctx, _("Device %s doesn't exist or access denied.\n"),
device_path(dmd.data_device));
return -EIO;
}
if (mode != O_RDONLY && dmd.flags & CRYPT_ACTIVATE_READONLY) {
log_err(ctx, _("Cannot write to device %s, permission denied."),
device_path(crypt_metadata_device(ctx)));
log_err(ctx, _("Cannot write to device %s, permission denied.\n"),
device_path(dmd.data_device));
return -EACCES;
}
r = dm_crypt_target_set(&dmd.segment, 0, dmd.size,
crypt_metadata_device(ctx), vk, cipher_spec, 0, sector,
NULL, 0, SECTOR_SIZE);
if (r)
goto out;
r = dm_create_device(ctx, name, "TEMP", &dmd);
r = dm_create_device(ctx, name, "TEMP", &dmd, 0);
if (r < 0) {
if (r != -EACCES && r != -ENOTSUP)
_error_hint(ctx, device_path(crypt_metadata_device(ctx)),
_error_hint(ctx, device_path(dmd.data_device),
cipher, cipher_mode, vk->keylength * 8);
r = -EIO;
goto out;
return -EIO;
}
devfd = open(path, mode | O_DIRECT | O_SYNC);
if (devfd == -1) {
log_err(ctx, _("Failed to open temporary keystore device."));
log_err(ctx, _("Failed to open temporary keystore device.\n"));
r = -EIO;
goto out;
}
r = func(devfd, bsize, alignment, src, srcLength);
r = func(devfd, bsize, src, srcLength);
if (r < 0) {
log_err(ctx, _("Failed to access temporary keystore device."));
log_err(ctx, _("Failed to access temporary keystore device.\n"));
r = -EIO;
} else
r = 0;
out:
dm_targets_free(ctx, &dmd);
if (devfd != -1)
if(devfd != -1)
close(devfd);
dm_remove_device(ctx, name, CRYPT_DEACTIVATE_FORCE);
dm_remove_device(ctx, name, 1, dmd.size);
return r;
}
@@ -146,17 +136,17 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
struct device *device = crypt_metadata_device(ctx);
struct crypt_storage *s;
int devfd = -1, r = 0;
int devfd = -1, bsize, r = 0;
/* Only whole sector writes supported */
if (MISALIGNED_512(srcLength))
if (srcLength % SECTOR_SIZE)
return -EINVAL;
/* Encrypt buffer */
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
if (r)
log_dbg(ctx, "Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
/* Fallback to old temporary dmcrypt device */
@@ -170,7 +160,7 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
return r;
}
log_dbg(ctx, "Using userspace crypto wrapper to access keyslot area.");
log_dbg("Using userspace crypto wrapper to access keyslot area.");
r = crypt_storage_encrypt(s, 0, srcLength / SECTOR_SIZE, src);
crypt_storage_destroy(s);
@@ -181,23 +171,24 @@ int LUKS_encrypt_to_storage(char *src, size_t srcLength,
r = -EIO;
/* Write buffer to device */
devfd = device_open(ctx, device, O_RDWR);
if (devfd < 0)
bsize = device_block_size(device);
if (bsize <= 0)
goto out;
if (write_lseek_blockwise(devfd, device_block_size(ctx, device),
device_alignment(device), src, srcLength,
sector * SECTOR_SIZE) < 0)
devfd = device_open(device, O_RDWR);
if (devfd == -1)
goto out;
if (lseek(devfd, sector * SECTOR_SIZE, SEEK_SET) == -1 ||
write_blockwise(devfd, bsize, src, srcLength) == -1)
goto out;
r = 0;
out:
if (devfd >= 0) {
device_sync(ctx, device, devfd);
if(devfd != -1)
close(devfd);
}
if (r)
log_err(ctx, _("IO error while encrypting keyslot."));
log_err(ctx, _("IO error while encrypting keyslot.\n"));
return r;
}
@@ -211,17 +202,16 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
{
struct device *device = crypt_metadata_device(ctx);
struct crypt_storage *s;
struct stat st;
int devfd = -1, r = 0;
int devfd = -1, bsize, r = 0;
/* Only whole sector reads supported */
if (MISALIGNED_512(dstLength))
if (dstLength % SECTOR_SIZE)
return -EINVAL;
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
if (r)
log_dbg(ctx, "Userspace crypto wrapper cannot use %s-%s (%d).",
log_dbg("Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
/* Fallback to old temporary dmcrypt device */
@@ -235,28 +225,22 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
return r;
}
log_dbg(ctx, "Using userspace crypto wrapper to access keyslot area.");
log_dbg("Using userspace crypto wrapper to access keyslot area.");
r = -EIO;
/* Read buffer from device */
devfd = device_open(ctx, device, O_RDONLY);
if (devfd < 0) {
log_err(ctx, _("Cannot open device %s."), device_path(device));
crypt_storage_destroy(s);
return -EIO;
}
bsize = device_block_size(device);
if (bsize <= 0)
goto bad;
if (read_lseek_blockwise(devfd, device_block_size(ctx, device),
device_alignment(device), dst, dstLength,
sector * SECTOR_SIZE) < 0) {
if (!fstat(devfd, &st) && (st.st_size < (off_t)dstLength))
log_err(ctx, _("Device %s is too small."), device_path(device));
else
log_err(ctx, _("IO error while decrypting keyslot."));
devfd = device_open(device, O_RDONLY);
if (devfd == -1)
goto bad;
close(devfd);
crypt_storage_destroy(s);
return -EIO;
}
if (lseek(devfd, sector * SECTOR_SIZE, SEEK_SET) == -1 ||
read_blockwise(devfd, bsize, dst, dstLength) == -1)
goto bad;
close(devfd);
@@ -264,5 +248,13 @@ int LUKS_decrypt_from_storage(char *dst, size_t dstLength,
r = crypt_storage_decrypt(s, 0, dstLength / SECTOR_SIZE, dst);
crypt_storage_destroy(s);
return r;
bad:
if(devfd != -1)
close(devfd);
log_err(ctx, _("IO error while decrypting keyslot.\n"));
crypt_storage_destroy(s);
return r;
}
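
A hedged sketch of the userspace path used above: when crypt_storage_init() succeeds, the keyslot buffer is encrypted in memory with the crypt_storage wrapper (whose API appears earlier in this compare) instead of going through a temporary dm-crypt mapping. The includes and the const void * key type are assumptions based on the declarations shown elsewhere in this diff; error handling is reduced to the essentials:

#include <errno.h>
#include <stddef.h>
#include "crypto_backend.h"  /* assumed to declare the crypt_storage API */
#include "internal.h"        /* SECTOR_SIZE, as defined above */

static int encrypt_keyslot_buffer(char *buf, size_t len,
                                  const char *cipher, const char *cipher_mode,
                                  const void *key, size_t key_length)
{
    struct crypt_storage *s;
    int r;

    if (len % SECTOR_SIZE)  /* whole sectors only, as in the code above */
        return -EINVAL;

    r = crypt_storage_init(&s, 0, cipher, cipher_mode, key, key_length);
    if (r)
        return r;  /* the real code falls back to the temporary dm-crypt device */

    r = crypt_storage_encrypt(s, 0, len / SECTOR_SIZE, buf);
    crypt_storage_destroy(s);
    return r;
}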

File diff suppressed because it is too large.

@@ -1,8 +1,8 @@
/*
* LUKS - Linux Unified Key Setup
*
* Copyright (C) 2004-2006 Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2006, Clemens Fruhwirth <clemens@endorphin.org>
* Copyright (C) 2009-2012, Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -40,9 +40,6 @@
#define LUKS_MKD_ITERATIONS_MIN 1000
#define LUKS_SLOT_ITERATIONS_MIN 1000
// Iteration time for digest in ms
#define LUKS_MKD_ITERATIONS_MS 125
#define LUKS_KEY_DISABLED_OLD 0
#define LUKS_KEY_ENABLED_OLD 0xCAFE
@@ -61,9 +58,6 @@
/* Offset to keyslot area [in bytes] */
#define LUKS_ALIGN_KEYSLOTS 4096
/* Maximal LUKS header size, for wipe [in bytes] */
#define LUKS_MAX_KEYSLOT_SIZE 0x1000000 /* 16 MB, up to 32768 bits key */
/* Any integer values are stored in network byte order on disk and must be
converted */
@@ -102,20 +96,19 @@ struct luks_phdr {
int LUKS_verify_volume_key(const struct luks_phdr *hdr,
const struct volume_key *vk);
int LUKS_check_cipher(struct crypt_device *ctx,
size_t keylength,
const char *cipher,
const char *cipher_mode);
int LUKS_generate_phdr(struct luks_phdr *header,
int LUKS_generate_phdr(
struct luks_phdr *header,
const struct volume_key *vk,
const char *cipherName,
const char *cipherMode,
const char *hashSpec,
const char *uuid,
uint64_t data_offset,
uint64_t align_offset,
uint64_t required_alignment,
unsigned int stripes,
unsigned int alignPayload,
unsigned int alignOffset,
uint32_t iteration_time_ms,
uint64_t *PBKDF2_per_sec,
int detached_metadata_device,
struct crypt_device *ctx);
int LUKS_read_phdr(
@@ -154,6 +147,8 @@ int LUKS_set_key(
size_t passwordLen,
struct luks_phdr *hdr,
struct volume_key *vk,
uint32_t iteration_time_ms,
uint64_t *PBKDF2_per_sec,
struct crypt_device *ctx);
int LUKS_open_key_with_hdr(
@@ -169,22 +164,30 @@ int LUKS_del_key(
struct luks_phdr *hdr,
struct crypt_device *ctx);
int LUKS_wipe_header_areas(struct luks_phdr *hdr,
struct crypt_device *ctx);
crypt_keyslot_info LUKS_keyslot_info(struct luks_phdr *hdr, int keyslot);
int LUKS_keyslot_find_empty(struct luks_phdr *hdr);
int LUKS_keyslot_active_count(struct luks_phdr *hdr);
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable,
struct crypt_device *ctx);
int LUKS_keyslot_area(const struct luks_phdr *hdr,
int LUKS_keyslot_set(struct luks_phdr *hdr, int keyslot, int enable);
int LUKS_keyslot_area(struct luks_phdr *hdr,
int keyslot,
uint64_t *offset,
uint64_t *length);
size_t LUKS_device_sectors(const struct luks_phdr *hdr);
size_t LUKS_keyslots_offset(const struct luks_phdr *hdr);
int LUKS_keyslot_pbkdf(struct luks_phdr *hdr, int keyslot,
struct crypt_pbkdf_type *pbkdf);
int LUKS_encrypt_to_storage(
char *src, size_t srcLength,
const char *cipher,
const char *cipher_mode,
struct volume_key *vk,
unsigned int sector,
struct crypt_device *ctx);
int LUKS_decrypt_from_storage(
char *dst, size_t dstLength,
const char *cipher,
const char *cipher_mode,
struct volume_key *vk,
unsigned int sector,
struct crypt_device *ctx);
int LUKS1_activate(struct crypt_device *cd,
const char *name,


@@ -1,388 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef _CRYPTSETUP_LUKS2_ONDISK_H
#define _CRYPTSETUP_LUKS2_ONDISK_H
#include "libcryptsetup.h"
#define LUKS2_MAGIC_1ST "LUKS\xba\xbe"
#define LUKS2_MAGIC_2ND "SKUL\xba\xbe"
#define LUKS2_MAGIC_L 6
#define LUKS2_UUID_L 40
#define LUKS2_LABEL_L 48
#define LUKS2_SALT_L 64
#define LUKS2_CHECKSUM_ALG_L 32
#define LUKS2_CHECKSUM_L 64
#define LUKS2_KEYSLOTS_MAX 32
#define LUKS2_TOKENS_MAX 32
#define LUKS2_SEGMENT_MAX 32
#define LUKS2_BUILTIN_TOKEN_PREFIX "luks2-"
#define LUKS2_BUILTIN_TOKEN_PREFIX_LEN 6
#define LUKS2_TOKEN_KEYRING LUKS2_BUILTIN_TOKEN_PREFIX "keyring"
#define LUKS2_DIGEST_MAX 8
#define CRYPT_ANY_SEGMENT -1
#define CRYPT_DEFAULT_SEGMENT 0
#define CRYPT_DEFAULT_SEGMENT_STR "0"
#define CRYPT_ANY_DIGEST -1
/*
* LUKS2 header on-disk.
*
* Binary header is followed by JSON area.
* JSON area is followed by keyslot area and data area,
* these are described in JSON metadata.
*
* Note: uuid, csum_alg are intentionally at the same offset as in LUKS1
* (checksum alg replaces hash in LUKS1)
*
* String (char) should be zero terminated.
* Padding should be wiped.
* Checksum is calculated with csum zeroed (+ full JSON area).
*/
struct luks2_hdr_disk {
char magic[LUKS2_MAGIC_L];
uint16_t version; /* Version 2 */
uint64_t hdr_size; /* in bytes, including JSON area */
uint64_t seqid; /* increased on every update */
char label[LUKS2_LABEL_L];
char checksum_alg[LUKS2_CHECKSUM_ALG_L];
uint8_t salt[LUKS2_SALT_L]; /* unique for every header/offset */
char uuid[LUKS2_UUID_L];
char subsystem[LUKS2_LABEL_L]; /* owner subsystem label */
uint64_t hdr_offset; /* offset from device start in bytes */
char _padding[184];
uint8_t csum[LUKS2_CHECKSUM_L];
char _padding4096[7*512];
/* JSON area starts here */
} __attribute__ ((packed));
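A minimal compile-time sketch of the layout above, assuming a C11 toolchain (illustrative, not part of the original header): the fixed fields plus csum fill the first 512-byte block, and the padding extends the binary header to 4096 bytes, where the JSON area begins.
#include <assert.h>
#include <stddef.h>
/* Sketch only: verify the packed on-disk layout at compile time. */
static_assert(offsetof(struct luks2_hdr_disk, _padding4096) == 512,
              "csum must close the first 512-byte block");
static_assert(sizeof(struct luks2_hdr_disk) == 4096,
              "JSON area must start at byte 4096");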
/*
* LUKS2 header in-memory.
*/
typedef struct json_object json_object;
struct luks2_hdr {
size_t hdr_size;
uint64_t seqid;
unsigned int version;
char label[LUKS2_LABEL_L];
char subsystem[LUKS2_LABEL_L];
char checksum_alg[LUKS2_CHECKSUM_ALG_L];
uint8_t salt1[LUKS2_SALT_L];
uint8_t salt2[LUKS2_SALT_L];
char uuid[LUKS2_UUID_L];
json_object *jobj;
};
struct luks2_keyslot_params {
enum { LUKS2_KEYSLOT_AF_LUKS1 = 0 } af_type;
enum { LUKS2_KEYSLOT_AREA_RAW = 0 } area_type;
union {
struct {
char hash[LUKS2_CHECKSUM_ALG_L]; // or include luks.h
unsigned int stripes;
} luks1;
} af;
union {
struct {
char encryption[65]; // or include utils_crypt.h
size_t key_size;
} raw;
} area;
};
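These parameters are normally obtained via LUKS2_keyslot_params_default(); a hand-written sketch might look as follows (cipher, hash and key size are illustrative values; 4000 stripes is the LUKS1 anti-forensic split count also used for area sizing elsewhere in this code):
/* Illustrative initialization, not taken from the original source. */
struct luks2_keyslot_params params = {
	.af_type   = LUKS2_KEYSLOT_AF_LUKS1,
	.area_type = LUKS2_KEYSLOT_AREA_RAW,
	.af.luks1  = { .hash = "sha256", .stripes = 4000 },
	.area.raw  = { .encryption = "aes-xts-plain64", .key_size = 64 },
};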
/*
* Supportable header sizes (hdr_disk + JSON area)
* Also used as offset for the 2nd header.
*/
#define LUKS2_HDR_16K_LEN 0x4000
#define LUKS2_HDR_BIN_LEN sizeof(struct luks2_hdr_disk)
//#define LUKS2_DEFAULT_HDR_SIZE 0x400000 /* 4 MiB */
#define LUKS2_DEFAULT_HDR_SIZE 0x1000000 /* 16 MiB */
#define LUKS2_MAX_KEYSLOTS_SIZE 0x8000000 /* 128 MiB */
#define LUKS2_HDR_OFFSET_MAX 0x400000 /* 4 MiB */
/* Offsets for secondary header (for scan if primary header is corrupted). */
#define LUKS2_HDR2_OFFSETS { 0x04000, 0x008000, 0x010000, 0x020000, \
0x40000, 0x080000, 0x100000, 0x200000, LUKS2_HDR_OFFSET_MAX }
int LUKS2_hdr_version_unlocked(struct crypt_device *cd,
const char *backup_file);
int LUKS2_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr, int repair);
int LUKS2_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_dump(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_uuid(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *uuid);
int LUKS2_hdr_labels(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *label,
const char *subsystem,
int commit);
void LUKS2_hdr_free(struct crypt_device *cd, struct luks2_hdr *hdr);
int LUKS2_hdr_backup(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *backup_file);
int LUKS2_hdr_restore(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *backup_file);
uint64_t LUKS2_hdr_and_areas_size(json_object *jobj);
uint64_t LUKS2_keyslots_size(json_object *jobj);
uint64_t LUKS2_metadata_size(json_object *jobj);
int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd, const char *cipher_spec);
/*
* Generic LUKS2 keyslot
*/
int LUKS2_keyslot_open(struct crypt_device *cd,
int keyslot,
int segment,
const char *password,
size_t password_len,
struct volume_key **vk);
int LUKS2_keyslot_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const char *password,
size_t password_len,
const struct volume_key *vk,
const struct luks2_keyslot_params *params);
int LUKS2_keyslot_wipe(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int wipe_area_only);
int LUKS2_keyslot_dump(struct crypt_device *cd,
int keyslot);
crypt_keyslot_priority LUKS2_keyslot_priority_get(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot);
int LUKS2_keyslot_priority_set(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
crypt_keyslot_priority priority,
int commit);
/*
* Generic LUKS2 token
*/
int LUKS2_token_json_get(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char **json);
int LUKS2_token_assign(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int token,
int assign,
int commit);
int LUKS2_token_is_assigned(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int token);
int LUKS2_token_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *json,
int commit);
crypt_token_info LUKS2_token_status(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char **type);
int LUKS2_builtin_token_get(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *type,
void *params);
int LUKS2_builtin_token_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *type,
const void *params,
int commit);
int LUKS2_token_open_and_activate(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *name,
uint32_t flags,
void *usrptr);
int LUKS2_token_open_and_activate_any(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *name,
uint32_t flags);
int LUKS2_tokens_count(struct luks2_hdr *hdr);
/*
* Generic LUKS2 digest
*/
int LUKS2_digest_by_segment(struct luks2_hdr *hdr, int segment);
int LUKS2_digest_verify_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment,
const struct volume_key *vk);
void LUKS2_digests_erase_unused(struct crypt_device *cd,
struct luks2_hdr *hdr);
int LUKS2_digest_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
struct volume_key *vk,
int keyslot);
int LUKS2_digest_dump(struct crypt_device *cd,
int digest);
int LUKS2_digest_assign(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int digest,
int assign,
int commit);
int LUKS2_digest_segment_assign(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment,
int digest,
int assign,
int commit);
int LUKS2_digest_by_keyslot(struct luks2_hdr *hdr, int keyslot);
int LUKS2_digest_create(struct crypt_device *cd,
const char *type,
struct luks2_hdr *hdr,
const struct volume_key *vk);
/*
* LUKS2 generic
*/
int LUKS2_activate(struct crypt_device *cd,
const char *name,
struct volume_key *vk,
uint32_t flags);
int LUKS2_keyslot_luks2_format(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const char *cipher,
size_t keylength);
int LUKS2_generate_hdr(
struct crypt_device *cd,
struct luks2_hdr *hdr,
const struct volume_key *vk,
const char *cipherName,
const char *cipherMode,
const char *integrity,
const char *uuid,
unsigned int sector_size,
uint64_t data_offset,
uint64_t align_offset,
uint64_t required_alignment,
uint64_t metadata_size,
uint64_t keyslots_size);
int LUKS2_check_metadata_area_size(uint64_t metadata_size);
int LUKS2_check_keyslots_area_size(uint64_t keyslots_size);
int LUKS2_wipe_header_areas(struct crypt_device *cd,
struct luks2_hdr *hdr);
uint64_t LUKS2_get_data_offset(struct luks2_hdr *hdr);
int LUKS2_get_sector_size(struct luks2_hdr *hdr);
const char *LUKS2_get_cipher(struct luks2_hdr *hdr, int segment);
const char *LUKS2_get_integrity(struct luks2_hdr *hdr, int segment);
int LUKS2_keyslot_params_default(struct crypt_device *cd, struct luks2_hdr *hdr,
struct luks2_keyslot_params *params);
int LUKS2_get_volume_key_size(struct luks2_hdr *hdr, int segment);
int LUKS2_get_keyslot_stored_key_size(struct luks2_hdr *hdr, int keyslot);
const char *LUKS2_get_keyslot_cipher(struct luks2_hdr *hdr, int keyslot, size_t *key_size);
int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr, const char *type);
int LUKS2_keyslot_active_count(struct luks2_hdr *hdr, int segment);
int LUKS2_keyslot_for_segment(struct luks2_hdr *hdr, int keyslot, int segment);
crypt_keyslot_info LUKS2_keyslot_info(struct luks2_hdr *hdr, int keyslot);
int LUKS2_keyslot_area(struct luks2_hdr *hdr,
int keyslot,
uint64_t *offset,
uint64_t *length);
int LUKS2_keyslot_pbkdf(struct luks2_hdr *hdr, int keyslot, struct crypt_pbkdf_type *pbkdf);
/*
* Permanent activation flags stored in header
*/
int LUKS2_config_get_flags(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t *flags);
int LUKS2_config_set_flags(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t flags);
/*
* Requirements for device activation or header modification
*/
int LUKS2_config_get_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t *reqs);
int LUKS2_config_set_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t reqs);
int LUKS2_unmet_requirements(struct crypt_device *cd, struct luks2_hdr *hdr, uint32_t reqs_mask, int quiet);
int LUKS2_key_description_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int segment);
int LUKS2_volume_key_load_in_keyring_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int keyslot);
struct luks_phdr;
int LUKS2_luks1_to_luks2(struct crypt_device *cd,
struct luks_phdr *hdr1,
struct luks2_hdr *hdr2);
int LUKS2_luks2_to_luks1(struct crypt_device *cd,
struct luks2_hdr *hdr2,
struct luks_phdr *hdr1);
#endif


@@ -1,393 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, digest handling
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
extern const digest_handler PBKDF2_digest;
static const digest_handler *digest_handlers[LUKS2_DIGEST_MAX] = {
&PBKDF2_digest,
NULL
};
const digest_handler *LUKS2_digest_handler_type(struct crypt_device *cd, const char *type)
{
int i;
for (i = 0; i < LUKS2_DIGEST_MAX && digest_handlers[i]; i++) {
if (!strcmp(digest_handlers[i]->name, type))
return digest_handlers[i];
}
return NULL;
}
static const digest_handler *LUKS2_digest_handler(struct crypt_device *cd, int digest)
{
struct luks2_hdr *hdr;
json_object *jobj1, *jobj2;
if (digest < 0)
return NULL;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return NULL;
if (!(jobj1 = LUKS2_get_digest_jobj(hdr, digest)))
return NULL;
if (!json_object_object_get_ex(jobj1, "type", &jobj2))
return NULL;
return LUKS2_digest_handler_type(cd, json_object_get_string(jobj2));
}
static int LUKS2_digest_find_free(struct crypt_device *cd, struct luks2_hdr *hdr)
{
int digest = 0;
while (LUKS2_get_digest_jobj(hdr, digest) && digest < LUKS2_DIGEST_MAX)
digest++;
return digest < LUKS2_DIGEST_MAX ? digest : -1;
}
int LUKS2_digest_create(struct crypt_device *cd,
const char *type,
struct luks2_hdr *hdr,
const struct volume_key *vk)
{
int digest;
const digest_handler *dh;
dh = LUKS2_digest_handler_type(cd, type);
if (!dh)
return -EINVAL;
digest = LUKS2_digest_find_free(cd, hdr);
if (digest < 0)
return -EINVAL;
log_dbg(cd, "Creating new digest %d (%s).", digest, type);
return dh->store(cd, digest, vk->key, vk->keylength) ?: digest;
}
int LUKS2_digest_by_keyslot(struct luks2_hdr *hdr, int keyslot)
{
char keyslot_name[16];
json_object *jobj_digests, *jobj_digest_keyslots;
if (snprintf(keyslot_name, sizeof(keyslot_name), "%u", keyslot) < 1)
return -ENOMEM;
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
json_object_object_foreach(jobj_digests, key, val) {
json_object_object_get_ex(val, "keyslots", &jobj_digest_keyslots);
if (LUKS2_array_jobj(jobj_digest_keyslots, keyslot_name))
return atoi(key);
}
return -ENOENT;
}
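For orientation, a hypothetical "digests" section of the JSON metadata that the lookup above walks (values invented for illustration):
/*
 * "digests": {
 *   "0": { "type": "pbkdf2", "keyslots": ["0", "1"], "segments": ["0"], ... }
 * }
 *
 * With this metadata, LUKS2_digest_by_keyslot(hdr, 1) returns 0 and
 * LUKS2_digest_by_keyslot(hdr, 7) returns -ENOENT.
 */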
int LUKS2_digest_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
struct volume_key *vk,
int keyslot)
{
const digest_handler *h;
int digest, r;
digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (digest < 0)
return digest;
log_dbg(cd, "Verifying key from keyslot %d, digest %d.", keyslot, digest);
h = LUKS2_digest_handler(cd, digest);
if (!h)
return -EINVAL;
r = h->verify(cd, digest, vk->key, vk->keylength);
if (r < 0) {
log_dbg(cd, "Digest %d (%s) verify failed with %d.", digest, h->name, r);
return r;
}
return digest;
}
int LUKS2_digest_dump(struct crypt_device *cd, int digest)
{
const digest_handler *h;
if (!(h = LUKS2_digest_handler(cd, digest)))
return -EINVAL;
return h->dump(cd, digest);
}
int LUKS2_digest_verify_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr,
int segment,
const struct volume_key *vk)
{
const digest_handler *h;
int digest, r;
digest = LUKS2_digest_by_segment(hdr, segment);
if (digest < 0)
return digest;
log_dbg(cd, "Verifying key digest %d.", digest);
h = LUKS2_digest_handler(cd, digest);
if (!h)
return -EINVAL;
r = h->verify(cd, digest, vk->key, vk->keylength);
if (r < 0) {
log_dbg(cd, "Digest %d (%s) verify failed with %d.", digest, h->name, r);
return r;
}
return digest;
}
/* FIXME: segment can have more digests */
int LUKS2_digest_by_segment(struct luks2_hdr *hdr, int segment)
{
char segment_name[16];
json_object *jobj_digests, *jobj_digest_segments;
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
if (snprintf(segment_name, sizeof(segment_name), "%u", segment) < 1)
return -EINVAL;
json_object_object_foreach(jobj_digests, key, val) {
json_object_object_get_ex(val, "segments", &jobj_digest_segments);
if (!LUKS2_array_jobj(jobj_digest_segments, segment_name))
continue;
return atoi(key);
}
return -ENOENT;
}
static int assign_one_digest(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, int digest, int assign)
{
json_object *jobj1, *jobj_digest, *jobj_digest_keyslots;
char num[16];
log_dbg(cd, "Keyslot %i %s digest %i.", keyslot, assign ? "assigned to" : "unassigned from", digest);
jobj_digest = LUKS2_get_digest_jobj(hdr, digest);
if (!jobj_digest)
return -EINVAL;
json_object_object_get_ex(jobj_digest, "keyslots", &jobj_digest_keyslots);
if (!jobj_digest_keyslots)
return -EINVAL;
snprintf(num, sizeof(num), "%d", keyslot);
if (assign) {
jobj1 = LUKS2_array_jobj(jobj_digest_keyslots, num);
if (!jobj1)
json_object_array_add(jobj_digest_keyslots, json_object_new_string(num));
} else {
jobj1 = LUKS2_array_remove(jobj_digest_keyslots, num);
if (jobj1)
json_object_object_add(jobj_digest, "keyslots", jobj1);
}
return 0;
}
int LUKS2_digest_assign(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, int digest, int assign, int commit)
{
json_object *jobj_digests;
int r = 0;
if (digest == CRYPT_ANY_DIGEST) {
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
json_object_object_foreach(jobj_digests, key, val) {
UNUSED(val);
r = assign_one_digest(cd, hdr, keyslot, atoi(key), assign);
if (r < 0)
break;
}
} else
r = assign_one_digest(cd, hdr, keyslot, digest, assign);
if (r < 0)
return r;
// FIXME: do not write header if nothing changed
return commit ? LUKS2_hdr_write(cd, hdr) : 0;
}
static int assign_one_segment(struct crypt_device *cd, struct luks2_hdr *hdr,
int segment, int digest, int assign)
{
json_object *jobj1, *jobj_digest, *jobj_digest_segments;
char num[16];
log_dbg(cd, "Segment %i %s digest %i.", segment, assign ? "assigned to" : "unassigned from", digest);
jobj_digest = LUKS2_get_digest_jobj(hdr, digest);
if (!jobj_digest)
return -EINVAL;
json_object_object_get_ex(jobj_digest, "segments", &jobj_digest_segments);
if (!jobj_digest_segments)
return -EINVAL;
snprintf(num, sizeof(num), "%d", segment);
if (assign) {
jobj1 = LUKS2_array_jobj(jobj_digest_segments, num);
if (!jobj1)
json_object_array_add(jobj_digest_segments, json_object_new_string(num));
} else {
jobj1 = LUKS2_array_remove(jobj_digest_segments, num);
if (jobj1)
json_object_object_add(jobj_digest, "segments", jobj1);
}
return 0;
}
int LUKS2_digest_segment_assign(struct crypt_device *cd, struct luks2_hdr *hdr,
int segment, int digest, int assign, int commit)
{
json_object *jobj_digests;
int r = 0;
if (digest == CRYPT_ANY_DIGEST) {
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
json_object_object_foreach(jobj_digests, key, val) {
UNUSED(val);
r = assign_one_segment(cd, hdr, segment, atoi(key), assign);
if (r < 0)
break;
}
} else
r = assign_one_segment(cd, hdr, segment, digest, assign);
if (r < 0)
return r;
// FIXME: do not write header if nothing changed
return commit ? LUKS2_hdr_write(cd, hdr) : 0;
}
static int digest_unused(json_object *jobj_digest)
{
json_object *jobj;
json_object_object_get_ex(jobj_digest, "segments", &jobj);
if (!jobj || !json_object_is_type(jobj, json_type_array) || json_object_array_length(jobj) > 0)
return 0;
json_object_object_get_ex(jobj_digest, "keyslots", &jobj);
if (!jobj || !json_object_is_type(jobj, json_type_array))
return 0;
return json_object_array_length(jobj) > 0 ? 0 : 1;
}
void LUKS2_digests_erase_unused(struct crypt_device *cd,
struct luks2_hdr *hdr)
{
json_object *jobj_digests;
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
if (!jobj_digests || !json_object_is_type(jobj_digests, json_type_object))
return;
json_object_object_foreach(jobj_digests, key, val) {
if (digest_unused(val)) {
log_dbg(cd, "Erasing unused digest %d.", atoi(key));
json_object_object_del(jobj_digests, key);
}
}
}
/* Key description helpers */
static char *get_key_description_by_digest(struct crypt_device *cd, int digest)
{
char *desc, digest_str[3];
int r;
size_t len;
if (!crypt_get_uuid(cd))
return NULL;
r = snprintf(digest_str, sizeof(digest_str), "d%u", digest);
if (r < 0 || (size_t)r >= sizeof(digest_str))
return NULL;
/* "cryptsetup:<uuid>-<digest_str>" + \0 */
len = strlen(crypt_get_uuid(cd)) + strlen(digest_str) + 13;
desc = malloc(len);
if (!desc)
return NULL;
r = snprintf(desc, len, "%s:%s-%s", "cryptsetup", crypt_get_uuid(cd), digest_str);
if (r < 0 || (size_t)r >= len) {
free(desc);
return NULL;
}
return desc;
}
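A worked example of the description format built above (the UUID is invented for illustration):
/*
 * crypt_get_uuid(cd) = "5a6066a7-134c-4a22-9b76-1a9c13be3a2c", digest = 2
 *   -> "cryptsetup:5a6066a7-134c-4a22-9b76-1a9c13be3a2c-d2"
 */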
int LUKS2_key_description_by_segment(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int segment)
{
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_segment(hdr, segment));
int r;
r = crypt_volume_key_set_description(vk, desc);
free(desc);
return r;
}
int LUKS2_volume_key_load_in_keyring_by_keyslot(struct crypt_device *cd,
struct luks2_hdr *hdr, struct volume_key *vk, int keyslot)
{
char *desc = get_key_description_by_digest(cd, LUKS2_digest_by_keyslot(hdr, keyslot));
int r;
r = crypt_volume_key_set_description(vk, desc);
if (!r)
r = crypt_volume_key_load_in_keyring(cd, vk);
free(desc);
return r;
}


@@ -1,211 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, PBKDF2 digest handler (LUKS1 compatible)
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
#define LUKS_DIGESTSIZE 20 // since SHA1
#define LUKS_SALTSIZE 32
#define LUKS_MKD_ITERATIONS_MS 125
static int PBKDF2_digest_verify(struct crypt_device *cd,
int digest,
const char *volume_key,
size_t volume_key_len)
{
char checkHashBuf[64];
json_object *jobj_digest, *jobj1;
const char *hashSpec;
char *mkDigest = NULL, mkDigestSalt[LUKS_SALTSIZE];
unsigned int mkDigestIterations;
size_t len;
int r;
/* This can be done only for internally linked digests */
jobj_digest = LUKS2_get_digest_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), digest);
if (!jobj_digest)
return -EINVAL;
if (!json_object_object_get_ex(jobj_digest, "hash", &jobj1))
return -EINVAL;
hashSpec = json_object_get_string(jobj1);
if (!json_object_object_get_ex(jobj_digest, "iterations", &jobj1))
return -EINVAL;
mkDigestIterations = json_object_get_int64(jobj1);
if (!json_object_object_get_ex(jobj_digest, "salt", &jobj1))
return -EINVAL;
len = sizeof(mkDigestSalt);
if (!base64_decode(json_object_get_string(jobj1),
json_object_get_string_len(jobj1), mkDigestSalt, &len))
return -EINVAL;
if (len != LUKS_SALTSIZE)
return -EINVAL;
if (!json_object_object_get_ex(jobj_digest, "digest", &jobj1))
return -EINVAL;
len = 0;
if (!base64_decode_alloc(json_object_get_string(jobj1),
json_object_get_string_len(jobj1), &mkDigest, &len))
return -EINVAL;
if (len < LUKS_DIGESTSIZE ||
len > sizeof(checkHashBuf) ||
(len != LUKS_DIGESTSIZE && len != (size_t)crypt_hash_size(hashSpec))) {
free(mkDigest);
return -EINVAL;
}
r = -EPERM;
if (crypt_pbkdf(CRYPT_KDF_PBKDF2, hashSpec, volume_key, volume_key_len,
mkDigestSalt, LUKS_SALTSIZE,
checkHashBuf, len,
mkDigestIterations, 0, 0) < 0) {
r = -EINVAL;
} else {
if (memcmp(checkHashBuf, mkDigest, len) == 0)
r = 0;
}
free(mkDigest);
return r;
}
static int PBKDF2_digest_store(struct crypt_device *cd,
int digest,
const char *volume_key,
size_t volume_key_len)
{
json_object *jobj_digest, *jobj_digests;
char salt[LUKS_SALTSIZE], digest_raw[128];
int hmac_size, r;
char *base64_str;
struct luks2_hdr *hdr;
struct crypt_pbkdf_limits pbkdf_limits;
const struct crypt_pbkdf_type *pbkdf_cd;
struct crypt_pbkdf_type pbkdf = {
.type = CRYPT_KDF_PBKDF2,
.time_ms = LUKS_MKD_ITERATIONS_MS,
};
/* Inherit hash from PBKDF setting */
pbkdf_cd = crypt_get_pbkdf_type(cd);
if (pbkdf_cd)
pbkdf.hash = pbkdf_cd->hash;
if (!pbkdf.hash)
pbkdf.hash = DEFAULT_LUKS1_HASH;
log_dbg(cd, "Setting PBKDF2 type key digest %d.", digest);
r = crypt_random_get(cd, salt, LUKS_SALTSIZE, CRYPT_RND_SALT);
if (r < 0)
return r;
r = crypt_pbkdf_get_limits(CRYPT_KDF_PBKDF2, &pbkdf_limits);
if (r < 0)
return r;
if (crypt_get_pbkdf(cd)->flags & CRYPT_PBKDF_NO_BENCHMARK)
pbkdf.iterations = pbkdf_limits.min_iterations;
else {
r = crypt_benchmark_pbkdf_internal(cd, &pbkdf, volume_key_len);
if (r < 0)
return r;
}
hmac_size = crypt_hmac_size(pbkdf.hash);
if (hmac_size < 0)
return hmac_size;
r = crypt_pbkdf(CRYPT_KDF_PBKDF2, pbkdf.hash, volume_key, volume_key_len,
salt, LUKS_SALTSIZE, digest_raw, hmac_size,
pbkdf.iterations, 0, 0);
if (r < 0)
return r;
jobj_digest = LUKS2_get_digest_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), digest);
jobj_digests = NULL;
if (!jobj_digest) {
hdr = crypt_get_hdr(cd, CRYPT_LUKS2);
jobj_digest = json_object_new_object();
json_object_object_get_ex(hdr->jobj, "digests", &jobj_digests);
}
json_object_object_add(jobj_digest, "type", json_object_new_string("pbkdf2"));
json_object_object_add(jobj_digest, "keyslots", json_object_new_array());
json_object_object_add(jobj_digest, "segments", json_object_new_array());
json_object_object_add(jobj_digest, "hash", json_object_new_string(pbkdf.hash));
json_object_object_add(jobj_digest, "iterations", json_object_new_int(pbkdf.iterations));
base64_encode_alloc(salt, LUKS_SALTSIZE, &base64_str);
if (!base64_str) {
json_object_put(jobj_digest);
return -ENOMEM;
}
json_object_object_add(jobj_digest, "salt", json_object_new_string(base64_str));
free(base64_str);
base64_encode_alloc(digest_raw, hmac_size, &base64_str);
if (!base64_str) {
json_object_put(jobj_digest);
return -ENOMEM;
}
json_object_object_add(jobj_digest, "digest", json_object_new_string(base64_str));
free(base64_str);
if (jobj_digests)
json_object_object_add_by_uint(jobj_digests, digest, jobj_digest);
JSON_DBG(cd, jobj_digest, "Digest JSON:");
return 0;
}
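The store callback above produces a digest object of the following shape; the field names match the json_object_object_add() calls, while the values shown are placeholders rather than real output:
/*
 * {
 *   "type": "pbkdf2",
 *   "keyslots": [],
 *   "segments": [],
 *   "hash": "sha256",
 *   "iterations": 150000,
 *   "salt": "<base64 of the 32-byte random salt>",
 *   "digest": "<base64 of the PBKDF2 output, crypt_hmac_size(hash) bytes>"
 * }
 */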
static int PBKDF2_digest_dump(struct crypt_device *cd, int digest)
{
json_object *jobj_digest, *jobj1;
/* This can be done only for internally linked digests */
jobj_digest = LUKS2_get_digest_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), digest);
if (!jobj_digest)
return -EINVAL;
json_object_object_get_ex(jobj_digest, "hash", &jobj1);
log_std(cd, "\tHash: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_digest, "iterations", &jobj1);
log_std(cd, "\tIterations: %" PRIu64 "\n", json_object_get_int64(jobj1));
json_object_object_get_ex(jobj_digest, "salt", &jobj1);
log_std(cd, "\tSalt: ");
hexprint_base64(cd, jobj1, " ", " ");
json_object_object_get_ex(jobj_digest, "digest", &jobj1);
log_std(cd, "\tDigest: ");
hexprint_base64(cd, jobj1, " ", " ");
return 0;
}
const digest_handler PBKDF2_digest = {
.name = "pbkdf2",
.verify = PBKDF2_digest_verify,
.store = PBKDF2_digest_store,
.dump = PBKDF2_digest_dump,
};


@@ -1,769 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <assert.h>
#include "luks2_internal.h"
/*
* Helper functions
*/
json_object *parse_json_len(struct crypt_device *cd, const char *json_area,
uint64_t max_length, int *json_len)
{
json_object *jobj;
struct json_tokener *jtok;
/* INT32_MAX is internal (json-c) json_tokener_parse_ex() limit */
if (!json_area || max_length > INT32_MAX)
return NULL;
jtok = json_tokener_new();
if (!jtok) {
log_dbg(cd, "ERROR: Failed to init json tokener");
return NULL;
}
jobj = json_tokener_parse_ex(jtok, json_area, max_length);
if (!jobj)
log_dbg(cd, "ERROR: Failed to parse json data (%d): %s",
json_tokener_get_error(jtok),
json_tokener_error_desc(json_tokener_get_error(jtok)));
else
*json_len = jtok->char_offset;
json_tokener_free(jtok);
return jobj;
}
static void log_dbg_checksum(struct crypt_device *cd,
const uint8_t *csum, const char *csum_alg, const char *info)
{
char csum_txt[2*LUKS2_CHECKSUM_L+1];
int i;
for (i = 0; i < crypt_hash_size(csum_alg); i++)
snprintf(&csum_txt[i*2], 3, "%02hhx", (const char)csum[i]);
csum_txt[i*2+1] = '\0'; /* Just to be safe, sprintf should write \0 there. */
log_dbg(cd, "Checksum:%s (%s)", &csum_txt[0], info);
}
/*
* Calculate hash (checksum) of |LUKS2_bin|LUKS2_JSON_area| from in-memory structs.
* LUKS2 on-disk header contains a unique salt for both the primary and secondary header.
* Checksum is always calculated with zeroed checksum field in binary header.
*/
static int hdr_checksum_calculate(const char *alg, struct luks2_hdr_disk *hdr_disk,
const char *json_area, size_t json_len)
{
struct crypt_hash *hd = NULL;
int hash_size, r;
hash_size = crypt_hash_size(alg);
if (hash_size <= 0 || crypt_hash_init(&hd, alg))
return -EINVAL;
/* Binary header, csum zeroed. */
r = crypt_hash_write(hd, (char*)hdr_disk, LUKS2_HDR_BIN_LEN);
/* JSON area (including unused space) */
if (!r)
r = crypt_hash_write(hd, json_area, json_len);
if (!r)
r = crypt_hash_final(hd, (char*)hdr_disk->csum, (size_t)hash_size);
crypt_hash_destroy(hd);
return r;
}
/*
* Compare hash (checksum) of on-disk and in-memory header.
*/
static int hdr_checksum_check(struct crypt_device *cd,
const char *alg, struct luks2_hdr_disk *hdr_disk,
const char *json_area, size_t json_len)
{
struct luks2_hdr_disk hdr_tmp;
int hash_size, r;
hash_size = crypt_hash_size(alg);
if (hash_size <= 0)
return -EINVAL;
/* Copy header and zero checksum. */
memcpy(&hdr_tmp, hdr_disk, LUKS2_HDR_BIN_LEN);
memset(&hdr_tmp.csum, 0, sizeof(hdr_tmp.csum));
r = hdr_checksum_calculate(alg, &hdr_tmp, json_area, json_len);
if (r < 0)
return r;
log_dbg_checksum(cd, hdr_disk->csum, alg, "on-disk");
log_dbg_checksum(cd, hdr_tmp.csum, alg, "in-memory");
if (memcmp(hdr_tmp.csum, hdr_disk->csum, (size_t)hash_size))
return -EINVAL;
return 0;
}
/*
* Convert header from on-disk format to in-memory struct
*/
static void hdr_from_disk(struct luks2_hdr_disk *hdr_disk1,
struct luks2_hdr_disk *hdr_disk2,
struct luks2_hdr *hdr,
int secondary)
{
hdr->version = be16_to_cpu(hdr_disk1->version);
hdr->hdr_size = be64_to_cpu(hdr_disk1->hdr_size);
hdr->seqid = be64_to_cpu(hdr_disk1->seqid);
memcpy(hdr->label, hdr_disk1->label, LUKS2_LABEL_L);
hdr->label[LUKS2_LABEL_L - 1] = '\0';
memcpy(hdr->subsystem, hdr_disk1->subsystem, LUKS2_LABEL_L);
hdr->subsystem[LUKS2_LABEL_L - 1] = '\0';
memcpy(hdr->checksum_alg, hdr_disk1->checksum_alg, LUKS2_CHECKSUM_ALG_L);
hdr->checksum_alg[LUKS2_CHECKSUM_ALG_L - 1] = '\0';
memcpy(hdr->uuid, hdr_disk1->uuid, LUKS2_UUID_L);
hdr->uuid[LUKS2_UUID_L - 1] = '\0';
if (secondary) {
memcpy(hdr->salt1, hdr_disk2->salt, LUKS2_SALT_L);
memcpy(hdr->salt2, hdr_disk1->salt, LUKS2_SALT_L);
} else {
memcpy(hdr->salt1, hdr_disk1->salt, LUKS2_SALT_L);
memcpy(hdr->salt2, hdr_disk2->salt, LUKS2_SALT_L);
}
}
/*
* Convert header from in-memory struct to on-disk format
*/
static void hdr_to_disk(struct luks2_hdr *hdr,
struct luks2_hdr_disk *hdr_disk,
int secondary, uint64_t offset)
{
assert(((char*)&(hdr_disk->_padding4096) - (char*)&(hdr_disk->magic)) == 512);
memset(hdr_disk, 0, LUKS2_HDR_BIN_LEN);
memcpy(&hdr_disk->magic, secondary ? LUKS2_MAGIC_2ND : LUKS2_MAGIC_1ST, LUKS2_MAGIC_L);
hdr_disk->version = cpu_to_be16(hdr->version);
hdr_disk->hdr_size = cpu_to_be64(hdr->hdr_size);
hdr_disk->hdr_offset = cpu_to_be64(offset);
hdr_disk->seqid = cpu_to_be64(hdr->seqid);
strncpy(hdr_disk->label, hdr->label, LUKS2_LABEL_L);
hdr_disk->label[LUKS2_LABEL_L - 1] = '\0';
strncpy(hdr_disk->subsystem, hdr->subsystem, LUKS2_LABEL_L);
hdr_disk->subsystem[LUKS2_LABEL_L - 1] = '\0';
strncpy(hdr_disk->checksum_alg, hdr->checksum_alg, LUKS2_CHECKSUM_ALG_L);
hdr_disk->checksum_alg[LUKS2_CHECKSUM_ALG_L - 1] = '\0';
strncpy(hdr_disk->uuid, hdr->uuid, LUKS2_UUID_L);
hdr_disk->uuid[LUKS2_UUID_L - 1] = '\0';
memcpy(hdr_disk->salt, secondary ? hdr->salt2 : hdr->salt1, LUKS2_SALT_L);
}
/*
* Sanity checks before checksum is validated
*/
static int hdr_disk_sanity_check_pre(struct crypt_device *cd,
struct luks2_hdr_disk *hdr,
size_t *hdr_json_size, int secondary,
uint64_t offset)
{
if (memcmp(hdr->magic, secondary ? LUKS2_MAGIC_2ND : LUKS2_MAGIC_1ST, LUKS2_MAGIC_L))
return -EINVAL;
if (be16_to_cpu(hdr->version) != 2) {
log_dbg(cd, "Unsupported LUKS2 header version %u.", be16_to_cpu(hdr->version));
return -EINVAL;
}
if (offset != be64_to_cpu(hdr->hdr_offset)) {
log_dbg(cd, "LUKS2 offset 0x%04x on device differs to expected offset 0x%04x.",
(unsigned)be64_to_cpu(hdr->hdr_offset), (unsigned)offset);
return -EINVAL;
}
if (secondary && (offset != be64_to_cpu(hdr->hdr_size))) {
log_dbg(cd, "LUKS2 offset 0x%04x in secondary header doesn't match size 0x%04x.",
(unsigned)offset, (unsigned)be64_to_cpu(hdr->hdr_size));
return -EINVAL;
}
/* FIXME: sanity check checksum alg. */
log_dbg(cd, "LUKS2 header version %u of size %u bytes, checksum %s.",
(unsigned)be16_to_cpu(hdr->version), (unsigned)be64_to_cpu(hdr->hdr_size),
hdr->checksum_alg);
*hdr_json_size = be64_to_cpu(hdr->hdr_size) - LUKS2_HDR_BIN_LEN;
return 0;
}
/*
* Read LUKS2 header from disk at specific offset.
*/
static int hdr_read_disk(struct crypt_device *cd,
struct device *device, struct luks2_hdr_disk *hdr_disk,
char **json_area, uint64_t offset, int secondary)
{
size_t hdr_json_size = 0;
int devfd = -1, r;
log_dbg(cd, "Trying to read %s LUKS2 header at offset 0x%" PRIx64 ".",
secondary ? "secondary" : "primary", offset);
devfd = device_open_locked(cd, device, O_RDONLY);
if (devfd < 0)
return devfd == -1 ? -EIO : devfd;
/*
* Read binary header and run sanity check before reading
* JSON area and validating checksum.
*/
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), hdr_disk,
LUKS2_HDR_BIN_LEN, offset) != LUKS2_HDR_BIN_LEN) {
close(devfd);
return -EIO;
}
r = hdr_disk_sanity_check_pre(cd, hdr_disk, &hdr_json_size, secondary, offset);
if (r < 0) {
close(devfd);
return r;
}
/*
* Allocate and read the JSON area. The whole area must always be read.
*/
*json_area = malloc(hdr_json_size);
if (!*json_area) {
close(devfd);
return -ENOMEM;
}
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), *json_area, hdr_json_size,
offset + LUKS2_HDR_BIN_LEN) != (ssize_t)hdr_json_size) {
close(devfd);
free(*json_area);
*json_area = NULL;
return -EIO;
}
close(devfd);
/*
* Calculate and validate checksum and zero it afterwards.
*/
if (hdr_checksum_check(cd, hdr_disk->checksum_alg, hdr_disk,
*json_area, hdr_json_size)) {
log_dbg(cd, "LUKS2 header checksum error (offset %" PRIu64 ").", offset);
r = -EINVAL;
}
memset(hdr_disk->csum, 0, LUKS2_CHECKSUM_L);
return r;
}
/*
* Write LUKS2 header to disk at specific offset.
*/
static int hdr_write_disk(struct crypt_device *cd,
struct device *device, struct luks2_hdr *hdr,
const char *json_area, int secondary)
{
struct luks2_hdr_disk hdr_disk;
uint64_t offset = secondary ? hdr->hdr_size : 0;
size_t hdr_json_len;
int devfd = -1, r;
log_dbg(cd, "Trying to write LUKS2 header (%zu bytes) at offset %" PRIu64 ".",
hdr->hdr_size, offset);
/* FIXME: read-only device silent fail? */
devfd = device_open_locked(cd, device, O_RDWR);
if (devfd < 0)
return devfd == -1 ? -EINVAL : devfd;
hdr_json_len = hdr->hdr_size - LUKS2_HDR_BIN_LEN;
hdr_to_disk(hdr, &hdr_disk, secondary, offset);
/*
* Write header without checksum but with proper seqid.
*/
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), (char *)&hdr_disk,
LUKS2_HDR_BIN_LEN, offset) < (ssize_t)LUKS2_HDR_BIN_LEN) {
close(devfd);
return -EIO;
}
/*
* Write json area.
*/
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device),
CONST_CAST(char*)json_area, hdr_json_len,
LUKS2_HDR_BIN_LEN + offset) < (ssize_t)hdr_json_len) {
close(devfd);
return -EIO;
}
/*
* Calculate checksum and write header with checksum.
*/
r = hdr_checksum_calculate(hdr_disk.checksum_alg, &hdr_disk,
json_area, hdr_json_len);
if (r < 0) {
close(devfd);
return r;
}
log_dbg_checksum(cd, hdr_disk.csum, hdr_disk.checksum_alg, "in-memory");
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), (char *)&hdr_disk,
LUKS2_HDR_BIN_LEN, offset) < (ssize_t)LUKS2_HDR_BIN_LEN)
r = -EIO;
device_sync(cd, device, devfd);
close(devfd);
return r;
}
/*
* Convert in-memory LUKS2 header and write it to disk.
* This will increase sequence id, write both header copies and calculate checksum.
*/
int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr, struct device *device)
{
char *json_area;
const char *json_text;
size_t json_area_len;
int r;
if (hdr->version != 2) {
log_dbg(cd, "Unsupported LUKS2 header version (%u).", hdr->version);
return -EINVAL;
}
r = device_check_size(cd, crypt_metadata_device(cd), LUKS2_hdr_and_areas_size(hdr->jobj), 1);
if (r)
return r;
/*
* Allocate and zero JSON area (of proper header size).
*/
json_area_len = hdr->hdr_size - LUKS2_HDR_BIN_LEN;
json_area = malloc(json_area_len);
if (!json_area)
return -ENOMEM;
memset(json_area, 0, json_area_len);
/*
* Generate a space-efficient text JSON representation into the JSON area.
*/
json_text = json_object_to_json_string_ext(hdr->jobj,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE);
if (!json_text || !*json_text) {
log_dbg(cd, "Cannot parse JSON object to text representation.");
free(json_area);
return -ENOMEM;
}
if (strlen(json_text) > (json_area_len - 1)) {
log_dbg(cd, "JSON is too large (%zu > %zu).", strlen(json_text), json_area_len);
free(json_area);
return -EINVAL;
}
strncpy(json_area, json_text, json_area_len);
/* Increase sequence id before writing it to disk. */
hdr->seqid++;
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write device lock."));
free(json_area);
return r;
}
/* Write primary and secondary header */
r = hdr_write_disk(cd, device, hdr, json_area, 0);
if (!r)
r = hdr_write_disk(cd, device, hdr, json_area, 1);
if (r)
log_dbg(cd, "LUKS2 header write failed (%d).", r);
device_write_unlock(cd, device);
/* FIXME: try recovery here? */
free(json_area);
return r;
}
static int validate_json_area(struct crypt_device *cd, const char *json_area,
uint64_t json_len, uint64_t max_length)
{
char c;
/* Enforce that there are no extra bytes before the opening bracket */
if (*json_area != '{') {
log_dbg(cd, "ERROR: Opening character must be left curly bracket: '{'.");
return -EINVAL;
}
if (json_len >= max_length) {
log_dbg(cd, "ERROR: Missing trailing null byte beyond parsed json data string.");
return -EINVAL;
}
/*
* TODO:
* validate there are legal json format characters between
* 'json_area' and 'json_area + json_len'
*/
do {
c = *(json_area + json_len);
if (c != '\0') {
log_dbg(cd, "ERROR: Forbidden ascii code 0x%02hhx found beyond json data string at offset %" PRIu64,
c, json_len);
return -EINVAL;
}
} while (++json_len < max_length);
return 0;
}
static int validate_luks2_json_object(struct crypt_device *cd, json_object *jobj_hdr, uint64_t length)
{
int r;
/* we require top level object to be of json_type_object */
r = !json_object_is_type(jobj_hdr, json_type_object);
if (r) {
log_dbg(cd, "ERROR: Resulting object is not a json object type");
return r;
}
r = LUKS2_hdr_validate(cd, jobj_hdr, length);
if (r) {
log_dbg(cd, "Repairing JSON metadata.");
/* try to correct known glitches */
LUKS2_hdr_repair(cd, jobj_hdr);
/* run validation again */
r = LUKS2_hdr_validate(cd, jobj_hdr, length);
}
if (r)
log_dbg(cd, "ERROR: LUKS2 validation failed");
return r;
}
static json_object *parse_and_validate_json(struct crypt_device *cd,
const char *json_area, uint64_t max_length)
{
int json_len, r;
json_object *jobj = parse_json_len(cd, json_area, max_length, &json_len);
if (!jobj)
return NULL;
/* successful parse_json_len must not return offset <= 0 */
assert(json_len > 0);
r = validate_json_area(cd, json_area, json_len, max_length);
if (!r)
r = validate_luks2_json_object(cd, jobj, max_length);
if (r) {
json_object_put(jobj);
jobj = NULL;
}
return jobj;
}
static int detect_device_signatures(struct crypt_device *cd, const char *path)
{
blk_probe_status prb_state;
int r;
struct blkid_handle *h;
if (!blk_supported()) {
log_dbg(cd, "Blkid probing of device signatures disabled.");
return 0;
}
if ((r = blk_init_by_path(&h, path))) {
log_dbg(cd, "Failed to initialize blkid_handle by path.");
return -EINVAL;
}
/* We don't care about details. Be fast. */
blk_set_chains_for_fast_detection(h);
/* Filter out crypto_LUKS; we don't care about it here */
blk_superblocks_filter_luks(h);
prb_state = blk_safeprobe(h);
switch (prb_state) {
case PRB_AMBIGUOUS:
log_dbg(cd, "Blkid probe couldn't decide device type unambiguously.");
/* fall through */
case PRB_FAIL:
log_dbg(cd, "Blkid probe failed.");
r = -EINVAL;
break;
case PRB_OK: /* crypto_LUKS type is filtered out */
r = -EINVAL;
if (blk_is_partition(h))
log_dbg(cd, "Blkid probe detected partition type '%s'", blk_get_partition_type(h));
else if (blk_is_superblock(h))
log_dbg(cd, "blkid probe detected superblock type '%s'", blk_get_superblock_type(h));
break;
case PRB_EMPTY:
log_dbg(cd, "Blkid probe detected no foreign device signature.");
}
blk_free(h);
return r;
}
/*
* Read and convert the on-disk LUKS2 header to its in-memory representation.
* Try recovery if the on-disk state is not consistent.
*/
int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
struct device *device, int do_recovery, int do_blkprobe)
{
enum { HDR_OK, HDR_OBSOLETE, HDR_FAIL, HDR_FAIL_IO } state_hdr1, state_hdr2;
struct luks2_hdr_disk hdr_disk1, hdr_disk2;
char *json_area1 = NULL, *json_area2 = NULL;
json_object *jobj_hdr1 = NULL, *jobj_hdr2 = NULL;
unsigned int i;
int r;
uint64_t hdr_size;
uint64_t hdr2_offsets[] = LUKS2_HDR2_OFFSETS;
/* Skip auto-recovery if locks are disabled and we're not doing LUKS2 explicit repair */
if (do_recovery && do_blkprobe && !crypt_metadata_locking_enabled()) {
do_recovery = 0;
log_dbg(cd, "Disabling header auto-recovery due to locking being disabled.");
}
/*
* Read primary LUKS2 header (offset 0).
*/
state_hdr1 = HDR_FAIL;
r = hdr_read_disk(cd, device, &hdr_disk1, &json_area1, 0, 0);
if (r == 0) {
jobj_hdr1 = parse_and_validate_json(cd, json_area1, be64_to_cpu(hdr_disk1.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr1 = jobj_hdr1 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr1 = HDR_FAIL_IO;
/*
* Read secondary LUKS2 header (follows primary).
*/
state_hdr2 = HDR_FAIL;
if (state_hdr1 != HDR_FAIL && state_hdr1 != HDR_FAIL_IO) {
r = hdr_read_disk(cd, device, &hdr_disk2, &json_area2, be64_to_cpu(hdr_disk1.hdr_size), 1);
if (r == 0) {
jobj_hdr2 = parse_and_validate_json(cd, json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr2 = jobj_hdr2 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr2 = HDR_FAIL_IO;
} else {
/*
* No header size, check all known offsets.
*/
for (r = -EINVAL,i = 0; r < 0 && i < ARRAY_SIZE(hdr2_offsets); i++)
r = hdr_read_disk(cd, device, &hdr_disk2, &json_area2, hdr2_offsets[i], 1);
if (r == 0) {
jobj_hdr2 = parse_and_validate_json(cd, json_area2, be64_to_cpu(hdr_disk2.hdr_size) - LUKS2_HDR_BIN_LEN);
state_hdr2 = jobj_hdr2 ? HDR_OK : HDR_OBSOLETE;
} else if (r == -EIO)
state_hdr2 = HDR_FAIL_IO;
}
/*
* Check sequence id if both headers are read correctly.
*/
if (state_hdr1 == HDR_OK && state_hdr2 == HDR_OK) {
if (be64_to_cpu(hdr_disk1.seqid) > be64_to_cpu(hdr_disk2.seqid))
state_hdr2 = HDR_OBSOLETE;
else if (be64_to_cpu(hdr_disk1.seqid) < be64_to_cpu(hdr_disk2.seqid))
state_hdr1 = HDR_OBSOLETE;
}
/* check header with keyslots to fit the device */
if (state_hdr1 == HDR_OK)
hdr_size = LUKS2_hdr_and_areas_size(jobj_hdr1);
else if (state_hdr2 == HDR_OK)
hdr_size = LUKS2_hdr_and_areas_size(jobj_hdr2);
else {
r = (state_hdr1 == HDR_FAIL_IO && state_hdr2 == HDR_FAIL_IO) ? -EIO : -EINVAL;
goto err;
}
r = device_check_size(cd, device, hdr_size, 0);
if (r)
goto err;
/*
* Try to rewrite (recover) bad header. Always regenerate salt for bad header.
*/
if (state_hdr1 == HDR_OK && state_hdr2 != HDR_OK) {
log_dbg(cd, "Secondary LUKS2 header requires recovery.");
if (do_blkprobe && (r = detect_device_signatures(cd, device_path(device)))) {
log_err(cd, _("Device contains ambiguous signatures, cannot auto-recover LUKS2.\n"
"Please run \"cryptsetup repair\" for recovery."));
goto err;
}
if (do_recovery) {
memcpy(&hdr_disk2, &hdr_disk1, LUKS2_HDR_BIN_LEN);
r = crypt_random_get(cd, (char*)hdr_disk2.salt, sizeof(hdr_disk2.salt), CRYPT_RND_SALT);
if (r)
log_dbg(cd, "Cannot generate master salt.");
else {
hdr_from_disk(&hdr_disk1, &hdr_disk2, hdr, 0);
r = hdr_write_disk(cd, device, hdr, json_area1, 1);
}
if (r)
log_dbg(cd, "Secondary LUKS2 header recovery failed.");
}
} else if (state_hdr1 != HDR_OK && state_hdr2 == HDR_OK) {
log_dbg(cd, "Primary LUKS2 header requires recovery.");
if (do_blkprobe && (r = detect_device_signatures(cd, device_path(device)))) {
log_err(cd, _("Device contains ambiguous signatures, cannot auto-recover LUKS2.\n"
"Please run \"cryptsetup repair\" for recovery."));
goto err;
}
if (do_recovery) {
memcpy(&hdr_disk1, &hdr_disk2, LUKS2_HDR_BIN_LEN);
r = crypt_random_get(cd, (char*)hdr_disk1.salt, sizeof(hdr_disk1.salt), CRYPT_RND_SALT);
if (r)
log_dbg(cd, "Cannot generate master salt.");
else {
hdr_from_disk(&hdr_disk2, &hdr_disk1, hdr, 1);
r = hdr_write_disk(cd, device, hdr, json_area2, 0);
}
if (r)
log_dbg(cd, "Primary LUKS2 header recovery failed.");
}
}
free(json_area1);
json_area1 = NULL;
free(json_area2);
json_area2 = NULL;
/* wrong lock for write mode during recovery attempt */
if (r == -EAGAIN)
goto err;
/*
* Even if the other header's status is failed, its salt is still used.
*/
if (state_hdr1 == HDR_OK) {
hdr_from_disk(&hdr_disk1, &hdr_disk2, hdr, 0);
hdr->jobj = jobj_hdr1;
json_object_put(jobj_hdr2);
} else if (state_hdr2 == HDR_OK) {
hdr_from_disk(&hdr_disk2, &hdr_disk1, hdr, 1);
hdr->jobj = jobj_hdr2;
json_object_put(jobj_hdr1);
}
/*
* FIXME: should this fail? At least one header was read correctly.
* r = (state_hdr1 == HDR_FAIL_IO || state_hdr2 == HDR_FAIL_IO) ? -EIO : -EINVAL;
*/
return 0;
err:
log_dbg(cd, "LUKS2 header read failed (%d).", r);
free(json_area1);
free(json_area2);
json_object_put(jobj_hdr1);
json_object_put(jobj_hdr2);
hdr->jobj = NULL;
return r;
}
int LUKS2_hdr_version_unlocked(struct crypt_device *cd, const char *backup_file)
{
struct {
char magic[LUKS2_MAGIC_L];
uint16_t version;
} __attribute__ ((packed)) hdr;
struct device *device = NULL;
int r = 0, devfd = -1, flags;
if (!backup_file)
device = crypt_metadata_device(cd);
else if (device_alloc(cd, &device, backup_file) < 0)
return 0;
if (!device)
return 0;
flags = O_RDONLY;
if (device_direct_io(device))
flags |= O_DIRECT;
devfd = open(device_path(device), flags);
if (devfd < 0)
goto err;
if ((read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), &hdr, sizeof(hdr), 0) == sizeof(hdr)) &&
!memcmp(hdr.magic, LUKS2_MAGIC_1ST, LUKS2_MAGIC_L))
r = (int)be16_to_cpu(hdr.version);
err:
if (devfd != -1)
close(devfd);
if (backup_file)
device_free(cd, device);
return r;
}
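A minimal usage sketch, assuming an initialized struct crypt_device: the helper above makes it easy to detect a LUKS2 header without taking metadata locks.
/* Returns 1 if the metadata device starts with a LUKS2 primary header
 * (version 2), 0 otherwise (including on read errors). Sketch only. */
static int is_luks2_unlocked(struct crypt_device *cd)
{
	return LUKS2_hdr_version_unlocked(cd, NULL) == 2;
}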


@@ -1,182 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef _CRYPTSETUP_LUKS2_INTERNAL_H
#define _CRYPTSETUP_LUKS2_INTERNAL_H
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <json-c/json.h>
#include "internal.h"
#include "base64.h"
#include "luks2.h"
#define UNUSED(x) (void)(x)
/* override useless forward slash escape when supported by json-c */
#ifndef JSON_C_TO_STRING_NOSLASHESCAPE
#define JSON_C_TO_STRING_NOSLASHESCAPE 0
#endif
/*
* On-disk access function prototypes
*/
int LUKS2_disk_hdr_read(struct crypt_device *cd, struct luks2_hdr *hdr,
struct device *device, int do_recovery, int do_blkprobe);
int LUKS2_disk_hdr_write(struct crypt_device *cd, struct luks2_hdr *hdr,
struct device *device);
/*
* JSON struct access helpers
*/
json_object *LUKS2_get_keyslot_jobj(struct luks2_hdr *hdr, int keyslot);
json_object *LUKS2_get_token_jobj(struct luks2_hdr *hdr, int token);
json_object *LUKS2_get_digest_jobj(struct luks2_hdr *hdr, int digest);
json_object *LUKS2_get_segment_jobj(struct luks2_hdr *hdr, int segment);
json_object *LUKS2_get_tokens_jobj(struct luks2_hdr *hdr);
void hexprint_base64(struct crypt_device *cd, json_object *jobj,
const char *sep, const char *line_sep);
json_object *parse_json_len(struct crypt_device *cd, const char *json_area,
uint64_t max_length, int *json_len);
uint64_t json_object_get_uint64(json_object *jobj);
uint32_t json_object_get_uint32(json_object *jobj);
json_object *json_object_new_uint64(uint64_t value);
int json_object_object_add_by_uint(json_object *jobj, unsigned key, json_object *jobj_val);
void json_object_object_del_by_uint(json_object *jobj, unsigned key);
void JSON_DBG(struct crypt_device *cd, json_object *jobj, const char *desc);
/*
* LUKS2 JSON validation
*/
/* validation helper */
json_object *json_contains(struct crypt_device *cd, json_object *jobj, const char *name,
const char *section, const char *key, json_type type);
int LUKS2_hdr_validate(struct crypt_device *cd, json_object *hdr_jobj, uint64_t json_size);
int LUKS2_keyslot_validate(struct crypt_device *cd, json_object *hdr_jobj,
json_object *hdr_keyslot, const char *key);
int LUKS2_check_json_size(struct crypt_device *cd, const struct luks2_hdr *hdr);
int LUKS2_token_validate(struct crypt_device *cd, json_object *hdr_jobj,
json_object *jobj_token, const char *key);
void LUKS2_token_dump(struct crypt_device *cd, int token);
/*
* LUKS2 JSON repair for known glitches
*/
void LUKS2_hdr_repair(struct crypt_device *cd, json_object *jobj_hdr);
void LUKS2_keyslots_repair(struct crypt_device *cd, json_object *jobj_hdr);
/*
* JSON array helpers
*/
struct json_object *LUKS2_array_jobj(struct json_object *array, const char *num);
struct json_object *LUKS2_array_remove(struct json_object *array, const char *num);
/*
* Plugins API
*/
/**
* LUKS2 keyslots handlers (EXPERIMENTAL)
*/
typedef int (*keyslot_alloc_func)(struct crypt_device *cd, int keyslot,
size_t volume_key_len,
const struct luks2_keyslot_params *params);
typedef int (*keyslot_update_func)(struct crypt_device *cd, int keyslot,
const struct luks2_keyslot_params *params);
typedef int (*keyslot_open_func) (struct crypt_device *cd, int keyslot,
const char *password, size_t password_len,
char *volume_key, size_t volume_key_len);
typedef int (*keyslot_store_func)(struct crypt_device *cd, int keyslot,
const char *password, size_t password_len,
const char *volume_key, size_t volume_key_len);
typedef int (*keyslot_wipe_func) (struct crypt_device *cd, int keyslot);
typedef int (*keyslot_dump_func) (struct crypt_device *cd, int keyslot);
typedef int (*keyslot_validate_func) (struct crypt_device *cd, json_object *jobj_keyslot);
typedef void(*keyslot_repair_func) (struct crypt_device *cd, json_object *jobj_keyslot);
/* see LUKS2_luks2_to_luks1 */
int placeholder_keyslot_alloc(struct crypt_device *cd,
int keyslot,
uint64_t area_offset,
uint64_t area_length,
size_t volume_key_len);
/* validate all keyslot implementations in hdr json */
int LUKS2_keyslots_validate(struct crypt_device *cd, json_object *hdr_jobj);
typedef struct {
const char *name;
keyslot_alloc_func alloc;
keyslot_update_func update;
keyslot_open_func open;
keyslot_store_func store;
keyslot_wipe_func wipe;
keyslot_dump_func dump;
keyslot_validate_func validate;
keyslot_repair_func repair;
} keyslot_handler;
/**
* LUKS2 digest handlers (EXPERIMENTAL)
*/
typedef int (*digest_verify_func)(struct crypt_device *cd, int digest,
const char *volume_key, size_t volume_key_len);
typedef int (*digest_store_func) (struct crypt_device *cd, int digest,
const char *volume_key, size_t volume_key_len);
typedef int (*digest_dump_func) (struct crypt_device *cd, int digest);
typedef struct {
const char *name;
digest_verify_func verify;
digest_store_func store;
digest_dump_func dump;
} digest_handler;
const digest_handler *LUKS2_digest_handler_type(struct crypt_device *cd, const char *type);
/**
* LUKS2 token handlers (internal use only)
*/
typedef int (*builtin_token_get_func) (json_object *jobj_token, void *params);
typedef int (*builtin_token_set_func) (json_object **jobj_token, const void *params);
typedef struct {
/* internal only section used by builtin tokens */
builtin_token_get_func get;
builtin_token_set_func set;
/* public token handler */
const crypt_token_handler *h;
} token_handler;
int token_keyring_set(json_object **, const void *);
int token_keyring_get(json_object *, void *);
int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
size_t keylength, uint64_t *area_offset, uint64_t *area_length);
#endif


@@ -1,311 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS2 header format code
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
#include <uuid/uuid.h>
#include <assert.h>
struct area {
uint64_t offset;
uint64_t length;
};
static size_t get_area_size(size_t keylength)
{
//FIXME: calculate this properly, for now it is AF_split_sectors
return size_round_up(keylength * 4000, 4096);
}
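A worked example of the sizing above, for two common key lengths:
/*
 * get_area_size(32) = size_round_up(32 * 4000, 4096) = size_round_up(128000, 4096) = 131072 (128 KiB)
 * get_area_size(64) = size_round_up(64 * 4000, 4096) = size_round_up(256000, 4096) = 258048 (252 KiB)
 */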
static size_t get_min_offset(struct luks2_hdr *hdr)
{
return 2 * hdr->hdr_size;
}
static size_t get_max_offset(struct crypt_device *cd)
{
return crypt_get_data_offset(cd) * SECTOR_SIZE;
}
int LUKS2_find_area_gap(struct crypt_device *cd, struct luks2_hdr *hdr,
size_t keylength, uint64_t *area_offset, uint64_t *area_length)
{
struct area areas[LUKS2_KEYSLOTS_MAX], sorted_areas[LUKS2_KEYSLOTS_MAX] = {};
int i, j, k, area_i;
size_t offset, length;
/* fill area offset + length table */
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
if (!LUKS2_keyslot_area(hdr, i, &areas[i].offset, &areas[i].length))
continue;
areas[i].length = 0;
areas[i].offset = 0;
}
/* sort table */
k = 0; /* index in sorted table */
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
offset = get_max_offset(cd) ?: UINT64_MAX;
area_i = -1;
/* search for the smallest offset in table */
for (j = 0; j < LUKS2_KEYSLOTS_MAX; j++)
if (areas[j].offset && areas[j].offset <= offset) {
area_i = j;
offset = areas[j].offset;
}
if (area_i >= 0) {
sorted_areas[k].length = areas[area_i].length;
sorted_areas[k].offset = areas[area_i].offset;
areas[area_i].length = 0;
areas[area_i].offset = 0;
k++;
}
}
/* search for the gap we can use */
offset = get_min_offset(hdr);
length = get_area_size(keylength);
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
/* skip empty */
if (sorted_areas[i].offset == 0 || sorted_areas[i].length == 0)
continue;
/* enough space before the used area */
if ((offset < sorted_areas[i].offset) && ((offset + length) <= sorted_areas[i].offset))
break;
/* both offset and length are already aligned to 4096 bytes */
offset = sorted_areas[i].offset + sorted_areas[i].length;
}
if (get_max_offset(cd) && (offset + length) > get_max_offset(cd)) {
log_err(cd, _("No space for new keyslot."));
return -EINVAL;
}
log_dbg(cd, "Found area %zu -> %zu", offset, length + offset);
/*
log_dbg("Area offset min: %zu, max %zu, slots max %u",
get_min_offset(hdr), get_max_offset(cd), LUKS2_KEYSLOTS_MAX);
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++)
log_dbg("SLOT[%02i]: %-8" PRIu64 " -> %-8" PRIu64, i,
sorted_areas[i].offset,
sorted_areas[i].length + sorted_areas[i].offset);
*/
*area_offset = offset;
*area_length = length;
return 0;
}
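/*
 * Illustration only (not part of this file, compiled as a separate toy
 * program): the gap search above is a selection sort of the used keyslot
 * areas by offset followed by a first-fit scan. The sketch below repeats
 * that idea with invented names (demo_area, find_gap) and made-up offsets.
 */
#include <stdint.h>
#include <stdio.h>

#define DEMO_SLOTS 4

struct demo_area { uint64_t offset, length; };

/* First-fit gap search over areas already sorted by offset.
 * min_off is the first usable byte, max_off the end of the keyslot region. */
static int find_gap(const struct demo_area *sorted, int n, uint64_t min_off,
		    uint64_t max_off, uint64_t need, uint64_t *gap_off)
{
	uint64_t off = min_off;
	int i;

	for (i = 0; i < n; i++) {
		if (!sorted[i].length)
			continue;
		/* Enough room before this used area? */
		if (off < sorted[i].offset && off + need <= sorted[i].offset)
			break;
		/* Otherwise jump just past the used area and keep looking. */
		off = sorted[i].offset + sorted[i].length;
	}
	if (off + need > max_off)
		return -1; /* no free gap large enough */
	*gap_off = off;
	return 0;
}

int main(void)
{
	/* Two used areas: [32 KiB, 64 KiB) and [64 KiB, 96 KiB) inside a
	 * 128 KiB keyslot region starting at 32 KiB. */
	struct demo_area used[DEMO_SLOTS] = { { 32768, 32768 }, { 65536, 32768 } };
	uint64_t off;

	if (!find_gap(used, DEMO_SLOTS, 32768, 163840, 32768, &off))
		printf("new keyslot area fits at offset %llu\n",
		       (unsigned long long)off);
	return 0;
}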
int LUKS2_check_metadata_area_size(uint64_t metadata_size)
{
/* see LUKS2_HDR2_OFFSETS */
return (metadata_size != 0x004000 &&
metadata_size != 0x008000 && metadata_size != 0x010000 &&
metadata_size != 0x020000 && metadata_size != 0x040000 &&
metadata_size != 0x080000 && metadata_size != 0x100000 &&
metadata_size != 0x200000 && metadata_size != 0x400000);
}
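/*
 * Aside (illustrative, compiled as a separate toy program): the sizes
 * accepted above are exactly the powers of two from 16 KiB (0x4000) to
 * 4 MiB (0x400000), so an equivalent check can use a power-of-two test.
 * The helper name below is hypothetical.
 */
#include <assert.h>
#include <stdint.h>

/* Returns 0 for a valid metadata area size, non-zero otherwise --
 * same contract as the explicit list in LUKS2_check_metadata_area_size(). */
static int demo_metadata_size_invalid(uint64_t size)
{
	if (size < 0x4000 || size > 0x400000)
		return 1;
	return (size & (size - 1)) != 0; /* power-of-two test */
}

int main(void)
{
	assert(!demo_metadata_size_invalid(0x10000)); /* 64 KiB: valid */
	assert(demo_metadata_size_invalid(0x18000));  /* 96 KiB: invalid */
	return 0;
}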
int LUKS2_check_keyslots_area_size(uint64_t keyslots_size)
{
return (MISALIGNED_4K(keyslots_size) ||
keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE);
}
int LUKS2_generate_hdr(
struct crypt_device *cd,
struct luks2_hdr *hdr,
const struct volume_key *vk,
const char *cipherName,
const char *cipherMode,
const char *integrity,
const char *uuid,
unsigned int sector_size, /* in bytes */
uint64_t data_offset, /* in bytes */
uint64_t align_offset, /* in bytes */
uint64_t required_alignment,
uint64_t metadata_size,
uint64_t keyslots_size)
{
struct json_object *jobj_segment, *jobj_integrity, *jobj_keyslots, *jobj_segments, *jobj_config;
char cipher[128];
uuid_t partitionUuid;
int digest;
if (!metadata_size)
metadata_size = LUKS2_HDR_16K_LEN;
hdr->hdr_size = metadata_size;
if (data_offset && data_offset < get_min_offset(hdr)) {
log_err(cd, _("Requested data offset is too small."));
return -EINVAL;
}
/* Increase keyslot size according to data offset */
if (!keyslots_size && data_offset)
keyslots_size = data_offset - get_min_offset(hdr);
/* keyslots size has to be 4 KiB aligned */
keyslots_size -= (keyslots_size % 4096);
if (keyslots_size > LUKS2_MAX_KEYSLOTS_SIZE)
keyslots_size = LUKS2_MAX_KEYSLOTS_SIZE;
if (!keyslots_size) {
assert(LUKS2_DEFAULT_HDR_SIZE > 2 * LUKS2_HDR_OFFSET_MAX);
keyslots_size = LUKS2_DEFAULT_HDR_SIZE - get_min_offset(hdr);
}
/* Decrease keyslots_size if we have smaller data_offset */
if (data_offset && (keyslots_size + get_min_offset(hdr)) > data_offset) {
keyslots_size = data_offset - get_min_offset(hdr);
log_dbg(cd, "Decreasing keyslot area size to %" PRIu64
" bytes due to the requested data offset %"
PRIu64 " bytes.", keyslots_size, data_offset);
}
/* Data offset has priority */
if (!data_offset && required_alignment) {
data_offset = size_round_up(get_min_offset(hdr) + keyslots_size,
(size_t)required_alignment);
data_offset += align_offset;
}
log_dbg(cd, "Formatting LUKS2 with JSON metadata area %" PRIu64
" bytes and keyslots area %" PRIu64 " bytes.",
metadata_size - LUKS2_HDR_BIN_LEN, keyslots_size);
if (keyslots_size < (LUKS2_HDR_OFFSET_MAX - 2*LUKS2_HDR_16K_LEN))
log_std(cd, _("WARNING: keyslots area (%" PRIu64 " bytes) is very small,"
" available LUKS2 keyslot count is very limited.\n"),
keyslots_size);
hdr->seqid = 1;
hdr->version = 2;
memset(hdr->label, 0, LUKS2_LABEL_L);
strcpy(hdr->checksum_alg, "sha256");
crypt_random_get(cd, (char*)hdr->salt1, LUKS2_SALT_L, CRYPT_RND_SALT);
crypt_random_get(cd, (char*)hdr->salt2, LUKS2_SALT_L, CRYPT_RND_SALT);
if (uuid && uuid_parse(uuid, partitionUuid) == -1) {
log_err(cd, _("Wrong LUKS UUID format provided."));
return -EINVAL;
}
if (!uuid)
uuid_generate(partitionUuid);
uuid_unparse(partitionUuid, hdr->uuid);
if (*cipherMode != '\0')
snprintf(cipher, sizeof(cipher), "%s-%s", cipherName, cipherMode);
else
snprintf(cipher, sizeof(cipher), "%s", cipherName);
hdr->jobj = json_object_new_object();
jobj_keyslots = json_object_new_object();
json_object_object_add(hdr->jobj, "keyslots", jobj_keyslots);
json_object_object_add(hdr->jobj, "tokens", json_object_new_object());
jobj_segments = json_object_new_object();
json_object_object_add(hdr->jobj, "segments", jobj_segments);
json_object_object_add(hdr->jobj, "digests", json_object_new_object());
jobj_config = json_object_new_object();
json_object_object_add(hdr->jobj, "config", jobj_config);
digest = LUKS2_digest_create(cd, "pbkdf2", hdr, vk);
if (digest < 0) {
json_object_put(hdr->jobj);
hdr->jobj = NULL;
return -EINVAL;
}
if (LUKS2_digest_segment_assign(cd, hdr, CRYPT_DEFAULT_SEGMENT, digest, 1, 0) < 0) {
json_object_put(hdr->jobj);
hdr->jobj = NULL;
return -EINVAL;
}
jobj_segment = json_object_new_object();
json_object_object_add(jobj_segment, "type", json_object_new_string("crypt"));
json_object_object_add(jobj_segment, "offset", json_object_new_uint64(data_offset));
json_object_object_add(jobj_segment, "iv_tweak", json_object_new_string("0"));
json_object_object_add(jobj_segment, "size", json_object_new_string("dynamic"));
json_object_object_add(jobj_segment, "encryption", json_object_new_string(cipher));
json_object_object_add(jobj_segment, "sector_size", json_object_new_int(sector_size));
if (integrity) {
jobj_integrity = json_object_new_object();
json_object_object_add(jobj_integrity, "type", json_object_new_string(integrity));
json_object_object_add(jobj_integrity, "journal_encryption", json_object_new_string("none"));
json_object_object_add(jobj_integrity, "journal_integrity", json_object_new_string("none"));
json_object_object_add(jobj_segment, "integrity", jobj_integrity);
}
json_object_object_add_by_uint(jobj_segments, CRYPT_DEFAULT_SEGMENT, jobj_segment);
json_object_object_add(jobj_config, "json_size", json_object_new_uint64(metadata_size - LUKS2_HDR_BIN_LEN));
json_object_object_add(jobj_config, "keyslots_size", json_object_new_uint64(keyslots_size));
JSON_DBG(cd, hdr->jobj, "Header JSON:");
return 0;
}
int LUKS2_wipe_header_areas(struct crypt_device *cd,
struct luks2_hdr *hdr)
{
int r;
uint64_t offset, length;
size_t wipe_block;
/* Wipe complete header, keyslots and padding areas with zeroes. */
offset = 0;
length = LUKS2_get_data_offset(hdr) * SECTOR_SIZE;
wipe_block = 1024 * 1024;
if (LUKS2_hdr_validate(cd, hdr->jobj, hdr->hdr_size - LUKS2_HDR_BIN_LEN))
return -EINVAL;
/* On detached header wipe at least the first 4k */
if (length == 0) {
length = 4096;
wipe_block = 4096;
}
log_dbg(cd, "Wiping LUKS areas (0x%06" PRIx64 " - 0x%06" PRIx64") with zeroes.",
offset, length + offset);
r = crypt_wipe_device(cd, crypt_metadata_device(cd), CRYPT_WIPE_ZERO,
offset, length, wipe_block, NULL, NULL);
if (r < 0)
return r;
/* Wipe keyslot area */
wipe_block = 1024 * 1024;
offset = get_min_offset(hdr);
length = LUKS2_keyslots_size(hdr->jobj);
log_dbg(cd, "Wiping keyslots area (0x%06" PRIx64 " - 0x%06" PRIx64") with random data.",
offset, length + offset);
return crypt_wipe_device(cd, crypt_metadata_device(cd), CRYPT_WIPE_RANDOM,
offset, length, wipe_block, NULL, NULL);
}
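For readers less familiar with json-c, the segment object assembled inside LUKS2_generate_hdr() can be reproduced in isolation with nothing but the public json-c API. The sketch below uses invented example values (offset, cipher) and json_object_new_int64 in place of the uint64 helper used above.

/* Build a LUKS2-style "crypt" segment object with plain json-c.
 * Build (assuming json-c is installed): cc demo.c $(pkg-config --cflags --libs json-c) */
#include <stdio.h>
#include <json-c/json.h>

int main(void)
{
	struct json_object *seg = json_object_new_object();

	json_object_object_add(seg, "type", json_object_new_string("crypt"));
	json_object_object_add(seg, "offset", json_object_new_int64(16777216));
	json_object_object_add(seg, "iv_tweak", json_object_new_string("0"));
	json_object_object_add(seg, "size", json_object_new_string("dynamic"));
	json_object_object_add(seg, "encryption",
			       json_object_new_string("aes-xts-plain64"));
	json_object_object_add(seg, "sector_size", json_object_new_int(512));

	printf("%s\n", json_object_to_json_string_ext(seg, JSON_C_TO_STRING_PRETTY));
	json_object_put(seg); /* releases the whole object tree */
	return 0;
}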

File diff suppressed because it is too large.


@@ -1,663 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, keyslot handling
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
/* Internal implementations */
extern const keyslot_handler luks2_keyslot;
static const keyslot_handler *keyslot_handlers[LUKS2_KEYSLOTS_MAX] = {
&luks2_keyslot,
NULL
};
static const keyslot_handler
*LUKS2_keyslot_handler_type(struct crypt_device *cd, const char *type)
{
int i;
for (i = 0; i < LUKS2_KEYSLOTS_MAX && keyslot_handlers[i]; i++) {
if (!strcmp(keyslot_handlers[i]->name, type))
return keyslot_handlers[i];
}
return NULL;
}
static const keyslot_handler
*LUKS2_keyslot_handler(struct crypt_device *cd, int keyslot)
{
struct luks2_hdr *hdr;
json_object *jobj1, *jobj2;
if (keyslot < 0)
return NULL;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return NULL;
if (!(jobj1 = LUKS2_get_keyslot_jobj(hdr, keyslot)))
return NULL;
if (!json_object_object_get_ex(jobj1, "type", &jobj2))
return NULL;
return LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj2));
}
int LUKS2_keyslot_find_empty(struct luks2_hdr *hdr, const char *type)
{
int i;
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++)
if (!LUKS2_get_keyslot_jobj(hdr, i))
return i;
return -EINVAL;
}
/* Check if a keyslot is assigned to a specific segment */
int LUKS2_keyslot_for_segment(struct luks2_hdr *hdr, int keyslot, int segment)
{
int keyslot_digest, segment_digest;
/* no need to check anything */
if (segment == CRYPT_ANY_SEGMENT)
return 0;
keyslot_digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (keyslot_digest < 0)
return -EINVAL;
segment_digest = LUKS2_digest_by_segment(hdr, segment);
if (segment_digest < 0)
return segment_digest;
return segment_digest == keyslot_digest ? 0 : -ENOENT;
}
/* Number of keyslots assigned to a segment or all keyslots for CRYPT_ANY_SEGMENT */
int LUKS2_keyslot_active_count(struct luks2_hdr *hdr, int segment)
{
int num = 0;
json_object *jobj_keyslots;
json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots);
json_object_object_foreach(jobj_keyslots, slot, val) {
UNUSED(val);
if (!LUKS2_keyslot_for_segment(hdr, atoi(slot), segment))
num++;
}
return num;
}
int LUKS2_keyslot_cipher_incompatible(struct crypt_device *cd, const char *cipher_spec)
{
char cipher[MAX_CIPHER_LEN], cipher_mode[MAX_CIPHER_LEN];
if (!cipher_spec || !strcmp(cipher_spec, "null") || !strcmp(cipher_spec, "cipher_null"))
return 1;
if (crypt_parse_name_and_mode(cipher_spec, cipher, NULL, cipher_mode) < 0)
return 1;
/* Keyslot is already authenticated; we cannot use integrity tags here */
if (crypt_get_integrity_tag_size(cd))
return 1;
/* Wrapped key schemes cannot be used for keyslot encryption */
if (crypt_cipher_wrapped_key(cipher, cipher_mode))
return 1;
/* Check if crypto backend can use the cipher */
if (crypt_cipher_ivsize(cipher, cipher_mode) < 0)
return 1;
return 0;
}
int LUKS2_keyslot_params_default(struct crypt_device *cd, struct luks2_hdr *hdr,
struct luks2_keyslot_params *params)
{
const struct crypt_pbkdf_type *pbkdf = crypt_get_pbkdf_type(cd);
const char *cipher_spec;
size_t key_size;
int r;
if (!hdr || !pbkdf || !params)
return -EINVAL;
/*
* set keyslot area encryption parameters
*/
params->area_type = LUKS2_KEYSLOT_AREA_RAW;
cipher_spec = crypt_keyslot_get_encryption(cd, CRYPT_ANY_SLOT, &key_size);
if (!cipher_spec || !key_size)
return -EINVAL;
params->area.raw.key_size = key_size;
r = snprintf(params->area.raw.encryption, sizeof(params->area.raw.encryption), "%s", cipher_spec);
if (r < 0 || (size_t)r >= sizeof(params->area.raw.encryption))
return -EINVAL;
/*
* set keyslot AF parameters
*/
params->af_type = LUKS2_KEYSLOT_AF_LUKS1;
/* Currently the AF hash is taken from the PBKDF settings */
r = snprintf(params->af.luks1.hash, sizeof(params->af.luks1.hash), "%s", pbkdf->hash);
if (r < 0 || (size_t)r >= sizeof(params->af.luks1.hash))
return -EINVAL;
params->af.luks1.stripes = 4000;
return 0;
}
int LUKS2_keyslot_pbkdf(struct luks2_hdr *hdr, int keyslot, struct crypt_pbkdf_type *pbkdf)
{
json_object *jobj_keyslot, *jobj_kdf, *jobj;
if (!hdr || !pbkdf)
return -EINVAL;
if (LUKS2_keyslot_info(hdr, keyslot) == CRYPT_SLOT_INVALID)
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -ENOENT;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf))
return -EINVAL;
if (!json_object_object_get_ex(jobj_kdf, "type", &jobj))
return -EINVAL;
memset(pbkdf, 0, sizeof(*pbkdf));
pbkdf->type = json_object_get_string(jobj);
if (json_object_object_get_ex(jobj_kdf, "hash", &jobj))
pbkdf->hash = json_object_get_string(jobj);
if (json_object_object_get_ex(jobj_kdf, "iterations", &jobj))
pbkdf->iterations = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "time", &jobj))
pbkdf->iterations = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "memory", &jobj))
pbkdf->max_memory_kb = json_object_get_int(jobj);
if (json_object_object_get_ex(jobj_kdf, "cpus", &jobj))
pbkdf->parallel_threads = json_object_get_int(jobj);
return 0;
}
static int LUKS2_keyslot_unbound(struct luks2_hdr *hdr, int keyslot)
{
json_object *jobj_digest, *jobj_segments;
int digest = LUKS2_digest_by_keyslot(hdr, keyslot);
if (digest < 0)
return 0;
if (!(jobj_digest = LUKS2_get_digest_jobj(hdr, digest)))
return 0;
json_object_object_get_ex(jobj_digest, "segments", &jobj_segments);
if (!jobj_segments || !json_object_is_type(jobj_segments, json_type_array) ||
json_object_array_length(jobj_segments) == 0)
return 1;
return 0;
}
crypt_keyslot_info LUKS2_keyslot_info(struct luks2_hdr *hdr, int keyslot)
{
if (keyslot >= LUKS2_KEYSLOTS_MAX || keyslot < 0)
return CRYPT_SLOT_INVALID;
if (!LUKS2_get_keyslot_jobj(hdr, keyslot))
return CRYPT_SLOT_INACTIVE;
if (LUKS2_keyslot_unbound(hdr, keyslot))
return CRYPT_SLOT_UNBOUND;
if (LUKS2_keyslot_active_count(hdr, CRYPT_DEFAULT_SEGMENT) == 1 &&
!LUKS2_keyslot_for_segment(hdr, keyslot, CRYPT_DEFAULT_SEGMENT))
return CRYPT_SLOT_ACTIVE_LAST;
return CRYPT_SLOT_ACTIVE;
}
int LUKS2_keyslot_area(struct luks2_hdr *hdr,
int keyslot,
uint64_t *offset,
uint64_t *length)
{
json_object *jobj_keyslot, *jobj_area, *jobj;
if (LUKS2_keyslot_info(hdr, keyslot) == CRYPT_SLOT_INVALID)
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -ENOENT;
if (!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
if (!json_object_object_get_ex(jobj_area, "offset", &jobj))
return -EINVAL;
*offset = json_object_get_int64(jobj);
if (!json_object_object_get_ex(jobj_area, "size", &jobj))
return -EINVAL;
*length = json_object_get_int64(jobj);
return 0;
}
static int LUKS2_open_and_verify(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int segment,
const char *password,
size_t password_len,
struct volume_key **vk)
{
const keyslot_handler *h;
int key_size, r;
if (!(h = LUKS2_keyslot_handler(cd, keyslot)))
return -ENOENT;
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
log_dbg(cd, "Keyslot %d validation failed.", keyslot);
return r;
}
r = LUKS2_keyslot_for_segment(hdr, keyslot, segment);
if (r) {
if (r == -ENOENT)
log_dbg(cd, "Keyslot %d unusable for segment %d.", keyslot, segment);
return r;
}
key_size = LUKS2_get_volume_key_size(hdr, segment);
if (key_size < 0)
key_size = LUKS2_get_keyslot_stored_key_size(hdr, keyslot);
if (key_size < 0)
return -EINVAL;
*vk = crypt_alloc_volume_key(key_size, NULL);
if (!*vk)
return -ENOMEM;
r = h->open(cd, keyslot, password, password_len, (*vk)->key, (*vk)->keylength);
if (r < 0)
log_dbg(cd, "Keyslot %d (%s) open failed with %d.", keyslot, h->name, r);
else
r = LUKS2_digest_verify(cd, hdr, *vk, keyslot);
if (r < 0) {
crypt_free_volume_key(*vk);
*vk = NULL;
}
return r < 0 ? r : keyslot;
}
static int LUKS2_keyslot_open_priority(struct crypt_device *cd,
struct luks2_hdr *hdr,
crypt_keyslot_priority priority,
const char *password,
size_t password_len,
int segment,
struct volume_key **vk)
{
json_object *jobj_keyslots, *jobj;
crypt_keyslot_priority slot_priority;
int keyslot, r = -ENOENT;
json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots);
json_object_object_foreach(jobj_keyslots, slot, val) {
if (!json_object_object_get_ex(val, "priority", &jobj))
slot_priority = CRYPT_SLOT_PRIORITY_NORMAL;
else
slot_priority = json_object_get_int(jobj);
keyslot = atoi(slot);
if (slot_priority != priority) {
log_dbg(cd, "Keyslot %d priority %d != %d (required), skipped.",
keyslot, slot_priority, priority);
continue;
}
r = LUKS2_open_and_verify(cd, hdr, keyslot, segment, password, password_len, vk);
/* Do not retry for errors other than -EPERM or -ENOENT;
   the former means wrong passphrase, the latter a keyslot unusable for this segment */
if ((r != -EPERM) && (r != -ENOENT))
break;
}
return r;
}
int LUKS2_keyslot_open(struct crypt_device *cd,
int keyslot,
int segment,
const char *password,
size_t password_len,
struct volume_key **vk)
{
struct luks2_hdr *hdr;
int r_prio, r = -EINVAL;
hdr = crypt_get_hdr(cd, CRYPT_LUKS2);
if (keyslot == CRYPT_ANY_SLOT) {
r_prio = LUKS2_keyslot_open_priority(cd, hdr, CRYPT_SLOT_PRIORITY_PREFER,
password, password_len, segment, vk);
if (r_prio >= 0)
r = r_prio;
else if (r_prio != -EPERM && r_prio != -ENOENT)
r = r_prio;
else
r = LUKS2_keyslot_open_priority(cd, hdr, CRYPT_SLOT_PRIORITY_NORMAL,
password, password_len, segment, vk);
/* Prefer password wrong to no entry from priority slot */
if (r_prio == -EPERM && r == -ENOENT)
r = r_prio;
} else
r = LUKS2_open_and_verify(cd, hdr, keyslot, segment, password, password_len, vk);
return r;
}
int LUKS2_keyslot_store(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
const char *password,
size_t password_len,
const struct volume_key *vk,
const struct luks2_keyslot_params *params)
{
const keyslot_handler *h;
int r;
if (keyslot == CRYPT_ANY_SLOT)
return -EINVAL;
if (!LUKS2_get_keyslot_jobj(hdr, keyslot)) {
/* Try to allocate default and empty keyslot type */
h = LUKS2_keyslot_handler_type(cd, "luks2");
if (!h)
return -EINVAL;
r = h->alloc(cd, keyslot, vk->keylength, params);
if (r)
return r;
} else {
if (!(h = LUKS2_keyslot_handler(cd, keyslot)))
return -EINVAL;
r = h->update(cd, keyslot, params);
if (r) {
log_dbg(cd, "Failed to update keyslot %d json.", keyslot);
return r;
}
}
r = h->validate(cd, LUKS2_get_keyslot_jobj(hdr, keyslot));
if (r) {
log_dbg(cd, "Keyslot validation failed.");
return r;
}
return h->store(cd, keyslot, password, password_len,
vk->key, vk->keylength);
}
int LUKS2_keyslot_wipe(struct crypt_device *cd,
struct luks2_hdr *hdr,
int keyslot,
int wipe_area_only)
{
struct device *device = crypt_metadata_device(cd);
uint64_t area_offset, area_length;
int r;
json_object *jobj_keyslot, *jobj_keyslots;
const keyslot_handler *h;
h = LUKS2_keyslot_handler(cd, keyslot);
if (!json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots))
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -ENOENT;
if (wipe_area_only)
log_dbg(cd, "Wiping keyslot %d area only.", keyslot);
/* Just check that nobody uses the metadata now */
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."),
device_path(device));
return r;
}
device_write_unlock(cd, device);
/* secure deletion of possible key material in keyslot area */
r = crypt_keyslot_area(cd, keyslot, &area_offset, &area_length);
if (r && r != -ENOENT)
return r;
/* We can destroy the binary keyslot area now without lock */
if (!r) {
r = crypt_wipe_device(cd, device, CRYPT_WIPE_SPECIAL, area_offset,
area_length, area_length, NULL, NULL);
if (r) {
if (r == -EACCES) {
log_err(cd, _("Cannot write to device %s, permission denied."),
device_path(device));
r = -EINVAL;
} else
log_err(cd, _("Cannot wipe device %s."), device_path(device));
return r;
}
}
if (wipe_area_only)
return r;
/* Slot specific wipe */
if (h) {
r = h->wipe(cd, keyslot);
if (r < 0)
return r;
} else
log_dbg(cd, "Wiping keyslot %d without specific-slot handler loaded.", keyslot);
json_object_object_del_by_uint(jobj_keyslots, keyslot);
return LUKS2_hdr_write(cd, hdr);
}
int LUKS2_keyslot_dump(struct crypt_device *cd, int keyslot)
{
const keyslot_handler *h;
if (!(h = LUKS2_keyslot_handler(cd, keyslot)))
return -EINVAL;
return h->dump(cd, keyslot);
}
crypt_keyslot_priority LUKS2_keyslot_priority_get(struct crypt_device *cd,
struct luks2_hdr *hdr, int keyslot)
{
json_object *jobj_keyslot, *jobj_priority;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return CRYPT_SLOT_PRIORITY_INVALID;
if (!json_object_object_get_ex(jobj_keyslot, "priority", &jobj_priority))
return CRYPT_SLOT_PRIORITY_NORMAL;
return json_object_get_int(jobj_priority);
}
int LUKS2_keyslot_priority_set(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, crypt_keyslot_priority priority, int commit)
{
json_object *jobj_keyslot;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -EINVAL;
if (priority == CRYPT_SLOT_PRIORITY_NORMAL)
json_object_object_del(jobj_keyslot, "priority");
else
json_object_object_add(jobj_keyslot, "priority", json_object_new_int(priority));
return commit ? LUKS2_hdr_write(cd, hdr) : 0;
}
int placeholder_keyslot_alloc(struct crypt_device *cd,
int keyslot,
uint64_t area_offset,
uint64_t area_length,
size_t volume_key_len)
{
struct luks2_hdr *hdr;
json_object *jobj_keyslots, *jobj_keyslot, *jobj_area;
log_dbg(cd, "Allocating placeholder keyslot %d for LUKS1 down conversion.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
if (keyslot < 0 || keyslot >= LUKS2_KEYSLOTS_MAX)
return -EINVAL;
if (LUKS2_get_keyslot_jobj(hdr, keyslot))
return -EINVAL;
if (!json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots))
return -EINVAL;
jobj_keyslot = json_object_new_object();
json_object_object_add(jobj_keyslot, "type", json_object_new_string("placeholder"));
/*
* key_size = -1 makes placeholder keyslot impossible to pass validation.
* It is a safeguard against accidentally storing the temporary
* conversion LUKS2 header.
*/
json_object_object_add(jobj_keyslot, "key_size", json_object_new_int(-1));
/* Area object */
jobj_area = json_object_new_object();
json_object_object_add(jobj_area, "offset", json_object_new_uint64(area_offset));
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_length));
json_object_object_add(jobj_keyslot, "area", jobj_area);
json_object_object_add_by_uint(jobj_keyslots, keyslot, jobj_keyslot);
return 0;
}
static unsigned LUKS2_get_keyslot_digests_count(json_object *hdr_jobj, int keyslot)
{
char num[16];
json_object *jobj_digests, *jobj_keyslots;
unsigned count = 0;
if (!json_object_object_get_ex(hdr_jobj, "digests", &jobj_digests))
return 0;
if (snprintf(num, sizeof(num), "%u", keyslot) < 0)
return 0;
json_object_object_foreach(jobj_digests, key, val) {
UNUSED(key);
json_object_object_get_ex(val, "keyslots", &jobj_keyslots);
if (LUKS2_array_jobj(jobj_keyslots, num))
count++;
}
return count;
}
/* run only on header that passed basic format validation */
int LUKS2_keyslots_validate(struct crypt_device *cd, json_object *hdr_jobj)
{
const keyslot_handler *h;
int keyslot;
json_object *jobj_keyslots, *jobj_type;
if (!json_object_object_get_ex(hdr_jobj, "keyslots", &jobj_keyslots))
return -EINVAL;
json_object_object_foreach(jobj_keyslots, slot, val) {
keyslot = atoi(slot);
json_object_object_get_ex(val, "type", &jobj_type);
h = LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj_type));
if (!h)
continue;
if (h->validate && h->validate(cd, val)) {
log_dbg(cd, "Keyslot type %s validation failed on keyslot %d.", h->name, keyslot);
return -EINVAL;
}
if (!strcmp(h->name, "luks2") && LUKS2_get_keyslot_digests_count(hdr_jobj, keyslot) != 1) {
log_dbg(cd, "Keyslot %d is not assigned to exactly 1 digest.", keyslot);
return -EINVAL;
}
}
return 0;
}
void LUKS2_keyslots_repair(struct crypt_device *cd, json_object *jobj_keyslots)
{
const keyslot_handler *h;
json_object *jobj_type;
json_object_object_foreach(jobj_keyslots, slot, val) {
UNUSED(slot);
if (!json_object_is_type(val, json_type_object) ||
!json_object_object_get_ex(val, "type", &jobj_type) ||
!json_object_is_type(jobj_type, json_type_string))
continue;
h = LUKS2_keyslot_handler_type(cd, json_object_get_string(jobj_type));
if (h && h->repair)
h->repair(cd, val);
}
}
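The CRYPT_ANY_SLOT path in LUKS2_keyslot_open() is easiest to see in isolation: try the high-priority slots first, fall back to normal-priority slots, and if the preferred pass already reported a wrong passphrase, keep that result rather than the fallback's "no usable slot". The standalone sketch below simulates slot results with a stub try_slot(); the type and function names are invented for this example.

#include <errno.h>
#include <stdio.h>

enum demo_prio { PRIO_NORMAL = 1, PRIO_PREFER = 2 };

struct demo_slot {
	enum demo_prio prio;
	int result; /* simulated open result for this slot */
};

/* Stand-in for LUKS2_open_and_verify(): >= 0 is the opened keyslot number,
 * -EPERM means wrong passphrase, -ENOENT means unusable for the segment. */
static int try_slot(const struct demo_slot *s, int i)
{
	return s[i].result < 0 ? s[i].result : i;
}

static int open_by_priority(const struct demo_slot *s, int n, enum demo_prio want)
{
	int i, r = -ENOENT;

	for (i = 0; i < n; i++) {
		if (s[i].prio != want)
			continue;
		r = try_slot(s, i);
		/* Only -EPERM and -ENOENT are worth retrying with another slot. */
		if (r != -EPERM && r != -ENOENT)
			break;
	}
	return r;
}

int main(void)
{
	struct demo_slot slots[] = {
		{ PRIO_PREFER, -EPERM },  /* preferred slot: wrong passphrase */
		{ PRIO_NORMAL, -ENOENT }, /* normal slot: not bound to this segment */
	};
	int r_prio = open_by_priority(slots, 2, PRIO_PREFER);
	int r = (r_prio >= 0 || (r_prio != -EPERM && r_prio != -ENOENT)) ?
		r_prio : open_by_priority(slots, 2, PRIO_NORMAL);

	/* Prefer "wrong passphrase" from the preferred pass over "no usable slot". */
	if (r_prio == -EPERM && r == -ENOENT)
		r = r_prio;
	printf("open result: %d\n", r);
	return 0;
}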


@@ -1,785 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS2 type keyslot handler
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
/* FIXME: move keyslot encryption to crypto backend */
#include "../luks1/af.h"
#define LUKS_SALTSIZE 32
#define LUKS_SLOT_ITERATIONS_MIN 1000
#define LUKS_STRIPES 4000
static int luks2_encrypt_to_storage(char *src, size_t srcLength,
const char *cipher, const char *cipher_mode,
struct volume_key *vk, unsigned int sector,
struct crypt_device *cd)
{
struct device *device = crypt_metadata_device(cd);
#ifndef ENABLE_AF_ALG /* Support for old kernel without Crypto API */
int r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."), device_path(device));
return r;
}
r = LUKS_encrypt_to_storage(src, srcLength, cipher, cipher_mode, vk, sector, cd);
device_write_unlock(cd, crypt_metadata_device(cd));
return r;
#else
struct crypt_storage *s;
int devfd = -1, r;
/* Only whole sector writes supported */
if (MISALIGNED_512(srcLength))
return -EINVAL;
/* Encrypt buffer */
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
if (r) {
log_dbg(cd, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
return r;
}
r = crypt_storage_encrypt(s, 0, srcLength / SECTOR_SIZE, src);
crypt_storage_destroy(s);
if (r)
return r;
r = device_write_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire write lock on device %s."),
device_path(device));
return r;
}
devfd = device_open_locked(cd, device, O_RDWR);
if (devfd >= 0) {
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), src,
srcLength, sector * SECTOR_SIZE) < 0)
r = -EIO;
else
r = 0;
device_sync(cd, device, devfd);
close(devfd);
} else
r = -EIO;
device_write_unlock(cd, device);
if (r)
log_err(cd, _("IO error while encrypting keyslot."));
return r;
#endif
}
static int luks2_decrypt_from_storage(char *dst, size_t dstLength,
const char *cipher, const char *cipher_mode, struct volume_key *vk,
unsigned int sector, struct crypt_device *cd)
{
struct device *device = crypt_metadata_device(cd);
#ifndef ENABLE_AF_ALG /* Support for old kernel without Crypto API */
int r = device_read_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire read lock on device %s."), device_path(device));
return r;
}
r = LUKS_decrypt_from_storage(dst, dstLength, cipher, cipher_mode, vk, sector, cd);
device_read_unlock(cd, crypt_metadata_device(cd));
return r;
#else
struct crypt_storage *s;
int devfd = -1, r;
/* Only whole sector reads supported */
if (MISALIGNED_512(dstLength))
return -EINVAL;
r = crypt_storage_init(&s, 0, cipher, cipher_mode, vk->key, vk->keylength);
if (r) {
log_dbg(cd, "Userspace crypto wrapper cannot use %s-%s (%d).",
cipher, cipher_mode, r);
return r;
}
r = device_read_lock(cd, device);
if (r) {
log_err(cd, _("Failed to acquire read lock on device %s."),
device_path(device));
crypt_storage_destroy(s);
return r;
}
devfd = device_open_locked(cd, device, O_RDONLY);
if (devfd >= 0) {
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), dst,
dstLength, sector * SECTOR_SIZE) < 0)
r = -EIO;
else
r = 0;
close(devfd);
} else
r = -EIO;
device_read_unlock(cd, device);
/* Decrypt buffer */
if (!r)
r = crypt_storage_decrypt(s, 0, dstLength / SECTOR_SIZE, dst);
else
log_err(cd, _("IO error while decrypting keyslot."));
crypt_storage_destroy(s);
return r;
#endif
}
static int luks2_keyslot_get_pbkdf_params(json_object *jobj_keyslot,
struct crypt_pbkdf_type *pbkdf, char *salt)
{
json_object *jobj_kdf, *jobj1, *jobj2;
size_t salt_len;
if (!jobj_keyslot || !pbkdf)
return -EINVAL;
memset(pbkdf, 0, sizeof(*pbkdf));
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf))
return -EINVAL;
if (!json_object_object_get_ex(jobj_kdf, "type", &jobj1))
return -EINVAL;
pbkdf->type = json_object_get_string(jobj1);
if (!strcmp(pbkdf->type, CRYPT_KDF_PBKDF2)) {
if (!json_object_object_get_ex(jobj_kdf, "hash", &jobj2))
return -EINVAL;
pbkdf->hash = json_object_get_string(jobj2);
if (!json_object_object_get_ex(jobj_kdf, "iterations", &jobj2))
return -EINVAL;
pbkdf->iterations = json_object_get_int(jobj2);
pbkdf->max_memory_kb = 0;
pbkdf->parallel_threads = 0;
} else {
if (!json_object_object_get_ex(jobj_kdf, "time", &jobj2))
return -EINVAL;
pbkdf->iterations = json_object_get_int(jobj2);
if (!json_object_object_get_ex(jobj_kdf, "memory", &jobj2))
return -EINVAL;
pbkdf->max_memory_kb = json_object_get_int(jobj2);
if (!json_object_object_get_ex(jobj_kdf, "cpus", &jobj2))
return -EINVAL;
pbkdf->parallel_threads = json_object_get_int(jobj2);
}
if (!json_object_object_get_ex(jobj_kdf, "salt", &jobj2))
return -EINVAL;
salt_len = LUKS_SALTSIZE;
if (!base64_decode(json_object_get_string(jobj2),
json_object_get_string_len(jobj2),
salt, &salt_len))
return -EINVAL;
if (salt_len != LUKS_SALTSIZE)
return -EINVAL;
return 0;
}
static int luks2_keyslot_set_key(struct crypt_device *cd,
json_object *jobj_keyslot,
const char *password, size_t passwordLen,
const char *volume_key, size_t volume_key_len)
{
struct volume_key *derived_key;
char salt[LUKS_SALTSIZE], cipher[MAX_CIPHER_LEN], cipher_mode[MAX_CIPHER_LEN];
char *AfKey = NULL;
const char *af_hash = NULL;
size_t AFEKSize, keyslot_key_len;
json_object *jobj2, *jobj_kdf, *jobj_af, *jobj_area;
uint64_t area_offset;
struct crypt_pbkdf_type pbkdf;
int r;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
/* prevent accidental volume key size change after allocation */
if (!json_object_object_get_ex(jobj_keyslot, "key_size", &jobj2))
return -EINVAL;
if (json_object_get_int(jobj2) != (int)volume_key_len)
return -EINVAL;
if (!json_object_object_get_ex(jobj_area, "offset", &jobj2))
return -EINVAL;
area_offset = json_object_get_uint64(jobj2);
if (!json_object_object_get_ex(jobj_area, "encryption", &jobj2))
return -EINVAL;
r = crypt_parse_name_and_mode(json_object_get_string(jobj2), cipher, NULL, cipher_mode);
if (r < 0)
return r;
if (!json_object_object_get_ex(jobj_area, "key_size", &jobj2))
return -EINVAL;
keyslot_key_len = json_object_get_int(jobj2);
if (!json_object_object_get_ex(jobj_af, "hash", &jobj2))
return -EINVAL;
af_hash = json_object_get_string(jobj2);
if (luks2_keyslot_get_pbkdf_params(jobj_keyslot, &pbkdf, salt))
return -EINVAL;
/*
* Allocate derived key storage.
*/
derived_key = crypt_alloc_volume_key(keyslot_key_len, NULL);
if (!derived_key)
return -ENOMEM;
/*
* Calculate keyslot content, split and store it to keyslot area.
*/
r = crypt_pbkdf(pbkdf.type, pbkdf.hash, password, passwordLen,
salt, LUKS_SALTSIZE,
derived_key->key, derived_key->keylength,
pbkdf.iterations, pbkdf.max_memory_kb,
pbkdf.parallel_threads);
if (r < 0) {
crypt_free_volume_key(derived_key);
return r;
}
// FIXME: verify key_size against AFEKSize
AFEKSize = AF_split_sectors(volume_key_len, LUKS_STRIPES) * SECTOR_SIZE;
AfKey = crypt_safe_alloc(AFEKSize);
if (!AfKey) {
crypt_free_volume_key(derived_key);
return -ENOMEM;
}
r = AF_split(cd, volume_key, AfKey, volume_key_len, LUKS_STRIPES, af_hash);
if (r == 0) {
log_dbg(cd, "Updating keyslot area [0x%04x].", (unsigned)area_offset);
/* FIXME: sector_offset should be size_t, fix LUKS_encrypt... accordingly */
r = luks2_encrypt_to_storage(AfKey, AFEKSize, cipher, cipher_mode,
derived_key, (unsigned)(area_offset / SECTOR_SIZE), cd);
}
crypt_safe_free(AfKey);
crypt_free_volume_key(derived_key);
if (r < 0)
return r;
return 0;
}
static int luks2_keyslot_get_key(struct crypt_device *cd,
json_object *jobj_keyslot,
const char *password, size_t passwordLen,
char *volume_key, size_t volume_key_len)
{
struct volume_key *derived_key;
struct crypt_pbkdf_type pbkdf;
char *AfKey;
size_t AFEKSize;
const char *af_hash = NULL;
char salt[LUKS_SALTSIZE], cipher[MAX_CIPHER_LEN], cipher_mode[MAX_CIPHER_LEN];
json_object *jobj2, *jobj_af, *jobj_area;
uint64_t area_offset;
size_t keyslot_key_len;
int r;
if (!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
if (luks2_keyslot_get_pbkdf_params(jobj_keyslot, &pbkdf, salt))
return -EINVAL;
if (!json_object_object_get_ex(jobj_af, "hash", &jobj2))
return -EINVAL;
af_hash = json_object_get_string(jobj2);
if (!json_object_object_get_ex(jobj_area, "offset", &jobj2))
return -EINVAL;
area_offset = json_object_get_uint64(jobj2);
if (!json_object_object_get_ex(jobj_area, "encryption", &jobj2))
return -EINVAL;
r = crypt_parse_name_and_mode(json_object_get_string(jobj2), cipher, NULL, cipher_mode);
if (r < 0)
return r;
if (!json_object_object_get_ex(jobj_area, "key_size", &jobj2))
return -EINVAL;
keyslot_key_len = json_object_get_int(jobj2);
/*
* Allocate derived key storage space.
*/
derived_key = crypt_alloc_volume_key(keyslot_key_len, NULL);
if (!derived_key)
return -ENOMEM;
AFEKSize = AF_split_sectors(volume_key_len, LUKS_STRIPES) * SECTOR_SIZE;
AfKey = crypt_safe_alloc(AFEKSize);
if (!AfKey) {
crypt_free_volume_key(derived_key);
return -ENOMEM;
}
/*
* Calculate derived key, decrypt keyslot content and merge it.
*/
r = crypt_pbkdf(pbkdf.type, pbkdf.hash, password, passwordLen,
salt, LUKS_SALTSIZE,
derived_key->key, derived_key->keylength,
pbkdf.iterations, pbkdf.max_memory_kb,
pbkdf.parallel_threads);
if (r == 0) {
log_dbg(cd, "Reading keyslot area [0x%04x].", (unsigned)area_offset);
/* FIXME: sector_offset should be size_t, fix LUKS_decrypt... accordingly */
r = luks2_decrypt_from_storage(AfKey, AFEKSize, cipher, cipher_mode,
derived_key, (unsigned)(area_offset / SECTOR_SIZE), cd);
}
if (r == 0)
r = AF_merge(cd, AfKey, volume_key, volume_key_len, LUKS_STRIPES, af_hash);
crypt_free_volume_key(derived_key);
crypt_safe_free(AfKey);
return r;
}
/*
* Currently only the following can be updated:
*
* - the AF hash function
* - the KDF parameters
*/
static int luks2_keyslot_update_json(struct crypt_device *cd,
json_object *jobj_keyslot,
const struct luks2_keyslot_params *params)
{
const struct crypt_pbkdf_type *pbkdf;
json_object *jobj_af, *jobj_area, *jobj_kdf;
char salt[LUKS_SALTSIZE], *salt_base64 = NULL;
int r;
/* jobj_keyslot is not yet validated */
if (!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
/* update area encryption parameters */
json_object_object_add(jobj_area, "encryption", json_object_new_string(params->area.raw.encryption));
json_object_object_add(jobj_area, "key_size", json_object_new_int(params->area.raw.key_size));
pbkdf = crypt_get_pbkdf_type(cd);
if (!pbkdf)
return -EINVAL;
r = crypt_benchmark_pbkdf_internal(cd, CONST_CAST(struct crypt_pbkdf_type *)pbkdf, params->area.raw.key_size);
if (r < 0)
return r;
/* refresh whole 'kdf' object */
jobj_kdf = json_object_new_object();
if (!jobj_kdf)
return -ENOMEM;
json_object_object_add(jobj_kdf, "type", json_object_new_string(pbkdf->type));
if (!strcmp(pbkdf->type, CRYPT_KDF_PBKDF2)) {
json_object_object_add(jobj_kdf, "hash", json_object_new_string(pbkdf->hash));
json_object_object_add(jobj_kdf, "iterations", json_object_new_int(pbkdf->iterations));
} else {
json_object_object_add(jobj_kdf, "time", json_object_new_int(pbkdf->iterations));
json_object_object_add(jobj_kdf, "memory", json_object_new_int(pbkdf->max_memory_kb));
json_object_object_add(jobj_kdf, "cpus", json_object_new_int(pbkdf->parallel_threads));
}
json_object_object_add(jobj_keyslot, "kdf", jobj_kdf);
/*
* Regenerate salt and add it in 'kdf' object
*/
r = crypt_random_get(cd, salt, LUKS_SALTSIZE, CRYPT_RND_SALT);
if (r < 0)
return r;
base64_encode_alloc(salt, LUKS_SALTSIZE, &salt_base64);
if (!salt_base64)
return -ENOMEM;
json_object_object_add(jobj_kdf, "salt", json_object_new_string(salt_base64));
free(salt_base64);
/* update 'af' hash */
json_object_object_add(jobj_af, "hash", json_object_new_string(params->af.luks1.hash));
JSON_DBG(cd, jobj_keyslot, "Keyslot JSON:");
return 0;
}
static int luks2_keyslot_alloc(struct crypt_device *cd,
int keyslot,
size_t volume_key_len,
const struct luks2_keyslot_params *params)
{
struct luks2_hdr *hdr;
uint64_t area_offset, area_length;
json_object *jobj_keyslots, *jobj_keyslot, *jobj_af, *jobj_area;
int r;
log_dbg(cd, "Trying to allocate LUKS2 keyslot %d.", keyslot);
if (!params || params->area_type != LUKS2_KEYSLOT_AREA_RAW ||
params->af_type != LUKS2_KEYSLOT_AF_LUKS1) {
log_dbg(cd, "Invalid LUKS2 keyslot parameters.");
return -EINVAL;
}
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
if (keyslot == CRYPT_ANY_SLOT)
keyslot = LUKS2_keyslot_find_empty(hdr, "luks2");
if (keyslot < 0 || keyslot >= LUKS2_KEYSLOTS_MAX)
return -ENOMEM;
if (LUKS2_get_keyslot_jobj(hdr, keyslot)) {
log_dbg(cd, "Cannot modify already active keyslot %d.", keyslot);
return -EINVAL;
}
if (!json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots))
return -EINVAL;
r = LUKS2_find_area_gap(cd, hdr, volume_key_len, &area_offset, &area_length);
if (r < 0)
return r;
jobj_keyslot = json_object_new_object();
json_object_object_add(jobj_keyslot, "type", json_object_new_string("luks2"));
json_object_object_add(jobj_keyslot, "key_size", json_object_new_int(volume_key_len));
/* AF object */
jobj_af = json_object_new_object();
json_object_object_add(jobj_af, "type", json_object_new_string("luks1"));
json_object_object_add(jobj_af, "stripes", json_object_new_int(params->af.luks1.stripes));
json_object_object_add(jobj_keyslot, "af", jobj_af);
/* Area object */
jobj_area = json_object_new_object();
json_object_object_add(jobj_area, "type", json_object_new_string("raw"));
json_object_object_add(jobj_area, "offset", json_object_new_uint64(area_offset));
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_length));
json_object_object_add(jobj_keyslot, "area", jobj_area);
json_object_object_add_by_uint(jobj_keyslots, keyslot, jobj_keyslot);
r = luks2_keyslot_update_json(cd, jobj_keyslot, params);
if (!r && LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new keyslot.");
r = -ENOSPC;
}
if (r)
json_object_object_del_by_uint(jobj_keyslots, keyslot);
return r;
}
static int luks2_keyslot_open(struct crypt_device *cd,
int keyslot,
const char *password,
size_t password_len,
char *volume_key,
size_t volume_key_len)
{
struct luks2_hdr *hdr;
json_object *jobj_keyslot;
log_dbg(cd, "Trying to open LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -EINVAL;
return luks2_keyslot_get_key(cd, jobj_keyslot,
password, password_len,
volume_key, volume_key_len);
}
/*
* This function must not modify json.
* It's called after luks2 keyslot validation.
*/
static int luks2_keyslot_store(struct crypt_device *cd,
int keyslot,
const char *password,
size_t password_len,
const char *volume_key,
size_t volume_key_len)
{
struct luks2_hdr *hdr;
json_object *jobj_keyslot;
int r;
log_dbg(cd, "Calculating attributes for LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -EINVAL;
r = luks2_keyslot_set_key(cd, jobj_keyslot,
password, password_len,
volume_key, volume_key_len);
if (r < 0)
return r;
r = LUKS2_hdr_write(cd, hdr);
if (r < 0)
return r;
return keyslot;
}
static int luks2_keyslot_wipe(struct crypt_device *cd, int keyslot)
{
struct luks2_hdr *hdr;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
/* Remove any reference of deleted keyslot from digests and tokens */
LUKS2_digest_assign(cd, hdr, keyslot, CRYPT_ANY_DIGEST, 0, 0);
LUKS2_token_assign(cd, hdr, keyslot, CRYPT_ANY_TOKEN, 0, 0);
return 0;
}
static int luks2_keyslot_dump(struct crypt_device *cd, int keyslot)
{
json_object *jobj_keyslot, *jobj1, *jobj_kdf, *jobj_af, *jobj_area;
jobj_keyslot = LUKS2_get_keyslot_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), keyslot);
if (!jobj_keyslot)
return -EINVAL;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
json_object_object_get_ex(jobj_area, "encryption", &jobj1);
log_std(cd, "\tCipher: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_area, "key_size", &jobj1);
log_std(cd, "\tCipher key: %u bits\n", json_object_get_uint32(jobj1) * 8);
json_object_object_get_ex(jobj_kdf, "type", &jobj1);
log_std(cd, "\tPBKDF: %s\n", json_object_get_string(jobj1));
if (!strcmp(json_object_get_string(jobj1), CRYPT_KDF_PBKDF2)) {
json_object_object_get_ex(jobj_kdf, "hash", &jobj1);
log_std(cd, "\tHash: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_kdf, "iterations", &jobj1);
log_std(cd, "\tIterations: %" PRIu64 "\n", json_object_get_uint64(jobj1));
} else {
json_object_object_get_ex(jobj_kdf, "time", &jobj1);
log_std(cd, "\tTime cost: %" PRIu64 "\n", json_object_get_int64(jobj1));
json_object_object_get_ex(jobj_kdf, "memory", &jobj1);
log_std(cd, "\tMemory: %" PRIu64 "\n", json_object_get_int64(jobj1));
json_object_object_get_ex(jobj_kdf, "cpus", &jobj1);
log_std(cd, "\tThreads: %" PRIu64 "\n", json_object_get_int64(jobj1));
}
json_object_object_get_ex(jobj_kdf, "salt", &jobj1);
log_std(cd, "\tSalt: ");
hexprint_base64(cd, jobj1, " ", " ");
json_object_object_get_ex(jobj_af, "stripes", &jobj1);
log_std(cd, "\tAF stripes: %u\n", json_object_get_int(jobj1));
json_object_object_get_ex(jobj_af, "hash", &jobj1);
log_std(cd, "\tAF hash: %s\n", json_object_get_string(jobj1));
json_object_object_get_ex(jobj_area, "offset", &jobj1);
log_std(cd, "\tArea offset:%" PRIu64 " [bytes]\n", json_object_get_uint64(jobj1));
json_object_object_get_ex(jobj_area, "size", &jobj1);
log_std(cd, "\tArea length:%" PRIu64 " [bytes]\n", json_object_get_uint64(jobj1));
return 0;
}
static int luks2_keyslot_validate(struct crypt_device *cd, json_object *jobj_keyslot)
{
json_object *jobj_kdf, *jobj_af, *jobj_area, *jobj1;
const char *type;
int count;
if (!jobj_keyslot)
return -EINVAL;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
count = json_object_object_length(jobj_kdf);
jobj1 = json_contains(cd, jobj_kdf, "", "kdf section", "type", json_type_string);
if (!jobj1)
return -EINVAL;
type = json_object_get_string(jobj1);
if (!strcmp(type, CRYPT_KDF_PBKDF2)) {
if (count != 4 || /* type, salt, hash, iterations only */
!json_contains(cd, jobj_kdf, "kdf type", type, "hash", json_type_string) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "iterations", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "salt", json_type_string))
return -EINVAL;
} else if (!strcmp(type, CRYPT_KDF_ARGON2I) || !strcmp(type, CRYPT_KDF_ARGON2ID)) {
if (count != 5 || /* type, salt, time, memory, cpus only */
!json_contains(cd, jobj_kdf, "kdf type", type, "time", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "memory", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "cpus", json_type_int) ||
!json_contains(cd, jobj_kdf, "kdf type", type, "salt", json_type_string))
return -EINVAL;
}
if (!json_object_object_get_ex(jobj_af, "type", &jobj1))
return -EINVAL;
if (!strcmp(json_object_get_string(jobj1), "luks1")) {
if (!json_contains(cd, jobj_af, "", "luks1 af", "hash", json_type_string) ||
!json_contains(cd, jobj_af, "", "luks1 af", "stripes", json_type_int))
return -EINVAL;
} else
return -EINVAL;
// FIXME check numbered
if (!json_object_object_get_ex(jobj_area, "type", &jobj1))
return -EINVAL;
if (!strcmp(json_object_get_string(jobj1), "raw")) {
if (!json_contains(cd, jobj_area, "area", "raw type", "encryption", json_type_string) ||
!json_contains(cd, jobj_area, "area", "raw type", "key_size", json_type_int) ||
!json_contains(cd, jobj_area, "area", "raw type", "offset", json_type_string) ||
!json_contains(cd, jobj_area, "area", "raw type", "size", json_type_string))
return -EINVAL;
} else
return -EINVAL;
return 0;
}
static int luks2_keyslot_update(struct crypt_device *cd,
int keyslot,
const struct luks2_keyslot_params *params)
{
struct luks2_hdr *hdr;
json_object *jobj_keyslot;
int r;
log_dbg(cd, "Updating LUKS2 keyslot %d.", keyslot);
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return -EINVAL;
r = luks2_keyslot_update_json(cd, jobj_keyslot, params);
if (!r && LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for updated keyslot %d.", keyslot);
r = -ENOSPC;
}
return r;
}
static void luks2_keyslot_repair(struct crypt_device *cd, json_object *jobj_keyslot)
{
const char *type;
json_object *jobj_kdf, *jobj_type;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_is_type(jobj_kdf, json_type_object))
return;
if (!json_object_object_get_ex(jobj_kdf, "type", &jobj_type) ||
!json_object_is_type(jobj_type, json_type_string))
return;
type = json_object_get_string(jobj_type);
if (!strcmp(type, CRYPT_KDF_PBKDF2)) {
/* type, salt, hash, iterations only */
json_object_object_foreach(jobj_kdf, key, val) {
UNUSED(val);
if (!strcmp(key, "type") || !strcmp(key, "salt") ||
!strcmp(key, "hash") || !strcmp(key, "iterations"))
continue;
json_object_object_del(jobj_kdf, key);
}
} else if (!strcmp(type, CRYPT_KDF_ARGON2I) || !strcmp(type, CRYPT_KDF_ARGON2ID)) {
/* type, salt, time, memory, cpus only */
json_object_object_foreach(jobj_kdf, key, val) {
UNUSED(val);
if (!strcmp(key, "type") || !strcmp(key, "salt") ||
!strcmp(key, "time") || !strcmp(key, "memory") ||
!strcmp(key, "cpus"))
continue;
json_object_object_del(jobj_kdf, key);
}
}
}
const keyslot_handler luks2_keyslot = {
.name = "luks2",
.alloc = luks2_keyslot_alloc,
.update = luks2_keyslot_update,
.open = luks2_keyslot_open,
.store = luks2_keyslot_store,
.wipe = luks2_keyslot_wipe,
.dump = luks2_keyslot_dump,
.validate = luks2_keyslot_validate,
.repair = luks2_keyslot_repair
};
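One arithmetic detail worth spelling out: the binary material written by luks2_keyslot_set_key() is the volume key expanded into LUKS_STRIPES anti-forensic stripes and rounded up to whole 512-byte sectors (AFEKSize), while the area reserved for it in the format code rounds the same product up to 4 KiB. A minimal sketch of the AFEKSize arithmetic, with helper names invented for this example:

#include <stdio.h>
#include <stdint.h>

#define DEMO_SECTOR_SIZE 512
#define DEMO_STRIPES 4000

/* Round n up to the next multiple of mult (mult > 0). */
static uint64_t demo_round_up(uint64_t n, uint64_t mult)
{
	return ((n + mult - 1) / mult) * mult;
}

/* Size of the AF-split keyslot material for a given volume key size. */
static uint64_t demo_af_bytes(uint64_t key_bytes)
{
	return demo_round_up(key_bytes * DEMO_STRIPES, DEMO_SECTOR_SIZE);
}

int main(void)
{
	/* A 64-byte (512-bit) volume key expands to 64 * 4000 = 256000 bytes,
	 * which is already a multiple of 512, so rounding adds nothing here. */
	printf("64-byte key -> %llu bytes of split key material\n",
	       (unsigned long long)demo_af_bytes(64));
	return 0;
}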


@@ -1,863 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, LUKS1 conversion code
*
* Copyright (C) 2015-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2015-2019 Ondrej Kozina
* Copyright (C) 2015-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "luks2_internal.h"
#include "../luks1/luks.h"
#include "../luks1/af.h"
static int json_luks1_keyslot(const struct luks_phdr *hdr_v1, int keyslot, struct json_object **keyslot_object)
{
char *base64_str, cipher[LUKS_CIPHERNAME_L+LUKS_CIPHERMODE_L];
size_t base64_len;
struct json_object *keyslot_obj, *field, *jobj_kdf, *jobj_af, *jobj_area;
uint64_t offset, area_size, offs_a, offs_b, length;
keyslot_obj = json_object_new_object();
json_object_object_add(keyslot_obj, "type", json_object_new_string("luks2"));
json_object_object_add(keyslot_obj, "key_size", json_object_new_int64(hdr_v1->keyBytes));
/* KDF */
jobj_kdf = json_object_new_object();
json_object_object_add(jobj_kdf, "type", json_object_new_string(CRYPT_KDF_PBKDF2));
json_object_object_add(jobj_kdf, "hash", json_object_new_string(hdr_v1->hashSpec));
json_object_object_add(jobj_kdf, "iterations", json_object_new_int64(hdr_v1->keyblock[keyslot].passwordIterations));
/* salt field */
base64_len = base64_encode_alloc(hdr_v1->keyblock[keyslot].passwordSalt, LUKS_SALTSIZE, &base64_str);
if (!base64_str) {
json_object_put(keyslot_obj);
json_object_put(jobj_kdf);
if (!base64_len)
return -EINVAL;
return -ENOMEM;
}
field = json_object_new_string_len(base64_str, base64_len);
free(base64_str);
json_object_object_add(jobj_kdf, "salt", field);
json_object_object_add(keyslot_obj, "kdf", jobj_kdf);
/* AF */
jobj_af = json_object_new_object();
json_object_object_add(jobj_af, "type", json_object_new_string("luks1"));
json_object_object_add(jobj_af, "hash", json_object_new_string(hdr_v1->hashSpec));
/* stripes field ignored, fixed to LUKS_STRIPES (4000) */
json_object_object_add(jobj_af, "stripes", json_object_new_int(4000));
json_object_object_add(keyslot_obj, "af", jobj_af);
/* Area */
jobj_area = json_object_new_object();
json_object_object_add(jobj_area, "type", json_object_new_string("raw"));
/* encryption algorithm field */
if (*hdr_v1->cipherMode != '\0') {
(void) snprintf(cipher, sizeof(cipher), "%s-%s", hdr_v1->cipherName, hdr_v1->cipherMode);
json_object_object_add(jobj_area, "encryption", json_object_new_string(cipher));
} else
json_object_object_add(jobj_area, "encryption", json_object_new_string(hdr_v1->cipherName));
/* area */
if (LUKS_keyslot_area(hdr_v1, 0, &offs_a, &length) ||
LUKS_keyslot_area(hdr_v1, 1, &offs_b, &length) ||
LUKS_keyslot_area(hdr_v1, keyslot, &offset, &length)) {
json_object_put(keyslot_obj);
json_object_put(jobj_area);
return -EINVAL;
}
area_size = offs_b - offs_a;
json_object_object_add(jobj_area, "key_size", json_object_new_int(hdr_v1->keyBytes));
json_object_object_add(jobj_area, "offset", json_object_new_uint64(offset));
json_object_object_add(jobj_area, "size", json_object_new_uint64(area_size));
json_object_object_add(keyslot_obj, "area", jobj_area);
*keyslot_object = keyslot_obj;
return 0;
}
static int json_luks1_keyslots(const struct luks_phdr *hdr_v1, struct json_object **keyslots_object)
{
int keyslot, r;
struct json_object *keyslot_obj, *field;
keyslot_obj = json_object_new_object();
if (!keyslot_obj)
return -ENOMEM;
for (keyslot = 0; keyslot < LUKS_NUMKEYS; keyslot++) {
if (hdr_v1->keyblock[keyslot].active != LUKS_KEY_ENABLED)
continue;
r = json_luks1_keyslot(hdr_v1, keyslot, &field);
if (r) {
json_object_put(keyslot_obj);
return r;
}
json_object_object_add_by_uint(keyslot_obj, keyslot, field);
}
*keyslots_object = keyslot_obj;
return 0;
}
static int json_luks1_segment(const struct luks_phdr *hdr_v1, struct json_object **segment_object)
{
const char *c;
char cipher[LUKS_CIPHERNAME_L+LUKS_CIPHERMODE_L];
struct json_object *segment_obj, *field;
uint64_t number;
segment_obj = json_object_new_object();
if (!segment_obj)
return -ENOMEM;
/* type field */
field = json_object_new_string("crypt");
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "type", field);
/* offset field */
number = (uint64_t)hdr_v1->payloadOffset * SECTOR_SIZE;
field = json_object_new_uint64(number);
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "offset", field);
/* iv_tweak field */
field = json_object_new_string("0");
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "iv_tweak", field);
/* length field */
field = json_object_new_string("dynamic");
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "size", field);
/* cipher field */
if (*hdr_v1->cipherMode != '\0') {
(void) snprintf(cipher, sizeof(cipher), "%s-%s", hdr_v1->cipherName, hdr_v1->cipherMode);
c = cipher;
} else
c = hdr_v1->cipherName;
field = json_object_new_string(c);
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "encryption", field);
/* block field */
field = json_object_new_int(SECTOR_SIZE);
if (!field) {
json_object_put(segment_obj);
return -ENOMEM;
}
json_object_object_add(segment_obj, "sector_size", field);
*segment_object = segment_obj;
return 0;
}
static int json_luks1_segments(const struct luks_phdr *hdr_v1, struct json_object **segments_object)
{
int r;
struct json_object *segments_obj, *field;
segments_obj = json_object_new_object();
if (!segments_obj)
return -ENOMEM;
r = json_luks1_segment(hdr_v1, &field);
if (r) {
json_object_put(segments_obj);
return r;
}
json_object_object_add_by_uint(segments_obj, CRYPT_DEFAULT_SEGMENT, field);
*segments_object = segments_obj;
return 0;
}
static int json_luks1_digest(const struct luks_phdr *hdr_v1, struct json_object **digest_object)
{
char keyslot_str[2], *base64_str;
int ks;
size_t base64_len;
struct json_object *digest_obj, *array, *field;
digest_obj = json_object_new_object();
if (!digest_obj)
return -ENOMEM;
/* type field */
field = json_object_new_string("pbkdf2");
if (!field) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "type", field);
/* keyslots array */
array = json_object_new_array();
if (!array) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "keyslots", json_object_get(array));
for (ks = 0; ks < LUKS_NUMKEYS; ks++) {
if (hdr_v1->keyblock[ks].active != LUKS_KEY_ENABLED)
continue;
(void) snprintf(keyslot_str, sizeof(keyslot_str), "%d", ks);
field = json_object_new_string(keyslot_str);
if (!field || json_object_array_add(array, field) < 0) {
json_object_put(field);
json_object_put(array);
json_object_put(digest_obj);
return -ENOMEM;
}
}
json_object_put(array);
/* segments array */
array = json_object_new_array();
if (!array) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "segments", json_object_get(array));
field = json_object_new_string("0");
if (!field || json_object_array_add(array, field) < 0) {
json_object_put(field);
json_object_put(array);
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_put(array);
/* hash field */
field = json_object_new_string(hdr_v1->hashSpec);
if (!field) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "hash", field);
/* salt field */
base64_len = base64_encode_alloc(hdr_v1->mkDigestSalt, LUKS_SALTSIZE, &base64_str);
if (!base64_str) {
json_object_put(digest_obj);
if (!base64_len)
return -EINVAL;
return -ENOMEM;
}
field = json_object_new_string_len(base64_str, base64_len);
free(base64_str);
if (!field) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "salt", field);
/* digest field */
base64_len = base64_encode_alloc(hdr_v1->mkDigest, LUKS_DIGESTSIZE, &base64_str);
if (!base64_str) {
json_object_put(digest_obj);
if (!base64_len)
return -EINVAL;
return -ENOMEM;
}
field = json_object_new_string_len(base64_str, base64_len);
free(base64_str);
if (!field) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "digest", field);
/* iterations field */
field = json_object_new_int64(hdr_v1->mkDigestIterations);
if (!field) {
json_object_put(digest_obj);
return -ENOMEM;
}
json_object_object_add(digest_obj, "iterations", field);
*digest_object = digest_obj;
return 0;
}
static int json_luks1_digests(const struct luks_phdr *hdr_v1, struct json_object **digests_object)
{
int r;
struct json_object *digests_obj, *field;
digests_obj = json_object_new_object();
if (!digests_obj)
return -ENOMEM;
r = json_luks1_digest(hdr_v1, &field);
if (r) {
json_object_put(digests_obj);
return r;
}
json_object_object_add(digests_obj, "0", field);
*digests_object = digests_obj;
return 0;
}
static int json_luks1_object(struct luks_phdr *hdr_v1, struct json_object **luks1_object, uint64_t keyslots_size)
{
int r;
struct json_object *luks1_obj, *field;
uint64_t json_size;
luks1_obj = json_object_new_object();
if (!luks1_obj)
return -ENOMEM;
/* keyslots field */
r = json_luks1_keyslots(hdr_v1, &field);
if (r) {
json_object_put(luks1_obj);
return r;
}
json_object_object_add(luks1_obj, "keyslots", field);
/* tokens field */
field = json_object_new_object();
if (!field) {
json_object_put(luks1_obj);
return -ENOMEM;
}
json_object_object_add(luks1_obj, "tokens", field);
/* segments field */
r = json_luks1_segments(hdr_v1, &field);
if (r) {
json_object_put(luks1_obj);
return r;
}
json_object_object_add(luks1_obj, "segments", field);
/* digests field */
r = json_luks1_digests(hdr_v1, &field);
if (r) {
json_object_put(luks1_obj);
return r;
}
json_object_object_add(luks1_obj, "digests", field);
/* config field */
/* anything else? */
field = json_object_new_object();
if (!field) {
json_object_put(luks1_obj);
return -ENOMEM;
}
json_object_object_add(luks1_obj, "config", field);
json_size = LUKS2_HDR_16K_LEN - LUKS2_HDR_BIN_LEN;
json_object_object_add(field, "json_size", json_object_new_uint64(json_size));
json_object_object_add(field, "keyslots_size", json_object_new_uint64(keyslots_size));
*luks1_object = luks1_obj;
return 0;
}
static void move_keyslot_offset(json_object *jobj, int offset_add)
{
json_object *jobj1, *jobj2, *jobj_area;
uint64_t offset = 0;
json_object_object_get_ex(jobj, "keyslots", &jobj1);
json_object_object_foreach(jobj1, key, val) {
UNUSED(key);
json_object_object_get_ex(val, "area", &jobj_area);
json_object_object_get_ex(jobj_area, "offset", &jobj2);
offset = json_object_get_uint64(jobj2) + offset_add;
json_object_object_add(jobj_area, "offset", json_object_new_uint64(offset));
}
}
/* FIXME: return specific error code for partial write error (aka keyslots are gone) */
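/*
* Copy sequence used below: the *new* area is read first only to verify that
* the target range exists on a possibly trimmed backing device, then the old
* area is read into an aligned buffer and written to the new offset in one
* blockwise write. A partial final write can damage keyslot material, hence
* the FIXME above.
*/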
static int move_keyslot_areas(struct crypt_device *cd, off_t offset_from,
off_t offset_to, size_t buf_size)
{
struct device *device = crypt_metadata_device(cd);
void *buf = NULL;
int r = -EIO, devfd = -1;
log_dbg(cd, "Moving keyslot areas of size %zu from %jd to %jd.",
buf_size, (intmax_t)offset_from, (intmax_t)offset_to);
if (posix_memalign(&buf, crypt_getpagesize(), buf_size))
return -ENOMEM;
devfd = device_open(cd, device, O_RDWR);
if (devfd == -1) {
free(buf);
return -EIO;
}
/* This can safely fail (for block devices). It only allocates space if it is possible. */
if (posix_fallocate(devfd, offset_to, buf_size))
log_dbg(cd, "Preallocation (fallocate) of new keyslot area not available.");
/* Try to read *new* area to check that area is there (trimmed backup). */
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_to) != (ssize_t)buf_size)
goto out;
if (read_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_from) != (ssize_t)buf_size)
goto out;
if (write_lseek_blockwise(devfd, device_block_size(cd, device),
device_alignment(device), buf, buf_size,
offset_to) != (ssize_t)buf_size)
goto out;
r = 0;
out:
device_sync(cd, device, devfd);
close(devfd);
crypt_memzero(buf, buf_size);
free(buf);
return r;
}
static int luks_header_in_use(struct crypt_device *cd)
{
int r;
r = lookup_dm_dev_by_uuid(cd, crypt_get_uuid(cd), crypt_get_type(cd));
if (r < 0)
log_err(cd, _("Can not check status of device with uuid: %s."), crypt_get_uuid(cd));
return r;
}
/* Check if there is a luksmeta area (foreign metadata created by the luksmeta package) */
static int luksmeta_header_present(struct crypt_device *cd, off_t luks1_size)
{
static const uint8_t LM_MAGIC[] = { 'L', 'U', 'K', 'S', 'M', 'E', 'T', 'A' };
struct device *device = crypt_metadata_device(cd);
void *buf = NULL;
int devfd, r = 0;
if (posix_memalign(&buf, crypt_getpagesize(), sizeof(LM_MAGIC)))
return -ENOMEM;
devfd = device_open(cd, device, O_RDONLY);
if (devfd == -1) {
free(buf);
return -EIO;
}
/* Note: we must not treat a read failure as a problem here, the header can be trimmed. */
if (read_lseek_blockwise(devfd, device_block_size(cd, device), device_alignment(device),
buf, sizeof(LM_MAGIC), luks1_size) == (ssize_t)sizeof(LM_MAGIC) &&
!memcmp(LM_MAGIC, buf, sizeof(LM_MAGIC))) {
log_err(cd, _("Unable to convert header with LUKSMETA additional metadata."));
r = -EBUSY;
}
close(devfd);
free(buf);
return r;
}
/* Convert LUKS1 -> LUKS2 */
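/*
* Layout change performed below (offsets assume the usual 4 KiB LUKS1 keyslot
* alignment and 16 KiB LUKS2 binary header, as the "4k -> 32k" comments in the
* code suggest):
*
*   LUKS1: [ phdr | keyslot material @ 4 KiB ................ | data ]
*   LUKS2: [ hdr copy 1 | hdr copy 2 | keyslot area @ 32 KiB  | data ]
*
* The keyslot material therefore moves forward by 2 * 16 KiB - 4 KiB
* (luks1_shift) and every keyslot "offset" in the generated JSON is shifted
* by the same amount via move_keyslot_offset().
*/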
int LUKS2_luks1_to_luks2(struct crypt_device *cd, struct luks_phdr *hdr1, struct luks2_hdr *hdr2)
{
int r;
json_object *jobj = NULL;
size_t buf_size, buf_offset, luks1_size, luks1_shift = 2 * LUKS2_HDR_16K_LEN - LUKS_ALIGN_KEYSLOTS;
uint64_t max_size = crypt_get_data_offset(cd) * SECTOR_SIZE;
/* for detached headers max size == device size */
if (!max_size && (r = device_size(crypt_metadata_device(cd), &max_size)))
return r;
luks1_size = LUKS_device_sectors(hdr1) << SECTOR_SHIFT;
luks1_size = size_round_up(luks1_size, LUKS_ALIGN_KEYSLOTS);
if (!luks1_size)
return -EINVAL;
if (LUKS_keyslots_offset(hdr1) != (LUKS_ALIGN_KEYSLOTS / SECTOR_SIZE)) {
log_dbg(cd, "Unsupported keyslots material offset: %zu.", LUKS_keyslots_offset(hdr1));
return -EINVAL;
}
if (luksmeta_header_present(cd, luks1_size))
return -EINVAL;
log_dbg(cd, "Max size: %" PRIu64 ", LUKS1 (full) header size %zu , required shift: %zu",
max_size, luks1_size, luks1_shift);
if ((max_size - luks1_size) < luks1_shift) {
log_err(cd, _("Unable to move keyslot area. Not enough space."));
return -EINVAL;
}
r = json_luks1_object(hdr1, &jobj, max_size - 2 * LUKS2_HDR_16K_LEN);
if (r < 0)
return r;
move_keyslot_offset(jobj, luks1_shift);
// fill hdr2
memset(hdr2, 0, sizeof(*hdr2));
hdr2->hdr_size = LUKS2_HDR_16K_LEN;
hdr2->seqid = 1;
hdr2->version = 2;
strncpy(hdr2->checksum_alg, "sha256", LUKS2_CHECKSUM_ALG_L);
crypt_random_get(cd, (char*)hdr2->salt1, sizeof(hdr2->salt1), CRYPT_RND_SALT);
crypt_random_get(cd, (char*)hdr2->salt2, sizeof(hdr2->salt2), CRYPT_RND_SALT);
strncpy(hdr2->uuid, crypt_get_uuid(cd), LUKS2_UUID_L-1); /* UUID should be max 36 chars */
hdr2->jobj = jobj;
/*
* This duplicates a check in LUKS2_hdr_write(), but we do not want to move
* the keyslot areas if the header write would fail later.
*/
if (max_size < LUKS2_hdr_and_areas_size(hdr2->jobj)) {
r = -EINVAL;
goto out;
}
if ((r = luks_header_in_use(cd))) {
if (r > 0)
r = -EBUSY;
goto out;
}
// move keyslots 4k -> 32k offset
buf_offset = 2 * LUKS2_HDR_16K_LEN;
buf_size = luks1_size - LUKS_ALIGN_KEYSLOTS;
if ((r = move_keyslot_areas(cd, 8 * SECTOR_SIZE, buf_offset, buf_size)) < 0) {
log_err(cd, _("Unable to move keyslot area."));
goto out;
}
// Write JSON hdr2
r = LUKS2_hdr_write(cd, hdr2);
out:
LUKS2_hdr_free(cd, hdr2);
return r;
}
static int keyslot_LUKS1_compatible(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, uint32_t key_size, const char *hash)
{
json_object *jobj_keyslot, *jobj, *jobj_kdf, *jobj_af;
uint64_t l2_offset, l2_length;
size_t ks_key_size;
const char *ks_cipher, *data_cipher;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr, keyslot);
if (!jobj_keyslot)
return 1;
if (!json_object_object_get_ex(jobj_keyslot, "type", &jobj) ||
strcmp(json_object_get_string(jobj), "luks2"))
return 0;
/* Using PBKDF2, this implies memory and parallel is not used. */
jobj = NULL;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf) ||
!json_object_object_get_ex(jobj_kdf, "type", &jobj) ||
strcmp(json_object_get_string(jobj), CRYPT_KDF_PBKDF2) ||
!json_object_object_get_ex(jobj_kdf, "hash", &jobj) ||
strcmp(json_object_get_string(jobj), hash))
return 0;
jobj = NULL;
if (!json_object_object_get_ex(jobj_keyslot, "af", &jobj_af) ||
!json_object_object_get_ex(jobj_af, "stripes", &jobj) ||
json_object_get_int(jobj) != LUKS_STRIPES)
return 0;
jobj = NULL;
if (!json_object_object_get_ex(jobj_af, "hash", &jobj) ||
(crypt_hash_size(json_object_get_string(jobj)) < 0) ||
strcmp(json_object_get_string(jobj), hash))
return 0;
/* FIXME: should this go to validation code instead (aka invalid luks2 header if assigned to segment 0)? */
/* FIXME: check all keyslots are assigned to segment id 0, and segments count == 1 */
ks_cipher = LUKS2_get_keyslot_cipher(hdr, keyslot, &ks_key_size);
data_cipher = LUKS2_get_cipher(hdr, CRYPT_DEFAULT_SEGMENT);
if (!ks_cipher || !data_cipher || key_size != ks_key_size || strcmp(ks_cipher, data_cipher)) {
log_dbg(cd, "Cipher in keyslot %d is different from volume key encryption.", keyslot);
return 0;
}
if (LUKS2_keyslot_area(hdr, keyslot, &l2_offset, &l2_length))
return 0;
if (l2_length != (size_round_up(AF_split_sectors(key_size, LUKS_STRIPES) * SECTOR_SIZE, 4096))) {
log_dbg(cd, "Area length in LUKS2 keyslot (%d) is not compatible with LUKS1", keyslot);
return 0;
}
return 1;
}
/* Convert LUKS2 -> LUKS1 */
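/*
* Summary of what the checks below require from a LUKS2 header before it can
* be converted back to LUKS1:
*   - exactly one digest, of type "pbkdf2" (its hash becomes hashSpec),
*   - no tokens,
*   - no wrapped-key cipher on the data segment,
*   - only keyslots 0..LUKS_NUMKEYS-1 in use, each LUKS1-compatible
*     (see keyslot_LUKS1_compatible() above),
*   - segment and keyslot offsets that fit the 32-bit LUKS1 fields.
*/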
int LUKS2_luks2_to_luks1(struct crypt_device *cd, struct luks2_hdr *hdr2, struct luks_phdr *hdr1)
{
size_t buf_size, buf_offset;
char cipher[LUKS_CIPHERNAME_L-1], cipher_mode[LUKS_CIPHERMODE_L-1];
char digest[LUKS_DIGESTSIZE], digest_salt[LUKS_SALTSIZE];
const char *hash;
size_t len;
json_object *jobj_keyslot, *jobj_digest, *jobj_segment, *jobj_kdf, *jobj_area, *jobj1, *jobj2;
uint32_t key_size;
int i, r, last_active = 0;
uint64_t offset, area_length;
char buf[256], luksMagic[] = LUKS_MAGIC;
jobj_digest = LUKS2_get_digest_jobj(hdr2, 0);
if (!jobj_digest)
return -EINVAL;
jobj_segment = LUKS2_get_segment_jobj(hdr2, CRYPT_DEFAULT_SEGMENT);
if (!jobj_segment)
return -EINVAL;
json_object_object_get_ex(hdr2->jobj, "digests", &jobj1);
if (!json_object_object_get_ex(jobj_digest, "type", &jobj2) ||
strcmp(json_object_get_string(jobj2), "pbkdf2") ||
json_object_object_length(jobj1) != 1) {
log_err(cd, _("Cannot convert to LUKS1 format - key slot digests are not LUKS1 compatible."));
return -EINVAL;
}
if (!json_object_object_get_ex(jobj_digest, "hash", &jobj2))
return -EINVAL;
hash = json_object_get_string(jobj2);
r = crypt_parse_name_and_mode(LUKS2_get_cipher(hdr2, CRYPT_DEFAULT_SEGMENT), cipher, NULL, cipher_mode);
if (r < 0)
return r;
if (crypt_cipher_wrapped_key(cipher, cipher_mode)) {
log_err(cd, _("Cannot convert to LUKS1 format - device uses wrapped key cipher %s."), cipher);
return -EINVAL;
}
r = LUKS2_tokens_count(hdr2);
if (r < 0)
return r;
if (r > 0) {
log_err(cd, _("Cannot convert to LUKS1 format - LUKS2 header contains %u token(s)."), r);
return -EINVAL;
}
r = LUKS2_get_volume_key_size(hdr2, 0);
if (r < 0)
return -EINVAL;
key_size = r;
for (i = 0; i < LUKS2_KEYSLOTS_MAX; i++) {
if (LUKS2_keyslot_info(hdr2, i) == CRYPT_SLOT_INACTIVE)
continue;
if (LUKS2_keyslot_info(hdr2, i) == CRYPT_SLOT_INVALID) {
log_err(cd, _("Cannot convert to LUKS1 format - keyslot %u is in invalid state."), i);
return -EINVAL;
}
if (i >= LUKS_NUMKEYS) {
log_err(cd, _("Cannot convert to LUKS1 format - slot %u (over maximum slots) is still active."), i);
return -EINVAL;
}
if (!keyslot_LUKS1_compatible(cd, hdr2, i, key_size, hash)) {
log_err(cd, _("Cannot convert to LUKS1 format - keyslot %u is not LUKS1 compatible."), i);
return -EINVAL;
}
}
memset(hdr1, 0, sizeof(*hdr1));
for (i = 0; i < LUKS_NUMKEYS; i++) {
hdr1->keyblock[i].active = LUKS_KEY_DISABLED;
hdr1->keyblock[i].stripes = LUKS_STRIPES;
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr2, i);
if (jobj_keyslot) {
if (!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
if (!json_object_object_get_ex(jobj_area, "offset", &jobj1))
return -EINVAL;
offset = json_object_get_uint64(jobj1);
} else {
if (LUKS2_find_area_gap(cd, hdr2, key_size, &offset, &area_length))
return -EINVAL;
/*
* We have to create placeholder luks2 keyslots in place of all
* inactive keyslots. Otherwise we would allocate all
* inactive luks1 keyslots over the same binary keyslot area.
*/
if (placeholder_keyslot_alloc(cd, i, offset, area_length, key_size))
return -EINVAL;
}
offset /= SECTOR_SIZE;
if (offset > UINT32_MAX)
return -EINVAL;
hdr1->keyblock[i].keyMaterialOffset = offset;
hdr1->keyblock[i].keyMaterialOffset -=
((2 * LUKS2_HDR_16K_LEN - LUKS_ALIGN_KEYSLOTS) / SECTOR_SIZE);
if (!jobj_keyslot)
continue;
hdr1->keyblock[i].active = LUKS_KEY_ENABLED;
last_active = i;
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf))
continue;
if (!json_object_object_get_ex(jobj_kdf, "iterations", &jobj1))
continue;
hdr1->keyblock[i].passwordIterations = json_object_get_uint32(jobj1);
if (!json_object_object_get_ex(jobj_kdf, "salt", &jobj1))
continue;
len = sizeof(buf);
memset(buf, 0, len);
if (!base64_decode(json_object_get_string(jobj1),
json_object_get_string_len(jobj1), buf, &len))
continue;
if (len > 0 && len != LUKS_SALTSIZE)
continue;
memcpy(hdr1->keyblock[i].passwordSalt, buf, LUKS_SALTSIZE);
}
if (!jobj_keyslot) {
jobj_keyslot = LUKS2_get_keyslot_jobj(hdr2, last_active);
if (!jobj_keyslot)
return -EINVAL;
}
if (!json_object_object_get_ex(jobj_keyslot, "area", &jobj_area))
return -EINVAL;
if (!json_object_object_get_ex(jobj_area, "encryption", &jobj1))
return -EINVAL;
r = crypt_parse_name_and_mode(json_object_get_string(jobj1), cipher, NULL, cipher_mode);
if (r < 0)
return r;
strncpy(hdr1->cipherName, cipher, sizeof(hdr1->cipherName) - 1);
strncpy(hdr1->cipherMode, cipher_mode, sizeof(hdr1->cipherMode) - 1);
if (!json_object_object_get_ex(jobj_keyslot, "kdf", &jobj_kdf))
return -EINVAL;
if (!json_object_object_get_ex(jobj_kdf, "hash", &jobj1))
return -EINVAL;
strncpy(hdr1->hashSpec, json_object_get_string(jobj1), sizeof(hdr1->hashSpec) - 1);
hdr1->keyBytes = key_size;
if (!json_object_object_get_ex(jobj_digest, "iterations", &jobj1))
return -EINVAL;
hdr1->mkDigestIterations = json_object_get_uint32(jobj1);
if (!json_object_object_get_ex(jobj_digest, "digest", &jobj1))
return -EINVAL;
len = sizeof(digest);
if (!base64_decode(json_object_get_string(jobj1),
json_object_get_string_len(jobj1), digest, &len))
return -EINVAL;
/* We can store full digest here, not only sha1 length */
if (len < LUKS_DIGESTSIZE)
return -EINVAL;
memcpy(hdr1->mkDigest, digest, LUKS_DIGESTSIZE);
if (!json_object_object_get_ex(jobj_digest, "salt", &jobj1))
return -EINVAL;
len = sizeof(digest_salt);
if (!base64_decode(json_object_get_string(jobj1),
json_object_get_string_len(jobj1), digest_salt, &len))
return -EINVAL;
if (len != LUKS_SALTSIZE)
return -EINVAL;
memcpy(hdr1->mkDigestSalt, digest_salt, LUKS_SALTSIZE);
if (!json_object_object_get_ex(jobj_segment, "offset", &jobj1))
return -EINVAL;
offset = json_object_get_uint64(jobj1) / SECTOR_SIZE;
if (offset > UINT32_MAX)
return -EINVAL;
/* FIXME: LUKS1 requires offset == 0 || offset >= luks1_hdr_size */
hdr1->payloadOffset = offset;
strncpy(hdr1->uuid, hdr2->uuid, UUID_STRING_L); /* max 36 chars */
hdr1->uuid[UUID_STRING_L-1] = '\0';
memcpy(hdr1->magic, luksMagic, LUKS_MAGIC_L);
hdr1->version = 1;
r = luks_header_in_use(cd);
if (r)
return r > 0 ? -EBUSY : r;
// move keyslots 32k -> 4k offset
buf_offset = 2 * LUKS2_HDR_16K_LEN;
buf_size = LUKS2_keyslots_size(hdr2->jobj);
r = move_keyslot_areas(cd, buf_offset, 8 * SECTOR_SIZE, buf_size);
if (r < 0) {
log_err(cd, _("Unable to move keyslot area."));
return r;
}
crypt_wipe_device(cd, crypt_metadata_device(cd), CRYPT_WIPE_ZERO, 0,
8 * SECTOR_SIZE, 8 * SECTOR_SIZE, NULL, NULL);
// Write LUKS1 hdr
return LUKS_write_phdr(hdr1, cd);
}


@@ -1,606 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, token handling
*
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Milan Broz
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <assert.h>
#include "luks2_internal.h"
/* Builtin tokens */
extern const crypt_token_handler keyring_handler;
static token_handler token_handlers[LUKS2_TOKENS_MAX] = {
/* keyring builtin token */
{
.get = token_keyring_get,
.set = token_keyring_set,
.h = &keyring_handler
},
};
static int is_builtin_candidate(const char *type)
{
return !strncmp(type, LUKS2_BUILTIN_TOKEN_PREFIX, LUKS2_BUILTIN_TOKEN_PREFIX_LEN);
}
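/*
* crypt_token_register() below is the public entry point for external token
* handlers. A minimal, hypothetical registration might look like this sketch
* (handler fields inferred from how they are used in this file):
*
*   static int my_open(struct crypt_device *cd, int token,
*                      char **buffer, size_t *buffer_len, void *usrptr);
*
*   static const crypt_token_handler my_handler = {
*           .name = "my-token",
*           .open = my_open,
*           // optional: .buffer_free, .validate, .dump
*   };
*
*   crypt_token_register(&my_handler);
*/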
int crypt_token_register(const crypt_token_handler *handler)
{
int i;
if (is_builtin_candidate(handler->name)) {
log_dbg(NULL, "'" LUKS2_BUILTIN_TOKEN_PREFIX "' is reserved prefix for builtin tokens.");
return -EINVAL;
}
for (i = 0; i < LUKS2_TOKENS_MAX && token_handlers[i].h; i++) {
if (!strcmp(token_handlers[i].h->name, handler->name)) {
log_dbg(NULL, "Keyslot handler %s is already registered.", handler->name);
return -EINVAL;
}
}
if (i == LUKS2_TOKENS_MAX)
return -EINVAL;
token_handlers[i].h = handler;
return 0;
}
static const token_handler
*LUKS2_token_handler_type_internal(struct crypt_device *cd, const char *type)
{
int i;
for (i = 0; i < LUKS2_TOKENS_MAX && token_handlers[i].h; i++)
if (!strcmp(token_handlers[i].h->name, type))
return token_handlers + i;
return NULL;
}
static const crypt_token_handler
*LUKS2_token_handler_type(struct crypt_device *cd, const char *type)
{
const token_handler *th = LUKS2_token_handler_type_internal(cd, type);
return th ? th->h : NULL;
}
static const token_handler
*LUKS2_token_handler_internal(struct crypt_device *cd, int token)
{
struct luks2_hdr *hdr;
json_object *jobj1, *jobj2;
if (token < 0)
return NULL;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return NULL;
if (!(jobj1 = LUKS2_get_token_jobj(hdr, token)))
return NULL;
if (!json_object_object_get_ex(jobj1, "type", &jobj2))
return NULL;
return LUKS2_token_handler_type_internal(cd, json_object_get_string(jobj2));
}
static const crypt_token_handler
*LUKS2_token_handler(struct crypt_device *cd, int token)
{
const token_handler *th = LUKS2_token_handler_internal(cd, token);
return th ? th->h : NULL;
}
static int LUKS2_token_find_free(struct luks2_hdr *hdr)
{
int i;
for (i = 0; i < LUKS2_TOKENS_MAX; i++)
if (!LUKS2_get_token_jobj(hdr, i))
return i;
return -EINVAL;
}
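/*
* LUKS2_token_create() below both creates and removes token objects:
*   - json == NULL deletes the token slot,
*   - token == CRYPT_ANY_TOKEN picks the first free slot,
*   - a type matching the builtin prefix is accepted only if a builtin
*     handler with a set callback is registered,
*   - the new JSON must pass LUKS2_token_validate(), the handler's own
*     validate callback (if any) and the header json-size check.
*/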
int LUKS2_token_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *json,
int commit)
{
const crypt_token_handler *h;
const token_handler *th;
json_object *jobj_tokens, *jobj_type, *jobj;
enum json_tokener_error jerr;
char num[16];
if (token == CRYPT_ANY_TOKEN) {
if (!json)
return -EINVAL;
token = LUKS2_token_find_free(hdr);
}
if (token < 0 || token >= LUKS2_TOKENS_MAX)
return -EINVAL;
if (!json_object_object_get_ex(hdr->jobj, "tokens", &jobj_tokens))
return -EINVAL;
snprintf(num, sizeof(num), "%d", token);
/* Remove token */
if (!json)
json_object_object_del(jobj_tokens, num);
else {
jobj = json_tokener_parse_verbose(json, &jerr);
if (!jobj) {
log_dbg(cd, "Token JSON parse failed.");
return -EINVAL;
}
if (LUKS2_token_validate(cd, hdr->jobj, jobj, num)) {
json_object_put(jobj);
return -EINVAL;
}
json_object_object_get_ex(jobj, "type", &jobj_type);
if (is_builtin_candidate(json_object_get_string(jobj_type))) {
th = LUKS2_token_handler_type_internal(cd, json_object_get_string(jobj_type));
if (!th || !th->set) {
log_dbg(cd, "%s is builtin token candidate with missing handler", json_object_get_string(jobj_type));
json_object_put(jobj);
return -EINVAL;
}
h = th->h;
} else
h = LUKS2_token_handler_type(cd, json_object_get_string(jobj_type));
if (h && h->validate && h->validate(cd, json)) {
json_object_put(jobj);
log_dbg(cd, "Token type %s validation failed.", h->name);
return -EINVAL;
}
json_object_object_add(jobj_tokens, num, jobj);
if (LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new token.");
json_object_object_del(jobj_tokens, num);
return -ENOSPC;
}
}
if (commit)
return LUKS2_hdr_write(cd, hdr) ?: token;
return token;
}
crypt_token_info LUKS2_token_status(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char **type)
{
const char *tmp;
const token_handler *th;
json_object *jobj_type, *jobj_token;
if (token < 0 || token >= LUKS2_TOKENS_MAX)
return CRYPT_TOKEN_INVALID;
if (!(jobj_token = LUKS2_get_token_jobj(hdr, token)))
return CRYPT_TOKEN_INACTIVE;
json_object_object_get_ex(jobj_token, "type", &jobj_type);
tmp = json_object_get_string(jobj_type);
if ((th = LUKS2_token_handler_type_internal(cd, tmp))) {
if (type)
*type = th->h->name;
return th->set ? CRYPT_TOKEN_INTERNAL : CRYPT_TOKEN_EXTERNAL;
}
if (type)
*type = tmp;
return is_builtin_candidate(tmp) ? CRYPT_TOKEN_INTERNAL_UNKNOWN : CRYPT_TOKEN_EXTERNAL_UNKNOWN;
}
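/*
* The builtin helpers below are called only from inside the library with a
* known builtin type (currently just the keyring token), so a missing handler
* or missing get/set callback is an internal error caught by assert() rather
* than a runtime failure.
*/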
int LUKS2_builtin_token_get(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *type,
void *params)
{
const token_handler *th = LUKS2_token_handler_type_internal(cd, type);
// internal error
assert(th && th->get);
return th->get(LUKS2_get_token_jobj(hdr, token), params) ?: token;
}
int LUKS2_builtin_token_create(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *type,
const void *params,
int commit)
{
const token_handler *th;
int r;
json_object *jobj_token, *jobj_tokens;
th = LUKS2_token_handler_type_internal(cd, type);
// at this point all builtin handlers must exist and have validate fn defined
assert(th && th->set && th->h->validate);
if (token == CRYPT_ANY_TOKEN) {
if ((token = LUKS2_token_find_free(hdr)) < 0)
log_err(cd, _("No free token slot."));
}
if (token < 0 || token >= LUKS2_TOKENS_MAX)
return -EINVAL;
r = th->set(&jobj_token, params);
if (r) {
log_err(cd, _("Failed to create builtin token %s."), type);
return r;
}
// builtin tokens must produce valid json
r = LUKS2_token_validate(cd, hdr->jobj, jobj_token, "new");
assert(!r);
r = th->h->validate(cd, json_object_to_json_string_ext(jobj_token,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE));
assert(!r);
json_object_object_get_ex(hdr->jobj, "tokens", &jobj_tokens);
json_object_object_add_by_uint(jobj_tokens, token, jobj_token);
if (LUKS2_check_json_size(cd, hdr)) {
log_dbg(cd, "Not enough space in header json area for new %s token.", type);
json_object_object_del_by_uint(jobj_tokens, token);
return -ENOSPC;
}
if (commit)
return LUKS2_hdr_write(cd, hdr) ?: token;
return token;
}
static int LUKS2_token_open(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
char **buffer,
size_t *buffer_len,
void *usrptr)
{
const char *json;
const crypt_token_handler *h;
int r;
if (!(h = LUKS2_token_handler(cd, token)))
return -ENOENT;
if (h->validate) {
if (LUKS2_token_json_get(cd, hdr, token, &json))
return -EINVAL;
if (h->validate(cd, json)) {
log_dbg(cd, "Token %d (%s) validation failed.", token, h->name);
return -EINVAL;
}
}
r = h->open(cd, token, buffer, buffer_len, usrptr);
if (r < 0)
log_dbg(cd, "Token %d (%s) open failed with %d.", token, h->name, r);
return r;
}
static void LUKS2_token_buffer_free(struct crypt_device *cd,
int token,
void *buffer,
size_t buffer_len)
{
const crypt_token_handler *h = LUKS2_token_handler(cd, token);
if (h->buffer_free)
h->buffer_free(buffer, buffer_len);
else {
crypt_memzero(buffer, buffer_len);
free(buffer);
}
}
static int LUKS2_keyslot_open_by_token(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
int segment,
const char *buffer,
size_t buffer_len,
struct volume_key **vk)
{
const crypt_token_handler *h;
json_object *jobj_token, *jobj_token_keyslots, *jobj;
const char *num = NULL;
int i, r;
if (!(h = LUKS2_token_handler(cd, token)))
return -ENOENT;
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)
return -EINVAL;
json_object_object_get_ex(jobj_token, "keyslots", &jobj_token_keyslots);
if (!jobj_token_keyslots)
return -EINVAL;
/* Try to open keyslot referenced in token */
r = -EINVAL;
for (i = 0; i < (int) json_object_array_length(jobj_token_keyslots) && r < 0; i++) {
jobj = json_object_array_get_idx(jobj_token_keyslots, i);
num = json_object_get_string(jobj);
log_dbg(cd, "Trying to open keyslot %s with token %d (type %s).", num, token, h->name);
r = LUKS2_keyslot_open(cd, atoi(num), segment, buffer, buffer_len, vk);
}
if (r >= 0 && num)
return atoi(num);
return r;
}
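/*
* Token-based activation as implemented below:
*   1. the token handler's open() produces a passphrase-like buffer,
*   2. every keyslot listed in the token's "keyslots" array is tried with
*      that buffer until one yields a volume key,
*   3. the volume key is optionally loaded into the kernel keyring and the
*      device is activated; on error any keyring key is dropped again.
* The *_any() variant repeats steps 1-2 for every token in the header.
*/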
int LUKS2_token_open_and_activate(struct crypt_device *cd,
struct luks2_hdr *hdr,
int token,
const char *name,
uint32_t flags,
void *usrptr)
{
int keyslot, r;
char *buffer;
size_t buffer_len;
struct volume_key *vk = NULL;
r = LUKS2_token_open(cd, hdr, token, &buffer, &buffer_len, usrptr);
if (r < 0)
return r;
r = LUKS2_keyslot_open_by_token(cd, hdr, token,
(flags & CRYPT_ACTIVATE_ALLOW_UNBOUND_KEY) ?
CRYPT_ANY_SEGMENT : CRYPT_DEFAULT_SEGMENT,
buffer, buffer_len, &vk);
LUKS2_token_buffer_free(cd, token, buffer, buffer_len);
if (r < 0)
return r;
keyslot = r;
if ((name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd))
r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot);
if (r >= 0 && name)
r = LUKS2_activate(cd, name, vk, flags);
if (r < 0 && vk)
crypt_drop_keyring_key(cd, vk->key_description);
crypt_free_volume_key(vk);
return r < 0 ? r : keyslot;
}
int LUKS2_token_open_and_activate_any(struct crypt_device *cd,
struct luks2_hdr *hdr,
const char *name,
uint32_t flags)
{
char *buffer;
json_object *tokens_jobj;
size_t buffer_len;
int keyslot, token, r = -EINVAL;
struct volume_key *vk = NULL;
json_object_object_get_ex(hdr->jobj, "tokens", &tokens_jobj);
json_object_object_foreach(tokens_jobj, slot, val) {
UNUSED(val);
token = atoi(slot);
r = LUKS2_token_open(cd, hdr, token, &buffer, &buffer_len, NULL);
if (r < 0)
continue;
r = LUKS2_keyslot_open_by_token(cd, hdr, token,
(flags & CRYPT_ACTIVATE_ALLOW_UNBOUND_KEY) ?
CRYPT_ANY_SEGMENT : CRYPT_DEFAULT_SEGMENT,
buffer, buffer_len, &vk);
LUKS2_token_buffer_free(cd, token, buffer, buffer_len);
if (r >= 0)
break;
}
keyslot = r;
if (r >= 0 && (name || (flags & CRYPT_ACTIVATE_KEYRING_KEY)) && crypt_use_keyring_for_vk(cd))
r = LUKS2_volume_key_load_in_keyring_by_keyslot(cd, hdr, vk, keyslot);
if (r >= 0 && name)
r = LUKS2_activate(cd, name, vk, flags);
if (r < 0 && vk)
crypt_drop_keyring_key(cd, vk->key_description);
crypt_free_volume_key(vk);
return r < 0 ? r : keyslot;
}
void LUKS2_token_dump(struct crypt_device *cd, int token)
{
const crypt_token_handler *h;
json_object *jobj_token;
h = LUKS2_token_handler(cd, token);
if (h && h->dump) {
jobj_token = LUKS2_get_token_jobj(crypt_get_hdr(cd, CRYPT_LUKS2), token);
if (jobj_token)
h->dump(cd, json_object_to_json_string_ext(jobj_token,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE));
}
}
int LUKS2_token_json_get(struct crypt_device *cd, struct luks2_hdr *hdr,
int token, const char **json)
{
json_object *jobj_token;
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)
return -EINVAL;
*json = json_object_to_json_string_ext(jobj_token,
JSON_C_TO_STRING_PLAIN | JSON_C_TO_STRING_NOSLASHESCAPE);
return 0;
}
static int assign_one_keyslot(struct crypt_device *cd, struct luks2_hdr *hdr,
int token, int keyslot, int assign)
{
json_object *jobj1, *jobj_token, *jobj_token_keyslots;
char num[16];
log_dbg(cd, "Keyslot %i %s token %i.", keyslot, assign ? "assigned to" : "unassigned from", token);
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)
return -EINVAL;
json_object_object_get_ex(jobj_token, "keyslots", &jobj_token_keyslots);
if (!jobj_token_keyslots)
return -EINVAL;
snprintf(num, sizeof(num), "%d", keyslot);
if (assign) {
jobj1 = LUKS2_array_jobj(jobj_token_keyslots, num);
if (!jobj1)
json_object_array_add(jobj_token_keyslots, json_object_new_string(num));
} else {
jobj1 = LUKS2_array_remove(jobj_token_keyslots, num);
if (jobj1)
json_object_object_add(jobj_token, "keyslots", jobj1);
}
return 0;
}
static int assign_one_token(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, int token, int assign)
{
json_object *jobj_keyslots;
int r = 0;
if (!LUKS2_get_token_jobj(hdr, token))
return -EINVAL;
if (keyslot == CRYPT_ANY_SLOT) {
json_object_object_get_ex(hdr->jobj, "keyslots", &jobj_keyslots);
json_object_object_foreach(jobj_keyslots, key, val) {
UNUSED(val);
r = assign_one_keyslot(cd, hdr, token, atoi(key), assign);
if (r < 0)
break;
}
} else
r = assign_one_keyslot(cd, hdr, token, keyslot, assign);
return r;
}
int LUKS2_token_assign(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, int token, int assign, int commit)
{
json_object *jobj_tokens;
int r = 0;
if (token == CRYPT_ANY_TOKEN) {
json_object_object_get_ex(hdr->jobj, "tokens", &jobj_tokens);
json_object_object_foreach(jobj_tokens, key, val) {
UNUSED(val);
r = assign_one_token(cd, hdr, keyslot, atoi(key), assign);
if (r < 0)
break;
}
} else
r = assign_one_token(cd, hdr, keyslot, token, assign);
if (r < 0)
return r;
// FIXME: do not write header if nothing changed
if (commit)
return LUKS2_hdr_write(cd, hdr) ?: token;
return token;
}
int LUKS2_token_is_assigned(struct crypt_device *cd, struct luks2_hdr *hdr,
int keyslot, int token)
{
int i;
json_object *jobj_token, *jobj_token_keyslots, *jobj;
if (keyslot < 0 || keyslot >= LUKS2_KEYSLOTS_MAX || token < 0 || token >= LUKS2_TOKENS_MAX)
return -EINVAL;
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)
return -ENOENT;
json_object_object_get_ex(jobj_token, "keyslots", &jobj_token_keyslots);
for (i = 0; i < (int) json_object_array_length(jobj_token_keyslots); i++) {
jobj = json_object_array_get_idx(jobj_token_keyslots, i);
if (keyslot == atoi(json_object_get_string(jobj)))
return 0;
}
return -ENOENT;
}
int LUKS2_tokens_count(struct luks2_hdr *hdr)
{
json_object *jobj_tokens = LUKS2_get_tokens_jobj(hdr);
if (!jobj_tokens)
return -EINVAL;
return json_object_object_length(jobj_tokens);
}


@@ -1,170 +0,0 @@
/*
* LUKS - Linux Unified Key Setup v2, kernel keyring token
*
* Copyright (C) 2016-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2016-2019 Ondrej Kozina
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <assert.h>
#include "luks2_internal.h"
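/*
* Illustrative shape of the builtin kernel-keyring token handled by this file
* (the key_description value is a made-up example; the type string is the
* LUKS2_TOKEN_KEYRING constant):
*
*   {
*     "type": "luks2-keyring",
*     "keyslots": ["0"],
*     "key_description": "cryptsetup:example-passphrase"
*   }
*
* keyring_validate() below requires exactly three fields and a non-empty
* string key_description.
*/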
static int keyring_open(struct crypt_device *cd,
int token,
char **buffer,
size_t *buffer_len,
void *usrptr __attribute__((unused)))
{
json_object *jobj_token, *jobj_key;
struct luks2_hdr *hdr;
int r;
if (!(hdr = crypt_get_hdr(cd, CRYPT_LUKS2)))
return -EINVAL;
jobj_token = LUKS2_get_token_jobj(hdr, token);
if (!jobj_token)
return -EINVAL;
json_object_object_get_ex(jobj_token, "key_description", &jobj_key);
r = keyring_get_passphrase(json_object_get_string(jobj_key), buffer, buffer_len);
if (r == -ENOTSUP) {
log_dbg(cd, "Kernel keyring features disabled.");
return -EINVAL;
} else if (r < 0) {
log_dbg(cd, "keyring_get_passphrase failed (error %d)", r);
return -EINVAL;
}
return 0;
}
static int keyring_validate(struct crypt_device *cd,
const char *json)
{
enum json_tokener_error jerr;
json_object *jobj_token, *jobj_key;
int r = 1;
log_dbg(cd, "Validating keyring token json");
jobj_token = json_tokener_parse_verbose(json, &jerr);
if (!jobj_token) {
log_dbg(cd, "Keyring token JSON parse failed.");
return r;
}
if (json_object_object_length(jobj_token) != 3) {
log_dbg(cd, "Keyring token is expected to have exactly 3 fields.");
goto out;
}
if (!json_object_object_get_ex(jobj_token, "key_description", &jobj_key)) {
log_dbg(cd, "missing key_description field.");
goto out;
}
if (!json_object_is_type(jobj_key, json_type_string)) {
log_dbg(cd, "key_description is not a string.");
goto out;
}
/* TODO: perhaps check that key description is in '%s:%s'
* format where both strings are not empty */
r = !strlen(json_object_get_string(jobj_key));
out:
json_object_put(jobj_token);
return r;
}
static void keyring_dump(struct crypt_device *cd, const char *json)
{
enum json_tokener_error jerr;
json_object *jobj_token, *jobj_key;
jobj_token = json_tokener_parse_verbose(json, &jerr);
if (!jobj_token)
return;
if (!json_object_object_get_ex(jobj_token, "key_description", &jobj_key)) {
json_object_put(jobj_token);
return;
}
log_std(cd, "\tKey description: %s\n", json_object_get_string(jobj_key));
json_object_put(jobj_token);
}
int token_keyring_set(json_object **jobj_builtin_token,
const void *params)
{
json_object *jobj_token, *jobj;
const struct crypt_token_params_luks2_keyring *keyring_params = (const struct crypt_token_params_luks2_keyring *) params;
jobj_token = json_object_new_object();
if (!jobj_token)
return -ENOMEM;
jobj = json_object_new_string(LUKS2_TOKEN_KEYRING);
if (!jobj) {
json_object_put(jobj_token);
return -ENOMEM;
}
json_object_object_add(jobj_token, "type", jobj);
jobj = json_object_new_array();
if (!jobj) {
json_object_put(jobj_token);
return -ENOMEM;
}
json_object_object_add(jobj_token, "keyslots", jobj);
jobj = json_object_new_string(keyring_params->key_description);
if (!jobj) {
json_object_put(jobj_token);
return -ENOMEM;
}
json_object_object_add(jobj_token, "key_description", jobj);
*jobj_builtin_token = jobj_token;
return 0;
}
int token_keyring_get(json_object *jobj_token,
void *params)
{
json_object *jobj;
struct crypt_token_params_luks2_keyring *keyring_params = (struct crypt_token_params_luks2_keyring *) params;
json_object_object_get_ex(jobj_token, "type", &jobj);
assert(!strcmp(json_object_get_string(jobj), LUKS2_TOKEN_KEYRING));
json_object_object_get_ex(jobj_token, "key_description", &jobj);
keyring_params->key_description = json_object_get_string(jobj);
return 0;
}
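/*
* Note: the get/set callbacks for this builtin token are not part of the
* public handler below; they are wired directly into the token_handlers[]
* table in luks2_token.c (token_keyring_get/token_keyring_set above). With no
* buffer_free defined, the generic crypt_memzero() + free() fallback in
* LUKS2_token_buffer_free() releases the passphrase buffer.
*/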
const crypt_token_handler keyring_handler = {
.name = LUKS2_TOKEN_KEYRING,
.open = keyring_open,
.validate = keyring_validate,
.dump = keyring_dump
};


@@ -1,7 +1,7 @@
/*
* cryptsetup kernel RNG access functions
*
* Copyright (C) 2010-2019 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2012, Red Hat, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -28,10 +28,6 @@
#include "libcryptsetup.h"
#include "internal.h"
#ifndef O_CLOEXEC
#define O_CLOEXEC 0
#endif
static int random_initialised = 0;
#define URANDOM_DEVICE "/dev/urandom"
@@ -156,24 +152,24 @@ int crypt_random_init(struct crypt_device *ctx)
/* Used for CRYPT_RND_NORMAL */
if(urandom_fd == -1)
urandom_fd = open(URANDOM_DEVICE, O_RDONLY | O_CLOEXEC);
urandom_fd = open(URANDOM_DEVICE, O_RDONLY);
if(urandom_fd == -1)
goto fail;
/* Used for CRYPT_RND_KEY */
if(random_fd == -1)
random_fd = open(RANDOM_DEVICE, O_RDONLY | O_NONBLOCK | O_CLOEXEC);
random_fd = open(RANDOM_DEVICE, O_RDONLY | O_NONBLOCK);
if(random_fd == -1)
goto fail;
if (crypt_fips_mode())
log_verbose(ctx, _("Running in FIPS mode."));
log_verbose(ctx, _("Running in FIPS mode.\n"));
random_initialised = 1;
return 0;
fail:
crypt_random_exit();
log_err(ctx, _("Fatal error during RNG initialisation."));
log_err(ctx, _("Fatal error during RNG initialisation.\n"));
return -ENOSYS;
}
@@ -210,12 +206,13 @@ int crypt_random_get(struct crypt_device *ctx, char *buf, size_t len, int qualit
}
break;
default:
log_err(ctx, _("Unknown RNG quality requested."));
log_err(ctx, _("Unknown RNG quality requested.\n"));
return -EINVAL;
}
if (status)
log_err(ctx, _("Error reading from RNG."));
log_err(ctx, _("Error %d reading from RNG: %s\n"),
errno, strerror(errno));
return status;
}

File diff suppressed because it is too large.

lib/tcrypt/Makefile.am Normal file

@@ -0,0 +1,14 @@
moduledir = $(libdir)/cryptsetup
noinst_LTLIBRARIES = libtcrypt.la
libtcrypt_la_CFLAGS = -Wall $(AM_CFLAGS) @CRYPTO_CFLAGS@
libtcrypt_la_SOURCES = \
tcrypt.c \
tcrypt.h
AM_CPPFLAGS = -include config.h \
-I$(top_srcdir)/lib \
-I$(top_srcdir)/lib/crypto_backend

Some files were not shown because too many files have changed in this diff.