Don't use a 7.1 EAC3 input file for which our decoder is not
bitexact; instead just use the asynth-44100-8.wav file,
which (as a 7.1 file) exhibits the same issue fixed by
1b3f4842c1.
(Either the encoder or the resampler is still not completely
bitexact, so we limit the number of frames output.)
Also switch to a framecrc test so that the output channel layout
is directly contained in the ref file.
Reviewed-by: James Almer <jamrial@gmail.com>
Reviewed-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The file has buggy timestamps (it uses B-frames, yet pts == dts),
and therefore the last frame is currently discarded by the FFmpeg CLI.
Using -fps_mode passthrough avoids this and provides coverage
of the SVQ3 draining logic.
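For illustration, a command of roughly this shape exercises that path
(the input name is only a placeholder for an SVQ3 sample):

  ffmpeg -i svq3-sample.mov -an -fps_mode passthrough -f framecrc -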
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
I have several .ts captures where the video and audio codecs change even
though the PMT version does not change and the PIDs stay the same.
This happens during transitions between slate (mpeg2 video and audio)
and network broadcast (hevc video and eac3 audio in private PES).
I've updated the fate ts-demux expected results.
Bitstream generated using the reference encoder, then edited to fix the
colour description and to add an extra metadata block. FFmpeg decoder
output is identical to the reference decoder output.
The content used is the first three frames of "Waterfall" from the SVT
Open Content Video Test Suite 2022. This is copyright Sveriges
Television AB and is used under the Creative Commons Attribution 4.0
International License.
The earlier code only used the counts from the last slice.
The two FATE tests using slices show compression improvements
due to this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is unnecessary, as we already have the entries sorted by
probability and therefore implicitly by length. All we need
on top of that to build the tree is the number of entries
of a given length.
Doing so gives a 3.6% speedup of ff_mjpeg_encode_huffman_close()
here; it also saves about 640B of .text.
The new code puts values with higher probability to the left
of the tree. The old code did not and therefore
the FATE checksums needed to be updated. Due to MJPEG's
0xFF escaping, file sizes as well as file checksums
needed to be updated; the decoded picture hashes stayed
the same. Given that codes on the left of the tree have
on average fewer bits set than codes on the right, the
file sizes mostly improve (all except vsynth3-mjpeg-444).
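As a rough sketch of the construction (not the actual FFmpeg code; the
identifiers below are made up): with the symbols already ordered from most
to least probable, the per-length counts are all that is needed to hand out
canonical codes, and earlier (more probable) symbols get numerically smaller
codes, i.e. they end up on the left of the tree.

  #include <stdint.h>

  /* Sketch only: bits[len] = number of codes of length len (1..16);
   * symbols are already sorted so that shorter codes come first. */
  static void assign_canonical_codes(const int bits[17],
                                     uint16_t *codes, uint8_t *lens)
  {
      unsigned code = 0, k = 0;
      for (int len = 1; len <= 16; len++) {
          for (int i = 0; i < bits[len]; i++) {
              lens[k]  = len;
              codes[k] = code++; /* earlier symbols: smaller codes, left side */
              k++;
          }
          code <<= 1; /* move down one level before the next length */
      }
  }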
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
A sample with a particular partitioning structure that could not be read
correctly before 26c5d8cf5d.
Signed-off-by: Frank Plowman <post@frankplowman.com>
Signed-off-by: James Almer <jamrial@gmail.com>
This is a good compromise between speed and compression:
-rw-r----- 1 michael michael 180987 Feb 6 14:29 lena-def.png
-rw-r----- 1 michael michael 128430 Feb 6 14:36 lena-pavg.png
-rw-r----- 1 michael michael 126269 Feb 6 14:36 lena-pmixed.png
-rw-r----- 1 michael michael 180987 Feb 6 14:35 lena-pnone.png
-rw-r----- 1 michael michael 127758 Feb 6 14:35 lena-ppaeth.png
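For reference, results like the above can be reproduced with commands along
these lines (the input file name is a placeholder; the values are assumed to
be those of the png encoder's pred option):

  ffmpeg -i lena.pnm -pred none  lena-pnone.png
  ffmpeg -i lena.pnm -pred avg   lena-pavg.png
  ffmpeg -i lena.pnm -pred paeth lena-ppaeth.png
  ffmpeg -i lena.pnm -pred mixed lena-pmixed.png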
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Since 45eeb1f785
optimal Huffman tables are the default (without slice-threading).
This made the fate-vsynth*-mjpeg{-trell,}-huffman tests
identical to their corresponding tests without "-huffman".
This is of course wasteful, so switch the two tests with
"-huffman" counterparts back to the default tables.
Also use one of these tests to test slice-threaded encoding,
which has so far been untested.
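As an illustration (the exact FATE recipes are not reproduced here, so the
options and file names below are only an approximation), the two variants
now differ again along the lines of:

  ffmpeg -i vsynth.y4m -c:v mjpeg -huffman default -slices 4 -threads 4 out-default.avi
  ffmpeg -i vsynth.y4m -c:v mjpeg -huffman optimal out-optimal.avi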
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is currently not due to endianness. This forced us to add
workarounds with sed in fate/mxf.mak (which are removed
in this commit).
This is supposed to fix the enhanced-flv-hevc-hdr10 test
on big endian systems.
Reviewed-by: Zhao Zhili <quinkblack-at-foxmail.com@ffmpeg.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This commit adds a 32-bit *integer* planar RGBA format.
Vulkan FFv1 decoding is best performed on separate planes, rather than
packed RGBA (i.e. RGBA128), hence this is useful as an intermediate format.
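A minimal sketch of how such a frame would be accessed, assuming the new
format follows FFmpeg's usual planar ordering (data[0]=G, data[1]=B,
data[2]=R, data[3]=A); the function name is illustrative only:

  #include <stdint.h>
  #include <libavutil/frame.h>

  /* Read one pixel from a 32-bit integer planar RGBA frame. */
  static void read_pixel(const AVFrame *f, int x, int y,
                         uint32_t *r, uint32_t *g, uint32_t *b, uint32_t *a)
  {
      *g = ((const uint32_t *)(f->data[0] + y * f->linesize[0]))[x];
      *b = ((const uint32_t *)(f->data[1] + y * f->linesize[1]))[x];
      *r = ((const uint32_t *)(f->data[2] + y * f->linesize[2]))[x];
      *a = ((const uint32_t *)(f->data[3] + y * f->linesize[3]))[x];
  }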
Encoding was untested before this.
Notice that the filesize degradation is partially due to
mpegvideo no longer using progressive_sequence and
progressive_frame.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When there are multiple candidates for macroblock type, the encoder
tries them all. In order to do so, it keeps several sets of states
containing the variables that get modified when encoding
the macroblock and in the end uses the best of these.
Yet one variable was set, but not included in this state:
The current macroblock's qscale value in the current picture's
qscale_table. This may currently be set multiple times in
mpv_reconstruct_mb(), yet it is read when adaptive_quant is true.
Currently, the value read can be the value set by the last attempt
to write the current macroblock and not the initial value.
Fix this by only setting the qscale_table value in one place
outside of mpv_reconstruct_mb() (where it does not belong at all).
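Schematically, the pattern described above looks roughly like this toy
version (identifiers invented for illustration, not the actual FFmpeg code):

  #include <limits.h>
  #include <stdint.h>

  /* Per-candidate state that is saved/restored; the qscale table is not part of it. */
  typedef struct ToyMBState { int bit_count; int qscale; } ToyMBState;

  static void try_candidate(ToyMBState *s, int type)
  {
      /* Stand-in for encoding the macroblock with one candidate type;
       * crucially, it no longer writes the qscale_table entry. */
      s->bit_count += 10 + type;
  }

  static void encode_mb(ToyMBState *s, int8_t *qscale_table, int mb_xy,
                        const int *types, int ntypes)
  {
      ToyMBState start = *s, best = *s;
      int best_bits = INT_MAX;

      qscale_table[mb_xy] = (int8_t)s->qscale; /* written once, up front */

      for (int i = 0; i < ntypes; i++) {
          *s = start;              /* rewind the variables each attempt modifies */
          try_candidate(s, types[i]);
          if (s->bit_count < best_bits) {
              best_bits = s->bit_count;
              best      = *s;
          }
      }
      *s = best;                   /* keep the best of the tried candidates */
  }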
Reviewed-by: Ramiro Polla <ramiro.polla@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>