Compare commits


141 Commits

Michael Niedermayer
dc91b913b6 RELEASE_NOTES: Based on the version from 4.3
Name suggested by Lynne, Gyan, Reto, Zane, Jan, Derek

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-08 22:55:16 +02:00
Michael Niedermayer
aeba1a4c20 avcodec/msp2dec: Check available space in RLE decoder
Fixes: out of array read
Fixes: 32968/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MSP2_fuzzer-5315296027082752

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit caaf463311)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-08 22:55:16 +02:00
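The actual fix lives in libavcodec/msp2dec.c; as a rough, hypothetical sketch of the kind of space check an RLE decoder needs before expanding a run (none of these names are FFmpeg's):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical RLE decoder: expands (count, value) pairs into dst.
 * Returns 0 on success, -1 if either buffer would be overrun. */
int rle_decode(uint8_t *dst, size_t dst_size,
               const uint8_t *src, size_t src_size)
{
    size_t out = 0;

    while (src_size >= 2) {
        unsigned count = src[0];
        uint8_t  value = src[1];
        src      += 2;
        src_size -= 2;

        if (count > dst_size - out)      /* check available space before writing */
            return -1;
        for (unsigned i = 0; i < count; i++)
            dst[out++] = value;
    }
    return src_size == 0 ? 0 : -1;       /* a trailing byte means truncated input */
}
```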
Michael Niedermayer
d22550dd61 avformat/mov: check offset for overflow in mov_probe()
Fixes: Invalid read of size 4
Fixes: ASAN_Deadlysignal.zip

Found-by: Hardik Shah <hardik05@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 0f6a3405e8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-08 22:55:16 +02:00
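The general pattern behind this kind of probe fix is to reject an untrusted atom size before adding it to the running offset; a minimal sketch of that pattern (assumed, not the actual mov.c code):

```c
#include <stdint.h>

/* Advance a file offset by an untrusted atom size without overflowing.
 * Assumes *offset is non-negative; returns 0 on success, -1 on overflow. */
int advance_offset(int64_t *offset, int64_t size)
{
    if (size < 0 || size > INT64_MAX - *offset)
        return -1;
    *offset += size;
    return 0;
}
```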
Anton Khirnov
2a7f1bc282 lavc/pngdec: always create a copy for APNG_DISPOSE_OP_BACKGROUND
Calling av_frame_make_writable() from decoders is tricky, especially
when frame threading is used. It is much simpler and safer to just make
a private copy of the frame.
This is not expected to have a major performance impact, since
APNG_DISPOSE_OP_BACKGROUND is not used often and
av_frame_make_writable() would typically make a copy anyway.

Found-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b593abda6c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-08 22:55:16 +02:00
Marton Balint
25e794a1ea avformat/url: add ff_make_absolute_url2 to be able to test Windows path cases
Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit fb4da90fec)
2021-04-08 17:38:06 +02:00
Marton Balint
d622923b36 avformat/url: fix ff_make_absolute_url with Windows file paths
Ugly, but a lot less broken than it was.

Fixes ticket #9166.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 5dc5f289ce)
2021-04-08 17:35:09 +02:00
Anton Khirnov
c64180fac8 lavc/pngdec: improve chunk length check
The length does not cover the chunk type or CRC.

(cherry picked from commit ae08eec6a1)
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2021-04-08 14:15:30 +02:00
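For context: a PNG chunk is laid out as length (4 bytes), type (4), data (length bytes), CRC (4), and the length field counts only the data. A hedged sketch of the kind of check the commit describes, assuming bytes_left counts everything remaining after the length field:

```c
#include <stdint.h>

/* After reading the 4-byte length of a PNG chunk, the remaining input must
 * still hold the 4-byte type, `length` bytes of data and a 4-byte CRC. */
int chunk_fits(uint32_t length, uint64_t bytes_left)
{
    return (uint64_t)length + 8 <= bytes_left;
}
```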
Anton Khirnov
8ee432dc23 lavc/pngdec: restructure exporting frame meta/side data
This data cannot be stored in PNGDecContext.picture, because the
corresponding chunks may be read after the call to
ff_thread_finish_setup(), at which point modifying shared context data
is a race.

Store intermediate state in the context and then write it directly to
the output frame.

Fixes exporting frame metadata after 5663301560
Fixes #8972

Found-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 8d74baccff)
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2021-04-08 14:15:30 +02:00
Anton Khirnov
5f21bbed8a lavc/pngdec: remove unnecessary context variables
Do not store the image buffer pointer/linesize in the context, just
access them directly from the frame.
Stop assuming that linesize is the same for the current and last frame.

(cherry picked from commit 89ea5057bf)
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2021-04-08 14:15:30 +02:00
Anton Khirnov
53ecdbfbe5 lavc/pngdec: perform APNG blending in-place
Saves an allocation+free and two frame copies per frame.

(cherry picked from commit 5a50bd88db)
Signed-off-by: Anton Khirnov <anton@khirnov.net>
2021-04-08 14:15:30 +02:00
Andreas Rheinhardt
5c457c673f avcodec/mpegvideo_enc: Don't segfault on unorthodox mpeg_quant
The (deprecated) field AVCodecContext.mpeg_quant has no range
restriction; MpegEncContext.mpeg_quant is restricted to 0..1.
If the former is set, the latter is overwritten with it without
checking the range. This can trigger an av_assert2() with the MPEG-4
encoder when writing said field.

Fix this by just setting MpegEncContext.mpeg_quant to 1 if
AVCodecContext.mpeg_quant is set.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit d393c45051)
2021-04-08 11:59:08 +02:00
Andreas Rheinhardt
fb7cd45977 avcodec/encode: Fix check for allowed LJPEG pixel formats
The pix_fmts of the LJPEG encoder already contain all supported pixel
formats (including the ones only supported when strictness is unofficial
or less); yet the check in ff_encode_preinit() ignored this list in case
strictness is unofficial or less. But the encoder presumed that it is
always applied and blacklisted some of the entries in pix_fmts when
strictness is > unofficial. The result is that if one uses an entry not
on that list and sets strictness to unofficial, said entry passes both
checks and this can lead to segfaults later on (e.g. when using gray).

Fix this by removing the exception for LJPEG in ff_encode_preinit().

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 6e8e9b7633)
2021-04-08 11:58:59 +02:00
Andreas Rheinhardt
44d218e99a avformat/rmdec: Don't rely on unspecified order of evaluation
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 4666ce0aef)
2021-04-08 11:58:05 +02:00
Andreas Rheinhardt
be5970fcaa avformat/rmdec: Fix memleaks upon read_header failure
For both the RealMedia as well as the IVR demuxer (which share the same
context) each AVStream's priv_data contains an AVPacket that might
contain data (even when reading the header) and therefore needs to be
unreferenced. Up until now, this has not always been done:

The RealMedia demuxer didn't do it when allocating a new stream's
priv_data failed although there might be other streams with packets to
unreference. (The reason for this was that until recently rm_read_close()
couldn't handle an AVStream without priv_data, so one had to choose
between a potential crash and a memleak.)

The IVR demuxer, meanwhile, never called read_close, so the data
already contained in packets leaked upon error.

This patch fixes both demuxers by adding the appropriate cleanup code.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 9a471c5437)
2021-04-08 11:57:57 +02:00
Andreas Rheinhardt
c72fca598c avcodec/vc1dec: Fix memleak upon allocation error
ff_vc1_decode_init_alloc_tables() had one error path that forgot to free
already allocated buffers; these would then be overwritten on the next
allocation attempt (or they would just not be freed in case this
happened during init, as the decoders for which it is used do not have
the FF_CODEC_CAP_INIT_CLEANUP set).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 98060a198e)
2021-04-08 11:57:07 +02:00
Andreas Rheinhardt
b0997b8526 avcodec/rv34, mpegvideo: Fix segfault upon frame size change error
The RealVideo 3.0 and 4.0 decoders call ff_mpv_common_init() only during
their init function and not during decode_frame(); when the size of the
frame changes, they call ff_mpv_common_frame_size_change(). Yet upon
error, said function calls ff_mpv_common_end() which frees the whole
MpegEncContext and not only those parts that
ff_mpv_common_frame_size_change() reinits. As a result, the context will
never be usable again; worse, because decode_frame() contains no check
for whether the context is initialized or not, it is presumed that it is
initialized, leading to segfaults. Basically the same happens if
rv34_decoder_realloc() fails.

This commit fixes this by only resetting the parts that
ff_mpv_common_frame_size_change() changes upon error and by actually
checking whether the context is in need of reinitialization in
ff_rv34_decode_frame().

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 9abda1365c)
2021-04-08 11:56:44 +02:00
Andreas Rheinhardt
4562719c7d avcodec/rv10: Don't presume context to be initialized
In case of resolution changes rv20_decode_picture_header() closes and
reopens its MpegEncContext; it checks the latter for errors, yet when
an error happens, it might happen that no new attempt at
reinitialization is performed when decoding the next frame; this leads
to crashes later on.

This commit fixes this by making sure that initialization will always
be attempted if the context is currently not initialized.

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 8ffd3ef9d9)
2021-04-08 11:56:35 +02:00
Andreas Rheinhardt
6d7dfabfb0 avcodec/mpegvideo: Factor common freeing code out
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 9bab7de175)
2021-04-08 11:56:26 +02:00
Andreas Rheinhardt
63277aa98e avcodec/mpegvideo: Fix memleak upon allocation error
When slice-threading is used, ff_mpv_common_init() duplicates
the first MpegEncContext and allocates some buffers for each
MpegEncContext (the first as well as the copies). But the count of
allocated MpegEncContexts is not updated until after everything has
been allocated and if an error happens after the first one has been
allocated, only the first one is freed; the others leak.

This commit fixes this: The count is now set before the copies are
allocated. Furthermore, the copies are now created and initialized
before the first MpegEncContext, so that the buffers exclusively owned
by each MpegEncContext are still NULL in the src MpegEncContext so
that no double-free happens upon allocation failure.

Given that this effectively touches every line of the init code,
it has also been factored out in a function of its own in order to
remove code duplication with the same code in
ff_mpv_common_frame_size_change() (which was never called when using
more than one slice (and if it were, there would be potential
double-frees)).

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit ff0706cde8)
2021-04-08 11:56:17 +02:00
Andreas Rheinhardt
0155d5cd74 Revert "avcodec: add FF_CODEC_CAP_INIT_CLEANUP for all codecs which use ff_mpv_common_init()"
This mostly reverts commit 4b2863ff01.
Said commit removed the freeing code from ff_mpv_common_init(),
ff_mpv_common_frame_size_change() and ff_mpeg_framesize_alloc() and
instead added the FF_CODEC_CAP_INIT_CLEANUP to several codecs that use
ff_mpv_common_init(). This introduced several bugs:

a) Several decoders using ff_mpv_common_init() in their init function were
forgotten: This affected FLV, Intel H.263, RealVideo 3.0 and 4.0 as well as
VC-1/WMV3.
b) ff_mpv_common_init() is not only called from the init function of
codecs, it is also called from AVCodec.decode functions. If an error
happens after an allocation has succeeded, it can lead to memleaks;
furthermore, it is now possible for the MpegEncContext to be marked as
initialized even when ff_mpv_common_init() returns an error and this can
lead to segfaults because decoders that call ff_mpv_common_init() when
decoding a frame can mistakenly think that the MpegEncContext has been
properly initialized. This can e.g. happen with H.261 or MPEG-4.
c) Removing code for freeing from ff_mpeg_framesize_alloc() (which can't
be called from any init function) can lead to segfaults because the
check for whether it needs to allocate consists of checking whether the
first of the buffers allocated there has been allocated. This part has
already been fixed in 76cea1d2ce.
d) ff_mpv_common_frame_size_change() can also not be reached from any
AVCodec.init function; yet the changes can e.g. lead to segfaults with
decoders using ff_h263_decode_frame() upon allocation failure, because
the MpegEncContext will upon return be flagged as both initialized and
not in need of reinitialization (granted, the fact that
ff_h263_decode_frame() clears context_reinit before the context has been
reinited is a bug in itself). With the earlier version, the context
would be cleaned upon failure and it would be attempted to initialize
the context again in the next call to ff_h263_decode_frame().

While a) could be fixed by adding the missing FF_CODEC_CAP_INIT_CLEANUP,
keeping the current approach would entail adding cleanup code to several
other places because of b). Therefore ff_mpv_common_init() is again made
to clean up after itself; the changes to the wmv2 decoder and the SVQ1
encoder have not been reverted: The former fixed a memleak, the latter
made it possible to remove cleanup code.

Fixes: double free
Fixes: ff_free_picture_tables.mp4
Fixes: ff_mpeg_update_thread_context.mp4
Fixes: decode_colskip.mp4
Fixes: memset.mp4

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit d4b9e117ce)
2021-04-08 11:56:07 +02:00
Andreas Rheinhardt
ed7efbe3ab avcodec/wmavoice: Check operations that can fail
There might be segfaults on failure.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit e93875b756)
2021-04-08 11:55:32 +02:00
Andreas Rheinhardt
6aad0b1bb5 avcodec/mjpegdec: Fix leak in case ICC array allocations fail partially
If only one of the two arrays used for the ICC profile could be
successfully allocated, it might be overwritten and leak when
the next ICC entry is encountered. Fix this by using a common struct,
so that one has only one array to allocate.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit a5b2f06b0c)
2021-04-08 11:55:17 +02:00
Andreas Rheinhardt
5621d10b7a avcodec/tiff: Avoid forward declarations
In this case it also fixes a potential for compilation failures:
Not all compilers can handle the case in which a function with
a forward declaration declared with an attribute to always inline it
is called before the function body appears. E.g. GCC 4.2.1 on OS X 10.6
doesn't like it.

Reviewed-by: Pavel Koshevoy <pkoshevoy@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit e5d6af7b35)
2021-04-08 11:54:24 +02:00
Andreas Rheinhardt
1761cc0cb0 avcodec/pthread_frame: Reindentation
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 6599960940)
2021-04-08 11:53:16 +02:00
Andreas Rheinhardt
562ff3ee0e avcodec/pthread_frame: Check initializing mutexes/condition variables
Up until now, initializing the mutexes/condition variables wasn't
checked by ff_frame_thread_init(). This commit changes this.

Given that it is not documented to be safe to destroy a zeroed but
otherwise uninitialized mutex/condition variable, one has to choose
between two approaches: Either one duplicates the code to free them
in ff_frame_thread_init() in case of errors or one records which have
been successfully initialized. This commit takes the latter approach:
For each of the two structures with mutexes/condition variables
an array containing the offsets of the members to initialize is added.
Said array is used both for initializing and freeing and the only thing
that needs to be recorded is how many of these have been successfully
initialized.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit c85fcc96b7)
2021-04-08 11:53:03 +02:00
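A schematic illustration of the offset-array technique described above, written against plain pthreads with an invented context struct (the real tables live in pthread_frame.c and differ in detail):

```c
#include <pthread.h>
#include <stddef.h>

/* Made-up context; the real ones are PerThreadContext/FrameThreadContext. */
typedef struct Ctx {
    pthread_mutex_t buffer_mutex;
    pthread_mutex_t progress_mutex;
    pthread_cond_t  progress_cond;
} Ctx;

/* Offsets of the members to initialize; the same tables drive cleanup. */
static const size_t mutex_offs[] = { offsetof(Ctx, buffer_mutex),
                                     offsetof(Ctx, progress_mutex) };
static const size_t cond_offs[]  = { offsetof(Ctx, progress_cond) };
#define N_MUTEX (sizeof(mutex_offs) / sizeof(mutex_offs[0]))
#define N_COND  (sizeof(cond_offs)  / sizeof(cond_offs[0]))

/* Only the counts of successfully initialized members need to be recorded. */
int ctx_init(Ctx *c, size_t *n_mutex, size_t *n_cond)
{
    for (*n_mutex = 0; *n_mutex < N_MUTEX; (*n_mutex)++)
        if (pthread_mutex_init((pthread_mutex_t *)((char *)c + mutex_offs[*n_mutex]), NULL))
            return -1;
    for (*n_cond = 0; *n_cond < N_COND; (*n_cond)++)
        if (pthread_cond_init((pthread_cond_t *)((char *)c + cond_offs[*n_cond]), NULL))
            return -1;
    return 0;
}

void ctx_free(Ctx *c, size_t n_mutex, size_t n_cond)
{
    while (n_cond--)
        pthread_cond_destroy((pthread_cond_t *)((char *)c + cond_offs[n_cond]));
    while (n_mutex--)
        pthread_mutex_destroy((pthread_mutex_t *)((char *)c + mutex_offs[n_mutex]));
}
```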
Andreas Rheinhardt
aa8f8748ca avcodec/pthread_frame: Fix cleanup during init
In case an error happened when setting up the child threads,
ff_frame_thread_init() would up until now call ff_frame_thread_free()
to clean up all threads set up so far, including the current, not
properly initialized one.
But a half-allocated context needs special handling which
ff_frame_thread_free() doesn't provide.
Notably, if allocating the AVCodecInternal, the codec's private data
or setting the options fails, the codec's close function will be
called (if there is one); it will also be called if the codec's init
function fails, regardless of whether the FF_CODEC_CAP_INIT_CLEANUP
is set. This is not supported by all codecs; in ticket #9099 it led
to a crash.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit e9b6617579)
2021-04-08 11:52:52 +02:00
Andreas Rheinhardt
0401246845 avcodec/pthread_frame: Factor initializing single thread out
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 24ee151402)
2021-04-08 11:52:44 +02:00
Mark Plomer
76b5f726aa avcodec/dv_profile: PAL DV files with dsf flag 0 - detect via pal flag and buf_size
Some old DV AVI files have the DSF-Flag of frames set to 0, although it
is PAL (maybe rendered with an old Ulead Media Studio Pro) ... this causes
ffmpeg/VLC-player to produce/play corrupted video (other players/editors
like VirtualDub work fine).

Fixes ticket #8333 and replaces/extends hack for ticket #2177

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 6ef5d8ca86)
2021-04-03 20:05:15 +02:00
Michael Niedermayer
6a7a39878f avcodec/cfhd: Keep track of which subbands have been read
This avoids use of uninitialized data. Also, several checks are inside the
band reading code, so it is important that it runs at least once.

Fixes: out of array accesses
Fixes: 28209/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-5684714694377472
Fixes: 32124/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-5425980681355264
Fixes: 30519/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-4558757155700736

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit da8c86dd8b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Michael Niedermayer
a80b0ee981 avcodec/cfhd: Require valid setup before Lowpass coefficients, BandHeader and BandSecondPass
Previously the code skipped all security checks when these were encountered but prior data was incorrect.
Also replace an always-true condition by an assert

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3b88c88fa1)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Michael Niedermayer
de40b2fe41 avcodec/cfhd: Check transform_type consistently
Fixes: out of array accesses
Fixes: 29754/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-6333598414274560
Fixes: 30519/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-6298424511168512
Fixes: 30739/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CFHD_fuzzer-5011292836462592

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 20473a93d2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Alan Kelly
4aeedf4c2a libswscale/x86/yuv2yuvX: Removes unrolling for mmx and mmxext
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3ce8d09244)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Alan Kelly
95aacf30e3 libswscale/x86/swscale: Only call ff_yuv2yuvX functions if the input size is > 0
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit dc57762cb4)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Alan Kelly
6bc2058d00 tests/checkasm/sw_scale: adds additional test sizes for yuv2yuvX
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e1484bc455)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-03 19:43:39 +02:00
Andreas Rheinhardt
54dd729cee avcodec/mjpegdec: Check initializing Huffman tables
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit d5ddfec6c3)
2021-04-03 18:08:02 +02:00
Andreas Rheinhardt
1f3735892b avcodec/mjpegdec: Fix leak in case of invalid external Huffman tables
When using external Huffman tables fails during init, the decoder
reverts back to using the default Huffman tables; and when doing so,
the current VLC tables leak because init_default_huffman_tables()
doesn't free them before overwriting them.

Sample:
samples.ffmpeg.org/archive/all/avi+mjpeg+pcm_s16le++mjpeg-interlace.avi

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 3cc685b7bc)
2021-04-03 18:07:58 +02:00
Andreas Rheinhardt
edbc26e38b avcodec/a64multienc: Don't use static buffers, fix potential races
render_charset() used static buffers that are always completely
initialized before every use, so that it is unnecessary for the
values in these arrays to be kept after leaving the function.
Given that this is not only unnecessary, but harmful due to the
possibility of data races if several instances of a64multi/a64multi5
run simultaneously, these buffers have been replaced by ordinary buffers
on the stack (they are small enough for this).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 0ca09335aa)
2021-04-03 16:46:43 +02:00
Andreas Rheinhardt
8bc3cdf007 avcodec/rawdec: Free bitstream_buf
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 5c0f6d53da)
2021-04-03 13:29:30 +02:00
Andreas Rheinhardt
639c60f5aa avformat/vividas: Fix crash when seeking without audio stream
The current code tries to access the codecpar of a nonexistent
audio stream when seeking. Stop that. Fixes ticket #9121.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit af867e59d9)
2021-04-03 07:20:39 +02:00
Andreas Rheinhardt
0fe3383066 avcodec/ass_split: Don't presume strlen to be >= 2
Fixes potential heap-buffer-overflow.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit f38f791a23)
2021-04-02 21:44:25 +02:00
Andreas Rheinhardt
eff72f86e2 avcodec/binkaudio: Check return value of functions that can fail
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 0062aca592)
2021-04-02 21:44:15 +02:00
Andreas Rheinhardt
632262f184 avcodec/binkaudio: Fix memleak upon init failure
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 85aed2e390)
2021-04-02 21:44:06 +02:00
Andreas Rheinhardt
236ddfbe1c avcodec/flacenc: Fix memleak upon init error
An AVMD5 struct would leak if an error happened after its allocation.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 56bd071e54)
2021-04-02 21:43:58 +02:00
Andreas Rheinhardt
affb55d4b4 avcodec/proresenc_anatoliy: Fix memleak upon init error
A buffer may leak in case of YUVA444P10 with dimensions that are not
both divisible by 16.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit d789d72d30)
2021-04-02 21:43:27 +02:00
Andreas Rheinhardt
60433ae94f avcodec/bsf: Fix segfault when freeing half-allocated BSF
When allocating a BSF fails, it could happen that the BSF's close
function has been called despite a failure to allocate the private data.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 9bf2b32da0)
2021-04-02 21:43:18 +02:00
Andreas Rheinhardt
82b9da7662 avcodec/av1_metadata_bsf: Check for the existence of units
Fixes a crash with ISOBMFF extradata containing no OBUs.

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 8081a0b10f)
2021-04-02 21:43:08 +02:00
Andreas Rheinhardt
0ccd2540b0 avcodec/h264_metadata_bsf: Don't add AUD to extradata
This is a regression since switching to the generic CBS BSF code.

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit b917218c35)
2021-04-02 21:43:00 +02:00
Andreas Rheinhardt
7f139498f5 avcodec/msmpeg4enc: Don't use code for static init that can fail
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit f0042e573e)
2021-04-02 21:42:49 +02:00
Andreas Rheinhardt
b51d5b222e avformat/dss: Don't prematurely modify context variable
The DSS demuxer currently decrements a counter that should be positive
at the beginning of read_packet; should it become negative, it means
that the data to be read can't be read contiguously, but has to be read
in two parts. In this case the counter is incremented again after the
first read if said read succeeded; if not, the counter stays negative.

This can lead to problems in further read_packet calls; in tickets #9020
and #9023 it led to segfaults when one tried to seek later on, the seek
failed, and the generic seek code tried to read from the beginning. But it could
also happen when av_new_packet() failed and the user attempted to read
again afterwards.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit afa511ad34)
2021-04-02 21:42:37 +02:00
Andreas Rheinhardt
70028ce7fd avformat/utils: Check allocations for failure
There would be leaks in case of failure.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 543e4a1942)
2021-04-02 21:42:29 +02:00
Andreas Rheinhardt
ffb599458f avcodec/ac3enc: Use actual size of buffer in init_put_bits()
Since the very beginning (since de6d9b6404)
the AC-3 encoder used AC3_MAX_CODED_FRAME_SIZE (namely 3840) for the
size of the output buffer (without any check at all).
This causes problems when encoding E-AC-3, for which this maximum is too small
(and smaller than the actual size of the buffer): One can run into asserts used
by the PutBits API. Ticket #8513 is about such a case and this commit
fixes it by using the real size of the buffer.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 968c158abd)
2021-04-02 21:42:15 +02:00
Andreas Rheinhardt
55ad9ece31 avcodec/flashsv2enc: Fix undefined NULL + 0
Affected the vsynth*-flashsv2 FATE-tests.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit b7b73e83e3)
2021-04-02 21:41:55 +02:00
Andreas Rheinhardt
3d473a8925 avutil/pixdesc: Fix 1 << 32
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit b7565b65b8)
2021-04-02 21:41:47 +02:00
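The underlying issue is that shifting the int literal 1 by 32 or more bits is undefined; the usual cure is to shift a 64-bit constant instead. A minimal illustration, not the pixdesc code itself:

```c
#include <stdint.h>

uint64_t nth_bit(unsigned n)          /* valid for n up to 63 */
{
    return UINT64_C(1) << n;          /* 1 << n would be UB for n >= 32 */
}
```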
Andreas Rheinhardt
b4b2f88cab avcodec/motion_est: Fix invalid left shift of negative numbers
Affected many FATE-tests.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 3ef65fd4d1)
2021-04-02 21:41:36 +02:00
Andreas Rheinhardt
cc3b05e424 avfilter/vf_codecview: Fix undefined left shifts of negative numbers
Affected the filter-codecview-mvs FATE-test.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 3c151e7999)
2021-04-02 21:41:26 +02:00
Andreas Rheinhardt
195cce45cf avcodec/g2meet: Fix undefined NULL + 0
Affected the g2m4 FATE-test.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit a86f3e983e)
2021-04-02 21:41:14 +02:00
Andreas Rheinhardt
c7a95509b3 avutil/base64: Fix undefined NULL + 0
Affected the base64 FATE test.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit bbf8431b1b)
2021-04-02 21:41:05 +02:00
Andreas Rheinhardt
6906a2b471 avcodec/vmdvideo: Fix NULL + 0
Affected the FATE tests filter-gradfun-sample and sierra-vmd-video.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 566bf56791)
2021-04-02 21:40:54 +02:00
Andreas Rheinhardt
4eb44966a6 avcodec/mss12: Don't apply non-zero offset to null pointer
Affected the FATE tests mss2-wmv and mss1-pal.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 8429661db8)
2021-04-02 21:40:40 +02:00
Andreas Rheinhardt
9a2b994a71 avcodec/lcldec: Fix undefined NULL + 0
Affected the FATE tests vsynth*-zlib, mszh and zlib.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit dd9cbd1cc3)
2021-04-02 21:40:27 +02:00
Andreas Rheinhardt
58b961d8bb avcodec/qtrleenc: Fix negative linesizes, don't use NULL + offset
Before commit f1e17eb446, the qtrle
encoder had undefined pointer arithmetic: Outside of a loop, two
pointers were set to point to the ith element (with index i-1) of
a line of a frame. At the end of each loop iteration, these pointers
were decremented, so that they pointed to the -1th element of the line
after the loop. Furthermore, one of these pointers can be NULL (in which
case all pointer arithmetic is automatically undefined behaviour).

Commit f1e17eb44 added a check in order to ensure that the elements
never point to the -1th element of the array: The pointers are only
decremented if they are bigger than the frame's base pointer
(i.e. AVFrame.data[0]). Yet this check does not work at all in case of
negative linesizes; furthermore, in case the pointer that can be NULL is
NULL, initializing it still involves undefined pointer arithmetic.

This commit fixes both of these issues: First, non-NULL pointers are
initialized to point to the element after the ith element and
decrementing is moved to the beginning of the loop. Second, if a pointer
is NULL, it is just made to point to the other pointer, as this allows one
to avoid checks before decrementing it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 911fe69c5f)
2021-04-02 21:40:17 +02:00
Andreas Rheinhardt
6614f33a0b avcodec/qtrleenc: Use keyframe when no previous frame is available
If keeping a reference to an earlier frame failed, the next frame must
be an I frame for lack of reference frame. This commit implements this.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit d5fc16a6a8)
2021-04-02 21:40:07 +02:00
Andreas Rheinhardt
67e401e3cb libswresample/audioconvert: Fix undefined NULL + 0
Affected 26 FATE tests like swr-resample_async-s16p-44100-8000.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 64977ed7ae)
2021-04-02 21:39:54 +02:00
Andreas Rheinhardt
789dadccc0 avcodec/proresdec2: Don't apply non-zero offset to null pointer
Affected ProRes without alpha; affected 32 FATE tests, e.g. prores-422,
prores-422_proxy, prores-422_lt or matroska-prores-header-insertion-bz2.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit f83976344e)
2021-04-02 21:39:47 +02:00
Andreas Rheinhardt
09510d9ffd avcodec/mpegvideo_enc: Don't apply non-zero offset to null pointer
Affected many FATE tests (mostly vsynth ones).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 4863671d88)
2021-04-02 21:39:37 +02:00
Andreas Rheinhardt
816d4bee4a avfilter/af_hdcd: Fix undefined shifts
Affected the filter-hdcd-* FATE tests.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 9eadd616b7)
2021-04-02 21:39:27 +02:00
Andreas Rheinhardt
a8fb9c9d27 avcodec/dcaenc: Fix undefined left shift of negative numbers
Affected the acodec-dca and acodec-dca2 FATE tests.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 659a925939)
2021-04-02 21:39:19 +02:00
Andreas Rheinhardt
5e2e8e1b9e avcodec/mjpegenc: Fix segfault when freeing incomplete context
When allocating the MJpegContext fails (or if the dimensions run afoul
of the 65500x65500 limit), an attempt to free a subbuffer of said
context leads to a segfault in ff_mjpeg_encode_close().
Seems to be a regression since 467d9e27e0.

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
(cherry picked from commit 84ac35ecb8)
2021-04-02 21:39:04 +02:00
Andreas Rheinhardt
28dd12c9b7 avfilter/vf_paletteuse: Fix left shift outside of range of int
by keeping the variable uint32_t which in this situation is the natural
type anyway. This affected the FATE-test filter-paletteuse-sierra2_4a.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 797c2ecc8f)
2021-04-02 21:38:30 +02:00
Andreas Rheinhardt
da4b64ea02 avfilter/asrc_sine: Fix invalid left shift of negative number
by using a multiplication instead. The multiplication can never overflow
an int because the sin-factor is only an int16_t.

Affected the FATE-tests filter-concat and filter-concat-vfr.

Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 55b46902c1)
2021-04-02 21:38:21 +02:00
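Left-shifting a negative signed value is undefined in C, whereas multiplying by a power of two expresses the same arithmetic without UB. A small, hypothetical illustration of the substitution the commit describes:

```c
#include <stdint.h>

int32_t scale_sample(int16_t sin_factor, unsigned shift)
{
    /* Instead of `sin_factor << shift` (UB when sin_factor is negative),
     * multiply by the power of two. The product cannot overflow an int
     * as long as shift <= 15, because the factor is only 16 bits wide. */
    return sin_factor * (1 << shift);
}
```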
Andreas Rheinhardt
9f011f0876 avformat/webmdashenc: Don't pass NULL to memcmp
Affects the FATE-tests webm-dash-manifest-unaligned-video-streams,
webm-dash-manifest and webm-dash-manifest-representations.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit a42c47b77f)
2021-04-02 21:38:12 +02:00
Andreas Rheinhardt
955be73bc5 avformat/libmodplug: Fix memleaks on error
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit df6dc331dd)
2021-04-02 21:37:20 +02:00
Andreas Rheinhardt
3f94e061cb avformat/libgme: Fix memleaks on errors
Also free the gme_info_t structure immediately after its use.
This simplifies cleanup, because it might be unsafe to call
gme_free_info(NULL) (or even worse, gme_track_info() might even
on error set the pointer to the gme_info_t structure to something
other than NULL).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 05457a3661)
2021-04-02 21:37:09 +02:00
Andreas Rheinhardt
a01cf1fe54 avformat/aadec: Fix leak on error
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 3ec3370dea)
2021-04-02 21:37:00 +02:00
Andreas Rheinhardt
fe8ae68738 avformat/jacosubdec: Fix leak on error
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 4f11685e4c)
2021-04-02 21:36:51 +02:00
Andreas Rheinhardt
3f851a7719 avcodec/vc1dec: Postpone allocating sprite frame to avoid segfault
Up until now, the VC-1 decoders allocated an AVFrame for usage with
sprites during vc1_decode_init(); yet said AVFrame can be freed if
(re)initializing the context (which happens ordinarily during decoding)
fails. The AVFrame does not get allocated again later on in this case,
leading to segfaults.

Fix this by moving the allocation of said frame immediately before it is
used (this also means that said frame won't be allocated at all any more
in case of a regular (i.e. non-image) stream).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit ea70c39dee)
2021-04-02 21:36:31 +02:00
Andreas Rheinhardt
b4b3af795c avcodec/avcodec: Update check for identical colorspace/primaries/trc names
If the numerical constants for colorspace, transfer characteristics
and color primaries coincide, the current code presumes the
corresponding names to be identical and prints only one of them obtained
via av_get_colorspace_name(). There are two issues with this: The first
is that the underlying assumption is wrong: The names only coincide in
the 0-7 range, they differ for more recent additions. The second is that
av_get_colorspace_name() is outdated itself; it has not been updated
with the names of the newly defined colorspaces.

Fix both of these by using the names from
av_color_(space|primaries|transfer)_name() and comparing them via
strcmp; don't use av_get_colorspace_name() at all.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit e65a5df4fa)
2021-04-02 21:36:20 +02:00
Andreas Rheinhardt
0bbf1f4785 avcodec/avcodec: Don't use NULL for %s printf specifier
Our "get name" functions can return NULL for invalid/unknown
arguments. So check for this.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 88b7d9fd36)
2021-04-02 21:35:55 +02:00
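Passing NULL for a %s conversion is undefined behaviour; the usual guard is a ternary fallback to a placeholder string. A generic sketch in which get_name() merely stands in for the av_*_name() helpers:

```c
#include <stdio.h>

static const char *get_name(int id)   /* stand-in: may return NULL */
{
    return id == 0 ? "bt709" : NULL;
}

void log_name(int id)
{
    const char *name = get_name(id);
    printf("colorspace: %s\n", name ? name : "unknown");
}
```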
Andreas Rheinhardt
a57ba45eb4 avformat/webpenc: Fix memleak when trailer is never written
When the trailer is never written (or when a stream switches from
non-animation mode to animation mode mid-stream), a cached packet
(if existing) would leak. Fix this by adding a deinit function.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 3903c139a9)
2021-04-02 21:35:42 +02:00
Andreas Rheinhardt
ceb5863d04 avformat/webpenc: Fix memleak when using invalid packets
The WebP muxer sometimes caches a packet it receives to write it later;
yet if a cached packet is too small (so small as to be invalid),
it is cached, but not written and not unreferenced. Such a packet leaks,
either by being overwritten by the next packet or because it is never
unreferenced at all.

Fix this by not caching unusable packets at all and by erroring out on
invalid packets.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit f9043de99a)
2021-04-02 21:35:29 +02:00
Zane van Iperen
cc8eba0ab8 avcodec/adpcmenc: don't share a single AVClass between multiple AVCodecs.
Temporary fix until AVClass::child_class_next is gone.

Reviewed-By: James Almer <jamrial@gmail.com>
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit aa1cfe05a5)
2021-04-02 09:01:59 +10:00
Michael Niedermayer
829d4b009f avcodec/pnm_parser: Check image size addition for overflow
Fixes: assertion failure
Fixes: out of array access
Fixes: 32664/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PGMYUV_fuzzer-6533642202513408.fuzz
Fixes: 32669/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PGMYUV_fuzzer-6001928875147264

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 79ac8d5546)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
426c52c2ce avcodec/lscrdec: Check length in decode_idat()
Fixes: out of array access
Fixes: 32264/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_LSCR_fuzzer-6684504010915840

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c01cd2a8b2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
15f1648f7f tools/target_dem_fuzzer: Fix packet leak
Fixes: 32121/clusterfuzz-testcase-minimized-ffmpeg_IO_DEMUXER_fuzzer-4512973109460992

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6055b93379)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
45f40cec3a avformat/imx: Check palette chunk size
Fixes: out of array write
Fixes: 32116/clusterfuzz-testcase-minimized-ffmpeg_dem_SIMBIOSIS_IMX_fuzzer-6702533894602752

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f7a5150447)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
de9f4351fa avcodec/h265_metadata_bsf: Check nb_units before accessing the first in h265_metadata_update_fragment()
Fixes: null pointer dereference
Fixes: 32113/clusterfuzz-testcase-minimized-ffmpeg_BSF_HEVC_METADATA_fuzzer-4803262287052800

Same as 0c48c332ee

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 497ea04dbd)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
1ff644e509 avformat/rmdec: use larger intermediate type for audio_framesize * sub_packet_h check
Fixes: signed integer overflow: 65535 * 65535 cannot be represented in type 'int'
Fixes: 31406/clusterfuzz-testcase-minimized-ffmpeg_dem_IVR_fuzzer-5024692843970560

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit cf2fd9204b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
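The overflow arises from multiplying two int values whose product exceeds INT_MAX; widening to a 64-bit intermediate before multiplying (or at least before range-checking) avoids it. A hedged sketch of the pattern, not the rmdec.c code:

```c
#include <stdint.h>
#include <limits.h>

/* Returns the product if it fits in an int, or -1 on overflow.
 * Assumes both inputs are non-negative. */
int checked_mul(int audio_framesize, int sub_packet_h)
{
    int64_t product = (int64_t)audio_framesize * sub_packet_h;
    return product > INT_MAX ? -1 : (int)product;
}
```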
Michael Niedermayer
698d768d21 avcodec/exr: Check oe in huf_decode() before use
Fixes: out of array access
Fixes: 31386/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_EXR_fuzzer-5773234709594112

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9e8475c7c7)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:45 +02:00
Michael Niedermayer
137c998b48 avcodec/h264_slice: Check input SPS in ff_h264_update_thread_context()
Fixes: crash
Fixes: check_pkt.mp4

Found-by: Rafael Dutra <rafael.dutra@cispa.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ceae92cb29)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
d416d7f061 avcodec/mpegpicture: Keep ff_mpeg_framesize_alloc() failure state consistent
Fixes: null pointer dereference
Fixes: ff_put_pixels16_sse2.mp4

Found-by: Rafael Dutra <rafael.dutra@cispa.de>
Regression-since: 4b2863ff01
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 76cea1d2ce)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
807b703a48 avformat/mpc8: check for size overflow in mpc8_get_chunk_header()
Fixes: signed integer overflow: -9223372036854775760 - 50 cannot be represented in type 'long'
Fixes: 31673/clusterfuzz-testcase-minimized-ffmpeg_dem_MPC8_fuzzer-580134751869337

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6cc65d3d67)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
5978b8bd9c avformat/mov: Do not zero memory that is written to or unused
Fixes: OOM
Fixes: 31220/clusterfuzz-testcase-minimized-ffmpeg_dem_MOV_fuzzer-6033383962574848

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1fe1114bc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
ac0e9506d0 avcodec/mpegvideo: Update chroma_?_shift in ff_mpv_common_frame_size_change()
Fixes: out of array access
Fixes: 31201/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-4627865612189696.fuzz

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 87d87e6587)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
be3225153e avformat/mov: Ignore multiple STSC / STCO
Fixes: STSC / STCO inconsistency and assertion failure
Fixes: crbug1184666.mp4

Found-by: Chromium ASAN fuzzer
Reviewed-by: Matt Wolenetz <wolenetz@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2611d20d35)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
9b25cf8b06 avformat/utils: Extend overflow check in dts wrap in compute_pkt_fields()
Fixes: signed integer overflow: -9223372032574480351 - 4294967296 cannot be represented in type 'long long'
Fixes: 30022/clusterfuzz-testcase-minimized-ffmpeg_dem_KUX_fuzzer-5568610275819520

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b37ff29e0e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
f8fc6416b2 avfilter/vf_scale: Fix adding 0 to NULL (which is UB) in scale_slice()
Found-by: Jeremy Leconte <jleconte@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1cf96ce269)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
18bcfa81fc avutil/common: Add FF_PTR_ADD()
Suggested-by: Andreas Rheinhardt
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 522a5259e9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
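The helper exists because evaluating `NULL + 0` is undefined in C; the idea is to perform pointer arithmetic only when the offset is non-zero. A sketch of that idea under the invented name PTR_ADD (the exact FF_PTR_ADD definition is not reproduced here):

```c
#include <stddef.h>
#include <stdint.h>

/* Add an offset to a pointer while avoiding the undefined `NULL + 0` case:
 * when the offset is zero, the pointer is returned unchanged. */
#define PTR_ADD(ptr, off) ((off) ? (ptr) + (off) : (ptr))

uint8_t *slice_start(uint8_t *data, ptrdiff_t linesize, int y)
{
    return PTR_ADD(data, (ptrdiff_t)y * linesize); /* safe even if data == NULL and y == 0 */
}
```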
Michael Niedermayer
8c99a06c5c avcodec/setts_bsf: Check timebase
Fixes: Division by 0
Fixes: 30952/clusterfuzz-testcase-minimized-ffmpeg_BSF_SETTS_fuzzer-6601016202100736

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7fc8ba9068)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
9179ab9227 avformat/wtvdec: Check size in SBE2_STREAM_DESC_EVENT / stream2_guid
Fixes: signed integer overflow: 539033600 - -1910497124 cannot be represented in type 'int'
Fixes: 30928/clusterfuzz-testcase-minimized-ffmpeg_dem_WTV_fuzzer-5922630966312960

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f74661543)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
6ef700dfb0 avformat/utils: Fix integer overflow with duration_gcd in ff_rfps_calculate()
Fixes: signed integer overflow: 136323327 * 281474976710656 cannot be represented in type 'long'
Fixes: 30913/clusterfuzz-testcase-minimized-ffmpeg_dem_IVF_fuzzer-5753392189931520

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6dc6e1cce0)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
72a03b3c06 tools/target_dec_fuzzer: Adjust threshold for H264
Fixes: Timeout (too long -> 3sec)
Fixes: 28047/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-4662727980875776

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 46c4f39307)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
ee059d8ef8 avformat/cafdec: Do not build an index if all packets are the same
Fixes: Timeout
Fixes: 28214/clusterfuzz-testcase-minimized-ffmpeg_dem_CAF_fuzzer-6495999421579264

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ea12590c8e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
419f62c902 avformat/vividas: Use equals check with n in read_sb_block()
Fixes: OOM
Fixes: 27780/clusterfuzz-testcase-minimized-ffmpeg_dem_VIVIDAS_fuzzer-5097985075314688

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e44214a824)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
59c05f51d5 avcodec/sonic: Use unsigned temporary in predictor_calc_error()
Fixes: signed integer overflow: -2147471366 - 18638 cannot be represented in type 'int'
Fixes: 30157/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SONIC_fuzzer-5171199746506752

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 075d793ba8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
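Signed integer overflow is undefined, but unsigned arithmetic wraps, so performing the intermediate computation in an unsigned type and converting back is the standard remedy. A generic illustration, not the sonic.c code:

```c
#include <stdint.h>

/* Difference computed with wrap-around semantics instead of signed UB. */
int32_t wrapped_sub(int32_t a, int32_t b)
{
    return (int32_t)((uint32_t)a - (uint32_t)b);
}
```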
Michael Niedermayer
79ff380da7 avformat/jacosubdec: Use 64bit intermediate for start/end timestamp shift
Fixes: signed integer overflow: -1957694447 + -1620425806 cannot be represented in type 'int'
Fixes: 30207/clusterfuzz-testcase-minimized-ffmpeg_dem_JACOSUB_fuzzer-5050791771635712

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2c477be08a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
81178db83b avformat/flvdec: Check array entry number
Fixes: signed integer overflow: -2147483648 - 1 cannot be represented in type 'int'
Fixes: 30209/clusterfuzz-testcase-minimized-ffmpeg_dem_FLV_fuzzer-5724831658147840

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b5d8fe1c87)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
039ecef275 avcodec/h264_slice: Check sps in h264_slice_header_init()
Fixes: null pointer dereference
Fixes: h264_slice_header_init.mp4

Found-by: Rafael Dutra <rafael.dutra@cispa.de>
Tested-by: Rafael Dutra <rafael.dutra@cispa.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 8047243899)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
c5a61adcca avformat/movenc: Avoid losing cluster array on failure
Fixes: crash
Fixes: check_pkt.mp4

Found-by: Rafael Dutra <rafael.dutra@cispa.de>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 5c2ff44f91)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
095f50e06e avformat/avidec: Check for dv streams before using priv_data in parse ##dc/##wb
Fixes: null pointer dereference
Fixes: 31588/clusterfuzz-testcase-minimized-ffmpeg_dem_AVI_fuzzer-6165716135968768

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f733688d30)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
2af5b3fa08 avformat/mov: Check sample size for overflow in mov_parse_stsd_audio()
Fixes: signed integer overflow: 2 * 1914708000 cannot be represented in type 'int'
Fixes: 31639/clusterfuzz-testcase-minimized-ffmpeg_dem_MOV_fuzzer-6303428239294464

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d35677736a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
5d1e309e67 avcodec/sga: Check for array end in lzss_decompress()
Fixes: out of array access
Fixes: 31640/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SGA_fuzzer-5630883286614016
Fixes: 31619/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SGA_fuzzer-5176667708456960

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e8bd34fe4f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
9a3e525b7c avformat/sbgdec: Check for overflow in last loop in expand_timestamps()
Fixes: signed integer overflow: 9223372036854775807 + 86400000000 cannot be represented in type 'long'
Fixes: 31003/clusterfuzz-testcase-minimized-ffmpeg_dem_SBG_fuzzer-6256298771480576

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f44068db1e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Michael Niedermayer
e42efdce95 avcodec/ffwavesynth: Avoid signed integer overflow in phi_at()
Fixes: signed integer overflow: 2314885530818453536 - -9070214327174160352 cannot be represented in type 'long'
Fixes: 31000/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FFWAVESYNTH_fuzzer-6558389742206976

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit be08b84f8b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-04-01 11:38:44 +02:00
Gyan Doshi
b26c6df919 rtpenc_mpegts: add AVClass to the muxer context
2021-04-01 09:36:26 +05:30
Gyan Doshi
7a74129fa9 avformat/rtpenc_mpegts: stop leaks
Fixes CID 1474460 & 1474461
2021-03-28 15:55:41 +05:30
Gyan Doshi
fd80c0b95f avformat/rtpenc_mpegts: convey options for rtp muxer
Cherry-picked 2c806aa2b4
2021-03-26 14:44:31 +05:30
Gyan Doshi
a6dc1e84d2 avformat/rtpenc_mpegts: relay streamid to mpegts muxer streams.
Cherry-picked 325bb04188
2021-03-26 14:44:06 +05:30
Gyan Doshi
390b6f0cba avformat/rtpenc_mpegts: convey options for mpeg-ts muxer
Fixes #5239

Cherry-picked affe911c65
2021-03-26 14:43:40 +05:30
Gyan Doshi
72389f7916 avformat/rtp_mpegts: typedef MuxChain struct
Cherry-picked 75fd3e1519
2021-03-26 14:43:08 +05:30
Gyan Doshi
9315b45dd2 configure: select child muxers for rtp_mpegts
Cherry-picked 36a5ae619a
2021-03-26 14:42:34 +05:30
Zane van Iperen
df9fbc442d avformat/pp_bnk: allow seeking to start
Allows "ffplay -loop" to work.

Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 64fb63411d)
2021-03-25 16:34:42 +10:00
Zane van Iperen
2fd48331d5 avformat/alp: allow seeking to start
Allows "ffplay -loop" to work.

Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit ea9732c5d6)
2021-03-25 16:34:42 +10:00
Zane van Iperen
a98413afb9 avformat/kvag: allow seeking to start
Allows "ffplay -loop" to work.

Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 3cc4a140ef)
2021-03-25 16:34:41 +10:00
Zane van Iperen
0cfea0581b avcodec/adpcm_ima_cunning: reset state on flush
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit e550667f61)
2021-03-25 16:34:41 +10:00
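This and the following ADPCM fixes all apply the same pattern: the decoder's flush callback must reset the predictor state so that seeking back to the start reproduces deterministic output. A schematic sketch with an invented state struct:

```c
#include <string.h>

typedef struct ADPCMChannelState {
    int predictor;   /* last predicted sample */
    int step_index;  /* current step-table index */
} ADPCMChannelState;

typedef struct ADPCMContext {
    ADPCMChannelState ch[2];
    int has_status;  /* whether ch[] currently holds decoder state */
} ADPCMContext;

/* Called on seek/flush: forget any per-channel prediction state. */
void adpcm_flush(ADPCMContext *c)
{
    memset(c->ch, 0, sizeof(c->ch));
    c->has_status = 0;
}
```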
Zane van Iperen
0d00e151d1 avcodec/adpcm_ima_alp: reset state on flush
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 257d9f91fc)
2021-03-25 16:34:41 +10:00
Zane van Iperen
990bccfad6 avcodec/adpcm_ima_ssi: reset state on flush
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit ff7bbd6d88)
2021-03-25 16:34:40 +10:00
Zane van Iperen
f0169e9d58 avcodec/adpcm_argo: reset state on flush
Commit 003b5c800f introduced seeking in argo_asf,
but this was missed, leading to non-deterministic output.

Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 660c14a9b9)
2021-03-25 16:34:40 +10:00
Zane van Iperen
2057068495 avcodec/adpcm_aica: reset state in flush callback
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit efb58ec8f9)
2021-03-25 16:34:40 +10:00
Zane van Iperen
0b9d7b6f8d avcodec/adpcm_zork: reset state in flush callback
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 95280cf3e7)
2021-03-25 16:34:39 +10:00
Zane van Iperen
ebe065c177 avcodec/adpcm: add comment to has_status field
Signed-off-by: Zane van Iperen <zane@zanevaniperen.com>
(cherry picked from commit 55a50885b9)
2021-03-25 16:34:39 +10:00
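
The flush callbacks added in the commits above all follow the same pattern: forget every piece of inter-frame prediction state, so that decoding after a seek cannot depend on what was decoded before (which is what made the output non-deterministic). A rough sketch with generic, hypothetical names; the real decoders clear their per-channel ADPCM status structs:

    #include <string.h>

    /* Illustrative decoder state; the field names are hypothetical. */
    typedef struct DemoChannelStatus {
        int predictor;
        int step_index;
    } DemoChannelStatus;

    typedef struct DemoDecodeContext {
        DemoChannelStatus status[2];
        int               has_status;   /* state already initialized? */
    } DemoDecodeContext;

    /* Called from the codec's flush callback (e.g. after a seek). */
    static void demo_flush(DemoDecodeContext *c)
    {
        memset(c->status, 0, sizeof(c->status));
        c->has_status = 0;
    }
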
nyanmisaka
5f2018c490 avfilter/overlay_cuda: fix framesync with embedded PGS subtitle
Signed-off-by: nyanmisaka <nst799610810@gmail.com>
2021-03-25 04:36:41 +01:00
nyanmisaka
3d79b9357d avfilter/hwupload_cuda: add YUVA420P format support
Signed-off-by: nyanmisaka <nst799610810@gmail.com>
Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
2021-03-25 04:36:39 +01:00
James Almer
0be265e9a1 Revert "lavf: move AVStream.*index_entries* to AVStreamInternal"
This reverts commit cea7c19cda.

Until an API is added to make index_entries public in a proper way, keeping
this here is harmless.
2021-03-23 14:09:27 -03:00
Andreas Rheinhardt
5996184bea avcodec/put_bits: Restore x64 ABI compatibility with releases <= 4.3
88d80cb975 changed the type of
PutBitContext.BitBuf to uint64_t; it used to be a uint32_t.
While said structure is not public, it is nevertheless used by
certain avpriv functions and therefore crosses library boundaries:
avpriv_align_put_bits and avpriv_copy_bits were used in other libraries
in release 4.3 (and at the time of 88d80cb9) and so this commit broke
ABI.

This commit mitigates the trouble caused by this by using a uint32_t
again, but only for the 4.4 release branch and not the master branch,
as doing so for master would break the ABI of master again, although
it is very unlikely that anyone would be helped by this (there don't
seem to be any users that combine libavcodec built from master and
libavformat from an old release: otherwise we would have received bug
reports about said ABI break).

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
2021-03-23 01:21:29 +01:00
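
The reason this constitutes an ABI break is purely a matter of struct layout: once the bit buffer field grows from 32 to 64 bits, the struct's size changes and every field behind it moves, so code in another library compiled against the old layout reads and writes the wrong offsets. A simplified, illustrative layout; field names and order only approximate the real PutBitContext:

    #include <stdint.h>

    typedef struct OldStylePutBitContext {
        uint32_t bit_buf;              /* 4.3-era layout */
        int      bit_left;
        uint8_t *buf, *buf_ptr, *buf_end;
    } OldStylePutBitContext;

    typedef struct NewStylePutBitContext {
        uint64_t bit_buf;              /* wider field: different size and
                                          different offsets for the rest */
        int      bit_left;
        uint8_t *buf, *buf_ptr, *buf_end;
    } NewStylePutBitContext;
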
Andreas Rheinhardt
16af5236ae avcodec/avcodec: Sanitize options before using them
Options are supposed to be sanitized before they are used, yet when
using frame threading, the codec's init function was called before
preinit. This can lead to crashes when e.g. using unsupported lowres
values for decoders together with frame threading.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 746796ceb4)
2021-03-22 08:39:02 +01:00
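
For the lowres example, this sanitization boils down to clamping the requested value against AVCodec.max_lowres before any init code gets to use it. A hedged sketch of that kind of check, not the exact preinit code:

    #include <libavcodec/avcodec.h>
    #include <libavutil/log.h>

    /* Illustrative stand-in for the real preinit logic. */
    static void demo_sanitize_lowres(AVCodecContext *avctx, const AVCodec *codec)
    {
        if (avctx->lowres > codec->max_lowres) {
            av_log(avctx, AV_LOG_WARNING,
                   "lowres %d is not supported, limiting to %d\n",
                   avctx->lowres, codec->max_lowres);
            avctx->lowres = codec->max_lowres;
        }
    }
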
Andreas Rheinhardt
2b114adcf4 avcodec/parser: Don't return pointer to stack buffer
When flushing, the parser receives a dummy buffer with padding
that lives on the stack of av_parser_parse2(). Certain parsers
(e.g. Dolby E) only analyze the input, but don't repack it. When
flushing, such parsers return a pointer to the stack buffer and
a size of 0. And this is also what av_parser_parse2() returns.

Fix this by always resetting poutbuf in case poutbuf_size is zero.

Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 9faf3f8bb0)
2021-03-22 08:17:33 +01:00
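
The fix is essentially the guard below: whatever a parser returned, a zero-sized output must not carry a pointer, because that pointer may reference the temporary flush buffer on av_parser_parse2()'s stack. A minimal sketch of the idea, not the exact libavcodec code:

    #include <stddef.h>
    #include <stdint.h>

    /* Applied to the parser's output before handing it back to the caller. */
    static void demo_sanitize_parser_output(const uint8_t **poutbuf,
                                            int *poutbuf_size)
    {
        if (*poutbuf_size == 0)
            *poutbuf = NULL;
    }
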
Andreas Rheinhardt
2a5c577ef3 avformat/pp_bnk: Fix memleaks when reading non-stereo tracks
Commit 6973df1122 added support
for music tracks by outputting the two tracks they contain
together in one packet. But the actual data is not contiguous
in the file and therefore one can't simply use av_get_packet()
(which had been used before) for it. Therefore the packet was
now allocated via av_new_packet() and read via avio_read(),
and this was done for non-music files as well.

This causes problems because one can no longer rely on things
done automatically by av_get_packet(): it automatically freed
the packet in case of errors; losing that led to memleaks in
several FATE tests covering this demuxer. Furthermore, in case the data
read is less than the data desired, the returned packet was not
zero-allocated (the packet's padding was uninitialized);
for music files the actual data could even be uninitialized.

The former problems are fixed by using av_get_packet() for
non-music files; the latter problem is handled by erroring out
unless both tracks could be fully read.

Reviewed-by: Zane van Iperen <zane@zanevaniperen.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
(cherry picked from commit 8a73313412)
2021-03-22 08:17:10 +01:00
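
The practical difference between the two reading styles is who cleans up on failure. A hedged sketch with simplified error handling and illustrative function names:

    #include <libavformat/avformat.h>

    /* av_get_packet() allocates, reads and, on error, unrefs the packet
     * for the caller. */
    static int demo_read_simple(AVFormatContext *s, AVPacket *pkt, int size)
    {
        return av_get_packet(s->pb, pkt, size);
    }

    /* av_new_packet() + avio_read() leaves all cleanup to the caller:
     * every error path must unref the packet or it leaks. */
    static int demo_read_manual(AVFormatContext *s, AVPacket *pkt, int size)
    {
        int ret = av_new_packet(pkt, size);
        if (ret < 0)
            return ret;

        ret = avio_read(s->pb, pkt->data, size);
        if (ret < size) {
            av_packet_unref(pkt);        /* the part that is easy to forget */
            return ret < 0 ? ret : AVERROR_INVALIDDATA;
        }
        return 0;
    }
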
Derek Buitenhuis
8f099e3a67 FATE: Add test for probing MOV/MP4 files with extended box sizes
The test sample has to have no file extension, otherwise probing
happens to work based on the file extension alone, and we want to
test the actual probing function.

Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
(cherry picked from commit e668c55649)
2021-03-21 23:22:06 -03:00
Derek Buitenhuis
cfe614787d avformat/mov: Fix extended atom size buffer length check
When extended atom size support was added to probing in
fec4a2d232, the buffer
size check was backwards, but probing continued to work
because there was no minimum size check yet, so despite
size being 1 on these atoms, and failing to read the 64-bit
size, the tag was still correctly read.

When 0b78016b2d introduced a
minimum size check, this exposed the bug and broke probing of
any files with extended atom sizes, such as entirely valid
large files that start with mdat atoms.

Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
(cherry picked from commit 85f397c828)
2021-03-21 23:21:48 -03:00
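
For context: an ISO BMFF box header is 4 bytes of size followed by 4 bytes of type, and a size field of 1 means a 64-bit "largesize" follows, so 16 header bytes must be available before that extended size can be read. A hedged sketch of a probe-style size read with the check the right way around, not the actual mov_probe() code:

    #include <stdint.h>

    static uint32_t demo_rb32(const uint8_t *p)
    {
        return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
    }

    static uint64_t demo_rb64(const uint8_t *p)
    {
        return (uint64_t)demo_rb32(p) << 32 | demo_rb32(p + 4);
    }

    /* Returns the box size at offset, or 0 if the probe buffer is too
     * short. The 16-byte availability check must happen *before* the
     * extended size is read. */
    static uint64_t demo_box_size(const uint8_t *buf, int buf_size, int64_t offset)
    {
        uint64_t size;

        if (offset < 0 || offset + 8 > buf_size)
            return 0;
        size = demo_rb32(buf + offset);
        if (size == 1) {                    /* extended 64-bit box size */
            if (offset + 16 > buf_size)
                return 0;
            size = demo_rb64(buf + offset + 8);
        }
        return size;
    }
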
James Almer
7efe57ba11 avformat: remove FF_API_INIT_PACKET from AVStream.attached_pic
This field needs to be replaced altogether, not just its type changed.
This will be done in a separate change.

Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 34f4f57800)
2021-03-21 19:07:09 -03:00
Michael Niedermayer
da4d578621 Update versions for 4.4
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2021-03-20 01:01:12 +01:00
4784 changed files with 201806 additions and 399791 deletions

.gitignore

@@ -19,12 +19,8 @@
*.swp
*.ver
*.version
*.metal.air
*.metallib
*.metallib.c
*.ptx
*.ptx.c
*.ptx.gz
*_g
\#*
.\#*
@@ -35,8 +31,8 @@
/ffprobe
/config.asm
/config.h
/config_components.h
/coverage.info
/avversion.h
/lcov/
/src
/mapfile


@@ -1,3 +1,4 @@
<james.darnley@gmail.com> <jdarnley@obe.tv>
<jeebjp@gmail.com> <jan.ekstrom@aminocom.com>
<sw@jkqxz.net> <mrt@jkqxz.net>
<u@pkh.me> <cboesch@gopro.com>


@@ -1,6 +1,6 @@
See the Git history of the project (https://git.ffmpeg.org/ffmpeg) to
See the Git history of the project (git://source.ffmpeg.org/ffmpeg) to
get the names of people who have contributed to FFmpeg.
To check the log, you can type the command "git log" in the FFmpeg
source directory, or browse the online repository at
https://git.ffmpeg.org/ffmpeg
http://source.ffmpeg.org.

Changelog

@@ -1,845 +1,7 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version 6.1.4:
avutil/common: cast GET_BYTE/GET_16BIT returned value
avfilter/vf_drawtext: fix call GET_UTF8 with invalid argument
avfilter/vf_drawtext: fix incorrect text length
avfilter/vf_drawtext: Account for bbox text separator
avcodec/utvideodec: Set B for the width= 1 case in restore_median_planar_il()
avcodec/osq: Fix 32bit sample overflow
avformat/rtpdec_rfc4175: Only change PayloadContext on success
avformat/rtpdec_rfc4175: Check dimensions
avformat/rtpdec_rfc4175: Fix memleak of sampling
avformat/http: Fix off by 1 error
avcodec/exr: spelling
avcodec/exr: use tile dimensions in pxr24 UINT case
avcodec/exr: Simple check for available channels
avformat/sctp: Check size in sctp_write()
avformat/rtmpproto: consider command line argument lengths
avformat/rtmpproto: Check tcurl and flashver length
avcodec/g723_1enc: Make min_err 64bit
avcodec/vlc: Clear val8/16 in vlc_multi_gen() by av_mallocz()
avformat/rtpenc_h264_hevc: Check space for nal_length_size in ff_rtp_send_h264_hevc()
swscale/output: Fix integer overflow in yuv2ya16_X_c_template()
avcodec/exr: Check that DWA has 3 channels
avcodec/exr: check ac_size
avcodec/exr: Round dc_w/h up
avcodec/mjpegdec: Explain buf_size/width/height check
avformat/avidec: Fix integer overflow iff ULONG_MAX < INT64_MAX
fftools/ffmpeg_mux_init: Fix double-free on error
avformat/aviobuf: Keep checksum_ptr consistent in avio_seek()
avcodec/librsvgdec: fix compilation with librsvg 2.50.3
aacenc_tns: clamp filter direction energy measurement
avcodec/dxv: Check coded_height, to avoid invalid av_clip()
avcodec/aac/aacdec: dont allow ff_aac_output_configure() allocating a new frame if it has no frame
avformat/lrcdec: Fix fate-sub-lrc-ms-remux on x86-32
avcodec/sanm: Check w,h,left,top
avcodec/utvideodec: Clear plane_start array
fftools/ffmpeg_mux_init: Use 64bit for score computation in map_auto_video()
lavc/aarch64: Fix addp overflow in ff_pred16x16_plane_neon_10
avcodec/x86/pngdsp: add missing emms at the end of add_png_paeth_prediction
version 6.1.3:
libavfilter/dnn/dnn_backend_tf: Remove redundant av_freep() to avoid double free
avcodec/dxv: Check that we initialize op_data
avcodec/exr: Check for pixel type consistency in DWA
avcodec/libvorbisdec: avoid overflow when assigning sample rate from long to int
avcodec/g726: init missing sample rate
avformat/lrcdec: limit input timestamp range to avoid overflows
avcodec/scpr3: Clear clr
avcodec/ilbcdec: Clear cbvec when used with create_augmented_vector()
avcodec/jpeg2000dec: Make sure the 4 extra bytes allocated are initialized
avfilter/avf_showcqt: fix unbounded index when copying to fft_data
avcodec/aacsbr_template: Check ilb
avcodec/utvideodec: Set B for the width= 1 case
avcodec/ffv1: Clear state on alloc
avcodec/jpeg2000dec: implement cdef remapping during pixel format matching
avcodec/jpeg2000dec: move cdef default check into get_siz()
avcodec/exr: Check rle_raw_data and surroundings
avcodec/exr: Dont access outside xsize/ysize
examples: Add check and replace av_free() to avoid potential memory errors
libavcodec/tests/snowenc: Add av_free() to avoid memory leak
libavfilter/af_firequalizer: Add check for av_malloc_array()
libavcodec/videotoolbox_vp9: Move av_malloc() to avoid memory leak
avcodec/mpc8: init avctx->sample_rate
avcodec/cbs_h266_syntax_template: fix out of bounds access
avformat/libopenmpt: fix seeking weirdness
avformat/hls: add cmfv/cmfa exceptions
avformat/lrcdec: support arbitrary precision timestamp
avcodec/ffv1dec: Disable frame threading due to race condition
swscale/swscale_unscaled: use 8 line alignment for planarCopyWrapper with dithering
Update for 6.1.3
libavcodec/tests/motion: Add check for avcodec_alloc_context3()
avcodec/tests/avpacket: Add av_free() to avoid memory leak
examples: Add av_freep to avoid potential memory leak
avcodec/tests/avpacket: Add av_packet_free() to avoid memory leak
avcodec/fits: Clear naxis
avcodec/vqavideo: Check bytestream2_get_buffer() reading next_codebook_buffer
avcodec/lzf: Check for input space
avcodec/imc: Clear padding of buf16
avcodec/iff: Clear ham_buf
avcodec/cri: Check bytestream2_get_buffer() for end
avcodec/cri: Factor read_len out
avformat/dashdec: Allocate space for appended "/"
avcodec/mpegvideo_dec: Fix lowres=3 field select interlaced mpeg4 frame
avformat/mxg: clear AV_INPUT_BUFFER_PADDING_SIZE
avformat/vqf: Ensure that comm_chunk is fully read
avformat/mov: make sure file_checksum is fully initialized
avformat/asfdec_f: Check amount of value read
avcodec/jpegxl_parser: add sanity check for frame size
avformat/concatdec: Clip duration in one more case in get_best_effort_duration()
avcodec/ffv1dec: Check k in get_vlc_symbol()
avcodec/cfhd: Check idwt_buf size before allocation
avcodec/ivi: Check luma/chroma mb_size
avcodec/motion_est: don't add offsets to NULL pointers
swscale/swscale_unscaled: don't add offsets to NULL pointers
libavcodec/alsdec.c: Add check for av_malloc_array() and av_calloc()
avcodec/psd: Move frame allocation after RLE processing
avcodec/smacker: Move buffer allocation to later
avcodec/opus: don't materialize buf pointer from null
avcodec/speexdec: consider differing frame sizes in remaining space check
avformat/iff: Check nb_channels == 0 in CHNL
avcodec/osq: Request a coding mode 2 sample
avcodec/osq: Switch back to av_ceil_log2()
avcodec/osq: Add note about update_stats() count
avcodec/osq: Fix signed integer overflow in update_stats()
avcodec/mss2dsp: use FF_PTR_ADD to add offsets to a pointer
avformat/movenc: fix writing reserved bits in EC3SpecificBox
avcodec/hevc/hevcdec: Check num_entry_point_offsets
avcodec/speexdec: Pass and check remaining packets to decode functions
avcodec/rkmppdec: Fix double-free on error
avcodec/ppc/vp8dsp_altivec: Fix out-of-bounds access
fftools/ffmpeg_demux: don't flag timestamps as unreliable if they are generated
avformat/matroskadec: check that channels fit in signed 32bit int
avcodec/takdec: Check remaining space for first predictors
avcodec/svq3: Check there are bits left before decompression
avcodec/sonic: Check num_taps
avformat/imf_cpl: fix indention after previous commit
avformat/imf_cpl: do not continue looping forever
avformat/mov: reject negative ELST durations
avformat/avidec: Ignore duplicate GAB2
avcodec/h264_mb: Fix tmp_cr for arm
avcodec/vorbisdec: Dont treat overread as error
avformat/iff: Check nb_channels == 0 in MHDR
tests/fate/filter-video: Fix dependency for codecview
libpostproc: check minimum size
avformat/hls: Fix flash1.bogulus.cfd support
avformat/hls: Split allowed_segment_extensions off allowed_extensions
avformat/hls: Fix Youtube AAC
avformat/hls: add fmp4 to allowed_extensions
avformat/hls: Add ec3 to allowed_extensions
avformat/hls: Add cmfv and cmfa to allowed_extensions
postproc/postprocess_template: Fix reading uninitialized pixels in dering_C()
configure: Clearer documentation for "disable-safe-bitstream-reader"
avcodec/osq: avoid undefined negation
swscale/output: Fix integer overflow in yuv2gbrp_full_X_c()
avcodec/libtheora: fix setting keyframe_mask
avfilter/buffersrc: check for valid sample rate
doc: replace http/git by https urls
configure: update copyright year
avformat/hls: Partially revert "reduce default max reload to 3"
avfilter/asrc_afirsrc: fix by one smaller allocation of buffer
avfilter/bwdif: account for chroma sub-sampling in min size calculation
avfilter/af_afwtdn: fix crash with EOF handling
avfilter/vf_colorcorrect: fix memory leaks
avfilter/vf_codecview: fix heap buffer overflow
avformat/iff: Check that we have a stream in read_dst_frame()
avformat/mlvdec: fix size checks
avformat/wavdec: Fix overflow of intermediate in block_align check
avformat/mxfdec: Check edit unit for overflow in mxf_set_current_edit_unit()
avformat/hls: Fix twitter
libavformat/hls: Be more restrictive on mpegts extensions
avformat/hls: .ts is always ok even if its a mov/mp4
avcodec/h263dec: Check against previous dimensions instead of coded
avformat/hls: Print input format in error message
avformat/hls: Be more picky on extensions
avformat/mxfdec: Check avio_read() success in mxf_decrypt_triplet()
avcodec/huffyuvdec: Initialize whole output for decode_gray_bitstream()
avformat/ipmovie: Check signature_buffer read
avformat/wtvdec: Initialize buf
avcodec/cbs_vp9: Initialize VP9RawSuperframeIndex
avformat/vqf: Propagate errors from add_metadata()
avformat/vqf: Check avio_read() in add_metadata()
avformat/dashdec: Check whitelist
avutil/avstring: dont mess with NULL pointers in av_match_list()
avfilter/vf_v360: Fix NULL pointer use
avcodec/mpegvideo_enc: Check FLV1 resolution limits
avcodec/ffv1enc: Fix handling of 32bit unsigned symbols
avcodec/vc1dec: Clear block_index in vc1_decode_reset()
avcodec/aacsbr_template: Clear n_q on error
avcodec/osq: Fixes several undefined overflows in do_decode()
swscale/output: Fix undefined overflow in yuv2rgba64_full_X_c_template()
avfilter/af_pan: Fix sscanf() use
avfilter/vf_grayworld: Use the correct pointer for av_log()
avfilter/vf_addroi: Add missing NULL termination to addroi_var_names[]()
avcodec/get_buffer: Use av_buffer_mallocz() for audio same as its done for video
avformat/jpegxl_anim_dec: clear buffer padding
avformat/rmdec: check that buf if completely filled
avcodec/cfhdenc: Clear dwt_tmp
avcodec/hapdec: Clear tex buffer
avformat/mxfdec: Check that key was read successfully
avformat/rpl: Fix check for negative values
avformat/mlvdec: Check avio_read()
avcodec/utils: Fix block align overflow for ADPCM_IMA_WAV
avformat/matroskadec: Check pre_ns for overflow
tools/target_dec_fuzzer: Adjust threshold for EACMV
tools/target_dec_fuzzer: Adjust threshold for MVC1
tools/target_dec_fuzzer: Adjust Threshold for indeo5
avutil/timecode: Avoid fps overflow in av_timecode_get_smpte_from_framenum()
avcodec/webp: Check ref_x/y
avcodec/ilbcdec: Initialize tempbuff2
avformat/qcp: Check for read failure in header
avcodec/eatgq: Check bytestream2_get_buffer() for failure
avformat/dxa: check bpc
swscale/slice: clear allocated memory in alloc_lines()
avcodec/h2645_parse: Ignore NAL with nuh_layer_id == 63
avcodec/mjpegdec: Disallow progressive bayer images
avformat/icodec: fix integer overflow with nb_pal
doc/developer: Document relationship between git accounts and MAINTAINERS
avformat/vividas: Check avio_read() for failure
avformat/ilbc: Check avio_read() for failure
avformat/nistspheredec: Clear buffer
avformat/mccdec: Initialize and check rate.den
avformat/rpl: check channels
INSTALL: explain the circular dependency issue and solution
avformat/mpegts: Initialize predefined_SLConfigDescriptor_seen
avformat/mxfdec: Fix overflow in midpoint computation
swscale/output: used unsigned for bit accumulation
avcodec/rangecoder: only perform renorm check/loop for callers that need it
avcodec/ffv1dec: Fix end computation with ec=2
avcodec/ffv1enc: Prevent generation of files with broken slices
avformat/matroskadec: Check desc_bytes so bits fit in 64bit
avformat/mov: Avoid overflow in dts
avcodec/ffv1enc: Correct error message about unsupported version
avcodec/ffv1enc: Slice combination is unsupported
avcodec/ffv1enc: 2Pass mode is not possible with golomb coding
avcodec/ffv1enc: Fix >8bit context size
avcodec/xan: Add basic input size check
avcodec/imm4: Check input size
avcodec/svq3: Check for minimum size input
avcodec/eacmv: Check input size for intra frames
tools/target_dec_fuzzer: Adapt threshold for RASC
avcodec/encode: Check bitrate
avcodec/cbs_h266_syntax_template: Check bit depth with range extension
avcodec/osq: use unsigned for decorrelation
avcodec/jfdctint_template: use unsigned z* in row_fdct()
avformat/asf: Check picsize
avcodec/osq: Treat sum = 0 as k = 0
avformat/mxfdec: Check timecode for overflow
avformat/mxfdec: More offset_temp checks
swscale/output: Fix undefined integer overflow in yuv2rgba64_2_c_template()
swscale/swscale: Use unsigned operation to avoid undefined behavior
avcodec/vc2enc: basic sanity check on slice_max_bytes
avformat/mvdec: Check if name was fully read
avcodec/wmavoice: Do not use uninitialized pitch[0]
avformat/argo_brp: Check that ASF chunk header is completely read
avcodec/notchlc: Check bytes left before reading
avcodec/vc1_block: propagate error codes
avformat/apetag: Check APETAGEX
avcodec/magicyuvenc: better slice height
avcodec/avcodec: Warn about data returned from get_buffer*()
avformat/av1dec: Better fix for 70872/clusterfuzz-testcase-minimized-ffmpeg_dem_OBU_fuzzer-6005782487826432
avcodec/apac: Fix discards const qualifier
avcodec/alsdec: clear last_acf_mantissa
avcodec/aic: Clear slice_data
avcodec/vc1dec: Clear mb_type_base and ttblk_base
avcodec/shorten: clear padding
avformat/mpeg: Check an avio_read() for failure
avcodec/apac: Clean padding space
avcodec/mvha: Clear remaining space after inflate()
bsf/media100_to_mjpegb: Clear output buffer padding
avformat/segafilm: Set keyframe
avcodec/sga: av_assert1 check init_get_bits8()
tools/target_dec_fuzzer: Check that FFv1 doesnt leave uninitialized memory in its buffers
avdevice/dshow: Initialize 2 pointers
avcodec/dxva2: initialize hr in ff_dxva2_common_end_frame()
avcodec/dxva2: initialize validate
avcodec/dxva2: Initialize ConfigBitstreamRaw
avcodec/dxva2: Initialize dxva_size and check it
avfilter/vf_xfade: Compute w2, h2 with float
avfilter/vf_v360: Assert that vf was initialized
avfilter/vf_tonemap_opencl: Dereference after NULL check
avfilter/af_surround: Check output format
avfilter/vf_xfade_opencl: Check ff_inlink_consume_frame() for failure
avformat/lmlm4: Eliminate some AVERROR(EIO)
tools/target_dec_fuzzer: Use av_buffer_allocz() to avoid missing slices to have unpredictable content
avformat/wtvdec: Check length of read mpeg2_descriptor
avformat/wtvdec: clear sectors
avcodec/parser: ensure input padding is zeroed
avformat/jpegxl_anim_dec: ensure input padding is zeroed
avformat/img2dec: Clear padding data after EOF
avformat/wavdec: Check if there are 16 bytes before testing them
Revert "avformat/mpegts: update stream info when PMT ES stream_type changes"
configure: Use MSYSTEM_CARCH for default arch on msys2
avfilter/avfiltergraph: fix regression in picking channel layout
avformat/mpegts: update stream info when PMT ES stream_type changes
avformat/wavdec: increase requested probe score for codec probe
lsws/ppc/yuv2rgb_altivec: Fix build in non-VSX environments with Clang v2
lsws/ppc/yuv2rgb_altivec: Fix build in non-VSX environments with Clang
avformat/mov: (v4) fix get_eia608_packet
configure: Improve the check for the rsync --contimeout option
rtmpproto: Avoid rare crashes in the fail: codepath in rtmp_open
lavc/hevcdec: pass an actual codec context to ff_h2645_sei_to_frame()
lavc/aarch64: Fix ff_pred16x16_plane_neon_10
lavc/aarch64: Fix ff_pred8x8_plane_neon_10
vp9: recon: Use emulated edge to prevent buffer overflows
arm: vp9mc: Load only 12 pixels in the 4 pixel wide horizontal filter
aarch64: vp9mc: Load only 12 pixels in the 4 pixel wide horizontal filter
riscv: test for assembler support
avfilter/f_loop: fix aloop activate logic
avfilter/f_loop: fix length of aloop leftover buffer
avcodec/jpegxl_parser: fix reading lz77-pair as initial entropy symbol
avcodec/jpegxl_parser: check entropy_decoder_read_symbol return value
avutil/hwcontext: Don't assume frames_uninit is reentrant
avutil/wchar_filename: re-introduce explicit cast of void* to char*
avcodec/libx265: unbreak build for X265_BUILD >= 213
lavc/hevcdec: set per-CTB filter parameters for WPP
lavc/hevc: check framerate num/den to be strictly positive
lavc/libx265: unbreak build for X265_BUILD >= 210
avformat/libzmq: fix check for zmq protocol prefix
configure: improve check for POSIX ioctl
configure: restore autodetection of v4l2 and fbdev
avformat/hlsenc: correctly reset subtitle stream counter per-varstream
libavcodec/arm/mlpdsp_armv5te: fix label format to work with binutils 2.43
version 6.1.2
avcodec/snow: Fix off by 1 error in run_buffer
avcodec/utils: apply the same alignment to YUV410 as we do to YUV420 for snow
swscale: [loongarch] Fix checkasm-sw_yuv2rgb failure.
avcodec/pngenc: fix sBIT writing for indexed-color PNGs
avcodec/pngdec: use 8-bit sBIT cap for indexed PNGs per spec
avcodec/videotoolboxenc: Fix bitrate doesn't work as expected
Changelog: update
avdevice/dshow: Don't skip audio devices if no video device is present
avcodec/hdrenc: Allocate more space
avcodec/cfhdenc: Height of 16 is not supported
avcodec/cfhdenc: Allocate more space
avcodec/osq: fix integer overflow when applying factor
avcodec/osq: avoid using too large numbers for shifts and integers in update_residue_parameter()
avcodec/vaapi_encode: Check hwctx
avcodec/proresdec: Consider negative bits left
avcodec/alsdec: Clear shift_value
avcodec/hevc/hevcdec: Do not allow slices to depend on failed slices
avfilter/vf_xfade: Check ff_inlink_consume_frame() for failure
avutil/slicethread: Check pthread_*_init() for failure
avutil/frame: Check log2_crop_align
avutil/buffer: Check ff_mutex_init() for failure
avformat/xmv: Check this_packet_size
avformat/ty: rec_size seems to only need 32bit
avformat/tty: Check avio_size()
avformat/siff: Basic pkt_size check
avformat/sauce: Check avio_size() for failure
avformat/sapdec: Check ffurl_get_file_handle() for error
avformat/nsvdec: Check asize for PCM
avformat/mp3dec: Check header_filesize
avformat/mp3dec: Check for avio_size() failure
avformat/mov: Use 64bit for str_size
avformat/mm: Check length
avformat/hnm: Check *chunk_size
avformat/hlsenc: Check ret
avformat/bintext: Check avio_size() return
avformat/asfdec_o: Check size of index object
avfilter/vf_scale: Check ff_scale_adjust_dimensions() for failure
avfilter/scale_eval: Use 64bit, check values in ff_scale_adjust_dimensions()
avfilter/vf_lut3d: Check av_scanf()
avfilter/vf_elbg: Use unsigned for shifting into the top bit
avfilter/vf_deshake_opencl: Ensure that the first iteration initializes the best variables
swscale/output: Fix integer overflows in yuv2rgba64_X_c_template
avformat/mxfdec: Reorder elements of expression in bisect loop
avutil/timecode: Use a 64bit framenum internally
avcodec/pnmdec: Use 64bit for input size check
avcodec/mpeg12enc: Use av_rescale() in vbv_buffer_size computation
avcodec/utvideoenc: Use unsigned shift to build flags
avcodec/j2kenc: Merge dwt_norm into lambda
avcodec/vc2enc: Fix overflows with storing large values
avcodec/mpegvideo_enc: Do not duplicate pictures on shifting
avdevice/dshow_capture: Fix error handling in ff_dshow_##prefix##_Create()
avcodec/tiff: Check value on positive signed targets
avfilter/vf_convolution_opencl: Assert that the filter name is one of the filters
avfilter/vf_bm3d: Dont round MSE2SSE to an integer
avdevice/dshow: Remove NULL check on pin
avdevice/dshow: check ff_dshow_pin_ConnectionMediaType() for failure
avdevice/dshow: Check device_filter_unique_name before use
avdevice/dshow: Cleanup also on av_log case
avdevice/dshow_filter: Use wcscpy_s()
avcodec/flac_parser: Assert that we do not overrun the link_penalty array
avcodec/osq: avoid signed overflow in downsample path
avcodec/pixlet: Simplify pfx computation
avcodec/motion_est: Fix score squaring overflow
avcodec/mlpenc: Use 64 for ml, mr
avcodec/loco: Check loco_get_rice() for failure
avcodec/loco: check get_ur_golomb_jpegls() for failure
avcodec/imm4: check cbphi for error
avcodec/iff: Use signed count
avcodec/golomb: Assert that k is in the supported range for get_ur/sr_golomb()
avcodec/golomb: Document return for get_ur_golomb_jpegls() and get_sr_golomb_flac()
avcodec/dxv: Fix type in get_opcodes()
avcodec/cri: Check length
avcodec/xsubdec: Check parse_timecode()
avutil/imgutils: av_image_check_size2() ensure width and height fit in 32bit
doc/examples/mux: remove nop
avcodec/proresenc_kostya: use unsigned alpha for rotation
avformat/rtpenc_rfc4175: Use 64bit in computation if copy_offset
avformat/rtmppkt: Simplify and deobfuscate amf_tag_skip() slightly
avformat/rmdec: use 64bit for audio_framesize checks
avutil/wchar_filename: Correct sizeof
avutil/hwcontext_d3d11va: correct sizeof IDirect3DSurface9
avutil/hwcontext_d3d11va: Free AVD3D11FrameDescriptor on error
avutil/hwcontext_d3d11va: correct sizeof AVD3D11FrameDescriptor
doc/examples/vaapi_encode: Try to check fwrite() for failure
avformat/usmdec: Initialize value
avformat/tls_schannel: Initialize ret
avformat/subfile: Assert that whence is a known case
avformat/subfile: Merge if into switch()
avformat/rtsp: Check that lower transport is handled in one of the if()
avformat/rtsp: initialize reply1
avformat/rtsp: use < 0 for error check
avformat/rtpenc_vc2hq: Check sizes
avfilter/af_aderivative: Free out on error
swscale/swscale: Use ptrdiff_t for linesize computations
avfilter/af_afir: Assert format
avfilter/af_afftdn: Assert format
avfilter/af_pan: check nb_output_channels before use
cbs_av1: Reject thirty-two zero bits in uvlc code
avfilter/af_mcompand: compute half frequency in double
avfilter/af_channelsplit: Assert that av_channel_layout_channel_from_index() succeeds
avfilter/af_aresample: Cleanup on av_channel_layout_copy() failure
tools/coverity: Phase 1 study of anti-halicogenic for coverity av_rescale()
avfilter/vf_avgblur: Check plane instead of AVFrame
avfilter/drawutils: Fix depthb computation
avfilter/avf_showcwt: Check av_parse_video_rate() for failure
avformat/rdt: Check pkt_len
avformat/mpeg: Check len in mpegps_probe()
avformat/mxfenc: resurrects the error print
avdevice/dshow: Check ICaptureGraphBuilder2_SetFiltergraph() for failure
avcodec/mfenc: check IMFSample_ConvertToContiguousBuffer() for failure
avcodec/vc1_loopfilter: Factor duplicate code in vc1_b_h_intfi_loop_filter()
avformat/img2dec: assert no pipe on ts_from_file
avcodec/cbs_jpeg: Try to move the read entity to one side in a test
fftools/ffmpeg_enc: Initialize fd
fftools/ffmpeg_enc: simplify opaque_ref check
avformat/mov: Check edit list for overflow
fftools/ffmpeg: Check read() for failure
MAINTAINERS: Add Timo Rothenpieler to server admins
swscale/output: Avoid undefined overflow in yuv2rgb_write_full()
swscale/output: alpha can become negative after scaling, use multiply
avcodec/targaenc: Allocate space for the palette
avcodec/r210enc: Use av_rescale for bitrate
avcodec/jfdctint_template: Fewer integer anomalies
avcodec/snowenc: MV limits due to mv_penalty table size
tools/target_dec_fuzzer: Adjust threshold for MV30
tools/target_dec_fuzzer: Adjust threshold for jpeg2000
avformat/mxfdec: Check container_ul->desc before use
avcodec/libvpxenc: Cleanup on error
MAINTAINERS: Update the entries for the release maintainer for FFmpeg
configure: update copyright year
doc/developer: Provide information about git send-email and gmail
avfilter/vf_rotate: Check ff_draw_init2() return value
avformat/mov: Use int64_t in intermediate for corrected_dts
avformat/mov: Use 64bit in intermediate for current_dts
avformat/matroskadec: Assert that num_levels is non negative
avformat/libzmq: Check av_strstart()
avformat/img2dec: Little JFIF / Exif cleanup
avformat/img2dec: Move DQT after unrelated if()
avformat/imfdec: Simplify get_next_track_with_minimum_timestamp()
avdevice/xcbgrab: Check sscanf() return
fftools/cmdutils: Add protective () to FLAGS
avformat/sdp: Check before appending ","
avcodec/ilbcdec: Remove dead code
avcodec/vp8: Check cond init
avcodec/vp8: Check mutex init
avcodec/proresenc_anatoliy: Assert that AV_PROFILE_UNKNOWN is replaced
avcodec/pcm-dvdenc: 64bit pkt-size
avcodec/notchlc: Check init_get_bits8() for failure
avcodec/tests/dct: Use 64bit in intermediate for error computation
avcodec/scpr3: Check add_dec() for failure
avcodec/rv34: assert that size is not 0 in rv34_gen_vlc_ext()
avcodec/wavpackenc: Use unsigned for potential 31bit shift
avcodec/tests/jpeg2000dwt: Use 64bit in comparison
avcodec/tests/jpeg2000dwt: Use 64bit in err2 computation
avformat/fwse: Remove always false expression
avcodec/sga: Make it clear that the return is intentionally not checked
avformat/asfdec_f: Use 64bit for preroll computation
avformat/argo_asf: Use 64bit in offset intermediate
avformat/ape: Use 64bit for final frame size
avformat/ac4dec: Check remaining space in ac4_probe()
avdevice/pulse_audio_enc: Use av_rescale() to avoid integer overflow
avcodec/vlc: Cleanup on multi table alloc failure in ff_vlc_init_multi_from_lengths()
avcodec/tiff: Assert init_get_bits8() success in unpack_gray()
avcodec/tiff: Assert init_get_bits8() success in horizontal_fill()
tools/decode_simple: Check avcodec_send_packet() for errors on flushing
swscale/yuv2rgb: Use 64bit for brightness computation
swscale/x86/swscale: use a clearer name for INPUT_PLANER_RGB_A_FUNC_CASE
avutil/tests/opt: Check av_set_options_string() for failure
avutil/tests/dict: Check av_dict_set() before get for failure
avdevice/dshow: fix badly indented line
avformat/demux: resurrect dead stores
avcodec/tests/bitstream_template: Assert bits_init8() return
tools/enc_recon_frame_test: Assert that av_image_get_linesize() succeeds
fftools/ffmpeg: prefer real errors over EOF in err_merge()
avcodec/png: more informative error message for invalid sBIT size
avcodec/pngdec: avoid erroring with sBIT on indexed-color images
avcodec/nvenc: fix segfault in intra-only mode
aarch64: Add OpenBSD runtime detection of dotprod and i8mm using sysctl
qsv: Initialize impl_value
avutil/hwcontext_qsv: fix GCC 14.1 warnings
lavc/vp9: reset segmentation fields when segmentation isn't enabled
configure: enable ffnvcodec, nvenc, nvdec for FreeBSD
avcodec/mscc & mwsc: Check loop counts before use
avcodec/mpegvideo_enc: Fix potential overflow in RD
avcodec/mpeg4videodec: assert impossible wrap points
avcodec/mpeg12dec: Use 64bit in bit computation
avcodec/vqcdec: Check init_get_bits8() for failure
avcodec/vble: Check av_image_get_buffer_size() for failure
avcodec/vp3: Replace check by assert
avcodec/vp8: Forward return of ff_vpx_init_range_decoder()
avcodec/jpeg2000dec: remove ST=3 case
avcodec/qsvdec: Check av_image_get_buffer_size() for failure
avcodec/exr: Fix preview overflow
avcodec/decode: decode_simple_internal() only implements audio and video
avcodec/fmvc: remove dead assignment
avcodec/h2645_sei: Remove dead checks
avcodec/h264_slice: Remove dead sps check
avcodec/lpc: copy levenson coeffs only when they have been computed
avutil/tests/base64: Check with too short output array
libavutil/base64: Try not to write over the array end
avcodec/cbs_av1: Avoid shift overflow
fftools/ffplay: Check return of swr_alloc_set_opts2()
tools/opt_common: Check for malloc failure
doc/examples/demux_decode: Simplify loop
avformat/concatdec: Check file
avcodec/mpegvideo_enc: Fix 1 line and one column images
avcodec/amrwbdec: assert mode to be valid in decode_fixed_vector()
avcodec/wavarc: fix integer overflow in decode_5elp() block type 2
swscale/output: Fix integer overflow in yuv2rgba64_full_1_c_template()
swscale/output: Fix integer overflow in yuv2rgba64_1_c_template
avcodec/av1dec: Change bit_depth to int
avcodec/av1dec: bit_depth cannot be another values than 8,10,12
avcodec/avs3_parser: assert the return value of init_get_bits()
avcodec/avs2_parser: Assert init_get_bits8() success with const size 15
avformat/mxfdec: Check body_offset
avformat/kvag: Check sample_rate
avcodec/atrac9dec: Check init_get_bits8() for failure
avcodec/ac3_parser: Check init_get_bits8() for failure
avcodec/pngdec: Check last AVFrame before deref
avcodec/hevcdec: Check ref frame
doc/examples/qsv_transcode: Initialize pointer before free
doc/examples/qsv_transcode: Simplify str_to_dict() loop
doc/examples/vaapi_transcode: Simplify loop
doc/examples/qsv_transcode: Simplify loop
avcodec/cbs_h2645: Check NAL space
avfilter/vf_thumbnail_cuda: Set ret before checking it
avfilter/signature_lookup: Dont copy uninitialized stuff around
avfilter/signature_lookup: Fix 2 differences to the reference SW
avcodec/x86/vp3dsp_init: Set correct function pointer, fix crash
avformat/mp3dec: change bogus error message if read_header encounters EOF
avformat/mp3dec: simplify inner frame size check in mp3_read_header
avformat/mp3dec: only call ffio_ensure_seekback once
avutil/thread: fix pthread_setname_np parameters for NetBSD and Apple
avutil/thread: add support for setting thread name on *bsd and solaris
avutil/ppc/cpu: Also use the machdep.altivec sysctl on NetBSD
lavd/v4l2: Use proper field type for second parameter of ioctl() with BSD's
avfilter/avfilter: fix OOM case for default activate
avfilter/buffersrc: switch to activate
avcodec/mediacodecenc: set quality in cq mode
Update for 6.1.2
fate/subtitles: Ignore line endings for sub-scc test
avformat/mxfdec: Check index_edit_rate
swscale/utils: Fix xInc overflow
avcodec/wavarc: fix signed integer overflow in block type 6/19
doc/developer: (security) researchers should be credited
avformat/isom: Uninit layout in ff_mp4_read_dec_config_descr()
avcodec/exr: Dont use 64bits to hold 6bits
avcodec/exr: Check for remaining bits in huf_unpack_enc_table()
avcodec/apedec: Use NABS to avoid undefined negation
avformat/mpegts: Reset local nb_prg on add_program() failure
avformat/aiffdec: Check for previously set channels
avformat/mxfdec: Make edit_unit_byte_count unsigned
avformat/movenc: Check that cts fits in 32bit
avformat/mxfdec: Check first case of offset_temp computation for overflow
avcodec/jpeg2000htdec: warn about non zero roi shift
avcodec/jpeg2000htdec: Check magp before using it in a shift
avfilter/vf_signature: Dont crash on no frames
avformat/westwood_vqa: Fix 2g packets
avformat/matroskadec: Check timescale
avformat/wavdec: saturate next_tag_ofs, data_end
avformat/wavdec: sanity check channels and bps before using them for block_align
avformat/sbgdec: Check for negative duration
avformat/rpl: Use 64bit for total_audio_size and check it
avformat/timecode: use 64bit for intermediate for rounding in fps_from_frame_rate()
avformat/mov: use 64bit for intermediate for rounding
avformat/jacosubdec: Use 64bit for abs
avformat/concatdec: Check user_duration sum
avcodec/wavarc: avoid signed integer overflow in AC code
avcodec/wavarc: Avoid signed integer overflow in sample
avcodec/truemotion1: Height not being a multiple of 4 is unsupported
avcodec/rtv1: fix undefined FFALIGN
avcodec/hcadec: do not allow code to continue after failed init
avcodec/hcadec: do not set hfr_group_count to invalid values
avformat/concatdec: clip outpoint - inpoint overflow in get_best_effort_duration()
avcodec/osq: avoid several signed integer overflows
avformat/jacosubdec: clarify code
avformat/cafdec: Check that data chunk end fits within 64bit
avformat/iff: Saturate avio_tell() + 12
avformat/dxa: Adjust order of operations around block align
avformat/cafdec: dont seek beyond 64bit
avformat/id3v2: read_uslt() check for the amount read
avcodec/vmixdec: Check shift before use
avformat/mov: Check sample_count and auxiliary_info_default_size to be 0
avformat/wady: Check >0 samplerate and channels 1 || 2.
avcodec/cbs_h266_syntax_template: Check tile_y
avcodec/proresenc_kostya: Remove bug similarity text
avcodec/vorbisdec: Check remaining data in vorbis_residue_decode_internal()
avformat/concatdec: Check in and outpoints to be to produce a positive representable duration
avcodec/8bps: Consider width in the minimal size check
libswscale/utils: Fix bayer to yuvj
swscale/swscale: Check srcSliceH for bayer
swscale/utils: Allocate more dithererror
avcodec/indeo3: Round dimensions up in allocate_frame_buffers()
avutil/rational: Document what is to be expected from av_d2q() of doubles representing rational numbers
avfilter/signature_lookup: Do not dereference NULL pointers after malloc failure
avfilter/signature_lookup: dont leave uncleared pointers in sll_free()
avcodec/mpegvideo_enc: Use ptrdiff_t for stride
libavformat/hlsenc.c: Populate OTI using AAC profile in write_codec_attr.
avformat/mov: Check if a key is longer than the atom containing it
avcodec/nvenc: support SDK 12.2 bit depth API
avcodec/nvenc: stop using long deprecated format specifiers
avfilter/buffersrc: fix overriding unknown channel layouts with negotiated one
avfilter/af_channelmap: disallow channel index 64
avfilter/af_channelmap: fix mapping if in_channel was a string but out_channel was not specified
avfilter/af_channelmap: fix error message if FL source channel was missing
avcodec/nvdec: reset bitstream_len/nb_slices when resetting bitstream pointer
avformat/mov: don't abort on duplicate Mastering Display Metadata boxes
fftools/ffplay: use correct buffersink channel layout parameters
avformat/mpegts: detect synchronous metadata KLV more reliably
swresample/resample: fix rounding errors with filter_size=1 and phase_shift=0
avformat/mxfdec: remove resolve_strong_ref usage with AnyType
avfilter/vf_convolution: add float user_rdiv[4] to allow user options to apply correctly
avformat/libsrt: use SRT_EPOLL_IN for waiting for an incoming connection
avformat/mxfdec: do not use AnyType when resolving Descriptors and MultipleDescriptors
avformat/mxfdec: move resolving Descriptors to the multi descriptor resolve function
avutil/hwcontext_d3d11va: prefer DXGI 1.1 factory when available
avcodec/libsvtav1: send the EOS signal without a one frame delay to allow for the library to operate in a low-delay mode
avcodec/libsvtav1: add version guard for external param
lavc/vvc: Read subpic ID when only one subpicture is present
lavc/vvc: Correct sps_num_subpics_minus1 minimum
avcodec/cbs_h2645: Avoid function pointer casts, fix UB
avcodec/cbs_h266_syntax_template: Don't omit unused function parameter
avcodec/cbs_h266_syntax_template: check aps_adaptation_parameter_set_id
lavc/vvc: Add check to num_multi_layer_olss
avcodec/cbs_h266: fix logic setting num_layers_in_ols when vps_ols_mode_idc is 2
avcodec/av1dec: fix matrix coefficients exposed by codec context
{avcodec,tests}: rename the bundled Mesa AV1 vulkan video headers
avformat/mov_chan: never override number of channels based on chan atom
avformat/mov_chan: do not assume channels are in native order
avfft: avoid overreads with RDFT API users
avcodec/nvdec: don't free NVDECContext->bitstream
avcodec/mediacodecdec: fix return EAGAIN after EOF
version 6.1.1
- avcodec/mpegvideo_enc: Dont copy beyond the image
- avfilter/vf_minterpolate: Check pts before division
- avfilter/avf_showwaves: Check history_nb_samples
- avformat/flacdec: Avoid double AVERRORS
- avfilter/vf_vidstabdetect: Avoid double AVERRORS
- avcodec/vaapi_encode: Avoid double AVERRORS
- avfilter/vf_swaprect: round coordinates down
- avfilter/vf_swaprect: Use height for vertical variables
- avfilter/vf_swaprect: assert that rectangles are within memory
- avfilter/af_alimiter: Check nextpos before use
- avfilter/f_reverse: Apply PTS compensation only when pts is available
- avfilter/af_stereowiden: Check length
- avformat/mov: Fix MSAN issue with stsd_id
- avcodec/jpegxl_parser: Check get_vlc2()
- avfilter/vf_weave: Fix odd height handling
- avfilter/edge_template: Fix small inputs with gaussian_blur()
- avfilter/vf_gradfun: Do not overread last line
- avfilter/avf_showspectrum: fix off by 1 error
- avcodec/jpegxl_parser: Add padding to cs_buffer
- avformat/mov: do not set sign bit for chunk_offsets
- avcodec/jpeglsdec: Check Jpeg-LS LSE
- avcodec/osq: Implement flush()
- configure: Enable section_data_rel_ro for FreeBSD and NetBSD aarch64 / arm
- avcodec/cbs_h266: more restrictive check on pps_tile_idx_delta_val
- avcodec/jpeg2000htdec: check if block decoding will exceed internal precision
- tools/target_dec_fuzzer: Adjust threshold for VMIX
- avcodec/av1dec: Fix resolving zero divisor
- avformat/mov: Ignore duplicate ftyp
- avformat/mov: Fix integer overflow in mov_read_packet().
- lavc/qsvdec: return 0 if more data is required
- avcodec/jpegxl_parser: check ANS cluster alphabet size vs bundle size
- libavformat/vvc: Make probe more conservative
- hwcontext_vulkan: guard unistd.h include
- lavc/Makefile: build vulkan decode code if vulkan_av1 has been enabled
- lavc/dvdsubenc: only check canvas size when it is actually set
- avcodec/decode: validate hw_frames_ctx when AVHWAccel.free_frame_priv is used
- avcoded/fft: Fix memory leak if ctx2 is used
- avcodec/fft: Use av_mallocz to avoid invalid free/uninit
version 6.1:
- libaribcaption decoder
- Playdate video decoder and demuxer
- Extend VAAPI support for libva-win32 on Windows
- afireqsrc audio source filter
- arls filter
- ffmpeg CLI new option: -readrate_initial_burst
- zoneplate video source filter
- command support in the setpts and asetpts filters
- Vulkan decode hwaccel, supporting H264, HEVC and AV1
- color_vulkan filter
- bwdif_vulkan filter
- nlmeans_vulkan filter
- RivaTuner video decoder
- xfade_vulkan filter
- vMix video decoder
- Essential Video Coding parser, muxer and demuxer
- Essential Video Coding frame merge bsf
- bwdif_cuda filter
- Microsoft RLE video encoder
- Raw AC-4 muxer and demuxer
- Raw VVC bitstream parser, muxer and demuxer
- Bitstream filter for editing metadata in VVC streams
- Bitstream filter for converting VVC from MP4 to Annex B
- scale_vt filter for videotoolbox
- transpose_vt filter for videotoolbox
- support for the P_SKIP hinting to speed up libx264 encoding
- Support HEVC,VP9,AV1 codec in enhanced flv format
- apsnr and asisdr audio filters
- OSQ demuxer and decoder
- Support HEVC,VP9,AV1 codec fourcclist in enhanced rtmp protocol
- CRI USM demuxer
- ffmpeg CLI '-top' option deprecated in favor of the setfield filter
- VAAPI AV1 encoder
- ffprobe XML output schema changed to account for multiple
variable-fields elements within the same parent element
- ffprobe -output_format option added as an alias of -of
version 6.0:
- Radiance HDR image support
- ddagrab (Desktop Duplication) video capture filter
- ffmpeg -shortest_buf_duration option
- ffmpeg now requires threading to be built
- ffmpeg now runs every muxer in a separate thread
- Add new mode to cropdetect filter to detect crop-area based on motion vectors and edges
- VAAPI decoding and encoding for 10/12bit 422, 10/12bit 444 HEVC and VP9
- WBMP (Wireless Application Protocol Bitmap) image format
- a3dscope filter
- bonk decoder and demuxer
- Micronas SC-4 audio decoder
- LAF demuxer
- APAC decoder and demuxer
- Media 100i decoders
- DTS to PTS reorder bsf
- ViewQuest VQC decoder
- backgroundkey filter
- nvenc AV1 encoding support
- MediaCodec decoder via NDKMediaCodec
- MediaCodec encoder
- oneVPL support for QSV
- QSV AV1 encoder
- QSV decoding and encoding for 10/12bit 422, 10/12bit 444 HEVC and VP9
- showcwt multimedia filter
- corr video filter
- adrc audio filter
- afdelaysrc audio filter
- WADY DPCM decoder and demuxer
- CBD2 DPCM decoder
- ssim360 video filter
- ffmpeg CLI new options: -stats_enc_pre[_fmt], -stats_enc_post[_fmt],
-stats_mux_pre[_fmt]
- hstack_vaapi, vstack_vaapi and xstack_vaapi filters
- XMD ADPCM decoder and demuxer
- media100 to mjpegb bsf
- ffmpeg CLI new option: -fix_sub_duration_heartbeat
- WavArc decoder and demuxer
- CrystalHD decoders deprecated
- SDNS demuxer
- RKA decoder and demuxer
- filtergraph syntax in ffmpeg CLI now supports passing file contents
as option values, by prefixing option name with '/'
- hstack_qsv, vstack_qsv and xstack_qsv filters
version 5.1:
- add ipfs/ipns gateway support
- dialogue enhance audio filter
- dropped obsolete XvMC hwaccel
- pcm-bluray encoder
- DFPWM audio encoder/decoder and raw muxer/demuxer
- SITI filter
- Vizrt Binary Image encoder/decoder
- avsynctest source filter
- feedback video filter
- pixelize video filter
- colormap video filter
- colorchart video source filter
- multiply video filter
- PGS subtitle frame merge bitstream filter
- blurdetect filter
- tiltshelf audio filter
- QOI image format support
- ffprobe -o option
- virtualbass audio filter
- VDPAU AV1 hwaccel
- PHM image format support
- remap_opencl filter
- added chromakey_cuda filter
- added bilateral_cuda filter
version 5.0:
- ADPCM IMA Westwood encoder
- Westwood AUD muxer
- ADPCM IMA Acorn Replay decoder
- Argonaut Games CVG demuxer
- Argonaut Games CVG muxer
- Concatf protocol
- afwtdn audio filter
- audio and video segment filters
- Apple Graphics (SMC) encoder
- hsvkey and hsvhold video filters
- adecorrelate audio filter
- atilt audio filter
- grayworld video filter
- AV1 Low overhead bitstream format muxer
- swscale slice threading
- MSN Siren decoder
- scharr video filter
- apsyclip audio filter
- morpho video filter
- amr parser
- (a)latency filters
- GEM Raster image decoder
- asdr audio filter
- speex decoder
- limitdiff video filter
- xcorrelate video filter
- varblur video filter
- huesaturation video filter
- colorspectrum source video filter
- RTP packetizer for uncompressed video (RFC 4175)
- bitpacked encoder
- VideoToolbox VP9 hwaccel
- VideoToolbox ProRes hwaccel
- support loongarch.
- aspectralstats audio filter
- adynamicsmooth audio filter
- libplacebo filter
- vflip_vulkan, hflip_vulkan and flip_vulkan filters
- adynamicequalizer audio filter
- yadif_videotoolbox filter
- VideoToolbox ProRes encoder
- anlmf audio filter
- IMF demuxer (experimental)
version 4.4:
version <next>:
- AudioToolbox output device
- MacCaption demuxer
- PGX decoder


@@ -15,11 +15,3 @@ NOTICE
------
- Non system dependencies (e.g. libx264, libvpx) are disabled by default.
NOTICE for Package Maintainers
------------------------------
- It is recommended to build FFmpeg twice: first with minimal external dependencies, so
that 3rd party packages which depend on FFmpeg's libavutil/libavfilter/libavcodec/libavformat
can then be built. Then build FFmpeg again with full dependencies (which may in turn depend on
some of these 3rd party packages). This avoids circular dependencies during the build.


@@ -11,11 +11,17 @@ A (CC <address>) after the name means that the maintainer prefers to be CC-ed on
patches and related discussions.
Project Leader
==============
final design decisions
Applications
============
ffmpeg:
ffmpeg.c Michael Niedermayer, Anton Khirnov
ffmpeg.c Michael Niedermayer
ffplay:
ffplay.c Marton Balint
@@ -34,8 +40,7 @@ Miscellaneous Areas
===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Gyan Doshi
project server day to day operations Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov, Timo Rothenpieler
project server emergencies Árpád Gereöffy, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov, Timo Rothenpieler
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov
presets Robert Swain
metadata subsystem Aurelien Jacobs
release management Michael Niedermayer
@@ -110,6 +115,8 @@ Generic Parts:
lzw.* Michael Niedermayer
floating point AAN DCT:
faandct.c, faandct.h Michael Niedermayer
Non-power-of-two MDCT:
mdct15.c, mdct15.h Rostislav Pehlivanov
Golomb coding:
golomb.c, golomb.h Michael Niedermayer
motion estimation:
@@ -131,10 +138,8 @@ Codecs:
8bps.c Roberto Togni
8svx.c Jaikrishnan Menon
aacenc*, aaccoder.c Rostislav Pehlivanov
adpcm.c Zane van Iperen
alacenc.c Jaikrishnan Menon
alsdec.c Thilo Borgmann, Umair Khan
amfenc* Dmitrii Ovchinnikov
aptx.c Aurelien Jacobs
ass* Aurelien Jacobs
asv* Michael Niedermayer
@@ -151,10 +156,10 @@ Codecs:
ccaption_dec.c Anshul Maheshwari, Aman Gupta
cljr Alex Beregszaszi
cpia.c Stephan Hilb
crystalhd.c Philip Langdale
cscd.c Reimar Doeffinger
cuviddec.c Timo Rothenpieler
dca* foo86
dfpwm* Jack Bruienne
dirac* Rostislav Pehlivanov
dnxhd* Baptiste Coudurier
dolby_e* foo86
@@ -181,14 +186,12 @@ Codecs:
interplayvideo.c Mike Melanson
jni*, ffjni* Matthieu Bouron
jpeg2000* Nicolas Bertrand
jpegxl* Leo Izen
jvdec.c Peter Ross
lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George
libcodec2.c Tomas Härdin
libdirac* David Conrad
libdavs2.c Huiwen Ren
libjxl*.c, libjxl.h Leo Izen
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenh264enc.c Martin Storsjo, Linjie Fu
@@ -211,7 +214,6 @@ Codecs:
mqc* Nicolas Bertrand
msmpeg4.c, msmpeg4data.h Michael Niedermayer
msrle.c Mike Melanson
msrleenc.c Tomas Härdin
msvideo1.c Mike Melanson
nuv.c Reimar Doeffinger
nvdec*, nvenc* Timo Rothenpieler
@@ -223,7 +225,7 @@ Codecs:
ptx.c Ivo van Poorten
qcelp* Reynaldo H. Verdejo Pinochet
qdm2.c, qdm2data.h Roberto Togni
qsv* Mark Thompson, Zhong Li, Haihao Xiang
qsv* Mark Thompson, Zhong Li
qtrle.c Mike Melanson
ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni
resample2.c Michael Niedermayer
@@ -263,14 +265,16 @@ Codecs:
xan.c Mike Melanson
xbm* Paul B Mahol
xface Stefano Sabatini
xvmc.c Ivan Kalvachev
xwd* Paul B Mahol
Hardware acceleration:
crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar, Steve Lhomme
d3d11va* Steve Lhomme
mediacodec* Matthieu Bouron, Aman Gupta
vaapi* Haihao Xiang
vaapi_encode* Mark Thompson, Haihao Xiang
vaapi* Gwenole Beauchesne
vaapi_encode* Mark Thompson
vdpau* Philip Langdale, Carl Eugen Hoyos
videotoolbox* Rick Kern, Aman Gupta
@@ -349,7 +353,6 @@ Filters:
vf_il.c Paul B Mahol
vf_(t)interlace Thomas Mundt (CC <thomas.mundt@hr.de>)
vf_lenscorrection.c Daniel Oberhoff
vf_libplacebo.c Niklas Haas
vf_mergeplanes.c Paul B Mahol
vf_mestimate.c Davinder Singh
vf_minterpolate.c Davinder Singh
@@ -395,7 +398,6 @@ Muxers/Demuxers:
apngdec.c Benoit Fouet
argo_asf.c Zane van Iperen
argo_brp.c Zane van Iperen
argo_cvg.c Zane van Iperen
ass* Aurelien Jacobs
astdec.c Paul B Mahol
astenc.c James Almer
@@ -412,14 +414,12 @@ Muxers/Demuxers:
dashdec.c Steven Liu
dashenc.c Karthick Jeyapal
daud.c Reimar Doeffinger
dfpwmdec.c Jack Bruienne
dss.c Oleksij Rempel
dtsdec.c foo86
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
electronicarts.c Peter Ross
epafdec.c Paul B Mahol
evc* Samsung (Dawid Kozinski)
ffm* Baptiste Coudurier
flic.c Mike Melanson
flvdec.c Michael Niedermayer
@@ -430,12 +430,10 @@ Muxers/Demuxers:
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
imf* Pierre-Anthony Lemieux
img2*.c Michael Niedermayer
ipmovie.c Mike Melanson
ircam* Paul B Mahol
iss.c Stefan Gehrer
jpegxl* Leo Izen
jvdec.c Peter Ross
kvag.c Zane van Iperen
libmodplug.c Clément Bœsch
@@ -515,7 +513,6 @@ Protocols:
bluray.c Petri Hintukainen
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libsrt.c Zhao Zhili
libssh.c Lukasz Marek
libzmq.c Andriy Gelman
mms*.c Ronald S. Bultje
@@ -542,11 +539,9 @@ Operating systems / CPU architectures
Alpha Falk Hueffner
MIPS Manojkumar Bhosale, Shiyou Yin
LoongArch Shiyou Yin
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC Lauri Kasanen
RISC-V Rémi Denis-Courmont
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver, Hendrik Leppkes
@@ -588,12 +583,10 @@ wm4
Releases
========
7.0 Michael Niedermayer
6.1 Michael Niedermayer
5.1 Michael Niedermayer
4.4 Michael Niedermayer
3.4 Michael Niedermayer
2.8 Michael Niedermayer
2.7 Michael Niedermayer
2.6 Michael Niedermayer
2.5 Michael Niedermayer
If you want to maintain an older release, please contact us
@@ -616,22 +609,17 @@ Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Ganesh Ajjanagadde C96A 848E 97C3 CEA2 AB72 5CE4 45F9 6A2D 3C36 FB1B
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Haihao Xiang (haihao) 1F0C 31E8 B4FE F7A4 4DC1 DC99 E0F5 76D4 76FC 437F
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
James Almer 7751 2E8C FD94 A169 57E6 9A7A 1463 01AD 7376 59E0
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Leo Izen (Traneptora) B6FD 3CFC 7ACF 83FC 9137 6945 5A71 C331 FD2F A19A
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lynne FE50 139C 6805 72CA FD52 1F8D A2FE A5F0 3F03 4464
Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
DD1E C9E8 DE08 5C62 9B3E 1846 B18E 8928 B394 8D64
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Niklas Haas (haasn) 1DDB 8076 B14D 5B48 32FC 99D9 EB52 DA9C 02BA 6FB4
Nikolay Aleksandrov 8978 1D8C FB71 588E 4B27 EAA8 C4F0 B5FC E011 13B1
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E
Pierre-Anthony Lemieux (pal) F4B3 9492 E6F2 E4AF AEC8 46CB 698F A1F0 F8D4 EED4
Ramiro Polla 7859 C65B 751B 1179 792E DAE8 8E95 8B2F 9B6C 5700
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4


@@ -13,19 +13,17 @@ vpath %.v $(SRC_PATH)
vpath %.texi $(SRC_PATH)
vpath %.cu $(SRC_PATH)
vpath %.ptx $(SRC_PATH)
vpath %.metal $(SRC_PATH)
vpath %/fate_config.sh.template $(SRC_PATH)
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64 audiomatch
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
ALLFFLIBS = avcodec avdevice avfilter avformat avutil postproc swscale swresample
# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
@@ -47,7 +45,7 @@ FF_DEP_LIBS := $(DEP_LIBS)
FF_STATIC_DEP_LIBS := $(STATIC_DEP_LIBS)
$(TOOLS): %$(EXESUF): %.o
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(filter-out $(FF_DEP_LIBS), $^) $(EXTRALIBS-$(*F)) $(EXTRALIBS) $(ELIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(EXTRALIBS-$(*F)) $(EXTRALIBS) $(ELIBS)
target_dec_%_fuzzer$(EXESUF): target_dec_%_fuzzer.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
@@ -67,10 +65,6 @@ tools/target_io_dem_fuzzer$(EXESUF): tools/target_io_dem_fuzzer.o $(FF_DEP_LIBS)
tools/enum_options$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/enum_options$(EXESUF): $(FF_DEP_LIBS)
tools/enc_recon_frame_test$(EXESUF): $(FF_DEP_LIBS)
tools/enc_recon_frame_test$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/scale_slice_test$(EXESUF): $(FF_DEP_LIBS)
tools/scale_slice_test$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/sofa2wavs$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)
@@ -80,14 +74,13 @@ tools/target_dem_%_fuzzer$(EXESUF): $(FF_DEP_LIBS)
CONFIGURABLE_COMPONENTS = \
$(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c)) \
$(SRC_PATH)/libavcodec/bitstream_filters.c \
$(SRC_PATH)/libavcodec/hwaccels.h \
$(SRC_PATH)/libavcodec/parsers.c \
$(SRC_PATH)/libavformat/protocols.c \
config_components.h: ffbuild/.config
config.h: ffbuild/.config
ffbuild/.config: $(CONFIGURABLE_COMPONENTS)
@-tput bold 2>/dev/null
@-printf '\nWARNING: $(?) newer than config_components.h, rerun configure\n\n'
@-printf '\nWARNING: $(?) newer than config.h, rerun configure\n\n'
@-tput sgr0 2>/dev/null
SUBDIR_VARS := CLEANFILES FFLIBS HOSTPROGS TESTPROGS TOOLS \
@@ -95,8 +88,7 @@ SUBDIR_VARS := CLEANFILES FFLIBS HOSTPROGS TESTPROGS TOOLS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS VSX-OBJS MMX-OBJS X86ASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSP-OBJS MSA-OBJS \
MMI-OBJS LSX-OBJS LASX-OBJS RV-OBJS RVV-OBJS \
OBJS SLIBOBJS SHLIBOBJS STLIBOBJS HOSTOBJS TESTOBJS
MMI-OBJS OBJS SLIBOBJS HOSTOBJS TESTOBJS
define RESET
$(1) :=
@@ -118,13 +110,12 @@ include $(SRC_PATH)/fftools/Makefile
include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/doc/examples/Makefile
$(ALLFFLIBS:%=lib%/version.o): libavutil/ffversion.h
libavcodec/avcodec.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h
$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
ifeq ($(STRIPTYPE),direct)
$(STRIP) -o $@ $<
else
$(RM) $@
$(CP) $< $@
$(STRIP) $@
endif
@@ -165,7 +156,7 @@ clean::
$(RM) -rf coverage.info coverage.info.in lcov
distclean:: clean
$(RM) .version config.asm config.h config_components.h mapfile \
$(RM) .version avversion.h config.asm config.h mapfile \
ffbuild/.config ffbuild/config.* libavutil/avconfig.h \
version.h libavutil/ffversion.h libavcodec/codec_names.h \
libavcodec/bsf_list.c libavformat/protocol_list.c \

View File

@@ -9,7 +9,7 @@ such as audio, video, subtitles and related metadata.
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides means to alter decoded audio and video through a directed graph of connected filters.
* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.

View File

@@ -1 +1 @@
6.1.4
4.4

View File

@@ -1,15 +1,15 @@
┌──────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 6.1 "Heaviside" │
└──────────────────────────────────────────┘
┌────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 4.4 "Rao" │
└────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 6.1 "Heaviside", about 8
months after the release of FFmpeg 6.0.
The FFmpeg Project proudly presents FFmpeg 4.4 "Rao", about 10
months after the release of FFmpeg 4.3.
A complete Changelog is available at the root of the project, and the
complete Git history on https://git.ffmpeg.org/gitweb/ffmpeg.git
We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.libera.chat) or ask
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.

View File

@@ -19,6 +19,7 @@
#ifndef COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define WIN32_LEAN_AND_MEAN
#include <stddef.h>
#include <stdint.h>
#include <windows.h>
@@ -95,7 +96,7 @@ do { \
atomic_load(object)
#define atomic_exchange(object, desired) \
InterlockedExchangePointer((PVOID volatile *)object, (PVOID)desired)
InterlockedExchangePointer(object, desired);
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)

View File

@@ -181,12 +181,8 @@ static inline __device__ double trunc(double a) { return __builtin_trunc(a); }
static inline __device__ float fabsf(float a) { return __builtin_fabsf(a); }
static inline __device__ float fabs(float a) { return __builtin_fabsf(a); }
static inline __device__ double fabs(double a) { return __builtin_fabs(a); }
static inline __device__ float sqrtf(float a) { return __builtin_sqrtf(a); }
static inline __device__ float __saturatef(float a) { return __nvvm_saturate_f(a); }
static inline __device__ float __sinf(float a) { return __nvvm_sin_approx_f(a); }
static inline __device__ float __cosf(float a) { return __nvvm_cos_approx_f(a); }
static inline __device__ float __expf(float a) { return __nvvm_ex2_approx_f(a * (float)__builtin_log2(__builtin_exp(1))); }
static inline __device__ float __powf(float a, float b) { return __nvvm_ex2_approx_f(__nvvm_lg2_approx_f(a) * b); }
#endif /* COMPAT_CUDA_CUDA_RUNTIME_H */

compat/cuda/ptx2c.sh Executable file
View File

@@ -0,0 +1,34 @@
#!/bin/sh
# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
set -e
OUT="$1"
IN="$2"
NAME="$(basename "$IN" | sed 's/\..*//')"
printf "const char %s_ptx[] = \\" "$NAME" > "$OUT"
echo >> "$OUT"
sed -e "$(printf 's/\r//g')" -e 's/["\\]/\\&/g' -e "$(printf 's/^/\t"/')" -e 's/$/\\n"/' < "$IN" >> "$OUT"
echo ";" >> "$OUT"
exit 0

View File

@@ -59,7 +59,7 @@ int avpriv_vsnprintf(char *s, size_t n, const char *fmt,
* recommends to provide _snprintf/_vsnprintf() a buffer size that
* is one less than the actual buffer, and zero it before calling
* _snprintf/_vsnprintf() to workaround this problem.
* See https://web.archive.org/web/20151214111935/http://msdn.microsoft.com/en-us/library/1kt27hek(v=vs.80).aspx */
* See http://msdn.microsoft.com/en-us/library/1kt27hek(v=vs.80).aspx */
memset(s, 0, n);
va_copy(ap_copy, ap);
ret = _vsnprintf(s, n - 1, fmt, ap_copy);

View File

@@ -20,40 +20,11 @@
#define COMPAT_W32DLFCN_H
#ifdef _WIN32
#include <stdint.h>
#include <windows.h>
#include "config.h"
#include "libavutil/macros.h"
#if (_WIN32_WINNT < 0x0602) || HAVE_WINRT
#include "libavutil/wchar_filename.h"
static inline wchar_t *get_module_filename(HMODULE module)
{
wchar_t *path = NULL, *new_path;
DWORD path_size = 0, path_len;
do {
path_size = path_size ? FFMIN(2 * path_size, INT16_MAX + 1) : MAX_PATH;
new_path = av_realloc_array(path, path_size, sizeof *path);
if (!new_path) {
av_free(path);
return NULL;
}
path = new_path;
// Returns path_size in case of insufficient buffer.
// Whether the error is set or not and whether the output
// is null-terminated or not depends on the version of Windows.
path_len = GetModuleFileNameW(module, path, path_size);
} while (path_len && path_size <= INT16_MAX && path_size <= path_len);
if (!path_len) {
av_free(path);
return NULL;
}
return path;
}
#endif
/**
* Safe function used to open dynamic libs. This attempts to improve program security
* by removing the current directory from the dll search path. Only dll's found in the
@@ -63,53 +34,29 @@ static inline wchar_t *get_module_filename(HMODULE module)
*/
static inline HMODULE win32_dlopen(const char *name)
{
wchar_t *name_w;
HMODULE module = NULL;
if (utf8towchar(name, &name_w))
name_w = NULL;
#if _WIN32_WINNT < 0x0602
// On Win7 and earlier we check if KB2533623 is available
// Need to check if KB2533623 is available
if (!GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "SetDefaultDllDirectories")) {
wchar_t *path = NULL, *new_path;
DWORD pathlen, pathsize, namelen;
if (!name_w)
HMODULE module = NULL;
wchar_t *path = NULL, *name_w = NULL;
DWORD pathlen;
if (utf8towchar(name, &name_w))
goto exit;
namelen = wcslen(name_w);
path = (wchar_t *)av_mallocz_array(MAX_PATH, sizeof(wchar_t));
// Try local directory first
path = get_module_filename(NULL);
if (!path)
pathlen = GetModuleFileNameW(NULL, path, MAX_PATH);
pathlen = wcsrchr(path, '\\') - path;
if (pathlen == 0 || pathlen + wcslen(name_w) + 2 > MAX_PATH)
goto exit;
new_path = wcsrchr(path, '\\');
if (!new_path)
goto exit;
pathlen = new_path - path;
pathsize = pathlen + namelen + 2;
new_path = av_realloc_array(path, pathsize, sizeof *path);
if (!new_path)
goto exit;
path = new_path;
path[pathlen] = '\\';
wcscpy(path + pathlen + 1, name_w);
module = LoadLibraryExW(path, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
if (module == NULL) {
// Next try System32 directory
pathlen = GetSystemDirectoryW(path, pathsize);
if (!pathlen)
pathlen = GetSystemDirectoryW(path, MAX_PATH);
if (pathlen == 0 || pathlen + wcslen(name_w) + 2 > MAX_PATH)
goto exit;
// Buffer is not enough in two cases:
// 1. system directory + \ + module name
// 2. system directory even without the module name.
if (pathlen + namelen + 2 > pathsize) {
pathsize = pathlen + namelen + 2;
new_path = av_realloc_array(path, pathsize, sizeof *path);
if (!new_path)
goto exit;
path = new_path;
// Query again to handle the case #2.
pathlen = GetSystemDirectoryW(path, pathsize);
if (!pathlen)
goto exit;
}
path[pathlen] = L'\\';
path[pathlen] = '\\';
wcscpy(path + pathlen + 1, name_w);
module = LoadLibraryExW(path, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
}
@@ -126,19 +73,16 @@ exit:
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if HAVE_WINRT
if (!name_w)
wchar_t *name_w = NULL;
int ret;
if (utf8towchar(name, &name_w))
return NULL;
module = LoadPackagedLibrary(name_w, 0);
#else
#define LOAD_FLAGS (LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32)
/* filename may be be in CP_ACP */
if (!name_w)
return LoadLibraryExA(name, NULL, LOAD_FLAGS);
module = LoadLibraryExW(name_w, NULL, LOAD_FLAGS);
#undef LOAD_FLAGS
#endif
ret = LoadPackagedLibrary(name_w, 0);
av_free(name_w);
return module;
return ret;
#else
return LoadLibraryExA(name, NULL, LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32);
#endif
}
#define dlopen(name, flags) win32_dlopen(name)
#define dlclose FreeLibrary

View File

@@ -35,6 +35,7 @@
* As most functions here are used without checking return values,
* only implement return values as necessary. */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <process.h>
#include <time.h>
@@ -65,14 +66,7 @@ typedef CONDITION_VARIABLE pthread_cond_t;
#define PTHREAD_CANCEL_ENABLE 1
#define PTHREAD_CANCEL_DISABLE 0
#if HAVE_WINRT
#define THREADFUNC_RETTYPE DWORD
#else
#define THREADFUNC_RETTYPE unsigned
#endif
static av_unused THREADFUNC_RETTYPE
__stdcall attribute_align_arg win32thread_worker(void *arg)
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
pthread_t *h = (pthread_t*)arg;
h->ret = h->func(h->arg);

View File

@@ -1,32 +0,0 @@
#!/bin/sh
if [ "$1" = "--version" ]; then
rc.exe -?
exit $?
fi
if [ $# -lt 2 ]; then
echo "Usage: mswindres [-I/include/path ...] [-DSOME_DEFINE ...] [-o output.o] input.rc [output.o]" >&2
exit 0
fi
EXTRA_OPTS="-nologo"
while [ $# -gt 2 ]; do
case $1 in
-D*) EXTRA_OPTS="$EXTRA_OPTS -d$(echo $1 | sed -e "s/^..//" -e "s/ /\\\\ /g")" ;;
-I*) EXTRA_OPTS="$EXTRA_OPTS -i$(echo $1 | sed -e "s/^..//" -e "s/ /\\\\ /g")" ;;
-o) OPT_OUT="$2"; shift ;;
esac
shift
done
IN="$1"
if [ -z "$OPT_OUT" ]; then
OUT="$2"
else
OUT="$OPT_OUT"
fi
eval set -- $EXTRA_OPTS
rc.exe "$@" -fo "$OUT" "$IN"

configure

File diff suppressed because it is too large

View File

@@ -1,592 +1,20 @@
The last version increases of all libraries were on 2023-02-09
Never assume the API of libav* to be stable unless at least 1 month has passed
since the last major version increase or the API was added.
The last version increases were:
libavcodec: 2017-10-21
libavdevice: 2017-10-21
libavfilter: 2017-10-21
libavformat: 2017-10-21
libavresample: 2017-10-21
libpostproc: 2017-10-21
libswresample: 2017-10-21
libswscale: 2017-10-21
libavutil: 2017-10-21
API changes, most recent first:
-------- 8< --------- FFmpeg 6.1 was cut here -------- 8< ---------
2023-10-27 - 52a97642604 - lavu 58.28.100 - channel_layout.h
Add AV_CH_LAYOUT_3POINT1POINT2 and AV_CHANNEL_LAYOUT_3POINT1POINT2.
Add AV_CH_LAYOUT_5POINT1POINT2_BACK and AV_CHANNEL_LAYOUT_5POINT1POINT2_BACK.
Add AV_CH_LAYOUT_5POINT1POINT4_BACK and AV_CHANNEL_LAYOUT_5POINT1POINT4_BACK.
Add AV_CH_LAYOUT_7POINT1POINT2 and AV_CHANNEL_LAYOUT_7POINT1POINT2.
Add AV_CH_LAYOUT_7POINT1POINT4_BACK and AV_CHANNEL_LAYOUT_7POINT1POINT4_BACK.
2023-10-06 - 804be7f9e3c - lavc 60.30.101 - avcodec.h
AVCodecContext.coded_side_data may now be used during decoding, to be set
by user before calling avcodec_open2() for initialization.
2023-10-06 - 5432d2aacad - lavc 60.15.100 - avformat.h
Deprecate AVFormatContext.{nb_,}side_data, av_stream_add_side_data(),
av_stream_new_side_data(), and av_stream_get_side_data(). Side data fields
from AVFormatContext.codecpar should be used from now on.
2023-10-06 - 21d7cc6fa9a - lavc 60.30.100 - codec_par.h
Added {nb_,}coded_side_data to AVCodecParameters.
The AVCodecParameters helpers will copy it to and from its AVCodecContext
namesake.
2023-10-06 - 74279227dd2 - lavc 60.29.100 - packet.h
Added av_packet_side_data_new(), av_packet_side_data_add(),
av_packet_side_data_get(), av_packet_side_data_remove, and
av_packet_side_data_free().
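As a purely illustrative sketch of the two entries above (the variable names are assumptions and the side-data type is only an example), a new side-data entry could be attached to an AVCodecParameters roughly like this:

    #include <stdint.h>
    #include <libavcodec/codec_par.h>
    #include <libavcodec/packet.h>

    /* Illustrative only: allocate a display-matrix side-data entry on a
     * parameters struct; par is assumed to point to a valid AVCodecParameters. */
    static int add_display_matrix(AVCodecParameters *par)
    {
        AVPacketSideData *sd = av_packet_side_data_new(&par->coded_side_data,
                                                       &par->nb_coded_side_data,
                                                       AV_PKT_DATA_DISPLAYMATRIX,
                                                       9 * sizeof(int32_t), 0);
        return sd ? 0 : -1;
    }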
2023-10-03 - ea14e8bc302 - lavc 60.28.100 - codec_par.h defs.h
Move the definition of enum AVFieldOrder from codec_par.h to defs.h.
2023-10-03 - dd48e49d547 - lavf 60.14.100 - avformat.h
Deprecate AVFMT_ALLOW_FLUSH without replacement. Users can always
flush any muxer by sending a NULL packet.
2023-09-28 - 8e1ef7c38f6 - lavu 58.27.100 - pixfmt.h
Add AV_PIX_FMT_GBRAP14BE, AV_PIX_FMT_GBRAP14LE pixel formats.
2023-09-28 - 05f8b2ca0f7 - lavu 58.26.100 - hwcontext_cuda.h
Add AV_CUDA_USE_CURRENT_CONTEXT.
2023-09-19 - ba9cd06c763 - lavu 58.25.100 - avutil.h
Make AV_TIME_BASE_Q compatible with C++.
2023-09-18 - 85e075587dc - lavf 60 - avformat.h
Deprecate AVFMT_FLAG_SHORTEST without replacement.
2023-09-07 - 423b6a7e493 - lavu 58.24.100 - imgutils.h
Add av_image_copy2(), a wrapper around the av_image_copy()
to overcome limitations of automatic conversions.
2023-09-07 - 5094d1f429e - lavu 58.23.100 - fifo.h
Constify the AVFifo pointees in av_fifo_peek() and av_fifo_peek_to_cb().
2023-09-07 - fa4bf5793a0 - lavu 58.22.100 - audio_fifo.h
Constify some pointees in av_audio_fifo_write(), av_audio_fifo_read(),
av_audio_fifo_peek() and av_audio_fifo_peek_at().
2023-09-07 - 9bf31f60960 - lavu 58.21.100 - samplefmt.h
Constify some pointees in av_samples_copy() and av_samples_set_silence().
2023-09-07 - 41285890e03 - lavu 58.20.100 - imgutils.h
Constify some pointees in av_image_copy(), av_image_copy_uc_from() and
av_image_fill_black().
2023-09-07 - 2a68d945cd7 - lavf 60.12.100 - avio.h
Constify the buffer pointees in the write_packet and write_data_type
callbacks of AVIOContext on the next major bump.
2023-09-07 - 8238bc0b5e3 - lavc 60.26.100 - defs.h
Add AV_PROFILE_* and AV_LEVEL_* replacements in defs.h for the
defines from avcodec.h. The latter are deprecated.
2023-09-06 - b6627a57f41 - lavc 60.25.101 - avcodec.h
AVCodecContext.rc_buffer_size may now be set by decoders.
2023-09-02 - 25ecc94d58f - lavu 58.19.100 - executor.h
Add AVExecutor API
2023-09-01 - 139e54911c8 - lavc 60.25.100 - avfft.h
The entire header will be deprecated and removed in two major bumps.
For a replacement to av_dct, av_rdft, av_fft and av_mdct, use
the new API from libavutil/tx.h.
2023-09-01 - 11e22730e1e - lavu 58.18.100 - tx.h
Add AV_TX_REAL_TO_REAL and AV_TX_REAL_TO_IMAGINARY
2023-08-18 - ff094f5ebbd - lavu 58.17.100 - channel_layout.h
All AV_CHANNEL_LAYOUT_* macros are now compatible with C++ 17 and older.
2023-08-08 - 5012b4ab4ca - lavu 58.15.100 - video_hint.h
Add AVVideoHint API.
2023-08-08 - 5012b4ab4ca - lavc 60 - avcodec.h
Deprecate AV_CODEC_FLAG_DROPCHANGED without replacement.
2023-07-05 - d694c25b44c - lavu 58.14.100 - random_seed.h
Add av_random_bytes()
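A minimal sketch of the new call (illustrative; the helper name is not part of FFmpeg):

    #include <stdint.h>
    #include <libavutil/random_seed.h>

    /* Fill a buffer with cryptographically secure random bytes;
     * returns 0 on success, a negative AVERROR code on failure. */
    static int make_nonce(uint8_t buf[16])
    {
        return av_random_bytes(buf, 16);
    }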
2023-05-29 - 637afea88ed - lavc 60.16.100 - avcodec.h codec_id.h
Add AV_CODEC_ID_EVC, FF_PROFILE_EVC_BASELINE, and FF_PROFILE_EVC_MAIN.
2023-05-29 - 75918016ab1 - lavu 58.12.100 - mathematics.h
Add av_bessel_i0()
2023-05-29 - f3795e18574 - lavc 60.15.100 - avcodec.h
Add AVHWAccel.update_thread_context, AVHWAccel.free_frame_priv,
AVHWAccel.flush.
2023-05-29 - db1d0227812 - lavu 58.11.100 - hwcontext_vulkan.h
Add AVVulkanDeviceContext.lock_queue, AVVulkanDeviceContext.unlock_queue,
AVVulkanFramesContext.format, AVVulkanFramesContext.lock_frame,
AVVulkanFramesContext.unlock_frame, AVVkFrame.queue_family.
Deprecate AV_VK_FRAME_FLAG_CONTIGUOUS_MEMORY (use multiplane images instead).
2023-05-29 - bef86ba86cc - lavu 58.10.100 - pixfmt.h
Add AV_PIX_FMT_P212BE, AV_PIX_FMT_P212LE, AV_PIX_FMT_P412BE,
AV_PIX_FMT_P412LE.
2023-05-18 - 01d444c077e - lavu 58.8.100 - frame.h
Add av_frame_replace().
2023-05-18 - 63767b79a57 - lavu 58 - frame.h
Deprecate AVFrame.palette_has_changed without replacement.
2023-05-15 - 7d1d61cc5f5 - lavc 60 - avcodec.h
Deprecate AVCodecContext.ticks_per_frame in favor of
AVCodecContext.framerate (encoding) and
AV_CODEC_PROP_FIELDS (decoding).
2023-05-15 - 70433abf7fb - lavc 60.12.100 - codec_desc.h
Add AV_CODEC_PROP_FIELDS.
2023-05-15 - 8b20d0dcb5c - lavc 60 - codec.h
Deprecate AV_CODEC_CAP_SUBFRAMES without replacement.
2023-05-07 - c2ae8e30b7f - lavc 60.11.100 - codec_par.h
Add AVCodecParameters.framerate.
2023-05-04 - 0fc9c1f6828 - lavu 58.7.100 - frame.h
Deprecate AVFrame.interlaced_frame, AVFrame.top_field_first, and
AVFrame.key_frame.
Add AV_FRAME_FLAG_INTERLACED, AV_FRAME_FLAG_TOP_FIELD_FIRST, and
AV_FRAME_FLAG_KEY flags as replacement.
2023-04-10 - 4eaaa38d3df - lavu 58.6.100 - frame.h
av_frame_get_plane_buffer() now accepts const AVFrame*.
2023-04-04 - 61b27b15fc9 - lavu 58.6.100 - hdr_dynamic_metadata.h
Add AV_HDR_PLUS_MAX_PAYLOAD_SIZE.
av_dynamic_hdr_plus_create_side_data() now accepts a user provided
buffer.
2023-03-24 - 632c3499319 - lavfi 9.5.100 - avfilter.h
Add AVFILTER_FLAG_HWDEVICE.
2023-03-21 - 0a3ce5f7384 - lavu 58.5.100 - hdr_dynamic_metadata.h
Add av_dynamic_hdr_plus_from_t35() and av_dynamic_hdr_plus_to_t35()
functions to convert between raw T.35 payloads containing dynamic
HDR10+ metadata and their parsed representations as AVDynamicHDRPlus.
2023-03-17 - 3be46ee7672 - lavu 58.4.100 - hdr_dynamic_vivid_metadata.h
Add two group of three spline params.
Deprecate previous define which only supports one group of params.
2023-03-02 - 373ef1c4fae - lavc 60.6.100 - avcodec.h
Add FF_PROFILE_EAC3_DDP_ATMOS, FF_PROFILE_TRUEHD_ATMOS,
FF_PROFILE_DTS_HD_MA_X and FF_PROFILE_DTS_HD_MA_X_IMAX.
2023-02-25 - f4593775436 - lavc 60.5.100 - avcodec.h
Add FF_PROFILE_HEVC_SCC.
-------- 8< --------- FFmpeg 6.0 was cut here -------- 8< ---------
2023-02-16 - 927042b409 - lavf 60.2.100 - avformat.h
Deprecate AVFormatContext io_close callback.
The superior io_close2 callback should be used instead.
2023-02-13 - 2296078397 - lavu 58.1.100 - frame.h
Deprecate AVFrame.coded_picture_number and display_picture_number.
Their usefulness is questionable and very few decoders set them.
2023-02-13 - 6b6f7db819 - lavc 60.2.100 - avcodec.h
Add AVCodecContext.frame_num as a 64bit version of frame_number.
Deprecate AVCodecContext.frame_number.
2023-02-12 - d1b9a3ddb4 - lavfi 9.1.100 - avfilter.h
Add filtergraph segment parsing API.
New structs:
- AVFilterGraphSegment
- AVFilterChain
- AVFilterParams
- AVFilterPadParams
New functions:
- avfilter_graph_segment_parse()
- avfilter_graph_segment_create_filters()
- avfilter_graph_segment_apply_opts()
- avfilter_graph_segment_init()
- avfilter_graph_segment_link()
- avfilter_graph_segment_apply()
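A minimal sketch of the parse/apply flow described above, assuming a self-contained graph string (illustrative, error handling shortened):

    #include <libavfilter/avfilter.h>

    static int build_graph(void)
    {
        AVFilterGraph *graph = avfilter_graph_alloc();
        AVFilterGraphSegment *seg = NULL;
        AVFilterInOut *in = NULL, *out = NULL;
        int ret = graph ? 0 : -1;

        if (ret >= 0)
            ret = avfilter_graph_segment_parse(graph, "testsrc,nullsink", 0, &seg);
        if (ret >= 0)
            ret = avfilter_graph_segment_apply(seg, 0, &in, &out); /* create, init and link the filters */
        if (ret >= 0)
            ret = avfilter_graph_config(graph, NULL);

        avfilter_inout_free(&in);
        avfilter_inout_free(&out);
        avfilter_graph_segment_free(&seg);
        avfilter_graph_free(&graph);
        return ret;
    }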
2023-02-09 - 719a93f4e4 - lavu 58.0.100 - csp.h
Add av_csp_approximate_trc_gamma() and av_csp_trc_func_from_id().
Add av_csp_trc_function.
2023-02-09 - 868a31b42d - lavc 60.0.100 - avcodec.h
avcodec_decode_subtitle2() now accepts const AVPacket*.
2023-02-04 - d02340b9e3 - lavc 59.63.100
Allow AV_CODEC_FLAG_COPY_OPAQUE to be used with decoders.
2023-01-29 - a1a80f2e64 - lavc 59.59.100 - avcodec.h
Add AV_CODEC_FLAG_COPY_OPAQUE and AV_CODEC_FLAG_FRAME_DURATION.
2023-01-13 - 002d0ec740 - lavu 57.44.100 - ambient_viewing_environment.h frame.h
Adds a new structure for holding H.274 Ambient Viewing Environment metadata,
AVAmbientViewingEnvironment.
Adds a new AVFrameSideDataType entry AV_FRAME_DATA_AMBIENT_VIEWING_ENVIRONMENT
for it.
2022-12-10 - 7a8d78f7e3 - lavc 59.55.100 - avcodec.h
Add AV_HWACCEL_FLAG_UNSAFE_OUTPUT.
2022-11-24 - e97368eba5 - lavu 57.43.100 - tx.h
Add AV_TX_FLOAT_DCT, AV_TX_DOUBLE_DCT and AV_TX_INT32_DCT.
2022-11-06 - 9dad237928 - lavu 57.42.100 - dict.h
Add av_dict_iterate().
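A minimal sketch of the new iteration call; dict is assumed to be an already populated dictionary:

    #include <stdio.h>
    #include <libavutil/dict.h>

    static void dump_dict(const AVDictionary *dict)
    {
        const AVDictionaryEntry *e = NULL;
        /* returns the next entry, or NULL once the dictionary is exhausted */
        while ((e = av_dict_iterate(dict, e)))
            printf("%s=%s\n", e->key, e->value);
    }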
2022-11-03 - 6228ba141d - lavu 57.41.100 - channel_layout.h
Add AV_CH_LAYOUT_7POINT1_TOP_BACK and AV_CHANNEL_LAYOUT_7POINT1_TOP_BACK.
2022-10-30 - 83e918de71 - lavu 57.40.100 - channel_layout.h
Add AV_CH_LAYOUT_CUBE and AV_CHANNEL_LAYOUT_CUBE.
2022-10-11 - 479747645f - lavu 57.39.101 - pixfmt.h
Add AV_PIX_FMT_RGBF32 and AV_PIX_FMT_RGBAF32.
2022-10-05 - 37d5ddc317 - lavu 57.39.100 - cpu.h
Add AV_CPU_FLAG_RVB_BASIC.
2022-10-03 - d09776d486 - lavf 59.34.100 - avio.h
Make AVIODirContext an opaque type in a future major version bump.
2022-09-27 - 0c0a3deb18 - lavu 57.38.100 - cpu.h
Add CPU flags for RISC-V vector extensions:
AV_CPU_FLAG_RVV_I32, AV_CPU_FLAG_RVV_F32, AV_CPU_FLAG_RVV_I64,
AV_CPU_FLAG_RVV_F64
2022-09-26 - a02a0e8db4 - lavc 59.48.100 - avcodec.h
Deprecate avcodec_enum_to_chroma_pos() and avcodec_chroma_pos_to_enum().
Use av_chroma_location_enum_to_pos() or av_chroma_location_pos_to_enum()
instead.
2022-09-26 - xxxxxxxxxx - lavu 57.37.100 - pixdesc.h pixfmt.h
Add av_chroma_location_enum_to_pos() and av_chroma_location_pos_to_enum().
Add AV_PIX_FMT_RGBF32BE, AV_PIX_FMT_RGBF32LE, AV_PIX_FMT_RGBAF32BE,
AV_PIX_FMT_RGBAF32LE.
2022-09-26 - cf856d8957 - lavc 59.47.100 - avcodec.h defs.h
Move the AV_EF_* and FF_COMPLIANCE_* defines from avcodec.h to defs.h.
2022-09-03 - d75c4693fe - lavu 57.36.100 - pixfmt.h
Add AV_PIX_FMT_P012, AV_PIX_FMT_Y212, AV_PIX_FMT_XV30, AV_PIX_FMT_XV36
2022-09-03 - dea9744560 - lavu 57.35.100 - file.h
Deprecate av_tempfile() without replacement.
2022-08-03 - cc5a5c9860 - lavu 57.34.100 - pixfmt.h
Add AV_PIX_FMT_VUYX.
2022-08-22 - 14726571dd - lavf 59 - avformat.h
Deprecate av_stream_get_end_pts() without replacement.
2022-08-19 - 352799dca8 - lavc 59.42.102 - codec_id.h
Deprecate AV_CODEC_ID_AYUV and ayuv decoder/encoder. The rawvideo codec
and vuya pixel format combination will be used instead from now on.
2022-08-07 - e95b08a7dd - lavu 57.33.101 - pixfmt.h
Add AV_PIX_FMT_RGBAF16{BE,LE} pixel formats.
2022-08-12 - e0bbdbe0a6 - lavu 57.33.100 - hwcontext_qsv.h
Add loader field to AVQSVDeviceContext
2022-08-03 - 6ab8a9d375 - lavu 57.32.100 - pixfmt.h
Add AV_PIX_FMT_VUYA.
2022-08-02 - e3838b856f - lavc 59.41.100 - avcodec.h codec.h
Add AV_CODEC_FLAG_RECON_FRAME and AV_CODEC_CAP_ENCODER_RECON_FRAME.
avcodec_receive_frame() may now be used on encoders when
AV_CODEC_FLAG_RECON_FRAME is active.
2022-08-02 - eede1d2927 - lavu 57.31.100 - frame.h
av_frame_make_writable() may now be called on non-refcounted
frames and will make a refcounted copy out of them.
Previously an error was returned in such cases.
2022-07-30 - e1a0f2df3d - lavc 59.40.100 - avcodec.h
Add the AV_CODEC_FLAG2_ICC_PROFILES flag to AVCodecContext, to enable
automatic reading and writing of embedded ICC profiles in image files.
The "flags2" option now supports the corresponding flag "icc_profiles".
2022-07-19 - 4397f9a5a0 - lavu 57.30.100 - frame.h
Add AVFrame.duration, deprecate AVFrame.pkt_duration.
-------- 8< --------- FFmpeg 5.1 was cut here -------- 8< ---------
2022-06-12 - 7cae3d8b76 - lavf 59.25.100 - avio.h
Add avio_vprintf(), similar to avio_printf() but allow to use it
from within a function taking a variable argument list as input.
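A minimal sketch of the intended pattern; pb is assumed to be an open AVIOContext and the wrapper name is illustrative:

    #include <stdarg.h>
    #include <libavformat/avio.h>

    static int pb_printf(AVIOContext *pb, const char *fmt, ...)
    {
        va_list ap;
        int ret;
        va_start(ap, fmt);
        ret = avio_vprintf(pb, fmt, ap); /* forwards the caller's variable arguments */
        va_end(ap);
        return ret;
    }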
2022-06-12 - ff59ecc4de - lavu 57.27.100 - uuid.h
Add UUID handling functions.
Add av_uuid_parse(), av_uuid_urn_parse(), av_uuid_parse_range(),
av_uuid_parse_range(), av_uuid_equal(), av_uuid_copy(), and av_uuid_nil().
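A minimal sketch of parsing a canonical UUID string with the new helpers (the UUID value is just an example):

    #include <libavutil/uuid.h>

    static int parse_sample_uuid(AVUUID uu)
    {
        /* returns 0 on success, a negative value if the string is malformed */
        return av_uuid_parse("f81d4fae-7dec-11d0-a765-00a0c91e6bf6", uu);
    }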
2022-06-01 - d42b410e05 - lavu 57.26.100 - csp.h
Add public API for colorspace structs.
Add av_csp_luma_coeffs_from_avcsp(), av_csp_primaries_desc_from_id(),
and av_csp_primaries_id_from_desc().
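A minimal sketch of looking up the new colorspace descriptors (illustrative; BT.709 chosen arbitrarily):

    #include <libavutil/csp.h>

    static void csp_demo(void)
    {
        const AVColorPrimariesDesc *prim = av_csp_primaries_desc_from_id(AVCOL_PRI_BT709);
        const AVLumaCoefficients   *luma = av_csp_luma_coeffs_from_avcsp(AVCOL_SPC_BT709);
        (void)prim;
        (void)luma;
    }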
2022-05-23 - 4cdc14aa95 - lavu 57.25.100 - avutil.h
Deprecate av_fopen_utf8() without replacement.
2022-03-16 - f3a0e2ee2b - all libraries - version_major.h
Add lib<name>/version_major.h as new installed headers, which only
contain the major version number (and corresponding API deprecation
defines).
2022-03-15 - cdba98bb80 - swr 4.5.100 - swresample.h
Add swr_alloc_set_opts2() and swr_build_matrix2().
Deprecate swr_alloc_set_opts() and swr_build_matrix().
2022-03-15 - cdba98bb80 - lavfi 8.28.100 - avfilter.h buffersink.h buffersrc.h
Update AVFilterLink for the new channel layout API: add ch_layout,
deprecate channel_layout.
Update the buffersink filter sink for the new channel layout API:
add av_buffersink_get_ch_layout() and the ch_layouts option,
deprecate av_buffersink_get_channel_layout() and the channel_layouts option.
Update AVBufferSrcParameters for the new channel layout API:
add ch_layout, deprecate channel_layout.
2022-03-15 - cdba98bb80 - lavf 59.19.100 - avformat.h
Add AV_DISPOSITION_NON_DIEGETIC.
2022-03-15 - cdba98bb80 - lavc 59.24.100 - avcodec.h codec_par.h
Update AVCodecParameters for the new channel layout API: add ch_layout,
deprecate channels/channel_layout.
Update AVCodecContext for the new channel layout API: add ch_layout,
deprecate channels/channel_layout.
Update AVCodec for the new channel layout API: add ch_layouts,
deprecate channel_layouts.
2022-03-15 - cdba98bb80 - lavu 57.24.100 - channel_layout.h frame.h opt.h
Add new channel layout API based on the AVChannelLayout struct.
Add support for Ambisonic audio.
Deprecate previous channel layout API based on uint64 bitmasks.
Add AV_OPT_TYPE_CHLAYOUT option type, deprecate AV_OPT_TYPE_CHANNEL_LAYOUT.
Update AVFrame for the new channel layout API: add ch_layout, deprecate
channels/channel_layout.
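A minimal sketch of the struct-based API that replaces the bitmask one (illustrative only):

    #include <libavutil/channel_layout.h>

    static void layout_demo(void)
    {
        AVChannelLayout layout = { 0 };
        char name[64];
        av_channel_layout_default(&layout, 2);                    /* default layout for 2 channels: stereo */
        av_channel_layout_describe(&layout, name, sizeof(name));  /* writes "stereo" into name */
        av_channel_layout_uninit(&layout);                        /* frees any allocated channel map */
    }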
2022-03-10 - f629ea2e18 - lavu 57.23.100 - cpu.h
Add AV_CPU_FLAG_AVX512ICL.
2022-02-07 - a10f1aec1f - lavu 57.21.100 - fifo.h
Deprecate AVFifoBuffer and the API around it, namely av_fifo_alloc(),
av_fifo_alloc_array(), av_fifo_free(), av_fifo_freep(), av_fifo_reset(),
av_fifo_size(), av_fifo_space(), av_fifo_generic_peek_at(),
av_fifo_generic_peek(), av_fifo_generic_read(), av_fifo_generic_write(),
av_fifo_realloc2(), av_fifo_grow(), av_fifo_drain() and av_fifo_peek2().
Users should switch to the AVFifo-API.
2022-02-07 - 7329b22c05 - lavu 57.20.100 - fifo.h
Add a new FIFO API, which allows setting a FIFO element size.
This API operates on these elements rather than on bytes.
Add av_fifo_alloc2(), av_fifo_elem_size(), av_fifo_can_read(),
av_fifo_can_write(), av_fifo_grow2(), av_fifo_drain2(), av_fifo_write(),
av_fifo_write_from_cb(), av_fifo_read(), av_fifo_read_to_cb(),
av_fifo_peek(), av_fifo_peek_to_cb(), av_fifo_drain2(), av_fifo_reset2(),
av_fifo_freep2(), av_fifo_auto_grow_limit().
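A minimal sketch of the element-based FIFO; the values are illustrative:

    #include <errno.h>
    #include <libavutil/error.h>
    #include <libavutil/fifo.h>

    static int fifo_demo(void)
    {
        int v = 42, out = 0;
        AVFifo *f = av_fifo_alloc2(8, sizeof(int), AV_FIFO_FLAG_AUTO_GROW);
        if (!f)
            return AVERROR(ENOMEM);
        av_fifo_write(f, &v, 1);   /* one element of sizeof(int) bytes */
        av_fifo_read(f, &out, 1);  /* out is now 42 */
        av_fifo_freep2(&f);
        return 0;
    }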
2022-01-26 - af94ab7c7c0 - lavu 57.19.100 - tx.h
Add AV_TX_FLOAT_RDFT, AV_TX_DOUBLE_RDFT and AV_TX_INT32_RDFT.
-------- 8< --------- FFmpeg 5.0 was cut here -------- 8< ---------
2022-01-04 - 78dc21b123e - lavu 57.16.100 - frame.h
Add AV_FRAME_DATA_DOVI_METADATA.
2022-01-03 - 70f318e6b6c - lavf 59.13.100 - avformat.h
Add AVFMT_EXPERIMENTAL flag.
2021-12-22 - b7e1ec7bda9 - lavu 57.13.100 - hwcontext_videotoolbox.h
Add av_vt_pixbuf_set_attachments
2021-12-22 - 69bd95dcd8d - lavu 57.13.100 - hwcontext_videotoolbox.h
Add av_map_videotoolbox_chroma_loc_from_av
Add av_map_videotoolbox_color_matrix_from_av
Add av_map_videotoolbox_color_primaries_from_av
Add av_map_videotoolbox_color_trc_from_av
2021-12-21 - ffbab99f2c2 - lavu 57.12.100 - cpu.h
Add AV_CPU_FLAG_SLOW_GATHER.
2021-12-20 - 278068dc60d - lavu 57.11.101 - display.h
Modified the documentation of av_display_rotation_set()
to match its longstanding actual behaviour of treating
the angle as directed clockwise.
2021-12-12 - 64834bb86a1 - lavf 59.10.100 - avformat.h
Add AVFormatContext io_close2 which returns an int
2021-12-10 - f45cbb775e4 - lavu 57.11.100 - hwcontext_vulkan.h
Add AVVkFrame.offset and AVVulkanFramesContext.flags.
2021-12-04 - b9c928a486f - lavfi 8.19.100 - avfilter.h
Add AVFILTER_FLAG_METADATA_ONLY.
2021-12-03 - b236ef0a594 - lavu 57.10.100 - frame.h
Add AVFrame.time_base
2021-11-22 - b2cd1fb2ec6 - lavu 57.9.100 - pixfmt.h
Add AV_PIX_FMT_P210, AV_PIX_FMT_P410, AV_PIX_FMT_P216, and AV_PIX_FMT_P416.
2021-11-17 - 54e65aa38ab - lavf 57.9.100 - frame.h
Add AV_FRAME_DATA_DOVI_RPU_BUFFER.
2021-11-16 - ed75a08d36c - lavf 59.9.100 - avformat.h
Add av_stream_get_class(). Schedule adding AVStream.av_class at libavformat
major version 60.
Add av_disposition_to_string() and av_disposition_from_string().
Add "disposition" AVOption to AVStream's class.
2021-11-12 - 8478d60d5b5 - lavu 57.8.100 - hwcontext_vulkan.h
Added AVVkFrame.sem_value, AVVulkanDeviceContext.queue_family_encode_index,
nb_encode_queues, queue_family_decode_index, and nb_decode_queues.
2021-10-18 - 682bafdb125 - lavf 59.8.100 - avio.h
Introduce public bytes_{read,written} statistic fields to AVIOContext.
2021-10-13 - a5622ed16f8 - lavf 59.7.100 - avio.h
Deprecate AVIOContext.written. Originally added as a private entry in
commit 3f75e5116b900f1428aa13041fc7d6301bf1988a, its grouping with
the comment noting its private state was missed during merging of the field
from Libav (most likely due to an already existing field in between).
2021-09-21 - 0760d9153c3 - lavu 57.7.100 - pixfmt.h
Add AV_PIX_FMT_X2BGR10.
2021-09-20 - 8d5de914d31 - lavu 57.6.100 - mem.h
Deprecate av_mallocz_array() as it is identical to av_calloc().
2021-09-20 - 176b8d785bf - lavc 59.9.100 - avcodec.h
Deprecate AVCodecContext.sub_text_format and the corresponding
AVOptions. It is unused since the last major bump.
2021-09-20 - dd846bc4a91 - lavc 59.8.100 - avcodec.h codec.h
Deprecate AV_CODEC_FLAG_TRUNCATED and AV_CODEC_CAP_TRUNCATED,
as they are redundant with parsers.
2021-09-17 - ccfdef79b13 - lavu 57.5.101 - buffer.h
Constified the input parameters in av_buffer_replace(), av_buffer_ref(),
and av_buffer_pool_buffer_get_opaque().
2021-09-08 - 4f78711f9c2 - lavu 57.5.100 - hwcontext_d3d11va.h
Add AVD3D11VAFramesContext.texture_infos
2021-09-06 - 42cd64c1826 - lsws 6.1.100 - swscale.h
Add AVFrame-based scaling API:
- sws_scale_frame()
- sws_frame_start()
- sws_frame_end()
- sws_send_slice()
- sws_receive_slice()
- sws_receive_slice_alignment()
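A minimal sketch of the frame-based entry point, assuming src is a decoded frame and dst an allocated frame with the desired size and pixel format:

    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>

    static int rescale_frame(AVFrame *dst, const AVFrame *src)
    {
        struct SwsContext *sws = sws_getContext(src->width, src->height, (enum AVPixelFormat)src->format,
                                                dst->width, dst->height, (enum AVPixelFormat)dst->format,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        int ret;
        if (!sws)
            return -1;
        ret = sws_scale_frame(sws, dst, src); /* convenience wrapper around the slice API listed above */
        sws_freeContext(sws);
        return ret;
    }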
2021-09-02 - cbf111059d2 - lavc 59.7.100 - avcodec.h
Incremented the number of elements of AVCodecParser.codec_ids to seven.
2021-08-24 - 590a7e02f04 - lavc 59.6.100 - avcodec.h
Add FF_CODEC_PROPERTY_FILM_GRAIN
2021-08-20 - 7c5f998196d - lavfi 8.3.100 - avfilter.h
Add avfilter_filter_pad_count() as a replacement for avfilter_pad_count().
Deprecate avfilter_pad_count().
2021-08-17 - 8c53b145993 - lavu 57.4.101 - opt.h
av_opt_copy() now guarantees that allocated src and dst options
don't alias each other even on error.
2021-08-14 - d5de9965ef6 - lavu 57.4.100 - imgutils.h
Add av_image_copy_plane_uc_from()
2021-08-02 - a1a0fddfd05 - lavc 59.4.100 - packet.h
Add AVPacket.opaque, AVPacket.opaque_ref, AVPacket.time_base.
2021-07-23 - 2dd8acbe800 - lavu 57.3.100 - common.h macros.h
Move several macros (AV_NE, FFDIFFSIGN, FFMAX, FFMAX3, FFMIN, FFMIN3,
FFSWAP, FF_ARRAY_ELEMS, MKTAG, MKBETAG) from common.h to macros.h.
2021-07-22 - e3b5ff17c2e - lavu 57.2.100 - film_grain_params.h
Add AV_FILM_GRAIN_PARAMS_H274, AVFilmGrainH274Params
2021-07-19 - c1bf56a526f - lavu 57.1.100 - cpu.h
Add av_cpu_force_count()
2021-06-17 - aca923b3653 - lavc 59.2.100 - packet.h
Add AV_PKT_DATA_DYNAMIC_HDR10_PLUS
2021-06-09 - 2cccab96f6f - lavf 59.3.100 - avformat.h
Add pts_wrap_bits to AVStream
2021-06-10 - 7c9763070d9 - lavc 59.1.100 - avcodec.h codec.h
Move av_get_profile_name() from avcodec.h to codec.h.
2021-06-10 - bb3648e6766 - lavc 59.1.100 - avcodec.h codec_par.h
Move av_get_audio_frame_duration2() from avcodec.h to codec_par.h.
2021-06-10 - 881db34f6a0 - lavc 59.1.100 - avcodec.h codec_id.h
Move av_get_bits_per_sample(), av_get_exact_bits_per_sample(),
avcodec_profile_name(), and av_get_pcm_codec() from avcodec.h
to codec_id.h.
2021-06-10 - ff0a96046d8 - lavc 59.1.100 - avcodec.h defs.h
Add new installed header defs.h. The following definitions are moved
into it from avcodec.h:
- AVDiscard
- AVAudioServiceType
- AVPanScan
- AVCPBProperties and av_cpb_properties_alloc()
- AVProducerReferenceTime
- av_xiphlacing()
2021-04-27 - cb3ac722f4 - lavc 59.0.100 - avcodec.h
Constified AVCodecParserContext.parser.
2021-04-27 - 8b3e6ce5f4 - lavd 59.0.100 - avdevice.h
The av_*_device_next API functions now accept and return
pointers to const AVInputFormat resp. AVOutputFormat.
2021-04-27 - d7e0d428fa - lavd 59.0.100 - avdevice.h
avdevice_list_input_sources and avdevice_list_output_sinks now accept
pointers to const AVInputFormat resp. const AVOutputFormat.
2021-04-27 - 46dac8cf3d - lavf 59.0.100 - avformat.h
av_find_best_stream now uses a const AVCodec ** parameter
for the returned decoder.
2021-04-27 - 626535f6a1 - lavc 59.0.100 - codec.h
avcodec_find_encoder_by_name(), avcodec_find_encoder(),
avcodec_find_decoder_by_name() and avcodec_find_decoder()
now return a pointer to const AVCodec.
2021-04-27 - 14fa0a4efb - lavf 59.0.100 - avformat.h
Constified AVFormatContext.*_codec.
2021-04-27 - 56450a0ee4 - lavf 59.0.100 - avformat.h
Constified the pointers to AVInputFormats and AVOutputFormats
in AVFormatContext, avformat_alloc_output_context2(),
av_find_input_format(), av_probe_input_format(),
av_probe_input_format2(), av_probe_input_format3(),
av_probe_input_buffer2(), av_probe_input_buffer(),
avformat_open_input(), av_guess_format() and av_guess_codec().
Furthermore, constified the AVProbeData in av_probe_input_format(),
av_probe_input_format2() and av_probe_input_format3().
2021-04-19 - 18af1ea8d1 - lavu 56.74.100 - tx.h
Add AV_TX_FULL_IMDCT and AV_TX_UNALIGNED.
2021-04-17 - f1bf465aa0 - lavu 56.73.100 - frame.h detection_bbox.h
Add AV_FRAME_DATA_DETECTION_BBOXES
2021-04-06 - 557953a397 - lavf 58.78.100 - avformat.h
Add avformat_index_get_entries_count(), avformat_index_get_entry(),
and avformat_index_get_entry_from_timestamp().
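A minimal sketch of walking a stream's index with the new accessors; st is assumed to come from an already opened AVFormatContext:

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavformat/avformat.h>

    static void dump_index(AVStream *st)
    {
        int i, n = avformat_index_get_entries_count(st);
        for (i = 0; i < n; i++) {
            const AVIndexEntry *e = avformat_index_get_entry(st, i);
            printf("entry %d: timestamp=%"PRId64" pos=%"PRId64"\n", i, e->timestamp, e->pos);
        }
    }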
2021-03-21 - a77beea6c8 - lavu 56.72.100 - frame.h
Deprecated av_get_colorspace_name().
Use av_color_space_name() instead.
-------- 8< --------- FFmpeg 4.4 was cut here -------- 8< ---------
2021-03-19 - e8c0bca6bd - lavu 56.69.100 - adler32.h
@@ -1982,7 +1410,7 @@ API changes, most recent first:
2014-04-15 - ef818d8 - lavf 55.37.101 - avformat.h
Add av_format_inject_global_side_data()
2014-04-12 - 4f698be8f - lavu 52.76.100 - log.h
2014-04-12 - 4f698be - lavu 52.76.100 - log.h
Add av_log_get_flags()
2014-04-11 - 6db42a2b - lavd 55.12.100 - avdevice.h

View File

@@ -38,7 +38,7 @@ PROJECT_NAME = FFmpeg
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 6.1.4
PROJECT_NUMBER = 4.4
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -1980,7 +1980,6 @@ PREDEFINED = __attribute__(x)= \
av_alloc_size(...)= \
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
AV_GCC_VERSION_AT_MOST(x,y)=0 \
"FF_PAD_STRUCTURE(name,size,...)=typedef struct name { __VA_ARGS__ } name;" \
__GNUC__
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this

View File

@@ -19,7 +19,6 @@ MANPAGES3 = $(LIBRARIES-yes:%=doc/%.3)
MANPAGES = $(MANPAGES1) $(MANPAGES3)
PODPAGES = $(AVPROGS-yes:%=doc/%.pod) $(AVPROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
doc/community.html \
doc/developer.html \
doc/faq.html \
doc/fate.html \
@@ -28,9 +27,6 @@ HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMP
doc/mailing-list-faq.html \
doc/nut.html \
doc/platform.html \
$(SRC_PATH)/doc/bootstrap.min.css \
$(SRC_PATH)/doc/style.min.css \
$(SRC_PATH)/doc/default.css \
TXTPAGES = doc/fate.txt \
@@ -106,7 +102,7 @@ DOXY_INPUT_DEPS = $(addprefix $(SRC_PATH)/, $(DOXY_INPUT)) ffbuild/config.mak
doc/doxy/html: TAG = DOXY
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT_DEPS)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $$PWD/doc/doxy $(SRC_PATH) doc/Doxyfile $(DOXYGEN) $(DOXY_INPUT);
$(M)OUT_DIR=$$PWD/doc/doxy; cd $(SRC_PATH); ./doc/doxy-wrapper.sh $$OUT_DIR $< $(DOXYGEN) $(DOXY_INPUT);
install-doc: install-html install-man

View File

@@ -3,9 +3,9 @@
The FFmpeg developers.
For details about the authorship, see the Git history of the project
(https://git.ffmpeg.org/ffmpeg), e.g. by typing the command
(git://source.ffmpeg.org/ffmpeg), e.g. by typing the command
@command{git log} in the FFmpeg source directory, or browsing the
online repository at @url{https://git.ffmpeg.org/ffmpeg}.
online repository at @url{http://source.ffmpeg.org}.
Maintainers for the specific components are listed in the file
@file{MAINTAINERS} in the source code tree.

View File

@@ -81,7 +81,7 @@ Top-left position.
@end table
@item tick_rate
Set the tick rate (@emph{time_scale / num_units_in_display_tick}) in
Set the tick rate (@emph{num_units_in_display_tick / time_scale}) in
the timing info in the sequence header.
@item num_ticks_per_picture
Set the number of ticks in each picture, to indicate that the stream
@@ -132,36 +132,6 @@ the header stored in extradata to the key packets:
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
@end example
@section dv_error_marker
Blocks in DV which are marked as damaged are replaced by blocks of the specified color.
@table @option
@item color
The color to replace damaged blocks by
@item sta
A 16 bit mask which specifies which of the 16 possible error status values are
to be replaced by colored blocks. 0xFFFE is the default which replaces all non 0
error status values.
@table @samp
@item ok
No error, no concealment
@item err
Error, No concealment
@item res
Reserved
@item notok
Error or concealment
@item notres
Not reserved
@item Aa, Ba, Ca, Ab, Bb, Cb, A, B, C, a, b, erri, erru
The specific error status code
@end table
see page 44-46 or section 5.5 of
@url{http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf}
@end table
@section eac3_core
Extract the core from a E-AC-3 stream, dropping extra channels.
@@ -247,16 +217,12 @@ Modify metadata embedded in an H.264 stream.
Insert or remove AUD NAL units in all access units of the stream.
@table @samp
@item pass
@item insert
@item remove
@end table
Default is pass.
@item sample_aspect_ratio
Set the sample aspect ratio of the stream in the VUI parameters.
See H.264 table E-1.
@item overscan_appropriate_flag
Set whether the stream is suitable for display using overscan
@@ -278,7 +244,7 @@ Set the chroma sample location in the stream (see H.264 section
E.2.1 and figure E-1).
@item tick_rate
Set the tick rate (time_scale / num_units_in_tick) in the VUI
Set the tick rate (num_units_in_tick / time_scale) in the VUI
parameters. This is the smallest time unit representable in the
stream, and in many cases represents the field rate of the stream
(double the frame rate).
@@ -287,11 +253,6 @@ Set whether the stream has fixed framerate - typically this indicates
that the framerate is exactly half the tick rate, but the exact
meaning is dependent on interlacing and the picture structure (see
H.264 section E.2.1 and table E-6).
@item zero_new_constraint_set_flags
Zero constraint_set4_flag and constraint_set5_flag in the SPS. These
bits were reserved in a previous version of the H.264 spec, and thus
some hardware decoders require these to be zero. The result of zeroing
this is still a valid bitstream.
@item crop_left
@item crop_right
@@ -315,37 +276,6 @@ insert the string ``hello'' associated with the given UUID.
@item delete_filler
Deletes both filler NAL units and filler SEI messages.
@item display_orientation
Insert, extract or remove Display orientation SEI messages.
See H.264 section D.1.27 and D.2.27 for syntax and semantics.
@table @samp
@item pass
@item insert
@item remove
@item extract
@end table
Default is pass.
Insert mode works in conjunction with @code{rotate} and @code{flip} options.
Any pre-existing Display orientation messages will be removed in insert or remove mode.
Extract mode attaches the display matrix to the packet as side data.
@item rotate
Set rotation in display orientation SEI (anticlockwise angle in degrees).
Range is -360 to +360. Default is NaN.
@item flip
Set flip in display orientation SEI.
@table @samp
@item horizontal
@item vertical
@end table
Default is unset.
@item level
Set the level in the SPS. Refer to H.264 section A.3 and tables A-1
to A-5.
@@ -382,6 +312,9 @@ This applies a specific fixup to some Blu-ray streams which contain
redundant PPSs modifying irrelevant parameters of the stream which
confuse other transformations which require correct extradata.
A new single global PPS is created, and all of the redundant PPSs
within the stream are removed.
@section hevc_metadata
Modify metadata embedded in an HEVC stream.
@@ -414,8 +347,8 @@ Set the chroma sample location in the stream (see H.265 section
E.3.1 and figure E.1).
@item tick_rate
Set the tick rate in the VPS and VUI parameters (time_scale /
num_units_in_tick). Combined with @option{num_ticks_poc_diff_one}, this can
Set the tick rate in the VPS and VUI parameters (num_units_in_tick /
time_scale). Combined with @option{num_ticks_poc_diff_one}, this can
set a constant framerate in the stream. Note that it is likely to be
overridden by container parameters when the stream is in a container.
@@ -596,67 +529,20 @@ container. Can be used for fuzzing or testing error resilience/concealment.
Parameters:
@table @option
@item amount
Accepts an expression whose evaluation per-packet determines how often bytes in that
packet will be modified. A value below 0 will result in a variable frequency.
Default is 0 which results in no modification. However, if neither amount nor drop is specified,
amount will be set to @var{-1}. See below for accepted variables.
@item drop
Accepts an expression evaluated per-packet whose value determines whether that packet is dropped.
Evaluation to a positive value results in the packet being dropped. Evaluation to a negative
value results in a variable chance of it being dropped, roughly inverse in proportion to the magnitude
of the value. Default is 0 which results in no drops. See below for accepted variables.
A numeral string, whose value is related to how often output bytes will
be modified. Therefore, values below or equal to 0 are forbidden, and
the lower the more frequent bytes will be modified, with 1 meaning
every byte is modified.
@item dropamount
Accepts a non-negative integer, which assigns a variable chance of it being dropped, roughly inverse
in proportion to the value. Default is 0 which results in no drops. This option is kept for backwards
compatibility and is equivalent to setting drop to a negative value with the same magnitude
i.e. @code{dropamount=4} is the same as @code{drop=-4}. Ignored if drop is also specified.
A numeral string, whose value is related to how often packets will be dropped.
Therefore, values below or equal to 0 are forbidden, and the lower the more
frequent packets will be dropped, with 1 meaning every packet is dropped.
@end table
Both @code{amount} and @code{drop} accept expressions containing the following variables:
@table @samp
@item n
The index of the packet, starting from zero.
@item tb
The timebase for packet timestamps.
@item pts
Packet presentation timestamp.
@item dts
Packet decoding timestamp.
@item nopts
Constant representing AV_NOPTS_VALUE.
@item startpts
First non-AV_NOPTS_VALUE PTS seen in the stream.
@item startdts
First non-AV_NOPTS_VALUE DTS seen in the stream.
@item duration
@itemx d
Packet duration, in timebase units.
@item pos
Packet position in input; may be -1 when unknown or not set.
@item size
Packet size, in bytes.
@item key
Whether packet is marked as a keyframe.
@item state
A pseudo random integer, primarily derived from the content of packet payload.
@end table
@subsection Examples
Apply modification to every byte but don't drop any packets.
The following example applies the modification to every byte but does not drop
any packets.
@example
ffmpeg -i INPUT -c copy -bsf noise=1 output.mkv
@end example
Drop every video packet not marked as a keyframe after timestamp 30s but do not
modify any of the remaining packets.
@example
ffmpeg -i INPUT -c copy -bsf:v noise=drop='gt(t\,30)*not(key)' output.mkv
@end example
Drop one second of audio every 10 seconds and add some random noise to the rest.
@example
ffmpeg -i INPUT -c copy -bsf:a noise=amount=-1:drop='between(mod(t\,10)\,9\,10)' output.mkv
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@end example
@section null
@@ -692,14 +578,6 @@ for NTSC frame rate using the @option{frame_rate} option.
ffmpeg -f lavfi -i sine=r=48000:d=1 -c pcm_s16le -bsf pcm_rechunk=r=30000/1001 -f framecrc -
@end example
@section pgs_frame_merge
Merge a sequence of PGS Subtitle segments ending with an "end of display set"
segment into a single packet.
This is required by some containers that support PGS subtitles
(muxer @code{matroska}).
@section prores_metadata
Modify color property metadata embedded in prores stream.
@@ -806,10 +684,6 @@ It accepts the following parameters:
@item pts
@item dts
Set expressions for PTS, DTS or both.
@item duration
Set expression for duration.
@item time_base
Set output time base.
@end table
The expressions are evaluated through the eval API and can contain the following
@@ -833,9 +707,6 @@ The demux timestamp in input.
@item PTS
The presentation timestamp in input.
@item DURATION
The duration in input.
@item STARTDTS
The DTS of the first packet.
@@ -848,38 +719,17 @@ The previous input DTS.
@item PREV_INPTS
The previous input PTS.
@item PREV_INDURATION
The previous input duration.
@item PREV_OUTDTS
The previous output DTS.
@item PREV_OUTPTS
The previous output PTS.
@item PREV_OUTDURATION
The previous output duration.
@item NEXT_DTS
The next input DTS.
@item NEXT_PTS
The next input PTS.
@item NEXT_DURATION
The next input duration.
@item TB
The timebase of stream packet belongs.
@item TB_OUT
The output timebase.
@item SR
The sample rate of stream packet belongs.
@item NOPTS
The AV_NOPTS_VALUE constant.
@end table
@anchor{text2movsub}

File diff suppressed because one or more lines are too long

View File

@@ -144,6 +144,21 @@ Default value is 0.
@item b_qfactor @var{float} (@emph{encoding,video})
Set qp factor between P and B frames.
@item b_strategy @var{integer} (@emph{encoding,video})
Set strategy to choose between I/P/B-frames.
@item ps @var{integer} (@emph{encoding,video})
Set RTP payload size in bytes.
@item mv_bits @var{integer}
@item header_bits @var{integer}
@item i_tex_bits @var{integer}
@item p_tex_bits @var{integer}
@item i_count @var{integer}
@item p_count @var{integer}
@item skip_count @var{integer}
@item misc_bits @var{integer}
@item frame_bits @var{integer}
@item codec_tag @var{integer}
@item bug @var{flags} (@emph{decoding,video})
Workaround not auto detected encoder bugs.
@@ -233,6 +248,9 @@ consider things that a sane encoder should not do as an error
@item block_align @var{integer}
@item mpeg_quant @var{integer} (@emph{encoding,video})
Use MPEG quantizers instead of H.263.
@item rc_override_count @var{integer}
@item maxrate @var{integer} (@emph{encoding,audio,video})
@@ -338,6 +356,19 @@ favor predicting from the previous frame instead of the current
@item bits_per_coded_sample @var{integer}
@item pred @var{integer} (@emph{encoding,video})
Set prediction method.
Possible values:
@table @samp
@item left
@item plane
@item median
@end table
@item aspect @var{rational number} (@emph{encoding,video})
Set sample aspect ratio.
@@ -554,6 +585,9 @@ sab diamond motion estimation
@item last_pred @var{integer} (@emph{encoding,video})
Set amount of motion predictors from the previous frame.
@item preme @var{integer} (@emph{encoding,video})
Set pre motion estimation.
@item precmp @var{integer} (@emph{encoding,video})
Set pre motion estimation compare function.
@@ -602,6 +636,23 @@ Set limit motion vectors range (1023 for DivX player).
@item global_quality @var{integer} (@emph{encoding,audio,video})
@item coder @var{integer} (@emph{encoding,video})
Possible values:
@table @samp
@item vlc
variable length coder / huffman coder
@item ac
arithmetic coder
@item raw
raw (no encoding)
@item rle
run-length coder
@end table
@item context @var{integer} (@emph{encoding,video})
Set context model.
@item slice_flags @var{integer}
@item mbd @var{integer} (@emph{encoding,video})
@@ -617,6 +668,12 @@ use fewest bits
use best rate distortion
@end table
@item sc_threshold @var{integer} (@emph{encoding,video})
Set scene change threshold.
@item nr @var{integer} (@emph{encoding,video})
Set noise reduction.
@item rc_init_occupancy @var{integer} (@emph{encoding,video})
Set number of bits which should be loaded into the rc buffer before
decoding starts.
@@ -644,8 +701,6 @@ for codecs that support it. See also @file{doc/examples/export_mvs.c}.
Do not skip samples and export skip information as frame side data.
@item ass_ro_flush_noop
Do not reset ASS ReadOrder field on flush.
@item icc_profiles
Generate/parse embedded ICC profiles from/to colorimetry tags.
@end table
@item export_side_data @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
@@ -697,24 +752,73 @@ profiles are documented in the relevant encoder documentation.
@item level @var{integer} (@emph{encoding,audio,video})
Set the encoder level. This level depends on the specific codec, and
might correspond to the profile level. It is set by default to
@samp{unknown}.
Possible values:
@table @samp
@item unknown
@end table
@item lowres @var{integer} (@emph{decoding,audio,video})
Decode at 1= 1/2, 2=1/4, 3=1/8 resolutions.
@item skip_threshold @var{integer} (@emph{encoding,video})
Set frame skip threshold.
@item skip_factor @var{integer} (@emph{encoding,video})
Set frame skip factor.
@item skip_exp @var{integer} (@emph{encoding,video})
Set frame skip exponent.
Negative values behave identical to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.
@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
Possible values:
@table @samp
@item sad
sum of absolute differences, fast (default)
@item sse
sum of squared errors
@item satd
sum of absolute Hadamard transformed differences
@item dct
sum of absolute DCT transformed differences
@item psnr
sum of squared quantization errors (avoid, low quality)
@item bit
number of bits needed for the block
@item rd
rate distortion optimal, slow
@item zero
0
@item vsad
sum of absolute vertical differences
@item vsse
sum of squared vertical differences
@item nsse
noise preserving sum of squared differences
@item w53
5/3 wavelet, only used in snow
@item w97
9/7 wavelet, only used in snow
@item dctmax
@item chroma
@end table
@item mblmin @var{integer} (@emph{encoding,video})
Set min macroblock lagrange factor (VBR).
@item mblmax @var{integer} (@emph{encoding,video})
Set max macroblock lagrange factor (VBR).
@item mepc @var{integer} (@emph{encoding,video})
Set motion estimation bitrate penalty compensation (1.0 = 256).
@item skip_loop_filter @var{integer} (@emph{decoding,video})
@item skip_idct @var{integer} (@emph{decoding,video})
@item skip_frame @var{integer} (@emph{decoding,video})
@@ -754,17 +858,31 @@ Default value is @samp{default}.
@item bidir_refine @var{integer} (@emph{encoding,video})
Refine the two motion vectors used in bidirectional macroblocks.
@item brd_scale @var{integer} (@emph{encoding,video})
Downscale frames for dynamic B-frame decision.
@item keyint_min @var{integer} (@emph{encoding,video})
Set minimum interval between IDR-frames.
@item refs @var{integer} (@emph{encoding,video})
Set reference frames to consider for motion compensation.
@item chromaoffset @var{integer} (@emph{encoding,video})
Set chroma qp offset from luma.
@item trellis @var{integer} (@emph{encoding,audio,video})
Set rate-distortion optimal quantization.
@item mv0_threshold @var{integer} (@emph{encoding,video})
@item b_sensitivity @var{integer} (@emph{encoding,video})
Adjust sensitivity of b_frame_strategy 1.
@item compression_level @var{integer} (@emph{encoding,audio,video})
@item min_prediction_order @var{integer} (@emph{encoding,audio})
@item max_prediction_order @var{integer} (@emph{encoding,audio})
@item timecode_frame_start @var{integer} (@emph{encoding,video})
Set GOP timecode frame start number, in non drop frame format.
@item bits_per_raw_sample @var{integer}
@item channel_layout @var{integer} (@emph{decoding/encoding,audio})
@@ -778,6 +896,7 @@ Possible values:
@end table
@item rc_max_vbv_use @var{float} (@emph{encoding,video})
@item rc_min_vbv_use @var{float} (@emph{encoding,video})
@item ticks_per_frame @var{integer} (@emph{decoding/encoding,audio,video})
@item color_primaries @var{integer} (@emph{decoding/encoding,video})
Possible values:
@@ -892,11 +1011,9 @@ Possible values:
@table @samp
@item tv
@item mpeg
@item limited
MPEG (219*2^(n-8))
@item pc
@item jpeg
@item full
JPEG (2^n-1)
@end table

View File

@@ -1,175 +0,0 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Community
@titlepage
@center @titlefont{Community}
@end titlepage
@top
@contents
@anchor{Organisation}
@chapter Organisation
The FFmpeg project is organized through a community working on global consensus.
Decisions are taken by the ensemble of active members, through voting and are aided by two committees.
@anchor{General Assembly}
@chapter General Assembly
The ensemble of active members is called the General Assembly (GA).
The General Assembly is sovereign and legitimate for all its decisions regarding the FFmpeg project.
The General Assembly is made up of active contributors.
Contributors are considered "active contributors" if they have pushed more than 20 patches in the last 36 months in the main FFmpeg repository, or if they have been voted in by the GA.
Additional members are added to the General Assembly through a vote after proposal by a member of the General Assembly. They are part of the GA for two years, after which they need a confirmation by the GA.
A script to generate the current members of the general assembly (minus members voted in) can be found in `tools/general_assembly.pl`.
@anchor{Voting}
@chapter Voting
Voting is done using a ranked voting system, currently running on https://vote.ffmpeg.org/ .
Majority vote means more than 50% of the expressed ballots.
@anchor{Technical Committee}
@chapter Technical Committee
The Technical Committee (TC) is here to arbitrate and make decisions when technical conflicts occur in the project. They will consider the merits of all the positions, judge them and make a decision.
The TC resolves technical conflicts but is not a technical steering committee.
Decisions by the TC are binding for all the contributors.
Decisions made by the TC can be re-opened after 1 year or by a majority vote of the General Assembly, requested by one of the member of the GA.
The TC is elected by the General Assembly for a duration of 1 year, and is composed of 5 members. Members can be re-elected if they wish. A majority vote in the General Assembly can trigger a new election of the TC.
The members of the TC can be elected from outside of the GA. Candidates for election can either be suggested or self-nominated.
The conflict resolution process is detailed in the resolution process document.
The TC can be contacted at <tc@@ffmpeg>.
@anchor{Resolution Process}
@section Resolution Process
The Technical Committee (TC) is here to arbitrate and make decisions when technical conflicts occur in the project.
The TC's main role is to resolve technical conflicts. It is therefore not a technical steering committee, but it is understood that some decisions might impact the future of the project.
@subsection Seizing
The TC can take possession of any technical matter that it sees fit.
To involve the TC in a matter, email tc@ or CC them on an ongoing discussion.
As members of TC are developers, they also can email tc@ to raise an issue.
@subsection Announcement
The TC, once seized, must announce itself on the main mailing list, with a [TC] tag.
The TC has 2 modes of operation: an RFC one and an internal one.
If the TC thinks it needs input from the larger community, the TC can call for an RFC. Otherwise, it can decide by itself.
If the disagreement involves a member of the TC, that member should recuse themselves from the decision.
The decision to use a RFC process or an internal discussion is a discretionary decision of the TC.
The TC can also reject a seizure for a few reasons such as: the matter was not discussed enough previously; it lacks expertise to reach a beneficial decision on the matter; or the matter is too trivial.
@subsection RFC call
In the RFC mode, one person from the TC posts the technical question on the mailing list and requests input from the community.
The mail will have the following specification:
a precise title
a specific tag [TC RFC]
a top-level email
contain a precise question that does not exceed 100 words and that is answerable by developers
may have an extra description, or a link to a previous discussion, if deemed necessary,
contain a precise end date for the answers.
The answers from the community must be on the main mailing list and must have the following specification:
keep the tag and the title unchanged
limited to 400 words
a first-level reply, answering directly to the main email
answering the question.
Further replies to answers are permitted, as long as they conform to the community standards of politeness, they are limited to 100 words, and are not nested more than once. (max-depth=2)
After the end-date, mails on the thread will be ignored.
Violations of those rules will be escalated through the Community Committee.
After all the emails are in, the TC has 96 hours to give its final decision. Exceptionally, the TC can request an extra delay, which will be announced on the mailing list.
@subsection Within TC
In the internal case, the TC has 96 hours to give its final decision. Exceptionally, the TC can request an extra delay.
@subsection Decisions
The decisions from the TC will be sent on the mailing list, with the [TC] tag.
Internally, the TC should take decisions with a majority, or using ranked-choice voting.
The decision from the TC should be published with a summary of the reasons that led to this decision.
The decisions from the TC are final, until the matters are reopened after no less than one year.
@anchor{Community Committee}
@chapter Community Committee
The Community Committee (CC) is here to arbitrate and make decisions when inter-personal conflicts occur in the project. It will decide quickly and take action, for the sake of the project.
The CC can remove privileges of offending members, including removal of commit access and temporary ban from the community.
Decisions made by the CC can be re-opened after 1 year or by a majority vote of the General Assembly. Indefinite bans from the community must be confirmed by the General Assembly, in a majority vote.
The CC is elected by the General Assembly for a duration of 1 year, and is composed of 5 members. Members can be re-elected if they wish. A majority vote in the General Assembly can trigger a new election of the CC.
The members of the CC can be elected from outside of the GA. Candidates for election can either be suggested or self-nominated.
The CC is governed by and responsible for enforcing the Code of Conduct.
The CC can be contacted at <cc@@ffmpeg>.
@anchor{Code of Conduct}
@chapter Code of Conduct
Be friendly and respectful towards others and third parties.
Treat others the way you yourself want to be treated.
Be considerate. Not everyone shares the same viewpoint and priorities as you do.
Different opinions and interpretations help the project.
Looking at issues from a different perspective assists development.
Do not assume malice for things that can be attributed to incompetence. Even if
it is malice, it's rarely good to start with that as the initial assumption.
Stay friendly even if someone acts contrarily. Everyone has a bad day
once in a while.
If you yourself have a bad day or are angry then try to take a break and reply
once you are calm and without anger if you have to.
Try to help other team members and cooperate if you can.
The goal of software development is to create technical excellence, not for any
individual to be better and "win" against the others. Large software projects
are only possible and successful through teamwork.
If someone struggles do not put them down. Give them a helping hand
instead and point them in the right direction.
Finally, keep in mind the immortal words of Bill and Ted,
"Be excellent to each other."
@bye


@@ -76,23 +76,13 @@ The following options are supported by the libdav1d wrapper.
@item framethreads
Set the number of frame threads to use during decoding. The default value is 0 (autodetect).
This option is deprecated for libdav1d >= 1.0 and will be removed in the future. Use the
option @code{max_frame_delay} and the global option @code{threads} instead.
@item tilethreads
Set the number of tile threads to use during decoding. The default value is 0 (autodetect).
This option is deprecated for libdav1d >= 1.0 and will be removed in the future. Use the
global option @code{threads} instead.
@item max_frame_delay
Set the maximum number of frames the decoder may buffer internally. The default value is 0
(autodetect).
@item filmgrain
Apply film grain to the decoded video if present in the bitstream. Defaults to the
internal default of the library.
This option is deprecated and will be removed in the future. See the global option
@code{export_side_data} to export Film Grain parameters instead of applying it.
@item oppoint
Select an operating point of a scalable AV1 bitstream (0 - 31). Defaults to the
@@ -130,63 +120,6 @@ Set amount of frame threads to use during decoding. The default value is 0 (auto
@end table
@section QSV Decoders
The family of Intel QuickSync Video decoders (VC1, MPEG-2, H.264, HEVC,
JPEG/MJPEG, VP8, VP9, AV1).
@subsection Common Options
The following options are supported by all qsv decoders.
@table @option
@item @var{async_depth}
Internal parallelization depth; the higher the value, the higher the latency.
@item @var{gpu_copy}
A GPU-accelerated copy between video and system memory
@table @samp
@item default
@item on
@item off
@end table
@end table
@subsection HEVC Options
Extra options for hevc_qsv.
@table @option
@item @var{load_plugin}
A user plugin to load in an internal session
@table @samp
@item none
@item hevc_sw
@item hevc_hw
@end table
@item @var{load_plugins}
A :-separated list of hexadecimal plugin UIDs to load in an internal session
@end table
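As an illustration of the options above, a @code{hevc_qsv} invocation could look like the following; the input file name and option values are placeholders, not a recommendation:
@example
ffmpeg -c:v hevc_qsv -load_plugin hevc_hw -gpu_copy on -i input.mp4 -f null -
@end example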
@section v210
Uncompressed 4:2:2 10-bit decoder.
@subsection Options
@table @option
@item custom_stride
Set the line size of the v210 data in bytes. The default value is 0
(autodetect). You can use the special -1 value for a strideless v210 as seen in
BOXX files.
@end table
@c man end VIDEO DECODERS
@chapter Audio Decoders
@@ -353,169 +286,6 @@ Enabled by default.
@end table
@section libaribcaption
Yet another ARIB STD-B24 caption decoder using external @dfn{libaribcaption}
library.
Implements profiles A and C of the Japanese ARIB STD-B24 standard, the Brazilian
ABNT NBR 15606-1 standard, and the Philippine version of ISDB-T.
Requires the presence of the libaribcaption headers and library
(@url{https://github.com/xqq/libaribcaption}) during configuration.
You need to explicitly configure the build with @code{--enable-libaribcaption}.
If both @dfn{libaribb24} and @dfn{libaribcaption} are enabled, the @dfn{libaribcaption}
decoder takes precedence.
@subsection libaribcaption Decoder Options
@table @option
@item -sub_type @var{subtitle_type}
Specifies the format of the decoded subtitles.
@table @samp
@item bitmap
Graphical image.
@item ass
ASS formatted text.
@item text
Simple text based output without formatting.
@end table
The default is @dfn{ass}, the same as the @dfn{libaribb24} decoder.
Some players (e.g., @dfn{mpv}) expect the ASS format for ARIB captions.
@item -caption_encoding @var{encoding_scheme}
Specifies the encoding scheme of input subtitle text.
@table @samp
@item auto
Automatically detect text encoding (default).
@item jis
8bit-char JIS encoding defined in ARIB STD B24.
This encoding is used in Japan for ISDB captions.
@item utf8
UTF-8 encoding defined in ARIB STD B24.
This encoding is used in the Philippines for ISDB-T captions.
@item latin
Latin character encoding defined in ABNT NBR 15606-1.
This encoding is used in South America for SBTVD / ISDB-Tb captions.
@end table
@item -font @var{font_name[,font_name2,...]}
Specify comma-separated list of font family names to be used for @dfn{bitmap}
or @dfn{ass} type subtitle rendering.
Only the first font name is used for @dfn{ass} type subtitles.
If not specified, the internally defined default font family is used.
@item -ass_single_rect @var{boolean}
ARIB STD-B24 specifies that some captions may be displayed at different
positions at the same time (multi-rectangle subtitles).
Since some players (e.g., old @dfn{mpv}) can't handle multiple ASS rectangles
in a single AVSubtitle, or multiple ASS rectangles of indeterminate duration
with the same start timestamp, this option can change the behavior so that
all the texts are displayed in a single ASS rectangle.
The default is @var{false}.
If your player cannot handle AVSubtitles with multiple ASS rectangles properly,
set this option to @var{true} or define @env{ASS_SINGLE_RECT=1} to change
default behavior at compilation.
@item -force_outline_text @var{boolean}
Specify whether to always render outline text for all characters regardless of
the indication by character style.
The default is @var{false}.
@item -outline_width @var{number} (0.0 - 3.0)
Specify width for outline text, in dots (relative).
The default is @var{1.5}.
@item -ignore_background @var{boolean}
Specify whether to ignore background color rendering.
The default is @var{false}.
@item -ignore_ruby @var{boolean}
Specify whether to ignore rendering for ruby-like (furigana) characters.
The default is @var{false}.
@item -replace_drcs @var{boolean}
Specify whether to render replaced DRCS characters as Unicode characters.
The default is @var{true}.
@item -replace_msz_ascii @var{boolean}
Specify whether to replace MSZ (Middle Size; half width) fullwidth
alphanumerics with halfwidth alphanumerics.
The default is @var{true}.
@item -replace_msz_japanese @var{boolean}
Specify whether to replace some MSZ (Middle Size; half width) fullwidth
Japanese special characters with halfwidth ones.
The default is @var{true}.
@item -replace_msz_glyph @var{boolean}
Specify whether to replace MSZ (Middle Size; half width) characters
with halfwidth glyphs if the font supports it.
This option works under the FreeType or DirectWrite renderer
with Adobe-Japan1 compliant fonts.
e.g., IBM Plex Sans JP, Morisawa BIZ UDGothic, Morisawa BIZ UDMincho,
Yu Gothic, Yu Mincho, and Meiryo.
The default is @var{true}.
@item -canvas_size @var{image_size}
Specify the resolution of the canvas to render subtitles to; usually, this
should be frame size of input video.
This only applies when @code{-subtitle_type} is set to @var{bitmap}.
The libaribcaption decoder assumes the following input frame sizes for bitmap rendering:
@enumerate
@item
PROFILE_A : 1440 x 1080 with SAR (PAR) 4:3
@item
PROFILE_C : 320 x 180 with SAR (PAR) 1:1
@end enumerate
If the actual frame size of the input video does not match the above assumption,
the rendered captions may be distorted.
To render the captions undistorted, add the @code{-canvas_size} option to specify
the actual input video size.
Note that the @code{-canvas_size} option is not required for video with a
different size but the same aspect ratio.
In such cases, the caption will be stretched or shrunk to the actual video size
if the @code{-canvas_size} option is not specified.
If the @code{-canvas_size} option is set to a different size, the caption will be
stretched or shrunk to the specified size with a correspondingly calculated SAR.
@end table
@subsection libaribcaption decoder usage examples
Display an MPEG-TS file with ARIB subtitles using the @code{ffplay} tool:
@example
ffplay -sub_type bitmap MPEG.TS
@end example
Display an MPEG-TS file with an input frame size of 1920x1080 using the @code{ffplay} tool:
@example
ffplay -sub_type bitmap -canvas_size 1920x1080 MPEG.TS
@end example
Embed ARIB subtitles in transcoded video:
@example
ffmpeg -sub_type bitmap -i src.m2t -filter_complex "[0:v][0:s]overlay" -vcodec h264 dest.mp4
@end example
@section dvbsub
@subsection Options
@@ -523,8 +293,6 @@ ffmpeg -sub_type bitmap -i src.m2t -filter_complex "[0:v][0:s]overlay" -vcodec h
@table @option
@item compute_clut
@table @option
@item -2
Compute clut once if no matching CLUT is in the stream.
@item -1
Compute clut if no matching CLUT is in the stream.
@item 0


@@ -25,13 +25,6 @@ Audible Format 2, 3, and 4 demuxer.
This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.
@section aac
Raw Audio Data Transport Stream AAC demuxer.
This demuxer is used to demux an ADTS input containing a single AAC stream
along with any ID3v1/2 or APE tags in it.
@section apng
Animated Portable Network Graphics demuxer.
@@ -44,15 +37,12 @@ between the last fcTL and IEND chunks.
@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set. Default is enabled.
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second. Default of 0 imposes no limit.
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible). Default is 15.
(0 meaning as fast as possible).
@end table
@section asf
@@ -103,7 +93,8 @@ backslash or single quotes.
All subsequent file-related directives apply to that file.
@item @code{ffconcat version 1.0}
Identify the script type and version.
Identify the script type and version. It also sets the @option{safe} option
to 1 if it was -1.
To make FFmpeg recognize the format automatically, this directive must
appear exactly as is (no extra space or byte-order-mark) on the very first
@@ -157,16 +148,6 @@ directive) will be reduced based on their specified Out point.
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.
This directive is deprecated, use @code{file_packet_meta} instead.
@item @code{file_packet_meta @var{key} @var{value}}
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.
@item @code{option @var{key} @var{value}}
Option to access, open and probe the file.
Can be present multiple times.
@item @code{stream}
Introduce a stream in the virtual file.
@@ -184,20 +165,6 @@ subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
@item @code{stream_meta @var{key} @var{value}}
Metadata for the stream.
Can be present multiple times.
@item @code{stream_codec @var{value}}
Codec for the stream.
@item @code{stream_extradata @var{hex_string}}
Extradata for the stream, encoded in hexadecimal.
@item @code{chapter @var{id} @var{start} @var{end}}
Add a chapter. @var{id} is a unique identifier, possibly small and
consecutive.
@end table
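To illustrate the directives described above, a minimal, purely illustrative ffconcat script could look like this; the file names and metadata values are placeholders:
@example
ffconcat version 1.0

file intro.mkv
file_packet_meta title Intro

file main.mkv
file_packet_meta title Main
@end example
It could then be remuxed with, for example, @code{ffmpeg -f concat -safe 1 -i list.ffconcat -c copy out.mkv}.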
@subsection Options
@@ -207,8 +174,7 @@ This demuxer accepts the following option:
@table @option
@item safe
If set to 1, reject unsafe file paths and directives.
A file path is considered safe if it
If set to 1, reject unsafe file paths. A file path is considered safe if it
does not contain a protocol specification and is relative and all components
only contain characters from the portable character set (letters, digits,
period, underscore and hyphen) and have no period at the beginning of a
@@ -218,6 +184,9 @@ If set to 0, any file name is accepted.
The default is 1.
-1 is equivalent to 1 if the format was automatically
probed and 0 otherwise.
@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
@@ -274,55 +243,11 @@ which streams to actually receive.
Each stream mirrors the @code{id} and @code{bandwidth} properties from the
@code{<Representation>} as metadata keys named "id" and "variant_bitrate" respectively.
@subsection Options
This demuxer accepts the following option:
@table @option
@item cenc_decryption_key
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
@end table
@section ea
Electronic Arts Multimedia format demuxer.
This format is used by various Electronic Arts games.
@subsection Options
@table @option
@item merge_alpha @var{bool}
Normally the VP6 alpha channel (if it exists) is returned as a secondary video
stream; by setting this option you can make the demuxer return a single video
stream which contains the alpha channel in addition to the ordinary video.
@end table
@section imf
Interoperable Master Format demuxer.
This demuxer presents audio and video streams found in an IMF Composition, as
specified in @url{https://doi.org/10.5594/SMPTE.ST2067-2.2020, SMPTE ST 2067-2}.
@example
ffmpeg [-assetmaps <path of ASSETMAP1>,<path of ASSETMAP2>,...] -i <path of CPL> ...
@end example
If @code{-assetmaps} is not specified, the demuxer looks for a file called
@file{ASSETMAP.xml} in the same directory as the CPL.
@section flv, live_flv, kux
@section flv, live_flv
Adobe Flash Video Format demuxer.
This demuxer is used to demux FLV files and RTMP network streams. In the case of live network streams, if you force the format, you may use the live_flv option instead of flv to survive timestamp discontinuities.
KUX is a flv variant used on the Youku platform.
@example
ffmpeg -f flv -i myfile.flv ...
@@ -399,19 +324,9 @@ It accepts the following options:
@item live_start_index
segment index to start live streams at (negative values are from the end).
@item prefer_x_start
prefer to use #EXT-X-START if it is present in the playlist instead of live_start_index.
@item allowed_extensions
',' separated list of file extensions that hls is allowed to access.
@item extension_picky
This blocks disallowed extensions from probing
It also requires all available segments to have matching extensions to the format
except mpegts, which is always allowed.
It is recommended to set the whitelists correctly instead of depending on extensions
Enabled by default.
@item max_reload
Maximum number of times an insufficient list is attempted to be reloaded.
Default value is 1000.
@@ -431,13 +346,6 @@ Enabled by default for HTTP/1.1 servers.
@item http_seekable
Use HTTP partial requests for downloading HTTP segments.
0 = disable, 1 = enable, -1 = auto, Default is auto.
@item seg_format_options
Set options for the demuxer of media segments using a list of key=value pairs separated by @code{:}.
@item seg_max_retry
Maximum number of times to reload a segment on error, useful when segment skip on network error is not desired.
Default value is 0.
@end table
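As a sketch of how these options are passed to the hls demuxer (the URL and values below are placeholders only):
@example
ffmpeg -live_start_index -3 -seg_max_retry 2 -i https://example.com/live.m3u8 -c copy out.ts
@end example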
@section image2
@@ -753,12 +661,6 @@ Set mfra timestamps as PTS
Don't use mfra box to set timestamps
@end table
@item use_tfdt
For fragmented input, set fragment's starting timestamp to @code{baseMediaDecodeTime} from the @code{tfdt} box.
Default is enabled, which will prefer to use the @code{tfdt} box to set DTS. Disable to use the @code{earliest_presentation_time} from the @code{sidx} box.
In either case, the timestamp from the @code{mfra} box will be used if it's available and @code{use_mfra_for} is
set to pts or dts.
@item export_all
Export unrecognized boxes within the @var{udta} box as metadata entries. The first four
characters of the box type are set as the key. Default is false.
@@ -777,22 +679,6 @@ specify.
@item decryption_key
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
@item max_stts_delta
Very high sample deltas written in a trak's stts box may occasionally be intended, but usually they are written in
error or used to store a negative value for dts correction when treated as signed 32-bit integers. This option lets
the user set an upper limit, beyond which the delta is clamped to 1. Values greater than the limit, if negative when
cast to int32, are used to adjust onward dts.
Unit is the track time scale. Range is 0 to UINT_MAX. Default is @code{UINT_MAX - 48000*10}, which allows up to
a 10 second dts correction for 48 kHz audio streams while accommodating 99.9% of the @code{uint32} range.
@item interleaved_read
Interleave packets from multiple tracks at demuxer level. For badly interleaved files, this prevents playback issues
caused by large gaps between packets in different tracks, as MOV/MP4 do not have packet placement requirements.
However, this can cause excessive seeking on very badly interleaved files, due to seeking between tracks, so disabling
it may prevent I/O issues, at the expense of playback.
@end table
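For instance, decrypting a CENC-encrypted file with the @option{decryption_key} option might look as follows; the key and file names below are placeholders:
@example
ffmpeg -decryption_key 00112233445566778899aabbccddeeff -i encrypted.mp4 -c copy decrypted.mp4
@end example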
@subsection Audible AAX
@@ -833,10 +719,6 @@ disabled). Default value is -1.
@item merge_pmt_versions
Re-use existing streams when a PMT's version is updated and elementary
streams move to different PIDs. Default value is 0.
@item max_packet_size
Set maximum size, in bytes, of packet emitted by the demuxer. Payloads above this size
are split across multiple packets. Range is 1 to INT_MAX/2. Default is 204800 bytes.
@end table
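As an illustration of the options above (file names are placeholders), remuxing a transport stream whose PMT version changes mid-stream could be done with:
@example
ffmpeg -merge_pmt_versions 1 -i input.ts -c copy output.mkv
@end example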
@section mpjpeg


@@ -0,0 +1,79 @@
# FFmpeg project
## Organisation
The FFmpeg project is organized through a community working on global consensus.
Decisions are taken by the ensemble of active members, through voting and
are aided by two committees.
## General Assembly
The ensemble of active members is called the General Assembly (GA).
The General Assembly is sovereign and legitimate for all its decisions
regarding the FFmpeg project.
The General Assembly is made up of active contributors.
Contributors are considered "active contributors" if they have pushed more
than 20 patches in the last 36 months in the main FFmpeg repository, or
if they have been voted in by the GA.
Additional members are added to the General Assembly through a vote after
proposal by a member of the General Assembly.
They are part of the GA for two years, after which they need a confirmation by
the GA.
## Voting
Voting is done using a ranked voting system, currently running on https://vote.ffmpeg.org/ .
Majority vote means more than 50% of the expressed ballots.
## Technical Committee
The Technical Committee (TC) is here to arbitrate and make decisions when
technical conflicts occur in the project.
They will consider the merits of all the positions, judge them and make a
decision.
The TC resolves technical conflicts but is not a technical steering committee.
Decisions by the TC are binding for all the contributors.
Decisions made by the TC can be re-opened after 1 year or by a majority vote
of the General Assembly, requested by one of the members of the GA.
The TC is elected by the General Assembly for a duration of 1 year, and
is composed of 5 members.
Members can be re-elected if they wish. A majority vote in the General Assembly
can trigger a new election of the TC.
The members of the TC can be elected from outside of the GA.
Candidates for election can either be suggested or self-nominated.
The conflict resolution process is detailed in the [resolution process](resolution_process.md) document.
## Community committee
The Community Committee (CC) is here to arbitrate and make decisions when
inter-personal conflicts occur in the project. It will decide quickly and
take action, for the sake of the project.
The CC can remove privileges of offending members, including removal of
commit access and temporary ban from the community.
Decisions made by the CC can be re-opened after 1 year or by a majority vote
of the General Assembly. Indefinite bans from the community must be confirmed
by the General Assembly, in a majority vote.
The CC is elected by the General Assembly for a duration of 1 year, and is
composed of 5 members.
Members can be re-elected if they wish. A majority vote in the General Assembly
can trigger a new election of the CC.
The members of the CC can be elected from outside of the GA.
Candidates for election can either be suggested or self-nominated.
The CC is governed by and responsible for enforcing the Code of Conduct.


@@ -0,0 +1,91 @@
# Technical Committee
_This document only makes sense with the rules from [the community document](community)_.
The Technical Committee (**TC**) is here to arbitrate and make decisions when
technical conflicts occur in the project.
The TC's main role is to resolve technical conflicts.
It is therefore not a technical steering committee, but it is understood that
some decisions might impact the future of the project.
# Process
## Seizing
The TC can take possession of any technical matter that it sees fit.
To involve the TC in a matter, email tc@ or CC them on an ongoing discussion.
As members of TC are developers, they also can email tc@ to raise an issue.
## Announcement
The TC, once seized, must announce itself on the main mailing list, with a _[TC]_ tag.
The TC has 2 modes of operation: an RFC one and an internal one.
If the TC thinks it needs input from the larger community, the TC can call
for an RFC. Otherwise, it can decide by itself.
If the disagreement involves a member of the TC, that member should recuse
themselves from the decision.
The decision to use a RFC process or an internal discussion is a discretionary
decision of the TC.
The TC can also reject a seizure for a few reasons such as:
the matter was not discussed enough previously; it lacks expertise to reach a
beneficial decision on the matter; or the matter is too trivial.
### RFC call
In the RFC mode, one person from the TC posts the technical question on the
mailing list and requests input from the community.
The mail will have the following specification:
* a precise title
* a specific tag [TC RFC]
* a top-level email
* contain a precise question that does not exceed 100 words and that is answerable by developers
* may have an extra description, or a link to a previous discussion, if deemed necessary,
* contain a precise end date for the answers.
The answers from the community must be on the main mailing list and must have
the following specification:
* keep the tag and the title unchanged
* limited to 400 words
* a first-level reply, answering directly to the main email
* answering the question.
Further replies to answers are permitted, as long as they conform to the
community standards of politeness, they are limited to 100 words, and are not
nested more than once. (max-depth=2)
After the end-date, mails on the thread will be ignored.
Violations of those rules will be escalated through the Community Committee.
After all the emails are in, the TC has 96 hours to give its final decision.
Exceptionally, the TC can request an extra delay, which will be announced on the
mailing list.
### Within TC
In the internal case, the TC has 96 hours to give its final decision.
Exceptionally, the TC can request an extra delay.
## Decisions
The decisions from the TC will be sent on the mailing list, with the _[TC]_ tag.
Internally, the TC should take decisions with a majority, or using
ranked-choice voting.
The decision from the TC should be published with a summary of the reasons that
led to this decision.
The decisions from the TC are final, until the matters are reopened after
no less than one year.


@@ -10,115 +10,41 @@
@contents
@chapter Introduction
@chapter Notes for external developers
This text is concerned with the development @emph{of} FFmpeg itself. Information
on using the FFmpeg libraries in other programs can be found elsewhere, e.g. in:
@itemize @bullet
@item
the installed header files
@item
@url{http://ffmpeg.org/doxygen/trunk/index.html, the Doxygen documentation}
generated from the headers
@item
the examples under @file{doc/examples}
@end itemize
This document is mostly useful for internal FFmpeg developers.
External developers who need to use the API in their application should
refer to the API doxygen documentation in the public headers, and
check the examples in @file{doc/examples} and in the source code to
see how the public API is employed.
You can use the FFmpeg libraries in your commercial program, but you
are encouraged to @emph{publish any patch you make}. In this case the
best way to proceed is to send your patches to the ffmpeg-devel
mailing list following the guidelines illustrated in the remainder of
this document.
For more detailed legal information about the use of FFmpeg in
external programs read the @file{LICENSE} file in the source tree and
consult @url{https://ffmpeg.org/legal.html}.
If you modify FFmpeg code for your own use case, you are highly encouraged to
@emph{submit your changes back to us}, using this document as a guide. There are
both pragmatic and ideological reasons to do so:
@chapter Contributing
There are 2 ways by which code gets into FFmpeg:
@itemize @bullet
@item
Maintaining external changes to keep up with upstream development is
time-consuming and error-prone. With your code in the main tree, it will be
maintained by FFmpeg developers.
@item
FFmpeg developers include leading experts in the field who can find bugs or
design flaws in your code.
@item
By supporting the project you find useful you ensure it continues to be
maintained and developed.
@item Submitting patches to the ffmpeg-devel mailing list.
See @ref{Submitting patches} for details.
@item Directly committing changes to the main tree.
@end itemize
All proposed code changes should be submitted for review to
@url{mailto:ffmpeg-devel@@ffmpeg.org, the development mailing list}, as
described in more detail in the @ref{Submitting patches} chapter. The code
should comply with the @ref{Development Policy} and follow the @ref{Coding Rules}.
Whichever way, changes should be reviewed by the maintainer of the code
before they are committed. And they should follow the @ref{Coding Rules}.
The developer making the commit and the author are responsible for their changes
and should try to fix issues their commit causes.
@anchor{Coding Rules}
@chapter Coding Rules
@section Language
FFmpeg is mainly programmed in the ISO C99 language, extended with:
@itemize @bullet
@item
Atomic operations from C11 @file{stdatomic.h}. They are emulated on
architectures/compilers that do not support them, so all FFmpeg-internal code
may use atomics without any extra checks. However, @file{stdatomic.h} must not
be included in public headers, so they stay C99-compatible.
@end itemize
Compiler-specific extensions may be used with good reason, but must not be
depended on, i.e. the code must still compile and work with compilers lacking
the extension.
The following C99 features must not be used anywhere in the codebase:
@itemize @bullet
@item
variable-length arrays;
@item
complex numbers;
@item
mixed statements and declarations.
@end itemize
@subsection SIMD/DSP
@anchor{SIMD/DSP}
As modern compilers are unable to generate efficient SIMD or other
performance-critical DSP code from plain C, handwritten assembly is used.
Usually such code is isolated in a separate function. Then the standard approach
is writing multiple versions of this function: a plain C one that works
everywhere and may also be useful for debugging, and potentially multiple
architecture-specific optimized implementations. Initialization code then
chooses the best available version at runtime and loads it into a function
pointer; the function in question is then always called through this pointer.
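A rough sketch of this dispatch pattern is shown below; every identifier in it is made up for illustration and does not correspond to an actual FFmpeg API:
@example
typedef struct MyDSPContext @{
    void (*vector_add)(int16_t *dst, const int16_t *src, int len);
@} MyDSPContext;

/* Plain C reference implementation, works everywhere. */
static void vector_add_c(int16_t *dst, const int16_t *src, int len)
@{
    for (int i = 0; i < len; i++)
        dst[i] += src[i];
@}

void my_dsp_init(MyDSPContext *c, int cpu_flags)
@{
    c->vector_add = vector_add_c;
    /* Hypothetical CPU-flag check selecting a handwritten assembly version. */
    if (cpu_flags & MY_CPU_FLAG_SSE2)
        c->vector_add = vector_add_sse2;
@}
@end example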
The specific syntax used for writing assembly is:
@itemize @bullet
@item
NASM on x86;
@item
GAS on ARM.
@end itemize
A unit testing framework for assembly called @code{checkasm} lives under
@file{tests/checkasm}. All new assembly should come with @code{checkasm} tests;
adding tests for existing assembly that lacks them is also strongly encouraged.
@subsection Other languages
Languages other than C may be used in special cases:
@itemize @bullet
@item
Compiler intrinsics or inline assembly when the code in question cannot be
written in the standard way described in the @ref{SIMD/DSP} section. This
typically applies to code that needs to be inlined.
@item
Objective-C where required for interacting with macOS-specific interfaces.
@end itemize
@section Code formatting conventions
There are the following guidelines regarding the indentation in files:
@@ -141,39 +67,8 @@ K&R coding style is used.
@end itemize
The presentation is one inspired by 'indent -i4 -kr -nut'.
@subsection Vim configuration
In order to configure Vim to follow FFmpeg formatting conventions, paste
the following snippet into your @file{.vimrc}:
@example
" indentation rules for FFmpeg: 4 spaces, no tabs
set expandtab
set shiftwidth=4
set softtabstop=4
set cindent
set cinoptions=(0
" Allow tabs in Makefiles.
autocmd FileType make,automake set noexpandtab shiftwidth=8 softtabstop=8
" Trailing whitespace and tabs are forbidden, so highlight them.
highlight ForbiddenWhitespace ctermbg=red guibg=red
match ForbiddenWhitespace /\s\+$\|\t/
" Do not highlight spaces at the end of line while typing on that line.
autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example
@subsection Emacs configuration
For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
(c-add-style "ffmpeg"
'("k&r"
(c-basic-offset . 4)
(indent-tabs-mode . nil)
(show-trailing-whitespace . t)
(c-offsets-alist
(statement-cont . (c-lineup-assignments +)))
)
)
(setq c-default-style "ffmpeg")
@end lisp
The main priority in FFmpeg is simplicity and small code size in order to
minimize the bug count.
@section Comments
Use the JavaDoc/Doxygen format (see examples below) so that code documentation
@@ -215,52 +110,92 @@ int myfunc(int my_parameter)
...
@end example
@anchor{Naming conventions}
@section Naming conventions
@section C language features
Names of functions, variables, and struct members must be lowercase, using
underscores (_) to separate words. For example, @samp{avfilter_get_video_buffer}
is an acceptable function name and @samp{AVFilterGetVideo} is not.
FFmpeg is programmed in the ISO C90 language with a few additional
features from ISO C99, namely:
Struct, union, enum, and typedeffed type names must use CamelCase. All structs
and unions should be typedeffed to the same name as the struct/union tag, e.g.
@code{typedef struct AVFoo @{ ... @} AVFoo;}. Enums are typically not
typedeffed.
Enumeration constants and macros must be UPPERCASE, except for macros
masquerading as functions, which should use the function naming convention.
All identifiers in the libraries should be namespaced as follows:
@itemize @bullet
@item
No namespacing for identifiers with file and lower scope (e.g. local variables,
static functions), and struct and union members,
the @samp{inline} keyword;
@item
The @code{ff_} prefix must be used for variables and functions visible outside
of file scope, but only used internally within a single library, e.g.
@samp{ff_w64_demuxer}. This prevents name collisions when FFmpeg is statically
linked.
@samp{//} comments;
@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};});
@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};}).
@item
for loops with variable definition (@samp{for (int i = 0; i < 8; i++)});
@item
Variadic macros (@samp{#define ARRAY(nb, ...) (int[nb + 1])@{ nb, __VA_ARGS__ @}});
@item
Implementation defined behavior for signed integers is assumed to match the
expected behavior for two's complement. Non representable values in integer
casts are binary truncated. Shift right of signed values uses sign extension.
@end itemize
These features are supported by all compilers we care about, so we will not
accept patches to remove their use unless they absolutely do not impair
clarity and performance.
All code must compile with recent versions of GCC and a number of other
currently supported compilers. To ensure compatibility, please do not use
additional C99 features or GCC extensions. Especially watch out for:
@itemize @bullet
@item
mixing statements and declarations;
@item
@samp{long long} (use @samp{int64_t} instead);
@item
@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;
@item
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @})}).
@end itemize
@section Naming conventions
All names should be composed with underscores (_), not CamelCase. For example,
@samp{avfilter_get_video_buffer} is an acceptable function name and
@samp{AVFilterGetVideo} is not. The exceptions to this are type names, like
for example structs and enums; they should always be in CamelCase.
There are the following conventions for naming variables and functions:
@itemize @bullet
@item
For local variables no prefix is required.
@item
For file-scope variables and functions declared as @code{static}, no prefix
is required.
@item
For variables and functions visible outside of file scope, but only used
internally by a library, an @code{ff_} prefix should be used,
e.g. @samp{ff_w64_demuxer}.
@item
For variables and functions visible outside of file scope, used internally
across multiple libraries, use @code{avpriv_} as prefix, for example,
@samp{avpriv_report_missing_feature}.
@item
All other internal identifiers, like private type or macro names, should be
namespaced only to avoid possible internal conflicts. E.g. @code{H264_NAL_SPS}
vs. @code{HEVC_NAL_SPS}.
@item
Each library has its own prefix for public symbols, in addition to the
commonly used @code{av_} (@code{avformat_} for libavformat,
@code{avcodec_} for libavcodec, @code{swr_} for libswresample, etc).
Check the existing code and choose names accordingly.
@item
Other public identifiers (struct, union, enum, macro, type names) must use their
library's public prefix (@code{AV}, @code{Sws}, or @code{Swr}).
Note that some symbols without these prefixes are also exported for
retro-compatibility reasons. These exceptions are declared in the
@code{lib<name>/lib<name>.v} files.
@end itemize
Furthermore, name space reserved for the system should not be invaded.
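A compact, purely illustrative summary of these conventions (none of the identifiers below exist in FFmpeg):
@example
typedef struct AVFoo @{ int size; @} AVFoo;  /* public type: CamelCase, library prefix  */
#define AV_FOO_FLAG_FAST 1                   /* public macro: UPPERCASE, library prefix */
int av_foo_process(AVFoo *foo);              /* public function: av_ prefix             */
int avpriv_foo_helper(void);                 /* internal, shared across libraries       */
int ff_foo_parse(const uint8_t *buf);        /* internal to a single library            */
static int foo_check(int x);                 /* file scope: no prefix required          */
@end example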
@@ -274,50 +209,50 @@ symbols. If in doubt, just avoid names starting with @code{_} altogether.
@section Miscellaneous conventions
@itemize @bullet
@item
fprintf and printf are forbidden in libavformat and libavcodec;
please use av_log() instead.
@item
Casts should be used only when necessary. Unneeded parentheses
should also be avoided if they don't make the code easier to understand.
@end itemize
@anchor{Development Policy}
@section Editor configuration
In order to configure Vim to follow FFmpeg formatting conventions, paste
the following snippet into your @file{.vimrc}:
@example
" indentation rules for FFmpeg: 4 spaces, no tabs
set expandtab
set shiftwidth=4
set softtabstop=4
set cindent
set cinoptions=(0
" Allow tabs in Makefiles.
autocmd FileType make,automake set noexpandtab shiftwidth=8 softtabstop=8
" Trailing whitespace and tabs are forbidden, so highlight them.
highlight ForbiddenWhitespace ctermbg=red guibg=red
match ForbiddenWhitespace /\s\+$\|\t/
" Do not highlight spaces at the end of line while typing on that line.
autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example
For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
(c-add-style "ffmpeg"
'("k&r"
(c-basic-offset . 4)
(indent-tabs-mode . nil)
(show-trailing-whitespace . t)
(c-offsets-alist
(statement-cont . (c-lineup-assignments +)))
)
)
(setq c-default-style "ffmpeg")
@end lisp
@chapter Development Policy
@section Code behaviour
@subheading Correctness
The code must be valid. It must not crash, abort, access invalid pointers, leak
memory, cause data races or signed integer overflow, or otherwise cause
undefined behaviour. Error codes should be checked and, when applicable,
forwarded to the caller.
@subheading Thread- and library-safety
Our libraries may be called by multiple independent callers in the same process.
These calls may happen from any number of threads and the different call sites
may not be aware of each other - e.g. a user program may be calling our
libraries directly, and use one or more libraries that also call our libraries.
The code must behave correctly under such conditions.
@subheading Robustness
The code must treat as untrusted any bytestream received from a caller or read
from a file, network, etc. It must not misbehave when arbitrary data is sent to
it - typically it should print an error message and return
@code{AVERROR_INVALIDDATA} on encountering invalid input data.
@subheading Memory allocation
The code must use the @code{av_malloc()} family of functions from
@file{libavutil/mem.h} to perform all memory allocation, except in special cases
(e.g. when interacting with an external library that requires a specific
allocator to be used).
All allocations should be checked and @code{AVERROR(ENOMEM)} returned on
failure. A common mistake is that error paths leak memory - make sure that does
not happen.
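A minimal sketch of the expected pattern (the surrounding function and @code{do_something()} are hypothetical):
@example
uint8_t *buf = av_malloc(size);
if (!buf)
    return AVERROR(ENOMEM);

ret = do_something(buf, size);
if (ret < 0) @{
    av_free(buf);   /* do not leak on the error path */
    return ret;
@}
@end example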
@subheading stdio
Our libraries must not access the stdio streams stdin/stdout/stderr directly
(e.g. via @code{printf()} family of functions), as that is not library-safe. For
logging, use @code{av_log()}.
@section Patches/Committing
@subheading Licenses for patches must be compatible with FFmpeg.
Contributions should be licensed under the
@@ -340,24 +275,13 @@ missing samples or an implementation with a small subset of features.
Always check the mailing list for any reviewers with issues and test
FATE before you push.
@subheading Commit messages
Commit messages are highly important tools for informing other developers on
what a given change does and why. Every commit must always have a properly
filled out commit message with the following format:
@example
area changed: short 1 line description
details describing what and why and giving references.
@end example
If the commit addresses a known bug on our bug tracker or other external issue
(e.g. CVE), the commit message should include the relevant bug ID(s) or other
external identifiers. Note that this should be done in addition to a proper
explanation and not instead of it. Comments such as "fixed!" or "Changed it."
are not acceptable.
When applying patches that have been discussed at length on the mailing list,
reference the thread in the commit message.
@subheading Keep the main commit message short with an extended description below.
The commit message should have a short first line in the form of
a @samp{topic: short description} as a header, separated by a newline
from the body consisting of an explanation of why the change is necessary.
If the commit fixes a known bug on the bug tracker, the commit message
should include its bug ID. Referring to the issue on the bug tracker does
not exempt you from writing an excerpt of the bug in the commit message.
@subheading Testing must be adequate but not excessive.
If it works for you, others, and passes FATE then it should be OK to commit
@@ -376,6 +300,15 @@ later on.
Also if you have doubts about splitting or not splitting, do not hesitate to
ask/discuss it on the developer mailing list.
@subheading Ask before you change the build system (configure, etc).
Do not commit changes to the build system (Makefiles, configure script)
which change behavior, defaults etc, without asking first. The same
applies to compiler warning fixes, trivial looking fixes and to code
maintained by other developers. We usually have a reason for doing things
the way we do. Send your changes as patches to the ffmpeg-devel mailing
list, and if the code maintainers say OK, you may commit. This does not
apply to files you wrote and/or maintain.
@subheading Cosmetic changes should be kept in separate patches.
We refuse source indentation and other cosmetic changes if they are mixed
with functional changes, such commits will be rejected and removed. Every
@@ -390,15 +323,27 @@ NOTE: If you had to put if()@{ .. @} over a large (> 5 lines) chunk of code,
then either do NOT change the indentation of the inner part within (do not
move it to the right)! or do so in a separate commit
@subheading Commit messages should always be filled out properly.
Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:
@example
area changed: Short 1 line description
details describing what and why and giving references.
@end example
@subheading Credit the author of the patch.
Make sure the author of the commit is set correctly. (see git commit --author)
If you apply a patch, send an
answer to ffmpeg-devel (or wherever you got the patch from) saying that
you applied the patch.
@subheading Credit any researchers
If a commit/patch fixes an issue found by some researcher, always credit the
researcher in the commit message for finding/reporting the issue.
@subheading Complex patches should refer to discussion surrounding them.
When applying patches that have been discussed (at length) on the mailing
list, reference the thread in the log message.
@subheading Always wait long enough before pushing changes
Do NOT commit to code actively maintained by others without permission.
@@ -408,6 +353,22 @@ time-frame (12h for build failures and security fixes, 3 days small changes,
Also note, the maintainer can simply ask for more time to review!
@section Code
@subheading API/ABI changes should be discussed before they are made.
Do not change behavior of the programs (renaming options etc) or public
API or ABI without first discussing it on the ffmpeg-devel mailing list.
Do not remove widely used functionality or features (redundant code can be removed).
@subheading Remember to check if you need to bump versions for libav*.
Depending on the change, you may need to change the version integer.
Incrementing the first component means no backward compatibility to
previous versions (e.g. removal of a function from the public API).
Incrementing the second component means backward compatible change
(e.g. addition of a function to the public API or extension of an
existing data structure).
Incrementing the third component means a noteworthy binary compatible
change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.
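For example, a backward compatible API addition to libavcodec would be accompanied by a bump of the minor version in @file{libavcodec/version.h}; the numbers below are illustrative only:
@example
#define LIBAVCODEC_VERSION_MAJOR  59
#define LIBAVCODEC_VERSION_MINOR  25   /* was 24 before the new API was added */
#define LIBAVCODEC_VERSION_MICRO 100
@end example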
@subheading Warnings for correct code may be disabled if there is no other option.
Compiler warnings indicate potential bugs or code with bad style. If a type of
warning always points to correct and clean code, that warning should
@@ -417,150 +378,10 @@ If it is a bug, the bug has to be fixed. If it is not, the code should
be changed to not generate a warning unless that causes a slowdown
or obfuscates the code.
@section Library public interfaces
Every library in FFmpeg provides a set of public APIs in its installed headers,
which are those listed in the variable @code{HEADERS} in that library's
@file{Makefile}. All identifiers defined in those headers (except for those
explicitly documented otherwise), and corresponding symbols exported from
compiled shared or static libraries are considered public interfaces and must
comply with the API and ABI compatibility rules described in this section.
Public APIs must be backward compatible within a given major version. I.e. any
valid user code that compiles and works with a given library version must still
compile and work with any later version, as long as the major version number is
unchanged. "Valid user code" here means code that is calling our APIs in a
documented and/or intended manner and is not relying on any undefined behavior.
Incrementing the major version may break backward compatibility, but only to the
extent described in @ref{Major version bumps}.
We also guarantee backward ABI compatibility for shared and static libraries.
I.e. it should be possible to replace a shared or static build of our library
with a build of any later version (re-linking the user binary in the static
case) without breaking any valid user binaries, as long as the major version
number remains unchanged.
@subsection Adding new interfaces
Any new public identifiers in installed headers are considered new API - this
includes new functions, structs, macros, enum values, typedefs, new fields in
existing structs, new installed headers, etc. Consider the following
guidelines when adding new APIs.
@subsubheading Motivation
While new APIs can be added relatively easily, changing or removing them is much
harder due to the above-mentioned compatibility requirements. You should therefore
consider carefully whether the functionality you are adding really needs to be
exposed to our callers as new public API.
Your new API should have at least one well-established use case outside of the
library that cannot be easily achieved with existing APIs. Every library in
FFmpeg also has a defined scope - your new API must fit within it.
@subsubheading Replacing existing APIs
If your new API is replacing an existing one, it should be strictly superior to
it, so that the advantages of using the new API outweigh the cost to the
callers of changing their code. After adding the new API you should then
deprecate the old one and schedule it for removal, as described in
@ref{Removing interfaces}.
If you deem an existing API deficient and want to fix it, the preferred approach
in most cases is to add a differently-named replacement and deprecate the
existing API rather than modify it. It is important to make the changes visible
to our callers (e.g. through compile- or run-time deprecation warnings) and make
it clear how to transition to the new API (e.g. in the Doxygen documentation or
on the wiki).
@subsubheading API design
The FFmpeg libraries are used by a variety of callers to perform a wide range of
multimedia-related processing tasks. You should therefore - within reason - try
to design your new API for the broadest feasible set of use cases and avoid
unnecessarily limiting it to a specific type of callers (e.g. just media
playback or just transcoding).
@subsubheading Consistency
Check whether similar APIs already exist in FFmpeg. If they do, try to model
your new addition on them to achieve better overall consistency.
The naming of your new identifiers should follow the @ref{Naming conventions}
and be aligned with other similar APIs, if applicable.
@subsubheading Extensibility
You should also consider how your API might be extended in the future in a
backward-compatible way. If you are adding a new struct @code{AVFoo}, the
standard approach is requiring the caller to always allocate it through a
constructor function, typically named @code{av_foo_alloc()}. This way new fields
may be added to the end of the struct without breaking ABI compatibility.
Typically you will also want a destructor - @code{av_foo_free(AVFoo**)} that
frees the indirectly supplied object (and its contents, if applicable) and
writes @code{NULL} to the supplied pointer, thus eliminating the potential
dangling pointer in the caller's memory.
If you are adding new functions, consider whether it might be desirable to tweak
their behavior in the future - you may want to add a flags argument, even though
it would be unused initially.
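A sketch of this constructor/destructor convention, reusing the hypothetical @code{AVFoo} struct from above:
@example
AVFoo *foo = av_foo_alloc();
if (!foo)
    return AVERROR(ENOMEM);

/* ... set fields and use foo; new fields can later be appended
   to the end of the struct without breaking ABI ... */

av_foo_free(&foo);   /* frees foo and its contents, sets foo to NULL */
@end example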
@subsubheading Documentation
All new APIs must be documented as Doxygen-formatted comments above the
identifiers you add to the public headers. You should also briefly mention the
change in @file{doc/APIchanges}.
@subsubheading Bump the version
Backward-incompatible API or ABI changes require incrementing (bumping) the
major version number, as described in @ref{Major version bumps}. Major
bumps are significant events that happen on a schedule - so if your change
strictly requires one, you should add it under @code{#if} preprocessor guards that
disable it until the next major bump happens.
New APIs that can be added without breaking API or ABI compatibility require
bumping the minor version number.
Incrementing the third (micro) version component means a noteworthy binary
compatible change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.
@anchor{Removing interfaces}
@subsection Removing interfaces
Due to the above-mentioned compatibility guarantees, removing APIs is an involved
process that should only be undertaken with good reason. Typically a deficient,
restrictive, or otherwise inadequate API is replaced by a superior one, though
it does at times happen that we remove an API without any replacement (e.g. when
the feature it provides is deemed not worth the maintenance effort, out of scope
of the project, fundamentally flawed, etc.).
The removal has two steps - first the API is deprecated and scheduled for
removal, but remains present and functional. The second step is actually
removing the API - this is described in @ref{Major version bumps}.
To deprecate an API you should signal to our users that they should stop using
it. E.g. if you intend to remove struct members or functions, you should mark
them with @code{attribute_deprecated}. When this cannot be done, it may be
possible to detect the use of the deprecated API at runtime and print a warning
(though take care not to print it too often). You should also document the
deprecation (and the replacement, if applicable) in the relevant Doxygen
documentation block.
Finally, you should define a deprecation guard along the lines of
@code{#define FF_API_<FOO> (LIBAVBAR_VERSION_MAJOR < XX)} (where XX is the major
version in which the API will be removed) in @file{libavbar/version_major.h}
(@file{version.h} in case of @code{libavutil}). Then wrap all uses of the
deprecated API in @code{#if FF_API_<FOO> .... #endif}, so that the code will
automatically get disabled once the major version reaches XX. You can also use
@code{FF_DISABLE_DEPRECATION_WARNINGS} and @code{FF_ENABLE_DEPRECATION_WARNINGS}
to suppress compiler deprecation warnings inside these guards. You should test
that the code compiles and works with the guard macro evaluating to both true
and false.
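Putting this together, a deprecation guard typically looks roughly like the following, where @code{FOO}, @code{libavbar}, the version number and the guarded call are placeholders:
@example
/* libavbar/version_major.h */
#define FF_API_FOO (LIBAVBAR_VERSION_MAJOR < 60)

/* wherever the deprecated API is still used */
#if FF_API_FOO
FF_DISABLE_DEPRECATION_WARNINGS
    av_foo_old_function(foo);
FF_ENABLE_DEPRECATION_WARNINGS
#endif
@end example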
@anchor{Major version bumps}
@subsection Major version bumps
A major version bump signifies an API and/or ABI compatibility break. To reduce
the negative effects on our callers, who are required to adapt their code,
backward-incompatible changes during a major bump should be limited to:
@itemize @bullet
@item
Removing previously deprecated APIs.
@item
Performing ABI- but not API-breaking changes, like reordering struct contents.
@end itemize
@subheading Check untrusted input properly.
Never write to unallocated memory, never write over the end of arrays,
always check values read from some untrusted source before using them
as array index or other risky things.
@section Documentation/Other
@subheading Subscribe to the ffmpeg-devel mailing list.
@@ -604,6 +425,35 @@ finding a new maintainer and also don't forget to update the @file{MAINTAINERS}
We think our rules are not too hard. If you have comments, contact us.
@chapter Code of conduct
Be friendly and respectful towards others and third parties.
Treat others the way you yourself want to be treated.
Be considerate. Not everyone shares the same viewpoint and priorities as you do.
Different opinions and interpretations help the project.
Looking at issues from a different perspective assists development.
Do not assume malice for things that can be attributed to incompetence. Even if
it is malice, it's rarely good to start with that as the initial assumption.
Stay friendly even if someone acts contrarily. Everyone has a bad day
once in a while.
If you yourself have a bad day or are angry then try to take a break and reply
once you are calm and without anger if you have to.
Try to help other team members and cooperate if you can.
The goal of software development is to create technical excellence, not for any
individual to be better and "win" against the others. Large software projects
are only possible and successful through teamwork.
If someone struggles do not put them down. Give them a helping hand
instead and point them in the right direction.
Finally, keep in mind the immortal words of Bill and Ted,
"Be excellent to each other."
@anchor{Submitting patches}
@chapter Submitting patches
@@ -644,27 +494,6 @@ patch is inline or attached per mail.
You can check @url{https://patchwork.ffmpeg.org}; if your patch does not show up
there, its MIME type was likely wrong.
@subheading How to setup git send-email?
Please see @url{https://git-send-email.io/}.
For gmail additionally see @url{https://shallowsky.com/blog/tech/email/gmail-app-passwds.html}.
@subheading Sending patches from email clients
Using @code{git send-email} might not be desirable for everyone. The
following trick allows you to send patches via email clients in a safe
way. It has been tested with Outlook and Thunderbird (with the X-Unsent
extension) and might work with other applications.
Create your patch like this:
@verbatim
git format-patch -s -o "outputfolder" --add-header "X-Unsent: 1" --suffix .eml --to ffmpeg-devel@ffmpeg.org -1 1a2b3c4d
@end verbatim
Now you'll just need to open the eml file with the email application
and execute 'Send'.
@subheading Reviews
Your patch will be reviewed on the mailing list. You will likely be asked
to make some changes and are expected to send in an improved version that
incorporates the requests from the review. This process may go through
@@ -825,14 +654,16 @@ Lines with similar content should be aligned vertically when doing so
improves readability.
@item
Consider adding a regression test for your code. All new modules
should be covered by tests. That includes demuxers, muxers, decoders, encoders,
filters, bitstream filters and parsers. If it's not possible to do that, add
an explanation of why to your patchset; it's OK not to test if there's a reason.
@item
If you added YASM code, please check that things still work with --disable-yasm.
@item
Make sure you check the return values of functions and return appropriate
error codes. Especially memory allocation functions like @code{av_malloc()}
are notoriously left unchecked, which is a serious problem; see the sketch
after this list.
@item
Test your code with valgrind and/or Address Sanitizer to ensure it's free
of leaks, out-of-array accesses, etc.
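As a minimal sketch of the allocation-checking point above (the buffer, size
and @code{do_something()} helper are invented; @code{av_malloc()},
@code{av_free()} and @code{AVERROR(ENOMEM)} are the real FFmpeg APIs):

@verbatim
uint8_t *buf = av_malloc(size);
if (!buf)
    return AVERROR(ENOMEM);   /* never continue with a NULL buffer */

ret = do_something(buf, size);
if (ret < 0) {
    av_free(buf);             /* release what we own before propagating */
    return ret;
}
@end verbatim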
@@ -882,8 +713,6 @@ accordingly].
@section Adding files to the fate-suite dataset
If you need a sample uploaded, send a mail to samples-request.
When there is no muxer or encoder available to generate test media for a
specific test, the media has to be included in the fate-suite.
First, please make sure that the sample file is as small as possible to test the
@@ -933,25 +762,6 @@ In case you need finer control over how valgrind is invoked, use the
@code{--target-exec='valgrind <your_custom_valgrind_options>'} option in
your configure line instead.
@anchor{Maintenance}
@chapter Maintenance process
@anchor{MAINTAINERS}
@section MAINTAINERS
The developers maintaining each part of the codebase are listed in @file{MAINTAINERS}.
Being listed in @file{MAINTAINERS} gives one the right to have git write access to
the specific repository.
@anchor{Becoming a maintainer}
@section Becoming a maintainer
People add themselves to @file{MAINTAINERS} by sending a patch like any other code
change. These get reviewed by the community like any other patch. It is expected
that, if someone has an objection to a new maintainer, she is willing to object
in public with her full name and is willing to take over maintainership for the area.
@anchor{Release process}
@chapter Release process

View File

@@ -1,13 +1,10 @@
#!/bin/sh
OUT_DIR="${1}"
SRC_DIR="${2}"
DOXYFILE="${3}"
DOXYGEN="${4}"
DOXYFILE="${2}"
DOXYGEN="${3}"
shift 4
cd ${SRC_DIR}
shift 3
if [ -e "VERSION" ]; then
VERSION=`cat "VERSION"`

File diff suppressed because it is too large

View File

@@ -22,4 +22,3 @@
/transcoding
/vaapi_encode
/vaapi_transcode
/qsv_transcode

View File

@@ -1,27 +1,26 @@
EXAMPLES-$(CONFIG_AVIO_HTTP_SERVE_FILES) += avio_http_serve_files
EXAMPLES-$(CONFIG_AVIO_LIST_DIR_EXAMPLE) += avio_list_dir
EXAMPLES-$(CONFIG_AVIO_READ_CALLBACK_EXAMPLE) += avio_read_callback
EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
EXAMPLES-$(CONFIG_DECODE_AUDIO_EXAMPLE) += decode_audio
EXAMPLES-$(CONFIG_DECODE_FILTER_AUDIO_EXAMPLE) += decode_filter_audio
EXAMPLES-$(CONFIG_DECODE_FILTER_VIDEO_EXAMPLE) += decode_filter_video
EXAMPLES-$(CONFIG_DECODE_VIDEO_EXAMPLE) += decode_video
EXAMPLES-$(CONFIG_DEMUX_DECODE_EXAMPLE) += demux_decode
EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
EXAMPLES-$(CONFIG_ENCODE_AUDIO_EXAMPLE) += encode_audio
EXAMPLES-$(CONFIG_ENCODE_VIDEO_EXAMPLE) += encode_video
EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
EXAMPLES-$(CONFIG_HTTP_MULTICLIENT_EXAMPLE) += http_multiclient
EXAMPLES-$(CONFIG_HW_DECODE_EXAMPLE) += hw_decode
EXAMPLES-$(CONFIG_MUX_EXAMPLE) += mux
EXAMPLES-$(CONFIG_QSV_DECODE_EXAMPLE) += qsv_decode
EXAMPLES-$(CONFIG_REMUX_EXAMPLE) += remux
EXAMPLES-$(CONFIG_RESAMPLE_AUDIO_EXAMPLE) += resample_audio
EXAMPLES-$(CONFIG_SCALE_VIDEO_EXAMPLE) += scale_video
EXAMPLES-$(CONFIG_SHOW_METADATA_EXAMPLE) += show_metadata
EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
EXAMPLES-$(CONFIG_TRANSCODE_EXAMPLE) += transcode
EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
EXAMPLES-$(CONFIG_VAAPI_ENCODE_EXAMPLE) += vaapi_encode
EXAMPLES-$(CONFIG_VAAPI_TRANSCODE_EXAMPLE) += vaapi_transcode
EXAMPLES-$(CONFIG_QSV_TRANSCODE_EXAMPLE) += qsv_transcode
EXAMPLES := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
EXAMPLES_G := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))

View File

@@ -11,40 +11,33 @@ CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
# missing the following targets, since they need special options in the FFmpeg build:
# qsv_decode
# qsv_transcode
# vaapi_encode
# vaapi_transcode
EXAMPLES=\
avio_http_serve_files \
avio_list_dir \
avio_read_callback \
EXAMPLES= avio_list_dir \
avio_reading \
decode_audio \
decode_filter_audio \
decode_filter_video \
decode_video \
demux_decode \
demuxing_decoding \
encode_audio \
encode_video \
extract_mvs \
filtering_video \
filtering_audio \
http_multiclient \
hw_decode \
mux \
remux \
resample_audio \
scale_video \
show_metadata \
metadata \
muxing \
remuxing \
resampling_audio \
scaling_video \
transcode_aac \
transcode
transcoding \
OBJS=$(addsuffix .o,$(EXAMPLES))
# the following examples make explicit use of the math library
avcodec: LDLIBS += -lm
encode_audio: LDLIBS += -lm
mux: LDLIBS += -lm
resample_audio: LDLIBS += -lm
muxing: LDLIBS += -lm
resampling_audio: LDLIBS += -lm
.phony: all clean-test clean

View File

@@ -7,10 +7,8 @@ that you have them installed and working on your system.
Method 1: build the installed examples in a generic read/write user directory
Copy to a read/write user directory and run:
make -f Makefile.example
It will link to the libraries on your system, assuming the PKG_CONFIG_PATH is
Copy to a read/write user directory and just use "make", it will link
to the libraries on your system, assuming the PKG_CONFIG_PATH is
correctly configured.
Method 2: build the examples in-tree
@@ -22,4 +20,4 @@ examples using "make examplesclean"
If you want to try the dedicated Makefile examples (to emulate the first
method), go into doc/examples and run a command such as
PKG_CONFIG_PATH=pc-uninstalled make -f Makefile.example
PKG_CONFIG_PATH=pc-uninstalled make.

View File

@@ -20,13 +20,6 @@
* THE SOFTWARE.
*/
/**
* @file libavformat AVIOContext list directory API usage example
* @example avio_list_dir.c
*
* Show how to list directories through the libavformat AVIOContext API.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>

View File

@@ -21,11 +21,12 @@
*/
/**
* @file libavformat AVIOContext read callback API usage example
* @example avio_read_callback.c
* @file
* libavformat AVIOContext API example.
*
* Make libavformat demuxer access media content through a custom
* AVIOContext read callback.
* @example avio_reading.c
*/
#include <libavcodec/avcodec.h>
@@ -95,7 +96,6 @@ int main(int argc, char *argv[])
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
0, &bd, &read_packet, NULL, NULL);
if (!avio_ctx) {
av_freep(&avio_ctx_buffer);
ret = AVERROR(ENOMEM);
goto end;
}

View File

@@ -21,11 +21,10 @@
*/
/**
* @file libavcodec audio decoding API usage example
* @example decode_audio.c
* @file
* audio decoding with libavcodec API example
*
* Decode data from an MP2 input file and generate a raw audio file to
* be played with ffplay.
* @example decode_audio.c
*/
#include <stdio.h>
@@ -98,7 +97,7 @@ static void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,
exit(1);
}
for (i = 0; i < frame->nb_samples; i++)
for (ch = 0; ch < dec_ctx->ch_layout.nb_channels; ch++)
for (ch = 0; ch < dec_ctx->channels; ch++)
fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile);
}
}
@@ -128,10 +127,6 @@ int main(int argc, char **argv)
outfilename = argv[2];
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
exit(1); /* or proper cleanup and returning */
}
/* find the MPEG audio decoder */
codec = avcodec_find_decoder(AV_CODEC_ID_MP2);
@@ -165,7 +160,7 @@ int main(int argc, char **argv)
}
outfile = fopen(outfilename, "wb");
if (!outfile) {
fprintf(stderr, "Could not open %s\n", outfilename);
av_free(c);
exit(1);
}
@@ -220,7 +215,7 @@ int main(int argc, char **argv)
sfmt = av_get_packed_sample_fmt(sfmt);
}
n_channels = c->ch_layout.nb_channels;
n_channels = c->channels;
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
goto end;

View File

@@ -21,11 +21,10 @@
*/
/**
* @file libavcodec video decoding API usage example
* @example decode_video.c *
* @file
* video decoding with libavcodec API example
*
* Read from an MPEG1 video file, decode frames, and generate PGM images as
* output.
* @example decode_video.c
*/
#include <stdio.h>
@@ -70,12 +69,12 @@ static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt,
exit(1);
}
printf("saving frame %3"PRId64"\n", dec_ctx->frame_num);
printf("saving frame %3d\n", dec_ctx->frame_number);
fflush(stdout);
/* the picture is allocated by the decoder. no need to
free it */
snprintf(buf, sizeof(buf), "%s-%"PRId64, filename, dec_ctx->frame_num);
snprintf(buf, sizeof(buf), "%s-%d", filename, dec_ctx->frame_number);
pgm_save(frame->data[0], frame->linesize[0],
frame->width, frame->height, buf);
}
@@ -93,7 +92,6 @@ int main(int argc, char **argv)
uint8_t *data;
size_t data_size;
int ret;
int eof;
AVPacket *pkt;
if (argc <= 2) {
@@ -152,16 +150,15 @@ int main(int argc, char **argv)
exit(1);
}
do {
while (!feof(f)) {
/* read raw data from the input file */
data_size = fread(inbuf, 1, INBUF_SIZE, f);
if (ferror(f))
if (!data_size)
break;
eof = !data_size;
/* use the parser to split the data into frames */
data = inbuf;
while (data_size > 0 || eof) {
while (data_size > 0) {
ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
if (ret < 0) {
@@ -173,10 +170,8 @@ int main(int argc, char **argv)
if (pkt->size)
decode(c, frame, pkt, outfilename);
else if (eof)
break;
}
} while (!eof);
}
/* flush the decoder */
decode(c, frame, NULL, outfilename);

View File

@@ -21,18 +21,17 @@
*/
/**
* @file libavformat and libavcodec demuxing and decoding API usage example
* @example demux_decode.c
* @file
* Demuxing and decoding example.
*
* Show how to use the libavformat and libavcodec API to demux and decode audio
* and video data. Write the output as raw audio and input files to be played by
* ffplay.
* Show how to use the libavformat and libavcodec API to demux and
* decode audio and video data.
* @example demuxing_decoding.c
*/
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
@@ -73,14 +72,14 @@ static int output_video_frame(AVFrame *frame)
return -1;
}
printf("video_frame n:%d\n",
video_frame_count++);
printf("video_frame n:%d coded_n:%d\n",
video_frame_count++, frame->coded_picture_number);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non aligned data */
av_image_copy2(video_dst_data, video_dst_linesize,
frame->data, frame->linesize,
pix_fmt, width, height);
av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
@@ -138,9 +137,11 @@ static int decode_packet(AVCodecContext *dec, const AVPacket *pkt)
ret = output_audio_frame(frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
}
return ret;
return 0;
}
static int open_codec_context(int *stream_idx,
@@ -148,7 +149,8 @@ static int open_codec_context(int *stream_idx,
{
int ret, stream_index;
AVStream *st;
const AVCodec *dec = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
@@ -183,7 +185,7 @@ static int open_codec_context(int *stream_idx,
}
/* Init the decoders */
if ((ret = avcodec_open2(*dec_ctx, dec, NULL)) < 0) {
if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
@@ -343,7 +345,7 @@ int main (int argc, char **argv)
if (audio_stream) {
enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
int n_channels = audio_dec_ctx->ch_layout.nb_channels;
int n_channels = audio_dec_ctx->channels;
const char *fmt;
if (av_sample_fmt_is_planar(sfmt)) {

View File

@@ -21,10 +21,10 @@
*/
/**
* @file libavcodec encoding audio API usage examples
* @example encode_audio.c
* @file
* audio encoding with libavcodec API example.
*
* Generate a synthetic audio signal and encode it to an output MP2 file.
* @example encode_audio.c
*/
#include <stdint.h>
@@ -70,25 +70,26 @@ static int select_sample_rate(const AVCodec *codec)
}
/* select layout with the highest channel count */
static int select_channel_layout(const AVCodec *codec, AVChannelLayout *dst)
static int select_channel_layout(const AVCodec *codec)
{
const AVChannelLayout *p, *best_ch_layout;
const uint64_t *p;
uint64_t best_ch_layout = 0;
int best_nb_channels = 0;
if (!codec->ch_layouts)
return av_channel_layout_copy(dst, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO);
if (!codec->channel_layouts)
return AV_CH_LAYOUT_STEREO;
p = codec->ch_layouts;
while (p->nb_channels) {
int nb_channels = p->nb_channels;
p = codec->channel_layouts;
while (*p) {
int nb_channels = av_get_channel_layout_nb_channels(*p);
if (nb_channels > best_nb_channels) {
best_ch_layout = p;
best_ch_layout = *p;
best_nb_channels = nb_channels;
}
p++;
}
return av_channel_layout_copy(dst, best_ch_layout);
return best_ch_layout;
}
static void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt,
@@ -163,9 +164,8 @@ int main(int argc, char **argv)
/* select other audio parameters supported by the encoder */
c->sample_rate = select_sample_rate(codec);
ret = select_channel_layout(codec, &c->ch_layout);
if (ret < 0)
exit(1);
c->channel_layout = select_channel_layout(codec);
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
@@ -195,9 +195,7 @@ int main(int argc, char **argv)
frame->nb_samples = c->frame_size;
frame->format = c->sample_fmt;
ret = av_channel_layout_copy(&frame->ch_layout, &c->ch_layout);
if (ret < 0)
exit(1);
frame->channel_layout = c->channel_layout;
/* allocate the data buffers */
ret = av_frame_get_buffer(frame, 0);
@@ -220,7 +218,7 @@ int main(int argc, char **argv)
for (j = 0; j < c->frame_size; j++) {
samples[2*j] = (int)(sin(t) * 10000);
for (k = 1; k < c->ch_layout.nb_channels; k++)
for (k = 1; k < c->channels; k++)
samples[2*j + k] = samples[2*j];
t += tincr;
}

View File

@@ -21,10 +21,10 @@
*/
/**
* @file libavcodec encoding video API usage example
* @example encode_video.c
* @file
* video encoding with libavcodec API example
*
* Generate synthetic video data and encode it to an output file.
* @example encode_video.c
*/
#include <stdio.h>
@@ -155,25 +155,12 @@ int main(int argc, char **argv)
for (i = 0; i < 25; i++) {
fflush(stdout);
/* Make sure the frame data is writable.
On the first round, the frame is fresh from av_frame_get_buffer()
and therefore we know it is writable.
But on the next rounds, encode() will have called
avcodec_send_frame(), and the codec may have kept a reference to
the frame in its internal structures, that makes the frame
unwritable.
av_frame_make_writable() checks that and allocates a new buffer
for the frame only if necessary.
*/
/* make sure the frame data is writable */
ret = av_frame_make_writable(frame);
if (ret < 0)
exit(1);
/* Prepare a dummy image.
In real code, this is where you would have your own logic for
filling the frame. FFmpeg does not care what you put in the
frame.
*/
/* prepare a dummy image */
/* Y */
for (y = 0; y < c->height; y++) {
for (x = 0; x < c->width; x++) {
@@ -198,12 +185,7 @@ int main(int argc, char **argv)
/* flush the encoder */
encode(c, NULL, pkt, f);
/* Add sequence end code to have a real MPEG file.
It makes only sense because this tiny examples writes packets
directly. This is called "elementary stream" and only works for some
codecs. To create a valid file, you usually need to write packets
into a proper file format or protocol; see mux.c.
*/
/* add sequence end code to have a real MPEG file */
if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO)
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);

View File

@@ -21,16 +21,7 @@
* THE SOFTWARE.
*/
/**
* @file libavcodec motion vectors extraction API usage example
* @example extract_mvs.c
*
* Read from input file, decode video stream and print a motion vectors
* representation to stdout.
*/
#include <libavutil/motion_vector.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
@@ -69,11 +60,10 @@ static int decode_packet(const AVPacket *pkt)
const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
const AVMotionVector *mv = &mvs[i];
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64",%4d,%4d,%4d\n",
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
video_frame_count, mv->source,
mv->w, mv->h, mv->src_x, mv->src_y,
mv->dst_x, mv->dst_y, mv->flags,
mv->motion_x, mv->motion_y, mv->motion_scale);
mv->dst_x, mv->dst_y, mv->flags);
}
}
av_frame_unref(frame);
@@ -88,7 +78,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
int ret;
AVStream *st;
AVCodecContext *dec_ctx = NULL;
const AVCodec *dec = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
@@ -114,9 +104,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
/* Init the video decoder */
av_dict_set(&opts, "flags2", "+export_mvs", 0);
ret = avcodec_open2(dec_ctx, dec, &opts);
av_dict_free(&opts);
if (ret < 0) {
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
@@ -133,7 +121,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
int main(int argc, char **argv)
{
int ret = 0;
AVPacket *pkt = NULL;
AVPacket pkt = { 0 };
if (argc != 2) {
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
@@ -168,20 +156,13 @@ int main(int argc, char **argv)
goto end;
}
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
ret = AVERROR(ENOMEM);
goto end;
}
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags,motion_x,motion_y,motion_scale\n");
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
/* read frames from the file */
while (av_read_frame(fmt_ctx, pkt) >= 0) {
if (pkt->stream_index == video_stream_idx)
ret = decode_packet(pkt);
av_packet_unref(pkt);
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
if (pkt.stream_index == video_stream_idx)
ret = decode_packet(&pkt);
av_packet_unref(&pkt);
if (ret < 0)
break;
}
@@ -193,6 +174,5 @@ end:
avcodec_free_context(&video_dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_packet_free(&pkt);
return ret < 0;
}

View File

@@ -19,11 +19,13 @@
*/
/**
* @file libavfilter audio filtering API usage example
* @example filter_audio.c
* @file
* libavfilter API usage example.
*
* This example will generate a sine wave audio, pass it through a simple filter
* chain, and then compute the MD5 checksum of the output data.
* @example filter_audio.c
* This example will generate a sine wave audio,
* pass it through a simple filter chain, and then compute the MD5 checksum of
* the output data.
*
* The filter chain it uses is:
* (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
@@ -53,7 +55,7 @@
#define INPUT_SAMPLERATE 48000
#define INPUT_FORMAT AV_SAMPLE_FMT_FLTP
#define INPUT_CHANNEL_LAYOUT (AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT0
#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0
#define VOLUME_VAL 0.90
@@ -98,7 +100,7 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
}
/* Set the filter options through the AVOptions API. */
av_channel_layout_describe(&INPUT_CHANNEL_LAYOUT, ch_layout, sizeof(ch_layout));
av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);
av_opt_set (abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
av_opt_set (abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);
av_opt_set_q (abuffer_ctx, "time_base", (AVRational){ 1, INPUT_SAMPLERATE }, AV_OPT_SEARCH_CHILDREN);
@@ -152,8 +154,9 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
/* A third way of passing the options is in a string of the form
* key1=value1:key2=value2.... */
snprintf(options_str, sizeof(options_str),
"sample_fmts=%s:sample_rates=%d:channel_layouts=stereo",
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100);
"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%"PRIx64,
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,
(uint64_t)AV_CH_LAYOUT_STEREO);
err = avfilter_init_str(aformat_ctx, options_str);
if (err < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not initialize the aformat filter.\n");
@@ -212,7 +215,7 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
static int process_output(struct AVMD5 *md5, AVFrame *frame)
{
int planar = av_sample_fmt_is_planar(frame->format);
int channels = frame->ch_layout.nb_channels;
int channels = av_get_channel_layout_nb_channels(frame->channel_layout);
int planes = planar ? channels : 1;
int bps = av_get_bytes_per_sample(frame->format);
int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);
@@ -245,7 +248,7 @@ static int get_input(AVFrame *frame, int frame_num)
/* Set up the frame properties and allocate the buffer for the data. */
frame->sample_rate = INPUT_SAMPLERATE;
frame->format = INPUT_FORMAT;
av_channel_layout_copy(&frame->ch_layout, &INPUT_CHANNEL_LAYOUT);
frame->channel_layout = INPUT_CHANNEL_LAYOUT;
frame->nb_samples = FRAME_SIZE;
frame->pts = frame_num * FRAME_SIZE;

View File

@@ -23,11 +23,9 @@
*/
/**
* @file audio decoding and filtering usage example
* @example decode_filter_audio.c
*
* Demux, decode and filter audio input file, generate a raw audio
* file to be played with ffplay.
* @file
* API example for audio decoding and filtering
* @example filtering_audio.c
*/
#include <unistd.h>
@@ -36,7 +34,6 @@
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
static const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
@@ -51,8 +48,8 @@ static int audio_stream_index = -1;
static int open_input_file(const char *filename)
{
const AVCodec *dec;
int ret;
AVCodec *dec;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
@@ -96,6 +93,7 @@ static int init_filters(const char *filters_descr)
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
static const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };
static const int out_sample_rates[] = { 8000, -1 };
const AVFilterLink *outlink;
AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;
@@ -107,13 +105,12 @@ static int init_filters(const char *filters_descr)
}
/* buffer audio source: the decoded frames from the decoder will be inserted here. */
if (dec_ctx->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC)
av_channel_layout_default(&dec_ctx->ch_layout, dec_ctx->ch_layout.nb_channels);
ret = snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=",
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
time_base.num, time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt));
av_channel_layout_describe(&dec_ctx->ch_layout, args + ret, sizeof(args) - ret);
av_get_sample_fmt_name(dec_ctx->sample_fmt), dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, abuffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
@@ -136,7 +133,7 @@ static int init_filters(const char *filters_descr)
goto end;
}
ret = av_opt_set(buffersink_ctx, "ch_layouts", "mono",
ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
@@ -187,7 +184,7 @@ static int init_filters(const char *filters_descr)
/* Print summary of the sink buffer
* Note: args buffer is reused to store channel layout string */
outlink = buffersink_ctx->inputs[0];
av_channel_layout_describe(&outlink->ch_layout, args, sizeof(args));
av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);
av_log(NULL, AV_LOG_INFO, "Output: srate:%dHz fmt:%s chlayout:%s\n",
(int)outlink->sample_rate,
(char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
@@ -202,7 +199,7 @@ end:
static void print_frame(const AVFrame *frame)
{
const int n = frame->nb_samples * frame->ch_layout.nb_channels;
const int n = frame->nb_samples * av_get_channel_layout_nb_channels(frame->channel_layout);
const uint16_t *p = (uint16_t*)frame->data[0];
const uint16_t *p_end = p + n;
@@ -217,12 +214,12 @@ static void print_frame(const AVFrame *frame)
int main(int argc, char **argv)
{
int ret;
AVPacket *packet = av_packet_alloc();
AVPacket packet;
AVFrame *frame = av_frame_alloc();
AVFrame *filt_frame = av_frame_alloc();
if (!packet || !frame || !filt_frame) {
fprintf(stderr, "Could not allocate frame or packet\n");
if (!frame || !filt_frame) {
perror("Could not allocate frame");
exit(1);
}
if (argc != 2) {
@@ -237,11 +234,11 @@ int main(int argc, char **argv)
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, packet)) < 0)
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
if (packet->stream_index == audio_stream_index) {
ret = avcodec_send_packet(dec_ctx, packet);
if (packet.stream_index == audio_stream_index) {
ret = avcodec_send_packet(dec_ctx, &packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
@@ -277,13 +274,12 @@ int main(int argc, char **argv)
}
}
}
av_packet_unref(packet);
av_packet_unref(&packet);
}
end:
avfilter_graph_free(&filter_graph);
avcodec_free_context(&dec_ctx);
avformat_close_input(&fmt_ctx);
av_packet_free(&packet);
av_frame_free(&frame);
av_frame_free(&filt_frame);

View File

@@ -24,7 +24,7 @@
/**
* @file
* API example for decoding and filtering
* @example decode_filter_video.c
* @example filtering_video.c
*/
#define _XOPEN_SOURCE 600 /* for usleep */
@@ -53,8 +53,8 @@ static int64_t last_pts = AV_NOPTS_VALUE;
static int open_input_file(const char *filename)
{
const AVCodec *dec;
int ret;
AVCodec *dec;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
@@ -210,7 +210,7 @@ static void display_frame(const AVFrame *frame, AVRational time_base)
int main(int argc, char **argv)
{
int ret;
AVPacket *packet;
AVPacket packet;
AVFrame *frame;
AVFrame *filt_frame;
@@ -221,9 +221,8 @@ int main(int argc, char **argv)
frame = av_frame_alloc();
filt_frame = av_frame_alloc();
packet = av_packet_alloc();
if (!frame || !filt_frame || !packet) {
fprintf(stderr, "Could not allocate frame or packet\n");
if (!frame || !filt_frame) {
perror("Could not allocate frame");
exit(1);
}
@@ -234,11 +233,11 @@ int main(int argc, char **argv)
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, packet)) < 0)
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
if (packet->stream_index == video_stream_index) {
ret = avcodec_send_packet(dec_ctx, packet);
if (packet.stream_index == video_stream_index) {
ret = avcodec_send_packet(dec_ctx, &packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
@@ -274,7 +273,7 @@ int main(int argc, char **argv)
av_frame_unref(frame);
}
}
av_packet_unref(packet);
av_packet_unref(&packet);
}
end:
avfilter_graph_free(&filter_graph);
@@ -282,7 +281,6 @@ end:
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
av_packet_free(&packet);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));

View File

@@ -21,11 +21,12 @@
*/
/**
* @file libavformat multi-client network API usage example
* @example avio_http_serve_files.c
* @file
* libavformat multi-client network API usage example.
*
* Serve a file without decoding or demuxing it over the HTTP protocol. Multiple
* clients can connect and will receive the same file.
* @example http_multiclient.c
* This example will serve a file without decoding or demuxing it over http.
* Multiple clients can connect and will receive the same file.
*/
#include <libavformat/avformat.h>

View File

@@ -24,11 +24,12 @@
*/
/**
* @file HW-accelerated decoding API usage.example
* @example hw_decode.c
* @file
* HW-Accelerated decoding example.
*
* Perform HW-accelerated decoding with output frames from HW video
* surfaces.
* @example hw_decode.c
* This example shows how to do HW-accelerated decoding with output
* frames from the HW video surfaces.
*/
#include <stdio.h>
@@ -151,8 +152,8 @@ int main(int argc, char *argv[])
int video_stream, ret;
AVStream *video = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder = NULL;
AVPacket *packet = NULL;
AVCodec *decoder = NULL;
AVPacket packet;
enum AVHWDeviceType type;
int i;
@@ -171,12 +172,6 @@ int main(int argc, char *argv[])
return -1;
}
packet = av_packet_alloc();
if (!packet) {
fprintf(stderr, "Failed to allocate AVPacket\n");
return -1;
}
/* open the input file */
if (avformat_open_input(&input_ctx, argv[2], NULL, NULL) != 0) {
fprintf(stderr, "Cannot open input file '%s'\n", argv[2]);
@@ -232,21 +227,23 @@ int main(int argc, char *argv[])
/* actual decoding and dump the raw data */
while (ret >= 0) {
if ((ret = av_read_frame(input_ctx, packet)) < 0)
if ((ret = av_read_frame(input_ctx, &packet)) < 0)
break;
if (video_stream == packet->stream_index)
ret = decode_write(decoder_ctx, packet);
if (video_stream == packet.stream_index)
ret = decode_write(decoder_ctx, &packet);
av_packet_unref(packet);
av_packet_unref(&packet);
}
/* flush the decoder */
ret = decode_write(decoder_ctx, NULL);
packet.data = NULL;
packet.size = 0;
ret = decode_write(decoder_ctx, &packet);
av_packet_unref(&packet);
if (output_file)
fclose(output_file);
av_packet_free(&packet);
avcodec_free_context(&decoder_ctx);
avformat_close_input(&input_ctx);
av_buffer_unref(&hw_device_ctx);

View File

@@ -21,10 +21,9 @@
*/
/**
* @file libavformat metadata extraction API usage example
* @example show_metadata.c
*
* Show metadata from an input file.
* @file
* Shows how the metadata API can be used in application programs.
* @example metadata.c
*/
#include <stdio.h>
@@ -35,7 +34,7 @@
int main (int argc, char **argv)
{
AVFormatContext *fmt_ctx = NULL;
const AVDictionaryEntry *tag = NULL;
AVDictionaryEntry *tag = NULL;
int ret;
if (argc != 2) {
@@ -53,7 +52,7 @@ int main (int argc, char **argv)
return ret;
}
while ((tag = av_dict_iterate(fmt_ctx->metadata, tag)))
while ((tag = av_dict_get(fmt_ctx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX)))
printf("%s=%s\n", tag->key, tag->value);
avformat_close_input(&fmt_ctx);

View File

@@ -21,11 +21,12 @@
*/
/**
* @file libavformat muxing API usage example
* @example mux.c
* @file
* libavformat API example.
*
* Generate a synthetic audio and video signal and mux them to a media file in
* any supported libavformat format. The default codecs are used.
* Output a media file in any supported libavformat format. The default
* codecs are used.
* @example muxing.c
*/
#include <stdlib.h>
@@ -38,7 +39,6 @@
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/timestamp.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
@@ -61,8 +61,6 @@ typedef struct OutputStream {
AVFrame *frame;
AVFrame *tmp_frame;
AVPacket *tmp_pkt;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
@@ -81,7 +79,7 @@ static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
}
static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
AVStream *st, AVFrame *frame, AVPacket *pkt)
AVStream *st, AVFrame *frame)
{
int ret;
@@ -94,7 +92,9 @@ static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
}
while (ret >= 0) {
ret = avcodec_receive_packet(c, pkt);
AVPacket pkt = { 0 };
ret = avcodec_receive_packet(c, &pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
else if (ret < 0) {
@@ -103,15 +103,13 @@ static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
}
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, c->time_base, st->time_base);
pkt->stream_index = st->index;
av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
ret = av_interleaved_write_frame(fmt_ctx, pkt);
/* pkt is now blank (av_interleaved_write_frame() takes ownership of
* its contents and resets pkt), so that no unreferencing is necessary.
* This would be different if one used av_write_frame(). */
log_packet(fmt_ctx, &pkt);
ret = av_interleaved_write_frame(fmt_ctx, &pkt);
av_packet_unref(&pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret));
exit(1);
@@ -123,7 +121,7 @@ static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
const AVCodec **codec,
AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
@@ -137,12 +135,6 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
exit(1);
}
ost->tmp_pkt = av_packet_alloc();
if (!ost->tmp_pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
exit(1);
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
fprintf(stderr, "Could not allocate stream\n");
@@ -169,7 +161,16 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
c->sample_rate = 44100;
}
}
av_channel_layout_copy(&c->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO);
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break;
@@ -214,22 +215,25 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
/* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
const AVChannelLayout *channel_layout,
uint64_t channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
int ret;
if (!frame) {
fprintf(stderr, "Error allocating an audio frame\n");
exit(1);
}
frame->format = sample_fmt;
av_channel_layout_copy(&frame->ch_layout, channel_layout);
frame->channel_layout = channel_layout;
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
if (nb_samples) {
if (av_frame_get_buffer(frame, 0) < 0) {
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
@@ -238,8 +242,7 @@ static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
return frame;
}
static void open_audio(AVFormatContext *oc, const AVCodec *codec,
OutputStream *ost, AVDictionary *opt_arg)
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
AVCodecContext *c;
int nb_samples;
@@ -268,9 +271,9 @@ static void open_audio(AVFormatContext *oc, const AVCodec *codec,
else
nb_samples = c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, &c->ch_layout,
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
c->sample_rate, nb_samples);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, &c->ch_layout,
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
c->sample_rate, nb_samples);
/* copy the stream parameters to the muxer */
@@ -288,10 +291,10 @@ static void open_audio(AVFormatContext *oc, const AVCodec *codec,
}
/* set options */
av_opt_set_chlayout (ost->swr_ctx, "in_chlayout", &c->ch_layout, 0);
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_chlayout (ost->swr_ctx, "out_chlayout", &c->ch_layout, 0);
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
@@ -317,7 +320,7 @@ static AVFrame *get_audio_frame(OutputStream *ost)
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->enc->ch_layout.nb_channels; i++)
for (i = 0; i < ost->enc->channels; i++)
*q++ = v;
ost->t += ost->tincr;
ost->tincr += ost->tincr2;
@@ -347,7 +350,8 @@ static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */
/* compute destination number of samples */
dst_nb_samples = swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples;
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP);
av_assert0(dst_nb_samples == frame->nb_samples);
/* when we pass a frame to the encoder, it may keep a reference to it
@@ -372,37 +376,36 @@ static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
ost->samples_count += dst_nb_samples;
}
return write_frame(oc, c, ost->st, frame, ost->tmp_pkt);
return write_frame(oc, c, ost->st, frame);
}
/**************************************************************/
/* video output */
static AVFrame *alloc_frame(enum AVPixelFormat pix_fmt, int width, int height)
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *frame;
AVFrame *picture;
int ret;
frame = av_frame_alloc();
if (!frame)
picture = av_frame_alloc();
if (!picture)
return NULL;
frame->format = pix_fmt;
frame->width = width;
frame->height = height;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(frame, 0);
ret = av_frame_get_buffer(picture, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return frame;
return picture;
}
static void open_video(AVFormatContext *oc, const AVCodec *codec,
OutputStream *ost, AVDictionary *opt_arg)
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
int ret;
AVCodecContext *c = ost->enc;
@@ -419,7 +422,7 @@ static void open_video(AVFormatContext *oc, const AVCodec *codec,
}
/* allocate and init a re-usable frame */
ost->frame = alloc_frame(c->pix_fmt, c->width, c->height);
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
@@ -430,9 +433,9 @@ static void open_video(AVFormatContext *oc, const AVCodec *codec,
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_frame(AV_PIX_FMT_YUV420P, c->width, c->height);
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary video frame\n");
fprintf(stderr, "Could not allocate temporary picture\n");
exit(1);
}
}
@@ -515,7 +518,7 @@ static AVFrame *get_video_frame(OutputStream *ost)
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
return write_frame(oc, ost->enc, ost->st, get_video_frame(ost), ost->tmp_pkt);
return write_frame(oc, ost->enc, ost->st, get_video_frame(ost));
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
@@ -523,7 +526,6 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
avcodec_free_context(&ost->enc);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
av_packet_free(&ost->tmp_pkt);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
}
@@ -534,10 +536,10 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
int main(int argc, char **argv)
{
OutputStream video_st = { 0 }, audio_st = { 0 };
const AVOutputFormat *fmt;
const char *filename;
AVOutputFormat *fmt;
AVFormatContext *oc;
const AVCodec *audio_codec, *video_codec;
AVCodec *audio_codec, *video_codec;
int ret;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
@@ -624,6 +626,10 @@ int main(int argc, char **argv)
}
}
/* Write the trailer, if any. The trailer must be written before you
* close the CodecContexts open when you wrote the header; otherwise
* av_write_trailer() may try to use memory that was freed on
* av_codec_close(). */
av_write_trailer(oc);
/* Close each codec. */

View File

@@ -1,435 +0,0 @@
/*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file Intel QSV-accelerated video transcoding API usage example
* @example qsv_transcode.c
*
* Perform QSV-accelerated transcoding and show to dynamically change
* encoder's options.
*
* Usage: qsv_transcode input_stream codec output_stream initial option
* { frame_number new_option }
* e.g: - qsv_transcode input.mp4 h264_qsv output_h264.mp4 "g 60"
* - qsv_transcode input.mp4 hevc_qsv output_hevc.mp4 "g 60 async_depth 1"
* 100 "g 120"
* (initialize codec with gop_size 60 and change it to 120 after 100
* frames)
*/
#include <stdio.h>
#include <errno.h>
#include <libavutil/hwcontext.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
static AVBufferRef *hw_device_ctx = NULL;
static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
static int video_stream = -1;
typedef struct DynamicSetting {
int frame_number;
char* optstr;
} DynamicSetting;
static DynamicSetting *dynamic_setting;
static int setting_number;
static int current_setting_number;
static int str_to_dict(char* optstr, AVDictionary **opt)
{
char *key, *value;
if (strlen(optstr) == 0)
return 0;
key = strtok(optstr, " ");
if (key == NULL)
return AVERROR(ENAVAIL);
value = strtok(NULL, " ");
if (value == NULL)
return AVERROR(ENAVAIL);
av_dict_set(opt, key, value, 0);
do {
key = strtok(NULL, " ");
if (key == NULL)
return 0;
value = strtok(NULL, " ");
if (value == NULL)
return AVERROR(ENAVAIL);
av_dict_set(opt, key, value, 0);
} while(1);
}
static int dynamic_set_parameter(AVCodecContext *avctx)
{
AVDictionary *opts = NULL;
int ret = 0;
static int frame_number = 0;
frame_number++;
if (current_setting_number < setting_number &&
frame_number == dynamic_setting[current_setting_number].frame_number) {
AVDictionaryEntry *e = NULL;
ret = str_to_dict(dynamic_setting[current_setting_number++].optstr, &opts);
if (ret < 0) {
fprintf(stderr, "The dynamic parameter is wrong\n");
goto fail;
}
/* Set common option. The dictionary will be freed and replaced
* by a new one containing all options not found in common option list.
* Then this new dictionary is used to set private option. */
if ((ret = av_opt_set_dict(avctx, &opts)) < 0)
goto fail;
/* Set codec specific option */
if ((ret = av_opt_set_dict(avctx->priv_data, &opts)) < 0)
goto fail;
/* There is no "framerate" option in commom option list. Use "-r" to set
* framerate, which is compatible with ffmpeg commandline. The video is
* assumed to be average frame rate, so set time_base to 1/framerate. */
e = av_dict_get(opts, "r", NULL, 0);
if (e) {
avctx->framerate = av_d2q(atof(e->value), INT_MAX);
encoder_ctx->time_base = av_inv_q(encoder_ctx->framerate);
}
}
fail:
av_dict_free(&opts);
return ret;
}
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int open_input_file(char *filename)
{
int ret;
const AVCodec *decoder = NULL;
AVStream *video = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
fprintf(stderr, "Cannot open input file '%s', Error code: %s\n",
filename, av_err2str(ret));
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
fprintf(stderr, "Cannot find input stream information. Error code: %s\n",
av_err2str(ret));
return ret;
}
ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Cannot find a video stream in the input file. "
"Error code: %s\n", av_err2str(ret));
return ret;
}
video_stream = ret;
video = ifmt_ctx->streams[video_stream];
switch(video->codecpar->codec_id) {
case AV_CODEC_ID_H264:
decoder = avcodec_find_decoder_by_name("h264_qsv");
break;
case AV_CODEC_ID_HEVC:
decoder = avcodec_find_decoder_by_name("hevc_qsv");
break;
case AV_CODEC_ID_VP9:
decoder = avcodec_find_decoder_by_name("vp9_qsv");
break;
case AV_CODEC_ID_VP8:
decoder = avcodec_find_decoder_by_name("vp8_qsv");
break;
case AV_CODEC_ID_AV1:
decoder = avcodec_find_decoder_by_name("av1_qsv");
break;
case AV_CODEC_ID_MPEG2VIDEO:
decoder = avcodec_find_decoder_by_name("mpeg2_qsv");
break;
case AV_CODEC_ID_MJPEG:
decoder = avcodec_find_decoder_by_name("mjpeg_qsv");
break;
default:
fprintf(stderr, "Codec is not supportted by qsv\n");
return AVERROR(ENAVAIL);
}
if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
return AVERROR(ENOMEM);
if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
fprintf(stderr, "avcodec_parameters_to_context error. Error code: %s\n",
av_err2str(ret));
return ret;
}
decoder_ctx->framerate = av_guess_frame_rate(ifmt_ctx, video, NULL);
decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
if (!decoder_ctx->hw_device_ctx) {
fprintf(stderr, "A hardware device reference create failed.\n");
return AVERROR(ENOMEM);
}
decoder_ctx->get_format = get_format;
decoder_ctx->pkt_timebase = video->time_base;
if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
fprintf(stderr, "Failed to open codec for decoding. Error code: %s\n",
av_err2str(ret));
return ret;
}
static int encode_write(AVPacket *enc_pkt, AVFrame *frame)
{
int ret = 0;
av_packet_unref(enc_pkt);
if((ret = dynamic_set_parameter(encoder_ctx)) < 0) {
fprintf(stderr, "Failed to set dynamic parameter. Error code: %s\n",
av_err2str(ret));
goto end;
}
if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
fprintf(stderr, "Error during encoding. Error code: %s\n", av_err2str(ret));
goto end;
}
while (1) {
if (ret = avcodec_receive_packet(encoder_ctx, enc_pkt))
break;
enc_pkt->stream_index = 0;
av_packet_rescale_ts(enc_pkt, encoder_ctx->time_base,
ofmt_ctx->streams[0]->time_base);
if ((ret = av_interleaved_write_frame(ofmt_ctx, enc_pkt)) < 0) {
fprintf(stderr, "Error during writing data to output file. "
"Error code: %s\n", av_err2str(ret));
return ret;
}
}
end:
if (ret == AVERROR_EOF)
return 0;
ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
return ret;
}
static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec, char *optstr)
{
AVFrame *frame;
int ret = 0;
ret = avcodec_send_packet(decoder_ctx, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding. Error code: %s\n", av_err2str(ret));
return ret;
}
while (ret >= 0) {
if (!(frame = av_frame_alloc()))
return AVERROR(ENOMEM);
ret = avcodec_receive_frame(decoder_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
av_frame_free(&frame);
return 0;
} else if (ret < 0) {
fprintf(stderr, "Error while decoding. Error code: %s\n", av_err2str(ret));
goto fail;
}
if (!encoder_ctx->hw_frames_ctx) {
AVDictionaryEntry *e = NULL;
AVDictionary *opts = NULL;
AVStream *ost;
/* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.
Only after we get a decoded frame, can we obtain its hw_frames_ctx */
encoder_ctx->hw_frames_ctx = av_buffer_ref(decoder_ctx->hw_frames_ctx);
if (!encoder_ctx->hw_frames_ctx) {
ret = AVERROR(ENOMEM);
goto fail;
}
/* set AVCodecContext Parameters for encoder, here we keep them stay
* the same as decoder.
*/
encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
encoder_ctx->pix_fmt = AV_PIX_FMT_QSV;
encoder_ctx->width = decoder_ctx->width;
encoder_ctx->height = decoder_ctx->height;
if ((ret = str_to_dict(optstr, &opts)) < 0) {
fprintf(stderr, "Failed to set encoding parameter.\n");
goto fail;
}
/* There is no "framerate" option in commom option list. Use "-r" to
* set framerate, which is compatible with ffmpeg commandline. The
* video is assumed to be average frame rate, so set time_base to
* 1/framerate. */
e = av_dict_get(opts, "r", NULL, 0);
if (e) {
encoder_ctx->framerate = av_d2q(atof(e->value), INT_MAX);
encoder_ctx->time_base = av_inv_q(encoder_ctx->framerate);
}
if ((ret = avcodec_open2(encoder_ctx, enc_codec, &opts)) < 0) {
fprintf(stderr, "Failed to open encode codec. Error code: %s\n",
av_err2str(ret));
av_dict_free(&opts);
goto fail;
}
av_dict_free(&opts);
if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
fprintf(stderr, "Failed to allocate stream for output format.\n");
ret = AVERROR(ENOMEM);
goto fail;
}
ost->time_base = encoder_ctx->time_base;
ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
if (ret < 0) {
fprintf(stderr, "Failed to copy the stream parameters. "
"Error code: %s\n", av_err2str(ret));
goto fail;
}
/* write the stream header */
if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
fprintf(stderr, "Error while writing stream header. "
"Error code: %s\n", av_err2str(ret));
goto fail;
}
}
frame->pts = av_rescale_q(frame->pts, decoder_ctx->pkt_timebase,
encoder_ctx->time_base);
if ((ret = encode_write(pkt, frame)) < 0)
fprintf(stderr, "Error during encoding and writing.\n");
fail:
av_frame_free(&frame);
}
return ret;
}
int main(int argc, char **argv)
{
const AVCodec *enc_codec;
int ret = 0;
AVPacket *dec_pkt = NULL;
if (argc < 5 || (argc - 5) % 2) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <encoder> <output file>"
" <\"encoding option set 0\"> [<frame_number> <\"encoding options set 1\">]...\n", argv[0]);
return 1;
}
setting_number = (argc - 5) / 2;
dynamic_setting = av_malloc(setting_number * sizeof(*dynamic_setting));
current_setting_number = 0;
for (int i = 0; i < setting_number; i++) {
dynamic_setting[i].frame_number = atoi(argv[i*2 + 5]);
dynamic_setting[i].optstr = argv[i*2 + 6];
}
ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_QSV, NULL, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Failed to create a QSV device. Error code: %s\n", av_err2str(ret));
goto end;
}
dec_pkt = av_packet_alloc();
if (!dec_pkt) {
fprintf(stderr, "Failed to allocate decode packet\n");
goto end;
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
fprintf(stderr, "Could not find encoder '%s'\n", argv[2]);
ret = -1;
goto end;
}
if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
fprintf(stderr, "Failed to deduce output format from file extension. Error code: "
"%s\n", av_err2str(ret));
goto end;
}
if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
ret = AVERROR(ENOMEM);
goto end;
}
ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Cannot open output file. "
"Error code: %s\n", av_err2str(ret));
goto end;
}
/* read all packets and only transcoding video */
while (ret >= 0) {
if ((ret = av_read_frame(ifmt_ctx, dec_pkt)) < 0)
break;
if (video_stream == dec_pkt->stream_index)
ret = dec_enc(dec_pkt, enc_codec, argv[4]);
av_packet_unref(dec_pkt);
}
/* flush decoder */
av_packet_unref(dec_pkt);
if ((ret = dec_enc(dec_pkt, enc_codec, argv[4])) < 0) {
fprintf(stderr, "Failed to flush decoder %s\n", av_err2str(ret));
goto end;
}
/* flush encoder */
if ((ret = encode_write(dec_pkt, NULL)) < 0) {
fprintf(stderr, "Failed to flush encoder %s\n", av_err2str(ret));
goto end;
}
/* write the trailer for output stream */
if ((ret = av_write_trailer(ofmt_ctx)) < 0)
fprintf(stderr, "Failed to write trailer %s\n", av_err2str(ret));
end:
avformat_close_input(&ifmt_ctx);
avformat_close_input(&ofmt_ctx);
avcodec_free_context(&decoder_ctx);
avcodec_free_context(&encoder_ctx);
av_buffer_unref(&hw_device_ctx);
av_packet_free(&dec_pkt);
av_freep(&dynamic_setting);
return ret;
}

View File

@@ -21,11 +21,12 @@
*/
/**
* @file Intel QSV-accelerated H.264 decoding API usage example
* @example qsv_decode.c
* @file
* Intel QSV-accelerated H.264 decoding example.
*
* Perform QSV-accelerated H.264 decoding with output frames in the
* GPU video surfaces, write the decoded frames to an output file.
* @example qsvdec.c
* This example shows how to do QSV-accelerated H.264 decoding with output
* frames in the GPU video surfaces.
*/
#include "config.h"
@@ -43,10 +44,38 @@
#include "libavutil/hwcontext_qsv.h"
#include "libavutil/mem.h"
typedef struct DecodeContext {
AVBufferRef *hw_device_ref;
} DecodeContext;
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
DecodeContext *decode = avctx->opaque;
AVHWFramesContext *frames_ctx;
AVQSVFramesContext *frames_hwctx;
int ret;
/* create a pool of surfaces to be used by the decoder */
avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
if (!avctx->hw_frames_ctx)
return AV_PIX_FMT_NONE;
frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
frames_hwctx = frames_ctx->hwctx;
frames_ctx->format = AV_PIX_FMT_QSV;
frames_ctx->sw_format = avctx->sw_pix_fmt;
frames_ctx->width = FFALIGN(avctx->coded_width, 32);
frames_ctx->height = FFALIGN(avctx->coded_height, 32);
frames_ctx->initial_pool_size = 32;
frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
if (ret < 0)
return AV_PIX_FMT_NONE;
return AV_PIX_FMT_QSV;
}
@@ -58,7 +87,7 @@ static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
return AV_PIX_FMT_NONE;
}
static int decode_packet(AVCodecContext *decoder_ctx,
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
AVFrame *frame, AVFrame *sw_frame,
AVPacket *pkt, AVIOContext *output_ctx)
{
@@ -112,15 +141,15 @@ int main(int argc, char **argv)
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder;
AVPacket *pkt = NULL;
AVPacket pkt = { 0 };
AVFrame *frame = NULL, *sw_frame = NULL;
DecodeContext decode = { NULL };
AVIOContext *output_ctx = NULL;
int ret, i;
AVBufferRef *device_ref = NULL;
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
@@ -148,7 +177,7 @@ int main(int argc, char **argv)
}
/* open the hardware device */
ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_QSV,
ret = av_hwdevice_ctx_create(&decode.hw_device_ref, AV_HWDEVICE_TYPE_QSV,
"auto", NULL, 0);
if (ret < 0) {
fprintf(stderr, "Cannot open the hardware device\n");
@@ -180,8 +209,7 @@ int main(int argc, char **argv)
decoder_ctx->extradata_size = video_st->codecpar->extradata_size;
}
decoder_ctx->hw_device_ctx = av_buffer_ref(device_ref);
decoder_ctx->opaque = &decode;
decoder_ctx->get_format = get_format;
ret = avcodec_open2(decoder_ctx, NULL, NULL);
@@ -199,26 +227,27 @@ int main(int argc, char **argv)
frame = av_frame_alloc();
sw_frame = av_frame_alloc();
pkt = av_packet_alloc();
if (!frame || !sw_frame || !pkt) {
if (!frame || !sw_frame) {
ret = AVERROR(ENOMEM);
goto finish;
}
/* actual decoding */
while (ret >= 0) {
ret = av_read_frame(input_ctx, pkt);
ret = av_read_frame(input_ctx, &pkt);
if (ret < 0)
break;
if (pkt->stream_index == video_st->index)
ret = decode_packet(decoder_ctx, frame, sw_frame, pkt, output_ctx);
if (pkt.stream_index == video_st->index)
ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
av_packet_unref(pkt);
av_packet_unref(&pkt);
}
/* flush the decoder */
ret = decode_packet(decoder_ctx, frame, sw_frame, NULL, output_ctx);
pkt.data = NULL;
pkt.size = 0;
ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
finish:
if (ret < 0) {
@@ -231,11 +260,10 @@ finish:
av_frame_free(&frame);
av_frame_free(&sw_frame);
av_packet_free(&pkt);
avcodec_free_context(&decoder_ctx);
av_buffer_unref(&device_ref);
av_buffer_unref(&decode.hw_device_ref);
avio_close(output_ctx);

View File

@@ -21,11 +21,11 @@
*/
/**
* @file libavformat/libavcodec demuxing and muxing API usage example
* @example remux.c
* @file
* libavformat/libavcodec demuxing and muxing API example.
*
* Remux streams from one container format to another. Data is copied from the
* input to the output without transcoding.
* Remux streams from one container format to another.
* @example remuxing.c
*/
#include <libavutil/timestamp.h>
@@ -45,9 +45,9 @@ static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, cons
int main(int argc, char **argv)
{
const AVOutputFormat *ofmt = NULL;
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket *pkt = NULL;
AVPacket pkt;
const char *in_filename, *out_filename;
int ret, i;
int stream_index = 0;
@@ -65,12 +65,6 @@ int main(int argc, char **argv)
in_filename = argv[1];
out_filename = argv[2];
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
return 1;
}
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
@@ -91,7 +85,7 @@ int main(int argc, char **argv)
}
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = av_calloc(stream_mapping_size, sizeof(*stream_mapping));
stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
goto end;
@@ -146,39 +140,38 @@ int main(int argc, char **argv)
while (1) {
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, pkt);
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt->stream_index];
if (pkt->stream_index >= stream_mapping_size ||
stream_mapping[pkt->stream_index] < 0) {
av_packet_unref(pkt);
in_stream = ifmt_ctx->streams[pkt.stream_index];
if (pkt.stream_index >= stream_mapping_size ||
stream_mapping[pkt.stream_index] < 0) {
av_packet_unref(&pkt);
continue;
}
pkt->stream_index = stream_mapping[pkt->stream_index];
out_stream = ofmt_ctx->streams[pkt->stream_index];
log_packet(ifmt_ctx, pkt, "in");
pkt.stream_index = stream_mapping[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
log_packet(ifmt_ctx, &pkt, "in");
/* copy packet */
av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
pkt->pos = -1;
log_packet(ofmt_ctx, pkt, "out");
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out");
ret = av_interleaved_write_frame(ofmt_ctx, pkt);
/* pkt is now blank (av_interleaved_write_frame() takes ownership of
* its contents and resets pkt), so that no unreferencing is necessary.
* This would be different if one used av_write_frame(). */
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
av_packet_unref(&pkt);
}
av_write_trailer(ofmt_ctx);
end:
av_packet_free(&pkt);
avformat_close_input(&ifmt_ctx);

View File

@@ -21,12 +21,8 @@
*/
/**
* @file audio resampling API usage example
* @example resample_audio.c
*
* Generate a synthetic audio signal, and Use libswresample API to perform audio
* resampling. The output is written to a raw audio file to be played with
* ffplay.
* @example resampling_audio.c
* libswresample API use example.
*/
#include <libavutil/opt.h>
@@ -84,7 +80,7 @@ static void fill_samples(double *dst, int nb_samples, int nb_channels, int sampl
int main(int argc, char **argv)
{
AVChannelLayout src_ch_layout = AV_CHANNEL_LAYOUT_STEREO, dst_ch_layout = AV_CHANNEL_LAYOUT_SURROUND;
int64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_SURROUND;
int src_rate = 48000, dst_rate = 44100;
uint8_t **src_data = NULL, **dst_data = NULL;
int src_nb_channels = 0, dst_nb_channels = 0;
@@ -96,7 +92,6 @@ int main(int argc, char **argv)
int dst_bufsize;
const char *fmt;
struct SwrContext *swr_ctx;
char buf[64];
double t;
int ret;
@@ -125,11 +120,11 @@ int main(int argc, char **argv)
}
/* set options */
av_opt_set_chlayout(swr_ctx, "in_chlayout", &src_ch_layout, 0);
av_opt_set_int(swr_ctx, "in_channel_layout", src_ch_layout, 0);
av_opt_set_int(swr_ctx, "in_sample_rate", src_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0);
av_opt_set_chlayout(swr_ctx, "out_chlayout", &dst_ch_layout, 0);
av_opt_set_int(swr_ctx, "out_channel_layout", dst_ch_layout, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", dst_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0);
@@ -141,7 +136,7 @@ int main(int argc, char **argv)
/* allocate source and destination samples buffers */
src_nb_channels = src_ch_layout.nb_channels;
src_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);
ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels,
src_nb_samples, src_sample_fmt, 0);
if (ret < 0) {
@@ -156,7 +151,7 @@ int main(int argc, char **argv)
av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
/* buffer is going to be directly written to a rawaudio file, no alignment */
dst_nb_channels = dst_ch_layout.nb_channels;
dst_nb_channels = av_get_channel_layout_nb_channels(dst_ch_layout);
ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 0);
if (ret < 0) {
@@ -199,10 +194,9 @@ int main(int argc, char **argv)
if ((ret = get_format_from_sample_fmt(&fmt, dst_sample_fmt)) < 0)
goto end;
av_channel_layout_describe(&dst_ch_layout, buf, sizeof(buf));
fprintf(stderr, "Resampling succeeded. Play the output file with the command:\n"
"ffplay -f %s -channel_layout %s -channels %d -ar %d %s\n",
fmt, buf, dst_nb_channels, dst_rate, dst_filename);
"ffplay -f %s -channel_layout %"PRId64" -channels %d -ar %d %s\n",
fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
end:
fclose(dst_file);

View File

@@ -21,10 +21,9 @@
*/
/**
* @file libswscale API usage example
* @example scale_video.c
*
* Generate a synthetic video signal and use libswscale to perform rescaling.
* @file
* libswscale API use example.
* @example scaling_video.c
*/
#include <libavutil/imgutils.h>

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2013-2022 Andreas Unterweger
* Copyright (c) 2013-2018 Andreas Unterweger
*
* This file is part of FFmpeg.
*
@@ -19,11 +19,12 @@
*/
/**
* @file audio transcoding to MPEG/AAC API usage example
* @example transcode_aac.c
* @file
* Simple audio converter
*
* Convert an input audio file to AAC in an MP4 container. Formats other than
* MP4 are supported based on the output file extension.
* @example transcode_aac.c
* Convert an input audio file to AAC in an MP4 container using FFmpeg.
* Formats other than MP4 are supported based on the output file extension.
* @author Andreas Unterweger (dustsigns@gmail.com)
*/
@@ -37,7 +38,6 @@
#include "libavutil/audio_fifo.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
#include "libavutil/channel_layout.h"
#include "libavutil/frame.h"
#include "libavutil/opt.h"
@@ -60,8 +60,7 @@ static int open_input_file(const char *filename,
AVCodecContext **input_codec_context)
{
AVCodecContext *avctx;
const AVCodec *input_codec;
const AVStream *stream;
AVCodec *input_codec;
int error;
/* Open the input file to read from it. */
@@ -89,10 +88,8 @@ static int open_input_file(const char *filename,
return AVERROR_EXIT;
}
stream = (*input_format_context)->streams[0];
/* Find a decoder for the audio stream. */
if (!(input_codec = avcodec_find_decoder(stream->codecpar->codec_id))) {
if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codecpar->codec_id))) {
fprintf(stderr, "Could not find input codec\n");
avformat_close_input(input_format_context);
return AVERROR_EXIT;
@@ -107,7 +104,7 @@ static int open_input_file(const char *filename,
}
/* Initialize the stream parameters with demuxer information. */
error = avcodec_parameters_to_context(avctx, stream->codecpar);
error = avcodec_parameters_to_context(avctx, (*input_format_context)->streams[0]->codecpar);
if (error < 0) {
avformat_close_input(input_format_context);
avcodec_free_context(&avctx);
@@ -123,9 +120,6 @@ static int open_input_file(const char *filename,
return error;
}
/* Set the packet timebase for the decoder. */
avctx->pkt_timebase = stream->time_base;
/* Save the decoder context for easier access later. */
*input_codec_context = avctx;
@@ -150,7 +144,7 @@ static int open_output_file(const char *filename,
AVCodecContext *avctx = NULL;
AVIOContext *output_io_context = NULL;
AVStream *stream = NULL;
const AVCodec *output_codec = NULL;
AVCodec *output_codec = NULL;
int error;
/* Open the output file to write to it. */
@@ -205,11 +199,15 @@ static int open_output_file(const char *filename,
/* Set the basic encoder parameters.
* The input file's sample rate is used to avoid a sample rate conversion. */
av_channel_layout_default(&avctx->ch_layout, OUTPUT_CHANNELS);
avctx->channels = OUTPUT_CHANNELS;
avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
avctx->sample_rate = input_codec_context->sample_rate;
avctx->sample_fmt = output_codec->sample_fmts[0];
avctx->bit_rate = OUTPUT_BIT_RATE;
/* Allow the use of the experimental AAC encoder. */
avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
/* Set the sample rate for the container. */
stream->time_base.den = input_codec_context->sample_rate;
stream->time_base.num = 1;
@@ -291,18 +289,21 @@ static int init_resampler(AVCodecContext *input_codec_context,
/*
* Create a resampler context for the conversion.
* Set the conversion parameters.
* Default channel layouts based on the number of channels
* are assumed for simplicity (they are sometimes not detected
* properly by the demuxer and/or decoder).
*/
error = swr_alloc_set_opts2(resample_context,
&output_codec_context->ch_layout,
*resample_context = swr_alloc_set_opts(NULL,
av_get_default_channel_layout(output_codec_context->channels),
output_codec_context->sample_fmt,
output_codec_context->sample_rate,
&input_codec_context->ch_layout,
av_get_default_channel_layout(input_codec_context->channels),
input_codec_context->sample_fmt,
input_codec_context->sample_rate,
0, NULL);
if (error < 0) {
if (!*resample_context) {
fprintf(stderr, "Could not allocate resample context\n");
return error;
return AVERROR(ENOMEM);
}
/*
* Perform a sanity check so that the number of converted samples is
@@ -330,7 +331,7 @@ static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
{
/* Create the FIFO buffer based on the specified output sample format. */
if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
output_codec_context->ch_layout.nb_channels, 1))) {
output_codec_context->channels, 1))) {
fprintf(stderr, "Could not allocate FIFO\n");
return AVERROR(ENOMEM);
}
@@ -379,8 +380,6 @@ static int decode_audio_frame(AVFrame *frame,
if (error < 0)
return error;
*data_present = 0;
*finished = 0;
/* Read one audio frame from the input file into a temporary packet. */
if ((error = av_read_frame(input_format_context, input_packet)) < 0) {
/* If we are at the end of the file, flush the decoder below. */
@@ -447,17 +446,26 @@ static int init_converted_samples(uint8_t ***converted_input_samples,
int error;
/* Allocate as many pointers as there are audio channels.
* Each pointer will point to the audio samples of the corresponding
* Each pointer will later point to the audio samples of the corresponding
* channels (although it may be NULL for interleaved formats).
* Allocate memory for the samples of all channels in one consecutive
*/
if (!(*converted_input_samples = calloc(output_codec_context->channels,
sizeof(**converted_input_samples)))) {
fprintf(stderr, "Could not allocate converted input sample pointers\n");
return AVERROR(ENOMEM);
}
/* Allocate memory for the samples of all channels in one consecutive
* block for convenience. */
if ((error = av_samples_alloc_array_and_samples(converted_input_samples, NULL,
output_codec_context->ch_layout.nb_channels,
if ((error = av_samples_alloc(*converted_input_samples, NULL,
output_codec_context->channels,
frame_size,
output_codec_context->sample_fmt, 0)) < 0) {
fprintf(stderr,
"Could not allocate converted input samples (error '%s')\n",
av_err2str(error));
av_freep(&(*converted_input_samples)[0]);
free(*converted_input_samples);
return error;
}
return 0;
@@ -550,7 +558,7 @@ static int read_decode_convert_and_store(AVAudioFifo *fifo,
AVFrame *input_frame = NULL;
/* Temporary storage for the converted input samples. */
uint8_t **converted_input_samples = NULL;
int data_present;
int data_present = 0;
int ret = AVERROR_EXIT;
/* Initialize temporary storage for one input frame. */
@@ -589,9 +597,10 @@ static int read_decode_convert_and_store(AVAudioFifo *fifo,
ret = 0;
cleanup:
if (converted_input_samples)
if (converted_input_samples) {
av_freep(&converted_input_samples[0]);
av_freep(&converted_input_samples);
free(converted_input_samples);
}
av_frame_free(&input_frame);
return ret;
@@ -623,7 +632,7 @@ static int init_output_frame(AVFrame **frame,
* Default channel layouts based on the number of channels
* are assumed for simplicity. */
(*frame)->nb_samples = frame_size;
av_channel_layout_copy(&(*frame)->ch_layout, &output_codec_context->ch_layout);
(*frame)->channel_layout = output_codec_context->channel_layout;
(*frame)->format = output_codec_context->sample_fmt;
(*frame)->sample_rate = output_codec_context->sample_rate;
@@ -670,16 +679,17 @@ static int encode_audio_frame(AVFrame *frame,
pts += frame->nb_samples;
}
*data_present = 0;
/* Send the audio frame stored in the temporary packet to the encoder.
* The output audio stream encoder is used to do this. */
error = avcodec_send_frame(output_codec_context, frame);
/* Check for errors, but proceed with fetching encoded samples if the
* encoder signals that it has nothing more to encode. */
if (error < 0 && error != AVERROR_EOF) {
fprintf(stderr, "Could not send packet for encoding (error '%s')\n",
av_err2str(error));
goto cleanup;
/* The encoder signals that it has nothing more to encode. */
if (error == AVERROR_EOF) {
error = 0;
goto cleanup;
} else if (error < 0) {
fprintf(stderr, "Could not send packet for encoding (error '%s')\n",
av_err2str(error));
goto cleanup;
}
/* Receive one encoded frame from the encoder. */
@@ -850,6 +860,7 @@ int main(int argc, char **argv)
int data_written;
/* Flush the encoder as it may have delayed frames. */
do {
data_written = 0;
if (encode_audio_frame(NULL, output_format_context,
output_codec_context, &data_written))
goto cleanup;

View File

@@ -23,18 +23,15 @@
*/
/**
* @file demuxing, decoding, filtering, encoding and muxing API usage example
* @example transcode.c
*
* Convert input to output file, applying some hard-coded filter-graph on both
* audio and video streams.
* @file
* API example for demuxing, decoding, filtering, encoding and muxing
* @example transcoding.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
@@ -74,13 +71,13 @@ static int open_input_file(const char *filename)
return ret;
}
stream_ctx = av_calloc(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
stream_ctx = av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
if (!stream_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *stream = ifmt_ctx->streams[i];
const AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
AVCodecContext *codec_ctx;
if (!dec) {
av_log(NULL, AV_LOG_ERROR, "Failed to find decoder for stream #%u\n", i);
@@ -97,11 +94,6 @@ static int open_input_file(const char *filename)
"for stream #%u\n", i);
return ret;
}
/* Inform the decoder about the timebase for the packet timestamps.
* This is highly recommended, but not mandatory. */
codec_ctx->pkt_timebase = stream->time_base;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
@@ -130,7 +122,7 @@ static int open_output_file(const char *filename)
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
const AVCodec *encoder;
AVCodec *encoder;
int ret;
unsigned int i;
@@ -182,9 +174,8 @@ static int open_output_file(const char *filename)
enc_ctx->time_base = av_inv_q(dec_ctx->framerate);
} else {
enc_ctx->sample_rate = dec_ctx->sample_rate;
ret = av_channel_layout_copy(&enc_ctx->ch_layout, &dec_ctx->ch_layout);
if (ret < 0)
return ret;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
/* take first format from list of supported formats */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
@@ -271,7 +262,7 @@ static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->pkt_timebase.num, dec_ctx->pkt_timebase.den,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
@@ -297,7 +288,6 @@ static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
char buf[64];
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
@@ -306,14 +296,14 @@ static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
goto end;
}
if (dec_ctx->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC)
av_channel_layout_default(&dec_ctx->ch_layout, dec_ctx->ch_layout.nb_channels);
av_channel_layout_describe(&dec_ctx->ch_layout, buf, sizeof(buf));
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=%s",
dec_ctx->pkt_timebase.num, dec_ctx->pkt_timebase.den, dec_ctx->sample_rate,
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
buf);
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
@@ -336,9 +326,9 @@ static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
goto end;
}
av_channel_layout_describe(&enc_ctx->ch_layout, buf, sizeof(buf));
ret = av_opt_set(buffersink_ctx, "ch_layouts",
buf, AV_OPT_SEARCH_CHILDREN);
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
@@ -441,10 +431,6 @@ static int encode_write_frame(unsigned int stream_index, int flush)
/* encode filtered frame */
av_packet_unref(enc_pkt);
if (filt_frame && filt_frame->pts != AV_NOPTS_VALUE)
filt_frame->pts = av_rescale_q(filt_frame->pts, filt_frame->time_base,
stream->enc_ctx->time_base);
ret = avcodec_send_frame(stream->enc_ctx, filt_frame);
if (ret < 0)
@@ -499,7 +485,6 @@ static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
break;
}
filter->filtered_frame->time_base = av_buffersink_get_time_base(filter->buffersink_ctx);
filter->filtered_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(stream_index, 0);
av_frame_unref(filter->filtered_frame);
@@ -554,6 +539,9 @@ int main(int argc, char **argv)
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
av_packet_rescale_ts(packet,
ifmt_ctx->streams[stream_index]->time_base,
stream->dec_ctx->time_base);
ret = avcodec_send_packet(stream->dec_ctx, packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
@@ -585,38 +573,11 @@ int main(int argc, char **argv)
av_packet_unref(packet);
}
/* flush decoders, filters and encoders */
/* flush filters and encoders */
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
StreamContext *stream;
/* flush filter */
if (!filter_ctx[i].filter_graph)
continue;
stream = &stream_ctx[i];
av_log(NULL, AV_LOG_INFO, "Flushing stream %u decoder\n", i);
/* flush decoder */
ret = avcodec_send_packet(stream->dec_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing decoding failed\n");
goto end;
}
while (ret >= 0) {
ret = avcodec_receive_frame(stream->dec_ctx, stream->dec_frame);
if (ret == AVERROR_EOF)
break;
else if (ret < 0)
goto end;
stream->dec_frame->pts = stream->dec_frame->best_effort_timestamp;
ret = filter_encode_write_frame(stream->dec_frame, i);
if (ret < 0)
goto end;
}
/* flush filter */
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");

View File

@@ -1,4 +1,6 @@
/*
* Video Acceleration API (video encoding) encode sample
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
@@ -19,12 +21,13 @@
*/
/**
* @file Intel VAAPI-accelerated encoding API usage example
* @example vaapi_encode.c
* @file
* Intel VAAPI-accelerated encoding example.
*
* @example vaapi_encode.c
* This example shows how to do VAAPI-accelerated encoding. Currently only NV12
* raw input is supported. Usage: vaapi_encode 1920 1080 input.yuv output.h264
*
* Perform VAAPI-accelerated encoding. Read input from an NV12 raw
* file, and write the H.264 encoded data to an output raw file.
* Usage: vaapi_encode 1920 1080 input.yuv output.h264
*/
#include <stdio.h>
@@ -88,10 +91,6 @@ static int encode_write(AVCodecContext *avctx, AVFrame *frame, FILE *fout)
enc_pkt->stream_index = 0;
ret = fwrite(enc_pkt->data, enc_pkt->size, 1, fout);
av_packet_unref(enc_pkt);
if (ret != enc_pkt->size) {
ret = AVERROR(errno);
break;
}
}
end:
@@ -106,7 +105,7 @@ int main(int argc, char *argv[])
FILE *fin = NULL, *fout = NULL;
AVFrame *sw_frame = NULL, *hw_frame = NULL;
AVCodecContext *avctx = NULL;
const AVCodec *codec = NULL;
AVCodec *codec = NULL;
const char *enc_name = "h264_vaapi";
if (argc < 5) {

View File

@@ -1,4 +1,6 @@
/*
* Video Acceleration API (video transcoding) transcode sample
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
@@ -19,10 +21,11 @@
*/
/**
* @file Intel VAAPI-accelerated transcoding API usage example
* @example vaapi_transcode.c
* @file
* Intel VAAPI-accelerated transcoding example.
*
* Perform VAAPI-accelerated transcoding.
* @example vaapi_transcode.c
* This example shows how to do VAAPI-accelerated transcoding.
* Usage: vaapi_transcode input_stream codec output_stream
* e.g: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
* - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf
@@ -59,7 +62,7 @@ static enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,
static int open_input_file(const char *filename)
{
int ret;
const AVCodec *decoder = NULL;
AVCodec *decoder = NULL;
AVStream *video = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
@@ -139,7 +142,7 @@ end:
return ret;
}
static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec)
static int dec_enc(AVPacket *pkt, AVCodec *enc_codec)
{
AVFrame *frame;
int ret = 0;
@@ -215,15 +218,17 @@ static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec)
fail:
av_frame_free(&frame);
if (ret < 0)
return ret;
}
return ret;
return 0;
}
int main(int argc, char **argv)
{
const AVCodec *enc_codec;
int ret = 0;
AVPacket *dec_pkt;
AVCodec *enc_codec;
if (argc != 4) {
fprintf(stderr, "Usage: %s <input file> <encode codec> <output file>\n"

View File

@@ -79,21 +79,6 @@ Do not put a '~' character in the samples path to indicate a home
directory. Because of shell nuances, this will cause FATE to fail.
@end float
To get the complete list of tests, run the command:
@example
make fate-list
@end example
You can specify a subset of tests to run by specifying the
corresponding elements from the list with the @code{fate-} prefix,
e.g. as in:
@example
make fate-ffprobe_compact fate-ffprobe_xml
@end example
This makes it easier to run a few tests in case of failure without
running the complete test suite.
To use a custom wrapper to run the test, pass @option{--target-exec} to
@command{configure} or set the @var{TARGET_EXEC} Make variable.
@@ -223,14 +208,6 @@ meaning only while running the regression tests.
Specify how many threads to use while running regression tests; it is
quite useful for detecting thread-related regressions.
This variable may be set to the string "random", optionally followed by a
number, like "random99". This will cause each test to use a random number of
threads. If a number is specified, it is used as the maximum number of threads,
otherwise 16 is the maximum.
In case a test fails, the thread count used for it will be written into the
errfile.
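As an illustration (the sample path and thread cap below are placeholders), a
randomized-thread run could look like:
@example
make fate THREADS=random8 SAMPLES=fate-suite/
@end example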
@item THREAD_TYPE
Specify which threading strategy test, either @samp{slice} or @samp{frame},
by default @samp{slice+frame}

View File

@@ -1,5 +1,5 @@
slot= # some unique identifier
repo=https://git.ffmpeg.org/ffmpeg.git # the source repository
repo=git://source.ffmpeg.org/ffmpeg.git # the source repository
#branch=release/2.6 # the branch to test
samples= # path to samples directory
workdir= # directory in which to do all the work

View File

@@ -17,9 +17,9 @@ ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_url}@}
@chapter Description
@c man begin DESCRIPTION
@command{ffmpeg} is a universal media converter. It can read a wide variety of
inputs - including live grabbing/recording devices - filter, and transcode them
into a plethora of output formats.
@command{ffmpeg} is a very fast video and audio converter that can also grab from
a live audio/video source. It can also convert between arbitrary sample
rates and resize video on the fly with a high quality polyphase filter.
@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
files, pipes, network streams, grabbing devices, etc.), specified by the
@@ -49,32 +49,24 @@ Do not mix input and output files -- first specify all input files, then all
output files. Also do not mix options which belong to different files. All
options apply ONLY to the next input or output file and are reset between files.
Some simple examples follow.
@itemize
@item
Convert an input media file to a different format, by re-encoding media streams:
To set the video bitrate of the output file to 64 kbit/s:
@example
ffmpeg -i input.avi output.mp4
ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
@end example
@item
Set the video bitrate of the output file to 64 kbit/s:
To force the frame rate of the output file to 24 fps:
@example
ffmpeg -i input.avi -b:v 64k -bufsize 64k output.mp4
ffmpeg -i input.avi -r 24 output.avi
@end example
@item
Force the frame rate of the output file to 24 fps:
To force the frame rate of the input file (valid for raw formats only)
to 1 fps and the frame rate of the output file to 24 fps:
@example
ffmpeg -i input.avi -r 24 output.mp4
@end example
@item
Force the frame rate of the input file (valid for raw formats only) to 1 fps and
the frame rate of the output file to 24 fps:
@example
ffmpeg -r 1 -i input.m2v -r 24 output.mp4
ffmpeg -r 1 -i input.m2v -r 24 output.avi
@end example
@end itemize
@@ -457,11 +449,6 @@ output file already exists.
Set number of times input stream shall be looped. Loop 0 means no loop,
loop -1 means infinite loop.
@item -recast_media (@emph{global})
Allow forcing a decoder of a different media type than the one
detected or designated by the demuxer. Useful for decoding media
data muxed as data streams.
@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
Select an encoder (when used before an output file) or a decoder (when used
@@ -526,21 +513,6 @@ see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1)
Like the @code{-ss} option but relative to the "end of file". That is, negative
values are earlier in the file and 0 is at EOF.
@item -isync @var{input_index} (@emph{input})
Assign an input as a sync source.
This will take the difference between the start times of the target and reference inputs and
offset the timestamps of the target file by that difference. The source timestamps of the two
inputs should derive from the same clock source for expected results. If @code{copyts} is set
then @code{start_at_zero} must also be set. If either of the inputs has no starting timestamp
then no sync adjustment is made.
Acceptable values are those that refer to a valid ffmpeg input index. If the sync reference is
the target index itself or @var{-1}, then no adjustment is made to target timestamps. A sync
reference may not itself be synced to any other input.
Default value is @var{-1}.
@item -itsoffset @var{offset} (@emph{input})
Set the input time offset.
@@ -583,22 +555,27 @@ ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
@item -disposition[:stream_specifier] @var{value} (@emph{output,per-stream})
Sets the disposition for a stream.
By default, the disposition is copied from the input stream, unless the output
stream this option applies to is fed by a complex filtergraph - in that case the
disposition is unset by default.
This option overrides the disposition copied from the input stream. It is also
possible to delete the disposition by setting it to 0.
@var{value} is a sequence of items separated by '+' or '-'. The first item may
also be prefixed with '+' or '-', in which case this option modifies the default
value. Otherwise (the first item is not prefixed) this options overrides the
default value. A '+' prefix adds the given disposition, '-' removes it. It is
also possible to clear the disposition by setting it to 0.
If no @code{-disposition} options were specified for an output file, ffmpeg will
automatically set the 'default' disposition on the first stream of each type,
when there are multiple streams of this type in the output file and no stream of
that type is already marked as default.
The @code{-dispositions} option lists the known dispositions.
The following dispositions are recognized:
@table @option
@item default
@item dub
@item original
@item comment
@item lyrics
@item karaoke
@item forced
@item hearing_impaired
@item visual_impaired
@item clean_effects
@item attached_pic
@item captions
@item descriptions
@item dependent
@item metadata
@end table
For example, to make the second audio stream the default stream:
@example
@@ -647,21 +624,21 @@ The parameters set for each target are as follows.
@var{pal}:
-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
-s 352x288 -r 25
-codec:v mpeg1video -g 15 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
-codec:v mpeg1video -g 15 -b:v 1150k -maxrate:v 1150v -minrate:v 1150k -bufsize:v 327680
-ar 44100 -ac 2
-codec:a mp2 -b:a 224k
@var{ntsc}:
-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
-s 352x240 -r 30000/1001
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150v -minrate:v 1150k -bufsize:v 327680
-ar 44100 -ac 2
-codec:a mp2 -b:a 224k
@var{film}:
-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
-s 352x240 -r 24000/1001
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150v -minrate:v 1150k -bufsize:v 327680
-ar 44100 -ac 2
-codec:a mp2 -b:a 224k
@end example
@@ -777,22 +754,11 @@ syntax.
See the @ref{filter_complex_option,,-filter_complex option} if you
want to create filtergraphs with multiple inputs and/or outputs.
@anchor{filter_script option}
@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
This option is similar to @option{-filter}, the only difference is that its
argument is the name of the file from which a filtergraph description is to be
read.
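For instance, assuming @file{filters.txt} contains a filtergraph description
such as @code{hqdn3d,scale=640:-1} (file names here are placeholders), it could
be applied to the video stream with:
@example
ffmpeg -i input.avi -filter_script:v filters.txt output.mp4
@end example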
@item -reinit_filter[:@var{stream_specifier}] @var{integer} (@emph{input,per-stream})
This boolean option determines if the filtergraph(s) to which this stream is fed gets
reinitialized when input frame parameters change mid-stream. This option is enabled by
default as most video and all audio filters cannot handle deviation in input frame properties.
Upon reinitialization, existing filter state is lost, like e.g. the frame count @code{n}
reference available in some filters. Any frames buffered at time of reinitialization are lost.
The properties where a change triggers reinitialization are,
for video, frame resolution or pixel format;
for audio, sample format, sample rate, channel count or channel layout.
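As an illustrative sketch (file names are placeholders), reinitialization can be
disabled for all streams of an input like this, so the scale filter keeps its
state even if the input changes resolution mid-stream:
@example
ffmpeg -reinit_filter 0 -i input.ts -vf scale=1280:720 output.mp4
@end example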
@item -filter_threads @var{nb_threads} (@emph{global})
Defines how many threads are used to process a filter pipeline. Each pipeline
will produce a thread pool with this many threads available for parallel processing.
@@ -886,20 +852,9 @@ This is not the same as the @option{-framerate} option used for some input forma
like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use @option{-framerate} instead of the input option @option{-r}.
As an output option:
@table @option
@item video encoding
Duplicate or drop frames right before encoding them to achieve constant output
As an output option, duplicate or drop input frames to achieve constant output
frame rate @var{fps}.
@item video streamcopy
Indicate to the muxer that @var{fps} is the stream frame rate. No data is
dropped or duplicated in this case. This may produce invalid files if @var{fps}
does not match the actual stream frame rate as determined by packet timestamps.
See also the @code{setts} bitstream filter.
@end table
@item -fpsmax[:@var{stream_specifier}] @var{fps} (@emph{output,per-stream})
Set maximum frame rate (Hz value, fraction or abbreviation).
@@ -932,32 +887,6 @@ If used together with @option{-vcodec copy}, it will affect the aspect ratio
stored at container level, but not the aspect ratio stored in encoded
frames, if it exists.
@item -display_rotation[:@var{stream_specifier}] @var{rotation} (@emph{input,per-stream})
Set video rotation metadata.
@var{rotation} is a decimal number specifying the amount in degree by
which the video should be rotated counter-clockwise before being
displayed.
This option overrides the rotation/display transform metadata stored in
the file, if any. When the video is being transcoded (rather than
copied) and @code{-autorotate} is enabled, the video will be rotated at
the filtering stage. Otherwise, the metadata will be written into the
output file if the muxer supports it.
If the @code{-display_hflip} and/or @code{-display_vflip} options are
given, they are applied after the rotation specified by this option.
@item -display_hflip[:@var{stream_specifier}] (@emph{input,per-stream})
Set whether on display the image should be horizontally flipped.
See the @code{-display_rotation} option for more details.
@item -display_vflip[:@var{stream_specifier}] (@emph{input,per-stream})
Set whether on display the image should be vertically flipped.
See the @code{-display_rotation} option for more details.
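For instance (file names are placeholders), a stream-copied output can be tagged
as rotated by 90 degrees counter-clockwise with:
@example
ffmpeg -display_rotation 90 -i input.mov -c copy output.mov
@end example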
@item -vn (@emph{input/output})
As an input option, blocks all video streams of a file from being filtered or
being automatically selected or mapped for any output. See @code{-discard}
@@ -1023,12 +952,7 @@ If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
as the input (or graph output) and automatic conversions are disabled.
@item -sws_flags @var{flags} (@emph{input/output})
Set default flags for the libswscale library. These flags are used by
automatically inserted @code{scale} filters and those within simple
filtergraphs, if not overridden within the filtergraph definition.
See the @ref{scaler_options,,ffmpeg-scaler manual,ffmpeg-scaler} for a list
of scaler options.
Set SwScaler flags.
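An illustrative invocation (file names are placeholders) that makes the
automatically inserted scaler use Lanczos scaling with accurate rounding:
@example
ffmpeg -i input.mp4 -sws_flags lanczos+accurate_rnd -s 1280x720 output.mp4
@end example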
@item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
Rate control override for specific intervals, formatted as "int,int,int"
@@ -1036,24 +960,36 @@ list separated with slashes. Two first values are the beginning and
end frame numbers, last one is quantizer to use if positive, or quality
factor if negative.
@item -ilme
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want
to keep the interlaced format for minimum losses.
The alternative is to deinterlace the input stream by use of a filter
such as @code{yadif} or @code{bwdif}, but deinterlacing introduces losses.
@item -psnr
Calculate PSNR of compressed frames. This option is deprecated, pass the
PSNR flag to the encoder instead, using @code{-flags +psnr}.
Calculate PSNR of compressed frames.
@item -vstats
Dump video coding statistics to @file{vstats_HHMMSS.log}. See the
@ref{vstats_file_format,,vstats file format} section for the format description.
Dump video coding statistics to @file{vstats_HHMMSS.log}.
@item -vstats_file @var{file}
Dump video coding statistics to @var{file}. See the
@ref{vstats_file_format,,vstats file format} section for the format description.
Dump video coding statistics to @var{file}.
@item -vstats_version @var{file}
Specify which version of the vstats format to use. Default is @code{2}. See the
@ref{vstats_file_format,,vstats file format} section for the format description.
Specifies which version of the vstats format to use. Default is 2.
version = 1 :
@code{frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
version > 1:
@code{out= %2d st= %2d frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
@item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
top=1/bottom=0/auto=-1 field first
@item -dc @var{precision}
Intra_dc_precision.
@item -vtag @var{fourcc/tag} (@emph{output})
Force video tag/fourcc. This is an alias for @code{-tag:v}.
@item -qphist (@emph{global})
Show QP histogram
@item -vbsf @var{bitstream_filter}
Deprecated see -bsf
@@ -1120,8 +1056,6 @@ starting from second 13:
@item source
If the argument is @code{source}, ffmpeg will force a key frame if
the current frame being encoded is marked as a key frame in its source.
In cases where this particular source frame has to be dropped,
enforce the next available frame to become a key frame instead.
@end table
@@ -1145,32 +1079,13 @@ device type:
@item cuda
@var{device} is the number of the CUDA device.
The following options are recognized:
@table @option
@item primary_ctx
If set to 1, uses the primary device context instead of creating a new one.
@end table
Examples:
@table @emph
@item -init_hw_device cuda:1
Choose the second device on the system.
@item -init_hw_device cuda:0,primary_ctx=1
Choose the first device and use the primary device context.
@end table
@item dxva2
@var{device} is the number of the Direct3D 9 display adapter.
@item d3d11va
@var{device} is the number of the Direct3D 11 display adapter.
@item vaapi
@var{device} is either an X11 display name, a DRM render node or a DirectX adapter index.
@var{device} is either an X11 display name or a DRM render node.
If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY})
and then the first DRM render node (@emph{/dev/dri/renderD128}), or the default
DirectX adapter on Windows.
and then the first DRM render node (@emph{/dev/dri/renderD128}).
@item vdpau
@var{device} is an X11 display name.
@@ -1190,21 +1105,9 @@ If not specified, it will attempt to open the default X11 display (@emph{$DISPLA
@end table
If not specified, @samp{auto_any} is used.
(Note that it may be easier to achieve the desired result for QSV by creating the
platform-appropriate subdevice (@samp{dxva2} or @samp{d3d11va} or @samp{vaapi}) and then deriving a
platform-appropriate subdevice (@samp{dxva2} or @samp{vaapi}) and then deriving a
QSV device from that.)
Alternatively, @samp{child_device_type} helps to choose platform-appropriate subdevice type.
On Windows @samp{d3d11va} is used as default subdevice type.
Examples:
@table @emph
@item -init_hw_device qsv:hw,child_device_type=d3d11va
Choose the GPU subdevice with type @samp{d3d11va} and create QSV device with @samp{MFX_IMPL_HARDWARE}.
@item -init_hw_device qsv:hw,child_device_type=dxva2
Choose the GPU subdevice with type @samp{dxva2} and create QSV device with @samp{MFX_IMPL_HARDWARE}.
@end table
@item opencl
@var{device} selects the platform and device as @emph{platform_index.device_index}.
@@ -1307,9 +1210,6 @@ Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
@item dxva2
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
@item d3d11va
Use D3D11VA (DirectX Video Acceleration) hardware acceleration.
@item vaapi
Use VAAPI (Video Acceleration API) hardware acceleration.
@@ -1343,25 +1243,7 @@ by name, or it can create a new device as if
were called immediately before.
@item -hwaccels
List all hardware acceleration components enabled in this build of ffmpeg.
Actual runtime availability depends on the hardware and its suitable driver
being installed.
@item -fix_sub_duration_heartbeat[:@var{stream_specifier}]
Set a specific output video stream as the heartbeat stream according to which
to split and push through currently in-progress subtitle upon receipt of a
random access packet.
This lowers the latency of subtitles for which the end packet or the following
subtitle has not yet been received. As a drawback, this will most likely lead
to duplication of subtitle events in order to cover the full duration, so
this option should not be used in cases where the latency of passing subtitle
events on to the output is not a concern.
Requires @option{-fix_sub_duration} to be set for the relevant input subtitle
stream for this to have any effect, as well as for the input subtitle stream
having to be directly mapped to the same output in which the heartbeat stream
resides.
List all hardware acceleration methods supported in this build of ffmpeg.
@end table
@@ -1461,18 +1343,18 @@ Set the size of the canvas used to render subtitles.
@section Advanced options
@table @option
@item -map [-]@var{input_file_id}[:@var{stream_specifier}][?] | @var{[linklabel]} (@emph{output})
@item -map [-]@var{input_file_id}[:@var{stream_specifier}][?][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
Create one or more streams in the output file. This option has two forms for
specifying the data source(s): the first selects one or more streams from some
input file (specified with @code{-i}), the second takes an output from some
complex filtergraph (specified with @code{-filter_complex} or
@code{-filter_complex_script}).
Designate one or more input streams as a source for the output file. Each input
stream is identified by the input file index @var{input_file_id} and
the input stream index @var{input_stream_id} within the input
file. Both indices start at 0. If specified,
@var{sync_file_id}:@var{stream_specifier} sets which input stream
is used as a presentation sync reference.
In the first form, an output stream is created for every stream from the input
file with the index @var{input_file_id}. If @var{stream_specifier} is given,
only those streams that match the specifier are used (see the
@ref{Stream specifiers} section for the @var{stream_specifier} syntax).
The first @code{-map} option on the command line specifies the
source for output stream 0, the second @code{-map} option specifies
the source for output stream 1, etc.
A @code{-} character before the stream identifier creates a "negative" mapping.
It disables matching streams from already created mappings.
@@ -1486,56 +1368,39 @@ An alternative @var{[linklabel]} form will map outputs from complex filter
graphs (see the @option{-filter_complex} option) to the output file.
@var{linklabel} must correspond to a defined output link label in the graph.
This option may be specified multiple times, each adding more streams to the
output file. Any given input stream may also be mapped any number of times as a
source for different output streams, e.g. in order to use different encoding
options and/or filters. The streams are created in the output in the same order
in which the @code{-map} options are given on the commandline.
Using this option disables the default mappings for this output file.
Examples:
@table @emph
@item map everything
To map ALL streams from the first input file to output
For example, to map ALL streams from the first input file to output
@example
ffmpeg -i INPUT -map 0 output
@end example
@item select specific stream
If you have two audio streams in the first input file, these streams are
identified by @var{0:0} and @var{0:1}. You can use @code{-map} to select which
streams to place in an output file. For example:
For example, if you have two audio streams in the first input file,
these streams are identified by "0:0" and "0:1". You can use
@code{-map} to select which streams to place in an output file. For
example:
@example
ffmpeg -i INPUT -map 0:1 out.wav
@end example
will map the second input stream in @file{INPUT} to the (single) output stream
in @file{out.wav}.
will map the input stream in @file{INPUT} identified by "0:1" to
the (single) output stream in @file{out.wav}.
@item create multiple streams
To select the stream with index 2 from input file @file{a.mov} (specified by the
identifier @var{0:2}), and stream with index 6 from input @file{b.mov}
(specified by the identifier @var{1:6}), and copy them to the output file
@file{out.mov}:
For example, to select the stream with index 2 from input file
@file{a.mov} (specified by the identifier "0:2"), and stream with
index 6 from input @file{b.mov} (specified by the identifier "1:6"),
and copy them to the output file @file{out.mov}:
@example
ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
@end example
@item create multiple streams 2
To select all video and the third audio stream from an input file:
@example
ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
@end example
@item negative map
To map all the streams except the second audio, use negative mappings
@example
ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
@end example
@item optional map
To map the video and audio streams from the first input, and using the
trailing @code{?}, ignore the audio mapping if no audio streams exist in
the first input:
@@ -1543,13 +1408,12 @@ the first input:
ffmpeg -i INPUT -map 0:v -map 0:a? OUTPUT
@end example
@item map by language
To pick the English audio stream:
@example
ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
@end example
@end table
Note that using this option disables the default mappings for this output file.
@item -ignore_unknown
Ignore input streams with unknown type instead of failing if copying
@@ -1560,10 +1424,6 @@ Allow input streams with unknown type to be copied instead of failing if copying
such streams is attempted.
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][?][:@var{output_file_id}.@var{stream_specifier}]
This option is deprecated and will be removed. It can be replaced by the
@var{pan} filter. In some cases it may be easier to use some combination of the
@var{channelsplit}, @var{channelmap}, or @var{amerge} filters.
Map an audio channel from a given input to an output. If
@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
be mapped on all the audio streams.
@@ -1691,47 +1551,33 @@ Exit after ffmpeg has been running for @var{duration} seconds in CPU user time.
Dump each input packet to stderr.
@item -hex (@emph{global})
When dumping packets, also dump the payload.
@item -readrate @var{speed} (@emph{input})
Limit input read speed.
Its value is a floating-point positive number which represents the maximum duration of
media, in seconds, that should be ingested in one second of wallclock time.
Default value is zero and represents no imposed limitation on speed of ingestion.
Value @code{1} represents real-time speed and is equivalent to @code{-re}.
Mainly used to simulate a capture device or live input stream (e.g. when reading from a file).
Should not be used with a low value when input is an actual capture device or live stream as
it may cause packet loss.
It is useful when the flow speed of output packets is important, such as for live streaming.
@item -re (@emph{input})
Read input at native frame rate. This is equivalent to setting @code{-readrate 1}.
@item -readrate_initial_burst @var{seconds}
Set an initial read burst time, in seconds, after which @option{-re/-readrate}
will be enforced.
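A typical use, with an illustrative RTMP URL, is pacing a file to real time
while streaming it:
@example
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://example.com/live/stream
@end example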
@item -vsync @var{parameter} (@emph{global})
@itemx -fps_mode[:@var{stream_specifier}] @var{parameter} (@emph{output,per-stream})
Set video sync method / framerate mode. vsync is applied to all output video streams
but can be overridden for a stream by setting fps_mode. vsync is deprecated and will be
removed in the future.
For compatibility reasons some of the values for vsync can be specified as numbers (shown
in parentheses in the following table).
Read input at native frame rate. Mainly used to simulate a grab device,
or live input stream (e.g. when reading from a file). Should not be used
with actual grab devices or live input streams (where it can cause packet
loss).
By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
This option will slow down the reading of the input(s) to the native frame rate
of the input(s). It is useful for real-time output (e.g. live streaming).
@item -vsync @var{parameter}
Video sync method.
For compatibility reasons old values can be specified as numbers.
Newly added values will have to be specified as strings always.
@table @option
@item passthrough (0)
@item 0, passthrough
Each frame is passed with its timestamp from the demuxer to the muxer.
@item cfr (1)
@item 1, cfr
Frames will be duplicated and dropped to achieve exactly the requested
constant frame rate.
@item vfr (2)
@item 2, vfr
Frames are passed through with their timestamp or dropped so as to
prevent 2 frames from having the same timestamp.
@item drop
As passthrough but destroys all timestamps, making the muxer generate
fresh timestamps based on frame-rate.
@item auto (-1)
Chooses between cfr and vfr depending on muxer capabilities. This is the
@item -1, auto
Chooses between 1 and 2 depending on muxer capabilities. This is the
default method.
@end table
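For instance (file names are placeholders), a constant 25 fps can be requested
for the video stream with:
@example
ffmpeg -i input.mkv -fps_mode:v cfr -r 25 output.mp4
@end example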
@@ -1750,6 +1596,24 @@ The default is -1.1. One possible usecase is to avoid framedrops in case
of noisy timestamps or to increase frame drop precision in case of exact
timestamps.
@item -async @var{samples_per_second}
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
the parameter is the maximum samples per second by which the audio is changed.
-async 1 is a special case where only the start of the audio stream is corrected
without any later correction.
Note that the timestamps may be further modified by the muxer, after this.
For example, in the case that the format option @option{avoid_negative_ts}
is enabled.
This option has been deprecated. Use the @code{aresample} audio filter instead.
@item -adrift_threshold @var{time}
Set the minimum difference between timestamps and audio data (in seconds) to trigger
adding/dropping samples to make it match the timestamps. This option effectively is
a threshold to select between hard (add/drop) and soft (squeeze/stretch) compensation.
@code{-async} must be set to a positive value.
@item -apad @var{parameters} (@emph{output,per-stream})
Pad the output audio stream(s). This is the same as applying @code{-af apad}.
Argument is a string of filter parameters composed the same as with the @code{apad} filter.
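For example (file names are placeholders), roughly 2 seconds of silence could be
appended to the output audio with:
@example
ffmpeg -i input.mp4 -apad pad_dur=2 output.mp4
@end example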
@@ -1796,7 +1660,8 @@ Try to make the choice automatically, in order to generate a sane output.
Default value is -1.
@item -enc_time_base[:@var{stream_specifier}] @var{timebase} (@emph{output,per-stream})
Set the encoder timebase. @var{timebase} can assume one of the following values:
Set the encoder timebase. @var{timebase} is a floating point number,
and can assume one of the following values:
@table @option
@item 0
@@ -1804,17 +1669,16 @@ Assign a default value according to the media type.
For video - use 1/framerate, for audio - use 1/samplerate.
@item demux
Use the timebase from the demuxer.
@item -1
Use the input stream timebase when possible.
@item filter
Use the timebase from the filtergraph.
If an input stream is not available, the default timebase will be used.
@item a positive number
@item >0
Use the provided number as the timebase.
This field can be provided as a ratio of two integers (e.g. 1:24, 1:48000)
or as a decimal number (e.g. 0.04166, 2.0833e-5)
or as a floating point number (e.g. 0.04166, 2.0833e-5)
@end table
Default value is 0.
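As an illustration (file names are placeholders), a 1/90000 encoder timebase
could be forced for the video stream with:
@example
ffmpeg -i input.mp4 -c:v libx264 -enc_time_base:v 1:90000 output.mp4
@end example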
@@ -1822,55 +1686,13 @@ Default value is 0.
@item -bitexact (@emph{input/output})
Enable bitexact mode for (de)muxer and (de/en)coder
@item -shortest (@emph{output})
Finish encoding when the shortest output stream ends.
Note that this option may require buffering frames, which introduces extra
latency. The maximum amount of this latency may be controlled with the
@code{-shortest_buf_duration} option.
@item -shortest_buf_duration @var{duration} (@emph{output})
The @code{-shortest} option may require buffering potentially large amounts
of data when at least one of the streams is "sparse" (i.e. has large gaps
between frames; this is typically the case for subtitles).
This option controls the maximum duration of buffered frames in seconds.
Larger values may allow the @code{-shortest} option to produce more accurate
results, but increase memory use and latency.
The default value is 10 seconds.
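For example, a sketch (file names are placeholders) that stops at the shortest
stream while allowing up to 20 seconds of buffering for a sparse subtitle input:
@example
ffmpeg -i video.mkv -i subs.srt -c copy -shortest -shortest_buf_duration 20 output.mkv
@end example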
@item -dts_delta_threshold @var{threshold}
Timestamp discontinuity delta threshold, expressed as a decimal number
of seconds.
The timestamp discontinuity correction enabled by this option is only
applied to input formats accepting timestamp discontinuity (for which
the @code{AVFMT_TS_DISCONT} flag is enabled), e.g. MPEG-TS and HLS, and
is automatically disabled when employing the @code{-copy_ts} option
(unless wrapping is detected).
If a timestamp discontinuity is detected whose absolute value is
greater than @var{threshold}, ffmpeg will remove the discontinuity by
decreasing/increasing the current DTS and PTS by the corresponding
delta value.
The default value is 10.
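As a sketch (the input name is a placeholder), tightening the threshold so that
smaller jumps in an MPEG-TS capture are also corrected:
@example
ffmpeg -dts_delta_threshold 2 -i capture.ts -c copy output.mkv
@end example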
@item -dts_error_threshold @var{threshold}
Timestamp error delta threshold, expressed as a decimal number of
seconds.
The timestamp correction enabled by this option is only applied to
input formats not accepting timestamp discontinuity (for which the
@code{AVFMT_TS_DISCONT} flag is not enabled).
If a timestamp discontinuity is detected whose absolute value is
greater than @var{threshold}, ffmpeg will drop the PTS/DTS timestamp
value.
The default value is @code{3600*30} (30 hours), which is arbitrarily
picked and quite conservative.
Finish encoding when the shortest input stream ends.
@item -dts_delta_threshold
Timestamp discontinuity delta threshold.
@item -dts_error_threshold @var{seconds}
Timestamp error delta threshold. This threshold is used to discard crazy/damaged
timestamps; the default is 30 hours, which is arbitrarily picked and quite
conservative.
@item -muxdelay @var{seconds} (@emph{output})
Set the maximum demux-decode delay.
@item -muxpreload @var{seconds} (@emph{output})
@@ -1981,7 +1803,6 @@ The default is the number of available CPUs.
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
outputs. Equivalent to @option{-filter_complex}.
@anchor{filter_complex_script option}
@item -filter_complex_script @var{filename} (@emph{global})
This option is similar to @option{-filter_complex}, the only difference is that
its argument is the name of the file from which a complex filtergraph
@@ -2000,15 +1821,12 @@ to the @option{-ss} option is considered an actual timestamp, and is not
offset by the start time of the file. This matters only for files which do
not start from timestamp 0, such as transport streams.
@item -thread_queue_size @var{size} (@emph{input/output})
For input, this option sets the maximum number of queued packets when reading
from the file or device. With low latency / high rate live streams, packets may
be discarded if they are not read in a timely manner; setting this value can
@item -thread_queue_size @var{size} (@emph{input})
This option sets the maximum number of queued packets when reading from the
file or device. With low latency / high rate live streams, packets may be
discarded if they are not read in a timely manner; setting this value can
force ffmpeg to use a separate input thread and read packets as soon as they
arrive. By default ffmpeg only does this if multiple inputs are specified.
For output, this option specifies the maximum number of packets that may be
queued to each muxing thread.
arrive. By default ffmpeg only do this if multiple inputs are specified.
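As a sketch (the device names are placeholders for a Linux capture setup),
explicitly enlarging the input queues for two live inputs:
@example
ffmpeg -f v4l2 -thread_queue_size 1024 -i /dev/video0 \
       -f alsa -thread_queue_size 1024 -i hw:0 output.mkv
@end example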
@item -sdp_file @var{file} (@emph{global})
Print sdp information for an output stream to @var{file}.
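For instance, a sketch (address and file names are placeholders) that streams
video over RTP and writes the corresponding SDP description:
@example
ffmpeg -re -i input.mp4 -an -c:v libx264 -f rtp rtp://127.0.0.1:5004 -sdp_file stream.sdp
@end example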
@@ -2083,132 +1901,6 @@ filter (scale, aresample) in the graph.
On by default; to explicitly disable it you need to specify
@code{-noauto_conversion_filters}.
@item -bits_per_raw_sample[:@var{stream_specifier}] @var{value} (@emph{output,per-stream})
Declare the number of bits per raw sample in the given output stream to be
@var{value}. Note that this option sets the information provided to the
encoder/muxer; it does not change the stream to conform to this value. Setting
values that do not match the stream properties may result in encoding failures
or invalid output files.
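As a minimal sketch (file names are placeholders), declaring that a 24-bit PCM
output stream carries 24 significant bits:
@example
ffmpeg -i input.wav -c:a pcm_s24le -bits_per_raw_sample:a 24 output.wav
@end example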
@anchor{stats_enc_options}
@item -stats_enc_pre[:@var{stream_specifier}] @var{path} (@emph{output,per-stream})
@item -stats_enc_post[:@var{stream_specifier}] @var{path} (@emph{output,per-stream})
@item -stats_mux_pre[:@var{stream_specifier}] @var{path} (@emph{output,per-stream})
Write per-frame encoding information about the matching streams into the file
given by @var{path}.
@option{-stats_enc_pre} writes information about raw video or audio frames right
before they are sent for encoding, while @option{-stats_enc_post} writes
information about encoded packets as they are received from the encoder.
@option{-stats_mux_pre} writes information about packets just as they are about to
be sent to the muxer. Every frame or packet produces one line in the specified
file. The format of this line is controlled by @option{-stats_enc_pre_fmt} /
@option{-stats_enc_post_fmt} / @option{-stats_mux_pre_fmt}.
When stats for multiple streams are written into a single file, the lines
corresponding to different streams will be interleaved. The precise order of
this interleaving is not specified and not guaranteed to remain stable between
different invocations of the program, even with the same options.
@item -stats_enc_pre_fmt[:@var{stream_specifier}] @var{format_spec} (@emph{output,per-stream})
@item -stats_enc_post_fmt[:@var{stream_specifier}] @var{format_spec} (@emph{output,per-stream})
@item -stats_mux_pre_fmt[:@var{stream_specifier}] @var{format_spec} (@emph{output,per-stream})
Specify the format for the lines written with @option{-stats_enc_pre} /
@option{-stats_enc_post} / @option{-stats_mux_pre}.
@var{format_spec} is a string that may contain directives of the form
@var{@{fmt@}}. @var{format_spec} is backslash-escaped --- use \@{, \@}, and \\
to write a literal @{, @}, or \, respectively, into the output.
The directives given with @var{fmt} may be one of the following:
@table @option
@item fidx
Index of the output file.
@item sidx
Index of the output stream in the file.
@item n
Frame number. Pre-encoding: number of frames sent to the encoder so far.
Post-encoding: number of packets received from the encoder so far.
Muxing: number of packets submitted to the muxer for this stream so far.
@item ni
Input frame number. Index of the input frame (i.e. output by a decoder) that
corresponds to this output frame or packet. -1 if unavailable.
@item tb
Timebase in which this frame/packet's timestamps are expressed, as a rational
number @var{num/den}. Note that encoder and muxer may use different timebases.
@item tbi
Timebase for @var{ptsi}, as a rational number @var{num/den}. Available when
@var{ptsi} is available, @var{0/1} otherwise.
@item pts
Presentation timestamp of the frame or packet, as an integer. Should be
multiplied by the timebase to compute presentation time.
@item ptsi
Presentation timestamp of the input frame (see @var{ni}), as an integer. Should
be multiplied by @var{tbi} to compute presentation time. Printed as
(2^63 - 1 = 9223372036854775807) when not available.
@item t
Presentation time of the frame or packet, as a decimal number. Equal to
@var{pts} multiplied by @var{tb}.
@item ti
Presentation time of the input frame (see @var{ni}), as a decimal number. Equal
to @var{ptsi} multiplied by @var{tbi}. Printed as inf when not available.
@item dts (@emph{packet})
Decoding timestamp of the packet, as an integer. Should be multiplied by the
timebase to compute the decoding time.
@item dt (@emph{packet})
Decoding time of the frame or packet, as a decimal number. Equal to
@var{dts} multiplied by @var{tb}.
@item sn (@emph{frame,audio})
Number of audio samples sent to the encoder so far.
@item samp (@emph{frame,audio})
Number of audio samples in the frame.
@item size (@emph{packet})
Size of the encoded packet in bytes.
@item br (@emph{packet})
Current bitrate in bits per second. Post-encoding only.
@item abr (@emph{packet})
Average bitrate for the whole stream so far, in bits per second; -1 if it cannot
be determined at this point. Post-encoding only.
@end table
Directives tagged with @emph{packet} may only be used with
@option{-stats_enc_post_fmt} and @option{-stats_mux_pre_fmt}.
Directives tagged with @emph{frame} may only be used with
@option{-stats_enc_pre_fmt}.
Directives tagged with @emph{audio} may only be used with audio streams.
The default format strings are:
@table @option
@item pre-encoding
@{fidx@} @{sidx@} @{n@} @{t@}
@item post-encoding
@{fidx@} @{sidx@} @{n@} @{t@}
@end table
In the future, new items may be added to the end of the default formatting
strings. Users who depend on the format staying exactly the same should
prescribe it manually.
Note that stats for different streams written into the same file may have
different formats.
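As a sketch (file names are placeholders), writing post-encoding statistics for
the video stream with a custom format that also records packet sizes:
@example
ffmpeg -i input.mkv -c:v libx264 -stats_enc_post:v enc_stats.txt \
       -stats_enc_post_fmt:v "@{fidx@} @{sidx@} @{n@} @{t@} @{size@}" output.mkv
@end example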
@end table
@section Preset files
@@ -2266,63 +1958,6 @@ search for the file @file{libvpx-1080p.avpreset}.
If no such file is found, then ffmpeg will search for a file named
@var{arg}.avpreset in the same directories.
@anchor{vstats_file_format}
@section vstats file format
The @code{-vstats} and @code{-vstats_file} options enable generation of a file
containing statistics about the generated video outputs.
The @code{-vstats_version} option controls the format version of the generated
file.
With version @code{1} the format is:
@example
frame= @var{FRAME} q= @var{FRAME_QUALITY} PSNR= @var{PSNR} f_size= @var{FRAME_SIZE} s_size= @var{STREAM_SIZE}kB time= @var{TIMESTAMP} br= @var{BITRATE}kbits/s avg_br= @var{AVERAGE_BITRATE}kbits/s
@end example
With version @code{2} the format is:
@example
out= @var{OUT_FILE_INDEX} st= @var{OUT_FILE_STREAM_INDEX} frame= @var{FRAME_NUMBER} q= @var{FRAME_QUALITY}f PSNR= @var{PSNR} f_size= @var{FRAME_SIZE} s_size= @var{STREAM_SIZE}kB time= @var{TIMESTAMP} br= @var{BITRATE}kbits/s avg_br= @var{AVERAGE_BITRATE}kbits/s
@end example
The value corresponding to each key is described below:
@table @option
@item avg_br
average bitrate expressed in Kbits/s
@item br
bitrate expressed in Kbits/s
@item frame
number of the encoded frame
@item out
out file index
@item PSNR
Peak Signal to Noise Ratio
@item q
quality of the frame
@item f_size
encoded packet size, expressed as a number of bytes
@item s_size
stream size expressed in KiB
@item st
out file stream index
@item time
time of the packet
@item type
picture type
@end table
See also the @ref{stats_enc_options,,-stats_enc options} for an alternative way
to show encoding statistics.
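For instance, a sketch (file names are placeholders) that writes version-2
statistics while encoding:
@example
ffmpeg -i input.mkv -c:v libx264 -vstats_file vstats.log -vstats_version 2 output.mkv
@end example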
@c man end OPTIONS
@chapter Examples
@@ -2586,7 +2221,7 @@ ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
@ignore
@setfilename ffmpeg
@settitle ffmpeg media converter
@settitle ffmpeg video converter
@end ignore

View File

@@ -34,6 +34,10 @@ various FFmpeg APIs.
Force displayed width.
@item -y @var{height}
Force displayed height.
@item -s @var{size}
Set frame size (WxH or abbreviation), needed for videos which do
not contain a header with the frame size, such as raw YUV. This option
has been deprecated in favor of private options; try -video_size.
@item -fs
Start in fullscreen mode.
@item -an
@@ -122,6 +126,10 @@ Read @var{input_url}.
@section Advanced options
@table @option
@item -pix_fmt @var{format}
Set pixel format.
This option has been deprecated in favor of private options; try -pixel_format.
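As a sketch (the raw file name and its geometry are assumptions), the private
options mentioned above can be combined to play headerless raw video:
@example
ffplay -f rawvideo -pixel_format yuv420p -video_size 1280x720 input.yuv
@end example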
@item -stats
Print several playback statistics, in particular show the stream
duration, the codec parameters, the current position in the stream and
@@ -214,6 +222,8 @@ Pause.
Toggle mute.
@item 9, 0
Decrease and increase volume respectively.
@item /, *
Decrease and increase volume respectively.

View File

@@ -12,7 +12,7 @@
@chapter Synopsis
ffprobe [@var{options}] @file{input_url}
ffprobe [@var{options}] [@file{input_url}]
@chapter Description
@c man begin DESCRIPTION
@@ -28,9 +28,6 @@ If a url is specified in input, ffprobe will try to open and
probe the url content. If the url cannot be opened or recognized as
a multimedia file, a positive exit code is returned.
If no output is specified with @option{o}, ffprobe will write
to stdout.
ffprobe may be employed both as a standalone application or in
combination with a textual filter, which may perform more
sophisticated processing, e.g. statistical processing or plotting.
@@ -41,7 +38,7 @@ ffprobe will show it.
ffprobe output is designed to be easily parsable by a textual filter,
and consists of one or more sections of a form defined by the selected
writer, which is specified by the @option{output_format} option.
writer, which is specified by the @option{print_format} option.
Sections may contain other nested sections, and are identified by a
name (which may be shared by other sections), and a unique
@@ -83,7 +80,7 @@ Use sexagesimal format HH:MM:SS.MICROSECONDS for time values.
Prettify the format of the displayed values; it corresponds to the
options "-unit -prefix -byte_binary_prefix -sexagesimal".
@item -output_format, -of, -print_format @var{writer_name}[=@var{writer_options}]
@item -of, -print_format @var{writer_name}[=@var{writer_options}]
Set the output printing format.
@var{writer_name} specifies the name of the writer, and
@@ -91,7 +88,7 @@ Set the output printing format.
For example for printing the output in JSON format, specify:
@example
-output_format json
-print_format json
@end example
For more details on the available output printing formats, see the
@@ -338,12 +335,6 @@ Show information about all pixel formats supported by FFmpeg.
Pixel format information for each format is printed within a section
with name "PIXEL_FORMAT".
@item -show_optional_fields @var{value}
Some writers, viz. JSON and XML, omit the printing of fields with invalid or non-applicable values,
while other writers always print them. This option enables one to control this behaviour.
Valid values are @code{always}/@code{1}, @code{never}/@code{0} and @code{auto}/@code{-1}.
Default is @var{auto}.
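For example, a sketch (the input name is a placeholder) that forces even
non-applicable fields to appear in JSON output:
@example
ffprobe -show_optional_fields always -show_streams -of json input.mp4
@end example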
@item -bitexact
Force bitexact output, useful to produce output which is not dependent
on the specific build.
@@ -351,10 +342,6 @@ on the specific build.
@item -i @var{input_url}
Read @var{input_url}.
@item -o @var{output_url}
Write output to @var{output_url}. If not specified, the output is sent
to stdout.
@end table
@c man end

View File

@@ -1,409 +1,389 @@
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.ffmpeg.org/schema/ffprobe"
xmlns:ffprobe="http://www.ffmpeg.org/schema/ffprobe">
targetNamespace="http://www.ffmpeg.org/schema/ffprobe"
xmlns:ffprobe="http://www.ffmpeg.org/schema/ffprobe">
<xsd:element name="ffprobe" type="ffprobe:ffprobeType"/>
<xsd:element name="ffprobe" type="ffprobe:ffprobeType"/>
<xsd:complexType name="ffprobeType">
<xsd:sequence>
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="ffprobeType">
<xsd:sequence>
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="packetsType">
<xsd:sequence>
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="packetsType">
<xsd:sequence>
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="framesType">
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="frame" type="ffprobe:frameType"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType"/>
</xsd:choice>
</xsd:complexType>
<xsd:complexType name="framesType">
<xsd:sequence>
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:choice>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="packetsAndFramesType">
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="packet" type="ffprobe:packetType"/>
<xsd:element name="frame" type="ffprobe:frameType"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType"/>
</xsd:choice>
</xsd:complexType>
<xsd:complexType name="packetsAndFramesType">
<xsd:sequence>
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:choice>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="tagsType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="packetType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:complexType name="packetType">
<xsd:sequence>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:attribute name="codec_type" type="xsd:string" use="required" />
<xsd:attribute name="stream_index" type="xsd:int" use="required" />
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float" />
<xsd:attribute name="dts" type="xsd:long" />
<xsd:attribute name="dts_time" type="xsd:float" />
<xsd:attribute name="duration" type="xsd:long" />
<xsd:attribute name="duration_time" type="xsd:float" />
<xsd:attribute name="size" type="xsd:long" use="required" />
<xsd:attribute name="pos" type="xsd:long" />
<xsd:attribute name="flags" type="xsd:string" use="required" />
<xsd:attribute name="data" type="xsd:string" />
<xsd:attribute name="data_hash" type="xsd:string" />
</xsd:complexType>
<xsd:attribute name="codec_type" type="xsd:string" use="required" />
<xsd:attribute name="stream_index" type="xsd:int" use="required" />
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float" />
<xsd:attribute name="dts" type="xsd:long" />
<xsd:attribute name="dts_time" type="xsd:float" />
<xsd:attribute name="duration" type="xsd:long" />
<xsd:attribute name="duration_time" type="xsd:float" />
<xsd:attribute name="size" type="xsd:long" use="required" />
<xsd:attribute name="pos" type="xsd:long" />
<xsd:attribute name="flags" type="xsd:string" use="required" />
<xsd:attribute name="data" type="xsd:string" />
<xsd:attribute name="data_hash" type="xsd:string" />
</xsd:complexType>
<xsd:complexType name="packetSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:packetSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="packetSideDataType">
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="packetSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:packetSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="logs" type="ffprobe:logsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:complexType name="packetSideDataType">
<xsd:sequence>
<xsd:element name="side_datum" type="ffprobe:packetSideDatumType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:attribute name="media_type" type="xsd:string" use="required"/>
<xsd:attribute name="stream_index" type="xsd:int" />
<xsd:attribute name="key_frame" type="xsd:int" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float"/>
<xsd:attribute name="pkt_pts" type="xsd:long" />
<xsd:attribute name="pkt_pts_time" type="xsd:float"/>
<xsd:attribute name="pkt_dts" type="xsd:long" />
<xsd:attribute name="pkt_dts_time" type="xsd:float"/>
<xsd:attribute name="best_effort_timestamp" type="xsd:long" />
<xsd:attribute name="best_effort_timestamp_time" type="xsd:float" />
<xsd:attribute name="pkt_duration" type="xsd:long" />
<xsd:attribute name="pkt_duration_time" type="xsd:float"/>
<xsd:attribute name="pkt_pos" type="xsd:long" />
<xsd:attribute name="pkt_size" type="xsd:int" />
<xsd:attribute name="type" type="xsd:string"/>
</xsd:complexType>
<!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/>
<xsd:attribute name="nb_samples" type="xsd:long" />
<xsd:attribute name="channels" type="xsd:int" />
<xsd:attribute name="channel_layout" type="xsd:string"/>
<xsd:complexType name="packetSideDatumType">
<xsd:attribute name="key" type="xsd:string"/>
<xsd:attribute name="value" type="xsd:string"/>
</xsd:complexType>
<!-- video attributes -->
<xsd:attribute name="width" type="xsd:long" />
<xsd:attribute name="height" type="xsd:long" />
<xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pict_type" type="xsd:string"/>
<xsd:attribute name="coded_picture_number" type="xsd:long" />
<xsd:attribute name="display_picture_number" type="xsd:long" />
<xsd:attribute name="interlaced_frame" type="xsd:int" />
<xsd:attribute name="top_field_first" type="xsd:int" />
<xsd:attribute name="repeat_pict" type="xsd:int" />
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="color_primaries" type="xsd:string"/>
<xsd:attribute name="color_transfer" type="xsd:string"/>
<xsd:attribute name="chroma_location" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="frameType">
<xsd:sequence>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="logs" type="ffprobe:logsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:complexType name="logsType">
<xsd:sequence>
<xsd:element name="log" type="ffprobe:logType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="logType">
<xsd:attribute name="context" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int" />
<xsd:attribute name="category" type="xsd:int" />
<xsd:attribute name="parent_context" type="xsd:string"/>
<xsd:attribute name="parent_category" type="xsd:int" />
<xsd:attribute name="message" type="xsd:string"/>
</xsd:complexType>
<xsd:attribute name="media_type" type="xsd:string" use="required"/>
<xsd:attribute name="stream_index" type="xsd:int" />
<xsd:attribute name="key_frame" type="xsd:int" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float"/>
<xsd:attribute name="pkt_dts" type="xsd:long" />
<xsd:attribute name="pkt_dts_time" type="xsd:float"/>
<xsd:attribute name="best_effort_timestamp" type="xsd:long" />
<xsd:attribute name="best_effort_timestamp_time" type="xsd:float" />
<xsd:attribute name="pkt_duration" type="xsd:long" />
<xsd:attribute name="pkt_duration_time" type="xsd:float"/>
<xsd:attribute name="duration" type="xsd:long" />
<xsd:attribute name="duration_time" type="xsd:float"/>
<xsd:attribute name="pkt_pos" type="xsd:long" />
<xsd:attribute name="pkt_size" type="xsd:int" />
<xsd:complexType name="frameSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataType">
<xsd:sequence>
<xsd:element name="timecodes" type="ffprobe:frameSideDataTimecodeList" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/>
<xsd:attribute name="nb_samples" type="xsd:long" />
<xsd:attribute name="channels" type="xsd:int" />
<xsd:attribute name="channel_layout" type="xsd:string"/>
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
<xsd:attribute name="timecode" type="xsd:string"/>
</xsd:complexType>
<!-- video attributes -->
<xsd:attribute name="width" type="xsd:long" />
<xsd:attribute name="height" type="xsd:long" />
<xsd:attribute name="crop_top" type="xsd:long" />
<xsd:attribute name="crop_bottom" type="xsd:long" />
<xsd:attribute name="crop_left" type="xsd:long" />
<xsd:attribute name="crop_right" type="xsd:long" />
<xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pict_type" type="xsd:string"/>
<xsd:attribute name="coded_picture_number" type="xsd:long" />
<xsd:attribute name="display_picture_number" type="xsd:long" />
<xsd:attribute name="interlaced_frame" type="xsd:int" />
<xsd:attribute name="top_field_first" type="xsd:int" />
<xsd:attribute name="repeat_pict" type="xsd:int" />
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="color_primaries" type="xsd:string"/>
<xsd:attribute name="color_transfer" type="xsd:string"/>
<xsd:attribute name="chroma_location" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeList">
<xsd:sequence>
<xsd:element name="timecode" type="ffprobe:frameSideDataTimecodeType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="logsType">
<xsd:sequence>
<xsd:element name="log" type="ffprobe:logType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="logType">
<xsd:attribute name="context" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int" />
<xsd:attribute name="category" type="xsd:int" />
<xsd:attribute name="parent_context" type="xsd:string"/>
<xsd:attribute name="parent_category" type="xsd:int" />
<xsd:attribute name="message" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeType">
<xsd:attribute name="value" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="frameSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataType">
<xsd:sequence>
<xsd:element name="timecodes" type="ffprobe:frameSideDataTimecodeList" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:complexType name="subtitleType">
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float"/>
<xsd:attribute name="format" type="xsd:int" />
<xsd:attribute name="start_display_time" type="xsd:int" />
<xsd:attribute name="end_display_time" type="xsd:int" />
<xsd:attribute name="num_rects" type="xsd:int" />
</xsd:complexType>
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
<xsd:attribute name="timecode" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="streamsType">
<xsd:sequence>
<xsd:element name="stream" type="ffprobe:streamType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeList">
<xsd:sequence>
<xsd:element name="timecode" type="ffprobe:frameSideDataTimecodeType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="programsType">
<xsd:sequence>
<xsd:element name="program" type="ffprobe:programType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeType">
<xsd:attribute name="value" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="streamDispositionType">
<xsd:attribute name="default" type="xsd:int" use="required" />
<xsd:attribute name="dub" type="xsd:int" use="required" />
<xsd:attribute name="original" type="xsd:int" use="required" />
<xsd:attribute name="comment" type="xsd:int" use="required" />
<xsd:attribute name="lyrics" type="xsd:int" use="required" />
<xsd:attribute name="karaoke" type="xsd:int" use="required" />
<xsd:attribute name="forced" type="xsd:int" use="required" />
<xsd:attribute name="hearing_impaired" type="xsd:int" use="required" />
<xsd:attribute name="visual_impaired" type="xsd:int" use="required" />
<xsd:attribute name="clean_effects" type="xsd:int" use="required" />
<xsd:attribute name="attached_pic" type="xsd:int" use="required" />
<xsd:attribute name="timed_thumbnails" type="xsd:int" use="required" />
</xsd:complexType>
<xsd:complexType name="subtitleType">
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float"/>
<xsd:attribute name="format" type="xsd:int" />
<xsd:attribute name="start_display_time" type="xsd:int" />
<xsd:attribute name="end_display_time" type="xsd:int" />
<xsd:attribute name="num_rects" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="streamType">
<xsd:sequence>
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:complexType name="streamsType">
<xsd:sequence>
<xsd:element name="stream" type="ffprobe:streamType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="codec_name" type="xsd:string" />
<xsd:attribute name="codec_long_name" type="xsd:string" />
<xsd:attribute name="profile" type="xsd:string" />
<xsd:attribute name="codec_type" type="xsd:string" />
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
<xsd:attribute name="extradata" type="xsd:string" />
<xsd:attribute name="extradata_hash" type="xsd:string" />
<xsd:complexType name="programsType">
<xsd:sequence>
<xsd:element name="program" type="ffprobe:programType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<!-- video attributes -->
<xsd:attribute name="width" type="xsd:int"/>
<xsd:attribute name="height" type="xsd:int"/>
<xsd:attribute name="coded_width" type="xsd:int"/>
<xsd:attribute name="coded_height" type="xsd:int"/>
<xsd:attribute name="closed_captions" type="xsd:boolean"/>
<xsd:attribute name="has_b_frames" type="xsd:int"/>
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int"/>
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="color_transfer" type="xsd:string"/>
<xsd:attribute name="color_primaries" type="xsd:string"/>
<xsd:attribute name="chroma_location" type="xsd:string"/>
<xsd:attribute name="field_order" type="xsd:string"/>
<xsd:attribute name="refs" type="xsd:int"/>
<xsd:complexType name="streamDispositionType">
<xsd:attribute name="default" type="xsd:int" use="required" />
<xsd:attribute name="dub" type="xsd:int" use="required" />
<xsd:attribute name="original" type="xsd:int" use="required" />
<xsd:attribute name="comment" type="xsd:int" use="required" />
<xsd:attribute name="lyrics" type="xsd:int" use="required" />
<xsd:attribute name="karaoke" type="xsd:int" use="required" />
<xsd:attribute name="forced" type="xsd:int" use="required" />
<xsd:attribute name="hearing_impaired" type="xsd:int" use="required" />
<xsd:attribute name="visual_impaired" type="xsd:int" use="required" />
<xsd:attribute name="clean_effects" type="xsd:int" use="required" />
<xsd:attribute name="attached_pic" type="xsd:int" use="required" />
<xsd:attribute name="timed_thumbnails" type="xsd:int" use="required" />
<xsd:attribute name="non_diegetic" type="xsd:int" use="required" />
<xsd:attribute name="captions" type="xsd:int" use="required" />
<xsd:attribute name="descriptions" type="xsd:int" use="required" />
<xsd:attribute name="metadata" type="xsd:int" use="required" />
<xsd:attribute name="dependent" type="xsd:int" use="required" />
<xsd:attribute name="still_image" type="xsd:int" use="required" />
</xsd:complexType>
<!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/>
<xsd:attribute name="sample_rate" type="xsd:int"/>
<xsd:attribute name="channels" type="xsd:int"/>
<xsd:attribute name="channel_layout" type="xsd:string"/>
<xsd:attribute name="bits_per_sample" type="xsd:int"/>
<xsd:complexType name="streamType">
<xsd:sequence>
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string"/>
<xsd:attribute name="r_frame_rate" type="xsd:string" use="required"/>
<xsd:attribute name="avg_frame_rate" type="xsd:string" use="required"/>
<xsd:attribute name="time_base" type="xsd:string" use="required"/>
<xsd:attribute name="start_pts" type="xsd:long"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="duration_ts" type="xsd:long"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="bit_rate" type="xsd:int"/>
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
<xsd:attribute name="nb_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_packets" type="xsd:int"/>
</xsd:complexType>
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="codec_name" type="xsd:string" />
<xsd:attribute name="codec_long_name" type="xsd:string" />
<xsd:attribute name="profile" type="xsd:string" />
<xsd:attribute name="codec_type" type="xsd:string" />
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
<xsd:attribute name="extradata" type="xsd:string" />
<xsd:attribute name="extradata_size" type="xsd:int" />
<xsd:attribute name="extradata_hash" type="xsd:string" />
<xsd:complexType name="programType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<!-- video attributes -->
<xsd:attribute name="width" type="xsd:int"/>
<xsd:attribute name="height" type="xsd:int"/>
<xsd:attribute name="coded_width" type="xsd:int"/>
<xsd:attribute name="coded_height" type="xsd:int"/>
<xsd:attribute name="closed_captions" type="xsd:boolean"/>
<xsd:attribute name="film_grain" type="xsd:boolean"/>
<xsd:attribute name="has_b_frames" type="xsd:int"/>
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int"/>
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="color_transfer" type="xsd:string"/>
<xsd:attribute name="color_primaries" type="xsd:string"/>
<xsd:attribute name="chroma_location" type="xsd:string"/>
<xsd:attribute name="field_order" type="xsd:string"/>
<xsd:attribute name="refs" type="xsd:int"/>
<xsd:attribute name="program_id" type="xsd:int" use="required"/>
<xsd:attribute name="program_num" type="xsd:int" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="start_pts" type="xsd:long"/>
<xsd:attribute name="end_time" type="xsd:float"/>
<xsd:attribute name="end_pts" type="xsd:long"/>
<xsd:attribute name="pmt_pid" type="xsd:int" use="required"/>
<xsd:attribute name="pcr_pid" type="xsd:int" use="required"/>
</xsd:complexType>
<!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/>
<xsd:attribute name="sample_rate" type="xsd:int"/>
<xsd:attribute name="channels" type="xsd:int"/>
<xsd:attribute name="channel_layout" type="xsd:string"/>
<xsd:attribute name="bits_per_sample" type="xsd:int"/>
<xsd:attribute name="initial_padding" type="xsd:int"/>
<xsd:complexType name="formatType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string"/>
<xsd:attribute name="r_frame_rate" type="xsd:string" use="required"/>
<xsd:attribute name="avg_frame_rate" type="xsd:string" use="required"/>
<xsd:attribute name="time_base" type="xsd:string" use="required"/>
<xsd:attribute name="start_pts" type="xsd:long"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="duration_ts" type="xsd:long"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="bit_rate" type="xsd:int"/>
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
<xsd:attribute name="nb_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_packets" type="xsd:int"/>
</xsd:complexType>
<xsd:attribute name="filename" type="xsd:string" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="nb_programs" type="xsd:int" use="required"/>
<xsd:attribute name="format_name" type="xsd:string" use="required"/>
<xsd:attribute name="format_long_name" type="xsd:string"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="size" type="xsd:long"/>
<xsd:attribute name="bit_rate" type="xsd:long"/>
<xsd:attribute name="probe_score" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="programType">
<xsd:sequence>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:complexType name="tagType">
<xsd:attribute name="key" type="xsd:string" use="required"/>
<xsd:attribute name="value" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:attribute name="program_id" type="xsd:int" use="required"/>
<xsd:attribute name="program_num" type="xsd:int" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="pmt_pid" type="xsd:int" use="required"/>
<xsd:attribute name="pcr_pid" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="errorType">
<xsd:attribute name="code" type="xsd:int" use="required"/>
<xsd:attribute name="string" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:complexType name="formatType">
<xsd:sequence>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:complexType name="programVersionType">
<xsd:attribute name="version" type="xsd:string" use="required"/>
<xsd:attribute name="copyright" type="xsd:string" use="required"/>
<xsd:attribute name="build_date" type="xsd:string"/>
<xsd:attribute name="build_time" type="xsd:string"/>
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
<xsd:attribute name="configuration" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:attribute name="filename" type="xsd:string" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="nb_programs" type="xsd:int" use="required"/>
<xsd:attribute name="format_name" type="xsd:string" use="required"/>
<xsd:attribute name="format_long_name" type="xsd:string"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="size" type="xsd:long"/>
<xsd:attribute name="bit_rate" type="xsd:long"/>
<xsd:attribute name="probe_score" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="chaptersType">
<xsd:sequence>
<xsd:element name="chapter" type="ffprobe:chapterType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="tagType">
<xsd:attribute name="key" type="xsd:string" use="required"/>
<xsd:attribute name="value" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:complexType name="chapterType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:complexType name="errorType">
<xsd:attribute name="code" type="xsd:int" use="required"/>
<xsd:attribute name="string" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:attribute name="id" type="xsd:int" use="required"/>
<xsd:attribute name="time_base" type="xsd:string" use="required"/>
<xsd:attribute name="start" type="xsd:int" use="required"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="end" type="xsd:int" use="required"/>
<xsd:attribute name="end_time" type="xsd:float" use="required"/>
</xsd:complexType>
<xsd:complexType name="programVersionType">
<xsd:attribute name="version" type="xsd:string" use="required"/>
<xsd:attribute name="copyright" type="xsd:string" use="required"/>
<xsd:attribute name="build_date" type="xsd:string"/>
<xsd:attribute name="build_time" type="xsd:string"/>
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
<xsd:attribute name="configuration" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:complexType name="libraryVersionType">
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="major" type="xsd:int" use="required"/>
<xsd:attribute name="minor" type="xsd:int" use="required"/>
<xsd:attribute name="micro" type="xsd:int" use="required"/>
<xsd:attribute name="version" type="xsd:int" use="required"/>
<xsd:attribute name="ident" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:complexType name="chaptersType">
<xsd:sequence>
<xsd:element name="chapter" type="ffprobe:chapterType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="libraryVersionsType">
<xsd:sequence>
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="chapterType">
<xsd:sequence>
<xsd:element name="tags" type="ffprobe:tagsType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:complexType name="pixelFormatFlagsType">
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
<xsd:attribute name="palette" type="xsd:int" use="required"/>
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
<xsd:attribute name="planar" type="xsd:int" use="required"/>
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:attribute name="id" type="xsd:int" use="required"/>
<xsd:attribute name="time_base" type="xsd:string" use="required"/>
<xsd:attribute name="start" type="xsd:int" use="required"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="end" type="xsd:int" use="required"/>
<xsd:attribute name="end_time" type="xsd:float" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentType">
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="libraryVersionType">
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="major" type="xsd:int" use="required"/>
<xsd:attribute name="minor" type="xsd:int" use="required"/>
<xsd:attribute name="micro" type="xsd:int" use="required"/>
<xsd:attribute name="version" type="xsd:int" use="required"/>
<xsd:attribute name="ident" type="xsd:string" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentsType">
<xsd:sequence>
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="libraryVersionsType">
<xsd:sequence>
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="pixelFormatType">
<xsd:sequence>
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:complexType name="pixelFormatFlagsType">
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
<xsd:attribute name="palette" type="xsd:int" use="required"/>
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
<xsd:attribute name="planar" type="xsd:int" use="required"/>
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentType">
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentsType">
<xsd:sequence>
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="pixelFormatType">
<xsd:sequence>
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatsType">
<xsd:sequence>
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="pixelFormatsType">
<xsd:sequence>
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>

View File

@@ -167,9 +167,6 @@ Show available sample formats.
@item -layouts
Show channel names and standard channel layouts.
@item -dispositions
Show stream dispositions.
@item -colors
Show recognized color names.
@@ -356,13 +353,6 @@ Possible flags for this option are:
@end table
@end table
@item -cpucount @var{count} (@emph{global})
Override detection of CPU count. This option is intended
for testing. Do not use it unless you know what you're doing.
@example
ffmpeg -cpucount 2
@end example
@item -max_alloc @var{bytes}
Set the maximum size limit for allocating a block on the heap by ffmpeg's
family of malloc functions. Exercise @strong{extreme caution} when using

File diff suppressed because it is too large Load Diff

View File

@@ -49,6 +49,7 @@ Generate missing PTS if DTS is present.
Ignore DTS if PTS is set. Inert when nofillin is set.
@item ignidx
Ignore index.
@item keepside (@emph{deprecated},@emph{inert})
@item nobuffer
Reduce the latency introduced by buffering during initial input streams analysis.
@item nofillin
@@ -69,6 +70,7 @@ This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@item flush_packets
Write out packets immediately.
@item latm (@emph{deprecated},@emph{inert})
@item shortest
Stop muxing at the end of the shortest stream.
It may be needed to increase max_interleave_delta to avoid flushing the longer

View File

@@ -171,13 +171,6 @@ Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
enable it.
@section libjxl
JPEG XL is an image format intended to fully replace legacy JPEG for an extended
period of life. See @url{https://jpegxl.info/} for more information, and see
@url{https://github.com/libjxl/libjxl} for the library source. You can pass
@code{--enable-libjxl} to configure in order to enable the libjxl wrapper.
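A sketch of the corresponding configure invocation, run from the FFmpeg source
tree with the library already installed:
@example
./configure --enable-libjxl
@end example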
@section libvpx
FFmpeg can make use of the libvpx library for VP8/VP9 decoding and encoding.
@@ -270,7 +263,7 @@ to @file{./configure}.
FFmpeg can make use of the Scalable Video Technology for AV1 library for AV1 encoding.
Go to @url{https://gitlab.com/AOMediaCodec/SVT-AV1/} and follow the instructions
Go to @url{https://github.com/OpenVisualCloud/SVT-AV1/} and follow the instructions
for installing the library. Then pass @code{--enable-libsvtav1} to configure to
enable it.
@@ -510,8 +503,6 @@ library:
@tab A format used by libvpx
@item Internet Video Recording @tab @tab X
@item IRCAM @tab X @tab X
@item LAF @tab @tab X
@tab Limitless Audio Format
@item LATM @tab X @tab X
@item LMLM4 @tab @tab X
@tab Used by Linux Media Labs MPEG-4 PCI boards
@@ -534,8 +525,6 @@ library:
@item Metal Gear Solid: The Twin Snakes @tab @tab X
@item Megalux Frame @tab @tab X
@tab Used by Megalux Ultimate Paint
@item MobiClip MODS @tab @tab X
@item MobiClip MOFLEX @tab @tab X
@item Mobotix .mxg @tab @tab X
@item Monkey's Audio @tab @tab X
@item Motion Pixels MVI @tab @tab X
@@ -579,7 +568,6 @@ library:
@item Ogg @tab X @tab X
@item Playstation Portable PMP @tab @tab X
@item Portable Voice Format @tab @tab X
@item RK Audio (RKA) @tab @tab X
@item TechnoTrend PVA @tab @tab X
@tab Used by TechnoTrend DVB PCI boards.
@item QCP @tab @tab X
@@ -587,12 +575,9 @@ library:
@item raw AC-3 @tab X @tab X
@item raw AMR-NB @tab @tab X
@item raw AMR-WB @tab @tab X
@item raw APAC @tab @tab X
@item raw aptX @tab X @tab X
@item raw aptX HD @tab X @tab X
@item raw Bonk @tab @tab X
@item raw Chinese AVS video @tab X @tab X
@item raw DFPWM @tab X @tab X
@item raw Dirac @tab X @tab X
@item raw DNxHD @tab X @tab X
@item raw DTS @tab X @tab X
@@ -614,8 +599,6 @@ library:
@item raw NULL @tab X @tab
@item raw video @tab X @tab X
@item raw id RoQ @tab X @tab
@item raw OBU @tab X @tab X
@item raw OSQ @tab @tab X
@item raw SBC @tab X @tab X
@item raw Shorten @tab @tab X
@item raw TAK @tab @tab X
@@ -667,10 +650,8 @@ library:
@item Sample Dump eXchange @tab @tab X
@item SAP @tab X @tab X
@item SBG @tab @tab X
@item SDNS @tab @tab X
@item SDP @tab @tab X
@item SER @tab @tab X
@item Digital Pictures SGA @tab @tab X
@item Sega FILM/CPK @tab X @tab X
@tab Used in many Sega Saturn console games.
@item Silicon Graphics Movie @tab @tab X
@@ -708,21 +689,18 @@ library:
@item Vivo @tab @tab X
@item VPK @tab @tab X
@tab Audio format used in Sony PS games.
@item Marble WADY @tab @tab X
@item WAV @tab X @tab X
@item Waveform Archiver @tab @tab X
@item WavPack @tab X @tab X
@item WebM @tab X @tab X
@item Windows Television (WTV) @tab X @tab X
@item Wing Commander III movie @tab @tab X
@tab Multimedia format used in Origin's Wing Commander III computer game.
@item Westwood Studios audio @tab X @tab X
@item Westwood Studios audio @tab @tab X
@tab Multimedia format used in Westwood Studios games.
@item Westwood Studios VQA @tab @tab X
@tab Multimedia format used in Westwood Studios games.
@item Wideband Single-bit Data (WSD) @tab @tab X
@item WVE @tab @tab X
@item Konami XMD @tab @tab X
@item XMV @tab @tab X
@tab Microsoft video container used in Xbox games.
@item XVAG @tab @tab X
@@ -762,17 +740,12 @@ following image formats are supported:
@tab OpenEXR
@item FITS @tab X @tab X
@tab Flexible Image Transport System
@item HDR @tab X @tab X
@tab Radiance HDR RGBE Image format
@item IMG @tab @tab X
@tab GEM Raster image
@item JPEG @tab X @tab X
@tab Progressive JPEG is not supported.
@item JPEG 2000 @tab X @tab X
@item JPEG-LS @tab X @tab X
@item LJPEG @tab X @tab
@tab Lossless JPEG
@item Media 100 @tab @tab X
@item MSP @tab @tab X
@tab Microsoft Paint image
@item PAM @tab X @tab X
@@ -791,8 +764,6 @@ following image formats are supported:
@tab PGM with U and V components in YUV 4:2:0
@item PGX @tab @tab X
@tab PGX file decoder
@item PHM @tab X @tab X
@tab Portable HalfFloatMap image
@item PIC @tab @tab X
@tab Pictor/PC Paint
@item PNG @tab X @tab X
@@ -803,8 +774,6 @@ following image formats are supported:
@tab Photoshop
@item PTX @tab @tab X
@tab V.Flash PTX format
@item QOI @tab X @tab X
@tab Quite OK Image format
@item SGI @tab X @tab X
@tab SGI RGB image format
@item Sun Rasterfile @tab X @tab X
@@ -813,10 +782,6 @@ following image formats are supported:
@tab YUV, JPEG and some extensions are not supported yet.
@item Truevision Targa @tab X @tab X
@tab Targa (.TGA) image format
@item VBN @tab X @tab X
@tab Vizrt Binary Image format
@item WBMP @tab X @tab X
@tab Wireless Application Protocol Bitmap image format
@item WebP @tab E @tab X
@tab WebP image format, encoding supported through external library libwebp
@item XBM @tab X @tab X
@@ -1007,7 +972,7 @@ following image formats are supported:
@tab Also known as Microsoft Screen 3.
@item Microsoft Expression Encoder Screen @tab @tab X
@tab Also known as Microsoft Titanium Screen 2.
@item Microsoft RLE @tab X @tab X
@item Microsoft RLE @tab @tab X
@item Microsoft Screen 1 @tab @tab X
@tab Also known as Windows Media Video V7 Screen.
@item Microsoft Screen 2 @tab @tab X
@@ -1053,7 +1018,7 @@ following image formats are supported:
@item QuickTime 8BPS video @tab @tab X
@item QuickTime Animation (RLE) video @tab X @tab X
@tab fourcc: 'rle '
@item QuickTime Graphics (SMC) @tab X @tab X
@item QuickTime Graphics (SMC) @tab @tab X
@tab fourcc: 'smc '
@item QuickTime video (RPZA) @tab X @tab X
@tab fourcc: rpza
@@ -1067,8 +1032,6 @@ following image formats are supported:
@item RealVideo 4.0 @tab @tab X
@item Renderware TXD (TeXture Dictionary) @tab @tab X
@tab Texture dictionaries used by the Renderware Engine.
@item RivaTuner Video @tab @tab X
@tab fourcc: 'RTV1'
@item RL2 video @tab @tab X
@tab used in some games by Entertainment Software Partners
@item ScreenPressor @tab @tab X
@@ -1105,8 +1068,6 @@ following image formats are supported:
@item v408 QuickTime uncompressed 4:4:4:4 @tab X @tab X
@item v410 QuickTime uncompressed 4:4:4 10-bit @tab X @tab X
@item VBLE Lossless Codec @tab @tab X
@item vMix Video @tab @tab X
@tab fourcc: 'VMX1'
@item VMware Screen Codec / VMware Video @tab @tab X
@tab Codec used in videos captured by VMware.
@item Westwood Studios VQA (Vector Quantized Animation) video @tab @tab X
@@ -1165,7 +1126,6 @@ following image formats are supported:
@item ADPCM Electronic Arts XAS @tab @tab X
@item ADPCM G.722 @tab X @tab X
@item ADPCM G.726 @tab X @tab X
@item ADPCM IMA Acorn Replay @tab @tab X
@item ADPCM IMA AMV @tab X @tab X
@tab Used in AMV files
@item ADPCM IMA Cunning Developments @tab @tab X
@@ -1173,7 +1133,6 @@ following image formats are supported:
@item ADPCM IMA Electronic Arts SEAD @tab @tab X
@item ADPCM IMA Funcom @tab @tab X
@item ADPCM IMA High Voltage Software ALP @tab X @tab X
@item ADPCM IMA Mobiclip MOFLEX @tab @tab X
@item ADPCM IMA QuickTime @tab X @tab X
@item ADPCM IMA Simon & Schuster Interactive @tab X @tab X
@item ADPCM IMA Ubisoft APM @tab X @tab X
@@ -1203,8 +1162,7 @@ following image formats are supported:
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
@item ADPCM VIMA @tab @tab X
@tab Used in LucasArts SMUSH animations.
@item ADPCM Konami XMD @tab @tab X
@item ADPCM Westwood Studios IMA @tab X @tab X
@item ADPCM Westwood Studios IMA @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer.
@item ADPCM Yamaha @tab X @tab X
@item ADPCM Zork @tab @tab X
@@ -1225,7 +1183,6 @@ following image formats are supported:
@item ATRAC9 @tab @tab X
@item Bink Audio @tab @tab X
@tab Used in Bink and Smacker files in many games.
@item Bonk audio @tab @tab X
@item CELT @tab @tab E
@tab decoding supported through external library libcelt
@item codec2 @tab E @tab E
@@ -1233,7 +1190,6 @@ following image formats are supported:
@item CRI HCA @tab @tab X
@item Delphine Software International CIN audio @tab @tab X
@tab Codec used in Delphine Software International games.
@item DFPWM @tab X @tab X
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
@item Discworld II BMV Audio @tab @tab X
@item COOK @tab @tab X
@@ -1241,12 +1197,9 @@ following image formats are supported:
@item DCA (DTS Coherent Acoustics) @tab X @tab X
@tab supported extensions: XCh, XXCH, X96, XBR, XLL, LBR (partially)
@item Dolby E @tab @tab X
@item DPCM Cuberoot-Delta-Exact @tab @tab X
@tab Used in a few games.
@item DPCM Gremlin @tab @tab X
@item DPCM id RoQ @tab X @tab X
@tab Used in Quake III, Jedi Knight 2 and other computer games.
@item DPCM Marble WADY @tab @tab X
@item DPCM Interplay @tab @tab X
@tab Used in various Interplay computer games.
@item DPCM Squareroot-Delta-Exact @tab @tab X
@@ -1267,7 +1220,6 @@ following image formats are supported:
@item Enhanced AC-3 @tab X @tab X
@item EVRC (Enhanced Variable Rate Codec) @tab @tab X
@item FLAC (Free Lossless Audio Codec) @tab X @tab IX
@item FTR Voice @tab @tab X
@item G.723.1 @tab X @tab X
@item G.729 @tab @tab X
@item GSM @tab E @tab X
@@ -1275,14 +1227,12 @@ following image formats are supported:
@item GSM Microsoft variant @tab E @tab X
@tab encoding supported through external library libgsm
@item IAC (Indeo Audio Coder) @tab @tab X
@item iLBC (Internet Low Bitrate Codec) @tab E @tab EX
@item iLBC (Internet Low Bitrate Codec) @tab E @tab E
@tab encoding and decoding supported through external library libilbc
@item IMC (Intel Music Coder) @tab @tab X
@item Interplay ACM @tab @tab X
@item MACE (Macintosh Audio Compression/Expansion) 3:1 @tab @tab X
@item MACE (Macintosh Audio Compression/Expansion) 6:1 @tab @tab X
@item Marian's A-pac audio @tab @tab X
@item MI-SC4 (Micronas SC-4 Audio) @tab @tab X
@item MLP (Meridian Lossless Packing) @tab X @tab X
@tab Used in DVD-Audio discs.
@item Monkey's Audio @tab @tab X
@@ -1292,14 +1242,12 @@ following image formats are supported:
@item MP3 (MPEG audio layer 3) @tab E @tab IX
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
@item MobiClip FastAudio @tab @tab X
@item Musepack SV7 @tab @tab X
@item Musepack SV8 @tab @tab X
@item Nellymoser Asao @tab X @tab X
@item On2 AVC (Audio for Video Codec) @tab @tab X
@item Opus @tab E @tab X
@tab encoding supported through external library libopus
@item OSQ (Original Sound Quality) @tab @tab X
@item PCM A-law @tab X @tab X
@item PCM mu-law @tab X @tab X
@item PCM Archimedes VIDC @tab X @tab X
@@ -1328,7 +1276,6 @@ following image formats are supported:
@item PCM unsigned 24-bit little-endian @tab X @tab X
@item PCM unsigned 32-bit big-endian @tab X @tab X
@item PCM unsigned 32-bit little-endian @tab X @tab X
@item PCM SGA @tab @tab X
@item QCELP / PureVoice @tab @tab X
@item QDesign Music Codec 1 @tab @tab X
@item QDesign Music Codec 2 @tab @tab X
@@ -1341,7 +1288,6 @@ following image formats are supported:
@tab Real low bitrate AC-3 codec
@item RealAudio Lossless @tab @tab X
@item RealAudio SIPR / ACELP.NET @tab @tab X
@item RK Audio (RKA) @tab @tab X
@item SBC (low-complexity subband codec) @tab X @tab X
@tab Used in Bluetooth A2DP
@item Shorten @tab @tab X
@@ -1353,7 +1299,7 @@ following image formats are supported:
@tab experimental codec
@item Sonic lossless @tab X @tab X
@tab experimental codec
@item Speex @tab E @tab EX
@item Speex @tab E @tab E
@tab supported through external library libspeex
@item TAK (Tom's lossless Audio Kompressor) @tab @tab X
@item True Audio (TTA) @tab X @tab X
@@ -1362,11 +1308,9 @@ following image formats are supported:
@item TwinVQ (VQF flavor) @tab @tab X
@item VIMA @tab @tab X
@tab Used in LucasArts SMUSH animations.
@item ViewQuest VQC @tab @tab X
@item Vorbis @tab E @tab X
@tab A native but very primitive encoder exists.
@item Voxware MetaSound @tab @tab X
@item Waveform Archiver @tab @tab X
@item WavPack @tab X @tab X
@item Westwood Audio (SND1) @tab @tab X
@item Windows Media Audio 1 @tab X @tab X


@@ -53,7 +53,7 @@ Most distribution and operating system provide a package for it.
@section Cloning the source tree
@example
git clone https://git.ffmpeg.org/ffmpeg.git <target>
git clone git://source.ffmpeg.org/ffmpeg <target>
@end example
This will put the FFmpeg sources into the directory @var{<target>}.
@@ -143,7 +143,7 @@ git log <filename(s)>
@end example
You may also use the graphical tools like @command{gitview} or @command{gitk}
or the web interface available at @url{https://git.ffmpeg.org/ffmpeg.git}.
or the web interface available at @url{http://source.ffmpeg.org/}.
@section Checking source tree status
@@ -187,18 +187,11 @@ to make sure you don't have untracked files or deletions.
git add [-i|-p|-A] <filenames/dirnames>
@end example
Make sure you have told Git your name, email address and GPG key
Make sure you have told Git your name and email address
@example
git config --global user.name "My Name"
git config --global user.email my@@email.invalid
git config --global user.signingkey ABCDEF0123245
@end example
Enable signing all commits or use -S
@example
git config --global commit.gpgsign true
@end example
Use @option{--global} to set the global configuration for all your Git checkouts.
@@ -224,46 +217,16 @@ git config --global core.editor
or set by one of the following environment variables:
@var{GIT_EDITOR}, @var{VISUAL} or @var{EDITOR}.
@section Writing a commit message
Log messages should be concise but descriptive. Explain why you made a change;
what you did will be obvious from the changes themselves most of the time.
Saying just "bug fix" or "10l" is bad. Remember that people of varying skill
levels look at and educate themselves while reading through your code. Don't
include filenames in log messages; Git provides that information.
Log messages should be concise but descriptive.
The first line must contain the context, a colon and a very short
summary of what the commit does. Details can be added, if necessary,
separated by an empty line. These details should not exceed 60-72 characters
per line, except when containing code.
Example of a good commit message:
@example
avcodec/cbs: add a helper to read extradata within packet side data
Using ff_cbs_read() on the raw buffer will not parse it as extradata,
resulting in parsing errors for example when handling ISOBMFF avcC.
This helper works around that.
@end example
@example
ptr might be NULL
@end example
If the summary on the first line is not enough, in the body of the message,
explain why you made a change; what you did will be obvious from the changes
themselves most of the time. Saying just "bug fix" or "10l" is bad. Remember
that people of varying skill levels look at and educate themselves while
reading through your code. Don't include filenames in log messages except in
the context; Git provides that information.
If the commit fixes a registered issue, state it in a separate line of the
body: @code{Fix Trac ticket #42.}
The first line will be used to name
Possibly make the commit message have a terse, descriptive first line, an
empty line and then a full description. The first line will be used to name
the patch by @command{git format-patch}.
Common mistakes for the first line, as seen in @command{git log --oneline}
include: missing context at the beginning; description of what the code did
before the patch; line too long or wrapped to the second line.
@section Preparing a patchset
@example
@@ -430,19 +393,6 @@ git checkout -b svn_23456 $SHA1
where @var{$SHA1} is the commit hash from the @command{git log} output.
@chapter gpg key generation
If you have no gpg key yet, we recommend that you create an ed25519-based key as it
is small, fast and secure. In particular, it results in small signatures in git.
@example
gpg --default-new-key-algo "ed25519/cert,sign+cv25519/encr" --quick-generate-key "human@@server.com"
@end example
When generating a key, make sure the email specified matches the email used in git, as some sites like
GitHub consider mismatches a reason to declare such commits unverified. After generating a key you
can add it to the MAINTAINERS file and upload it to a keyserver.
@chapter Pre-push checklist
Once you have a set of commits that you feel are ready for pushing,


@@ -344,23 +344,9 @@ Defines number of audio channels to capture. Must be @samp{2}, @samp{8} or @samp
Defaults to @samp{2}.
@item duplex_mode
Sets the decklink device duplex/profile mode. Must be @samp{unset}, @samp{half}, @samp{full},
@samp{one_sub_device_full}, @samp{one_sub_device_half}, @samp{two_sub_device_full},
@samp{four_sub_device_half}
Sets the decklink device duplex mode. Must be @samp{unset}, @samp{half} or @samp{full}.
Defaults to @samp{unset}.
Note: DeckLink SDK 11.0 has replaced the duplex property with a profile property.
For the DeckLink Duo 2 and DeckLink Quad 2, a profile is shared between any 2
sub-devices that utilize the same connectors. For the DeckLink 8K Pro, a profile
is shared between all 4 sub-devices. So the DeckLink 8K Pro supports four profiles.
Valid profile modes for the DeckLink 8K Pro (with DeckLink SDK >= 11.0):
@samp{one_sub_device_full}, @samp{one_sub_device_half}, @samp{two_sub_device_full},
@samp{four_sub_device_half}
Valid profile modes for DeckLink Quad 2 and DeckLink Duo 2:
@samp{half}, @samp{full}
@item timecode_format
Timecode type to include in the frame and video stream metadata. Must be
@samp{none}, @samp{rp188vitc}, @samp{rp188vitc2}, @samp{rp188ltc},
@@ -625,12 +611,6 @@ Save the currently used video capture filter device and its
parameters (if the filter supports it) to a file.
If a file with the same name exists it will be overwritten.
@item use_video_device_timestamps
If set to @option{false}, the timestamp for video frames will be
derived from the wallclock instead of the timestamp provided by
the capture device. This allows working around devices that
provide unreliable timestamps.
@end table
@subsection Examples
@@ -991,8 +971,9 @@ This input device reads data from the open output pads of a libavfilter
filtergraph.
For each filtergraph open output, the input device will create a
corresponding stream which is mapped to the generated output.
The filtergraph is specified through the option @option{graph}.
corresponding stream which is mapped to the generated output. Currently
only video data is supported. The filtergraph is specified through the
option @option{graph}.
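As a quick illustrative sketch (the graph string here is arbitrary), a synthetic
test pattern can be generated and displayed with:
@example
ffplay -f lavfi -i "testsrc=duration=5:size=640x360:rate=30"
@end example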
@subsection Options
@@ -1288,11 +1269,11 @@ Specify the samplerate in Hz, by default 48kHz is used.
Specify the channels in use, by default 2 (stereo) is set.
@item frame_size
This option does nothing and is deprecated.
Specify the number of bytes per frame, by default it is set to 1024.
@item fragment_size
Specify the size in bytes of the minimal buffering fragment in PulseAudio; it
will affect the audio latency. By default it is set to 50 ms worth of data.
Specify the minimal buffering fragment in PulseAudio; it will affect the
audio latency. By default it is unset.
@item wallclock
Set the initial PTS using the current time. Default is 1.


@@ -116,7 +116,7 @@ or is abusive towards others).
@section How long does it take for my message in the moderation queue to be approved?
The queue is not checked on a regular basis. You can ask on the
@t{#ffmpeg-devel} IRC channel on Libera Chat for someone to approve your message.
@t{#ffmpeg-devel} IRC channel on Freenode for someone to approve your message.
@anchor{How do I delete my message in the moderation queue?}
@section How do I delete my message in the moderation queue?
@@ -155,7 +155,7 @@ Perform a site search using your favorite search engine. Example:
@section Is there an alternative to the mailing list?
You can ask for help in the official @t{#ffmpeg} IRC channel on Libera Chat.
You can ask for help in the official @t{#ffmpeg} IRC channel on Freenode.
Some users prefer the third-party @url{http://www.ffmpeg-archive.org/, Nabble}
interface which presents the mailing lists in a typical forum layout.
@@ -344,7 +344,7 @@ recommended.
Avoid sending the same message to multiple mailing lists.
@item
Please follow our @url{https://ffmpeg.org/community.html#Code-of-conduct, Code of Conduct}.
Please follow our @url{https://ffmpeg.org/developer.html#Code-of-conduct, Code of Conduct}.
@end itemize
@chapter Help


@@ -48,6 +48,11 @@ Files that have MIPS copyright notice in them:
float_dsp_mips.c
libm_mips.h
softfloat_tables.h
* libavcodec/
fft_fixed_32.c
fft_init_table.c
fft_table.h
mdct_fixed_32.c
* libavcodec/mips/
aacdec_fixed.c
aacsbr_fixed.c
@@ -65,6 +70,9 @@ Files that have MIPS copyright notice in them:
compute_antialias_float.h
lsp_mips.h
dsputil_mips.c
fft_mips.c
fft_table.h
fft_init_table.c
fmtconvert_mips.c
iirfilter_mips.c
mpegaudiodsp_mips_fixed.c


@@ -55,7 +55,8 @@ speed gain at this point but it should work.
If there are inter-frame dependencies, so the codec calls
ff_thread_report/await_progress(), set FF_CODEC_CAP_ALLOCATE_PROGRESS in
FFCodec.caps_internal and use ff_thread_get_buffer() to allocate frames.
AVCodec.caps_internal and use ff_thread_get_buffer() to allocate frames. The
frames must then be freed with ff_thread_release_buffer().
Otherwise decode directly into the user-supplied frames.
Call ff_thread_report_progress() after some part of the current picture has decoded.


@@ -19,33 +19,6 @@ enabled demuxers and muxers.
A description of some of the currently available muxers follows.
@anchor{a64}
@section a64
A64 muxer for Commodore 64 video. Accepts a single @code{a64_multi} or @code{a64_multi5} codec video stream.
@anchor{adts}
@section adts
Audio Data Transport Stream muxer. It accepts a single AAC stream.
@subsection Options
It accepts the following options:
@table @option
@item write_id3v2 @var{bool}
Enable to write ID3v2.4 tags at the start of the stream. Default is disabled.
@item write_apetag @var{bool}
Enable to write APE tags at the end of the stream. Default is disabled.
@item write_mpeg2 @var{bool}
Enable to set MPEG version bit in the ADTS frame header to 1 which indicates MPEG-2. Default is 0, which indicates MPEG-4.
@end table
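As an illustrative sketch (assuming an input file that already contains an AAC
stream), the audio can be remuxed to ADTS without re-encoding:
@example
ffmpeg -i input.m4a -c:a copy -f adts output.aac
@end example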
@anchor{aiff}
@section aiff
@@ -65,37 +38,6 @@ ID3v2.3 and ID3v2.4) are supported. The default is version 4.
@end table
@anchor{alp}
@section alp
Muxer for audio of High Voltage Software's Lego Racers game. It accepts a single ADPCM_IMA_ALP stream
with no more than 2 channels and a sample rate no greater than 44100 Hz.
Extensions: tun, pcm
@subsection Options
It accepts the following options:
@table @option
@item type @var{type}
Set file type.
@table @samp
@item tun
Set file type as music. Must have a sample rate of 22050 Hz.
@item pcm
Set file type as sfx.
@item auto
Set file type as per the output file extension. @code{.pcm} results in type @code{pcm}, otherwise type @code{tun} is set. @var{(default)}
@end table
@end table
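A hypothetical invocation (the input and output names are only illustrative)
that writes a 22050 Hz music file could look like:
@example
ffmpeg -i input.wav -ar 22050 -c:a adpcm_ima_alp -f alp -type tun music.tun
@end example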
@anchor{asf}
@section asf
@@ -231,6 +173,37 @@ and the input video converted to MPEG-2 video, use the command:
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example
@section flv
Adobe Flash Video Format muxer.
This muxer accepts the following options:
@table @option
@item flvflags @var{flags}
Possible values:
@table @samp
@item aac_seq_header_detect
Place AAC sequence header based on audio stream data.
@item no_sequence_end
Disable sequence end tag.
@item no_metadata
Disable metadata tag.
@item no_duration_filesize
Disable duration and filesize in metadata when they are equal to zero
at the end of stream. (Useful for non-seekable live streams.)
@item add_keyframe_index
Used to facilitate seeking, particularly for HTTP pseudo-streaming.
@end table
@end table
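For instance, a minimal sketch that remuxes an existing file (assuming
FLV-compatible codecs such as H.264/AAC) while omitting the metadata tag might be:
@example
ffmpeg -i input.mp4 -c copy -f flv -flvflags no_metadata output.flv
@end example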
@anchor{dash}
@section dash
@@ -264,6 +237,8 @@ ffmpeg -re -i <input> -map 0 -map 0 -c:a libfdk_aac -c:v libx264 \
@end example
@table @option
@item min_seg_duration @var{microseconds}
This is a deprecated option to set the segment length in microseconds; use @var{seg_duration} instead.
@item seg_duration @var{duration}
Set the segment length in seconds (fractional value can be set). The value is
treated as average segment duration when @var{use_template} is enabled and
@@ -362,13 +337,12 @@ Ignore IO errors during open and write. Useful for long-duration runs with netwo
@item lhls @var{lhls}
Enable Low-latency HLS (LHLS). Adds the #EXT-X-PREFETCH tag with the current segment's URI.
hls.js player folks are trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option tries to comply with the above open spec.
It enables @var{streaming} and @var{hls_playlist} options automatically.
Apple doesn't have an official spec for LHLS. Meanwhile hls.js player folks are
trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option will also try to comply with the above open spec, until Apple's spec officially supports it.
Applicable only when @var{streaming} and @var{hls_playlist} options are enabled.
This is an experimental feature.
Note: This is not Apple's version of LHLS. See @url{https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis}
@item ldash @var{ldash}
Enable Low-latency Dash by constraining the presence and values of some elements.
@@ -406,137 +380,6 @@ adjusting playback latency and buffer occupancy during normal playback by client
@end table
@anchor{fifo}
@section fifo
The fifo pseudo-muxer allows the separation of encoding and muxing by using a
first-in-first-out queue and running the actual muxer in a separate thread. This
is especially useful in combination with the @ref{tee} muxer and can be used to
send data to several destinations with different reliability/writing speed/latency.
API users should be aware that callback functions (interrupt_callback,
io_open and io_close) used within its AVFormatContext must be thread-safe.
The behavior of the fifo muxer if the queue fills up or if the output fails is
selectable:
@itemize @bullet
@item
output can be transparently restarted with configurable delay between retries
based on real time or time of the processed stream.
@item
encoding can be blocked during temporary failure, or continue transparently
dropping packets in case fifo queue fills up.
@end itemize
@table @option
@item fifo_format
Specify the format name. Useful if it cannot be guessed from the
output name suffix.
@item queue_size
Specify size of the queue (number of packets). Default value is 60.
@item format_opts
Specify format options for the underlying muxer. Muxer options can be specified
as a list of @var{key}=@var{value} pairs separated by ':'.
@item drop_pkts_on_overflow @var{bool}
If set to 1 (true), in case the fifo queue fills up, packets will be dropped
rather than blocking the encoder. This makes it possible to continue streaming without
delaying the input, at the cost of omitting part of the stream. By default
this option is set to 0 (false), so in such cases the encoder will be blocked
until the muxer processes some of the packets and none of them is lost.
@item attempt_recovery @var{bool}
If failure occurs, attempt to recover the output. This is especially useful
when used with network output, since it makes it possible to restart streaming transparently.
By default this option is set to 0 (false).
@item max_recovery_attempts
Sets maximum number of successive unsuccessful recovery attempts after which
the output fails permanently. By default this option is set to 0 (unlimited).
@item recovery_wait_time @var{duration}
Waiting time before the next recovery attempt after previous unsuccessful
recovery attempt. Default value is 5 seconds.
@item recovery_wait_streamtime @var{bool}
If set to 0 (false), the real time is used when waiting for the recovery
attempt (i.e. the recovery will be attempted after at least
recovery_wait_time seconds).
If set to 1 (true), the time of the processed stream is taken into account
instead (i.e. the recovery will be attempted after at least @var{recovery_wait_time}
seconds of the stream have been omitted).
By default, this option is set to 0 (false).
@item recover_any_error @var{bool}
If set to 1 (true), recovery will be attempted regardless of type of the error
causing the failure. By default this option is set to 0 (false) and in case of
certain (usually permanent) errors the recovery is not attempted even when
@var{attempt_recovery} is set to 1.
@item restart_with_keyframe @var{bool}
Specify whether to wait for the keyframe after recovering from
queue overflow or failure. This option is set to 0 (false) by default.
@item timeshift @var{duration}
Buffer the specified amount of packets and delay writing the output. Note that
@var{queue_size} must be big enough to store the packets for timeshift. At the
end of the input the fifo buffer is flushed at realtime speed.
@end table
@subsection Examples
@itemize
@item
Stream something to an RTMP server, continue processing the stream at real-time
rate even in case of temporary failure (network outage) and attempt to recover
streaming every second indefinitely.
@example
ffmpeg -re -i ... -c:v libx264 -c:a aac -f fifo -fifo_format flv -map 0:v -map 0:a
-drop_pkts_on_overflow 1 -attempt_recovery 1 -recovery_wait_time 1 rtmp://example.com/live/stream_name
@end example
@end itemize
@section flv
Adobe Flash Video Format muxer.
This muxer accepts the following options:
@table @option
@item flvflags @var{flags}
Possible values:
@table @samp
@item aac_seq_header_detect
Place AAC sequence header based on audio stream data.
@item no_sequence_end
Disable sequence end tag.
@item no_metadata
Disable metadata tag.
@item no_duration_filesize
Disable duration and filesize in metadata when they are equal to zero
at the end of stream. (Useful for non-seekable live streams.)
@item add_keyframe_index
Used to facilitate seeking, particularly for HTTP pseudo-streaming.
@end table
@end table
@anchor{framecrc}
@section framecrc
deletes them. Increase this to allow clients to continue downloading segments which
were recently referenced in the playlist. Default value is 1, meaning segments older than
@code{hls_list_size+1} will be deleted.
@item hls_ts_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing @code{:} special characters must be
escaped.
@item hls_wrap @var{wrap}
This is a deprecated option; you can use @code{hls_list_size}
and @code{hls_flags delete_segments} instead.
This option is useful to avoid filling the disk with many segment
files, and limits the maximum number of segment files written to disk
to @var{wrap}.
@item hls_start_number_source
Start the playlist sequence number (@code{#EXT-X-MEDIA-SEQUENCE}) according to the specified source.
Unless @code{hls_flags single_file} is set, it also specifies source of starting sequence numbers of
@@ -880,6 +737,9 @@ This example will produce the playlists segment file sets:
@file{vs0/file_000.ts}, @file{vs0/file_001.ts}, @file{vs0/file_002.ts}, etc. and
@file{vs1/file_000.ts}, @file{vs1/file_001.ts}, @file{vs1/file_002.ts}, etc.
@item use_localtime
Same as the strftime option; it will be deprecated.
@item strftime
Use strftime() on @var{filename} to expand the segment filename with localtime.
The segment number is also available in this mode, but to use it, you need to specify second_level_segment_index
@@ -897,6 +757,9 @@ ffmpeg -i in.nut -strftime 1 -hls_flags second_level_segment_index -hls_segment_
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{file-20160215-0001.ts}, @file{file-20160215-0002.ts}, etc.
@item use_localtime_mkdir
Same as the strftime_mkdir option; it will be deprecated.
@item strftime_mkdir
Used together with -strftime_mkdir, it will create all subdirectories which
are expanded in @var{filename}.
@@ -914,10 +777,6 @@ This example will create a directory hierarchy 2016/02/15 (if any of them do not
produce the playlist, @file{out.m3u8}, and segment files:
@file{2016/02/15/file-20160215-1455569023.ts}, @file{2016/02/15/file-20160215-1455569024.ts}, etc.
@item hls_segment_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing @code{:} special characters must be
escaped.
@item hls_key_info_file @var{key_info_file}
Use the information in @var{key_info_file} for segment encryption. The first
@@ -1054,8 +913,6 @@ and remove the @code{#EXT-X-ENDLIST} from the old segment list.
@item round_durations
Round the duration info in the playlist file segment info to integer
values, instead of using floating point.
If there are no other features requiring higher HLS versions to be used,
then this will allow ffmpeg to output an HLS version 2 m3u8.
@item discont_start
Add the @code{#EXT-X-DISCONTINUITY} tag to the playlist, before the
@@ -1325,7 +1182,7 @@ Use persistent HTTP connections. Applicable only for HTTP output.
@item timeout
Set timeout for socket I/O operations. Applicable only for HTTP output.
@item ignore_io_errors
@item -ignore_io_errors
Ignore IO errors during open, write and delete. Useful for long-duration runs with network output.
@item headers
@@ -1421,10 +1278,6 @@ overwritten with new images. Default value is 0.
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@item atomic_writing
Write output to a temporary file, which is renamed to the target filename once
writing is completed. Default is disabled.
@item protocol_opts @var{options_list}
Set protocol options as a :-separated list of key=value parameters. Values
containing the @code{:} special character must be escaped.
@@ -1468,7 +1321,7 @@ ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
You can set the file name with current frame's PTS:
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -copyts -f image2 -frame_pts true %d.jpg
ffmpeg -f v4l2 -r 1 -i /dev/video0 -copyts -f image2 -frame_pts true %d.jpg"
@end example
A more complex example is to publish contents of your desktop directly to a
@@ -1563,27 +1416,18 @@ A safe size for most use cases should be about 50kB per hour of video.
Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
@item cues_to_front
If set, the muxer will write the index at the beginning of the file
by shifting the main data if necessary. This can be combined with
reserve_index_space in which case the data is only shifted if
the initially reserved space turns out to be insufficient.
This option is ignored if the output is unseekable.
@item default_mode
This option controls how the FlagDefault of the output tracks will be set.
It influences which tracks players should play by default. The default mode
is @samp{passthrough}.
is @samp{infer}.
@table @samp
@item infer
Every track with disposition default will have the FlagDefault set.
Additionally, for each type of track (audio, video or subtitle), if no track
with disposition default of this type exists, then the first track of this type
will be marked as default (if existing). This ensures that the default flag
is set in a sensible way even if the input originated from containers that
lack the concept of default tracks.
In this mode, for each type of track (audio, video or subtitle), if there is
a track with disposition default of this type, then the first such track
(i.e. the one with the lowest index) will be marked as default; if no such
track exists, the first track of this type will be marked as default instead
(if existing). This ensures that the default flag is set in a sensible way even
if the input originated from containers that lack the concept of default tracks.
@item infer_no_subs
This mode is the same as infer except that if no subtitle track with
disposition default exists, no subtitle track will be marked as default.
@@ -1630,10 +1474,8 @@ MOV/MP4/ISMV (Smooth Streaming) muxer.
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback by adding @code{+faststart} to the @code{-movflags}, or
using the @command{qt-faststart} tool).
A fragmented
better playback by adding @var{faststart} to the @var{movflags}, or
using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
@@ -1643,34 +1485,40 @@ very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
Fragmentation is enabled by setting one of the options that define
how to cut the file into fragments: @code{-frag_duration}, @code{-frag_size},
@code{-min_frag_duration}, @code{-movflags +frag_keyframe} and
@code{-movflags +frag_custom}. If more than one condition is specified,
fragments are cut when one of the specified conditions is fulfilled. The
exception to this is @code{-min_frag_duration}, which has to be fulfilled for
any of the other conditions to apply.
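As a minimal sketch (the input name is only illustrative), a fragmented MP4 with
a new fragment at every keyframe, and an initial empty moov as described below,
can be produced with:
@example
ffmpeg -i input.mp4 -c copy -movflags +frag_keyframe+empty_moov fragmented.mp4
@end example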
@subsection Options
Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
@table @option
@item frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@item movflags @var{flags}
Set various muxing switches. The following flags can be used:
@table @samp
@item frag_keyframe
@item -moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item frag_custom
@item -frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item -frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item -movflags frag_custom
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from @command{ffmpeg}.)
@item empty_moov
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table
If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
@code{-min_frag_duration}, which has to be fulfilled for any of the other
conditions to apply.
Additionally, the way the output file is written can be adjusted
through a few other options:
@table @option
@item -movflags empty_moov
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
@@ -1679,40 +1527,43 @@ mdat atom, and the moov atom only describes the tracks but has
a zero duration.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item separate_moof
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item skip_sidx
@item -movflags skip_sidx
Skip writing of sidx atom. When bitrate overhead due to sidx atom is high,
this option could be used for cases where sidx atom is not mandatory.
When global_sidx flag is enabled, this option will be ignored.
@item faststart
@item -movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
@item rtphint
@item -movflags rtphint
Add RTP hinting tracks to the output file.
@item disable_chpl
@item -movflags disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs, like
mp3Tag 2.61a and iTunes 11.3; most likely other versions are affected as well.
@item omit_tfhd_offset
@item -movflags omit_tfhd_offset
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
@item default_base_moof
@item -movflags default_base_moof
Similarly to the omit_tfhd_offset, this flag avoids writing the
absolute base_data_offset field in tfhd atoms, but does so by using
the new default-base-is-moof flag instead. This flag is new from
14496-12:2012. This may make the fragments easier to parse in certain
circumstances (avoiding basing track fragment location calculations
on the implicit end of the previous track fragment).
@item negative_cts_offsets
@item -write_tmcd
Specify @code{on} to force writing a timecode track, @code{off} to disable it
and @code{auto} to write a timecode track only for mov and mp4 output (default).
@item -movflags negative_cts_offsets
Enables utilization of version 1 of the CTTS box, in which the CTS offsets can
be negative. This enables the initial sample to have DTS/CTS of zero, and
reduces the need for edit lists for some cases such as video tracks with
@@ -1720,24 +1571,7 @@ B-frames. Additionally, eases conformance with the DASH-IF interoperability
guidelines.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@end table
@item moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item write_tmcd
Specify @code{on} to force writing a timecode track, @code{off} to disable it
and @code{auto} to write a timecode track only for mov and mp4 output (default).
@item write_btrt @var{bool}
Force or disable writing bitrate box inside stsd box of a track.
The box contains decoding buffer size (in bytes), maximum bitrate and
average bitrate for the track. The box will be skipped if none of these values
can be computed.
Default is @code{-1} or @code{auto}, which will write the box only in MP4 mode.
@item write_prft
@item -write_prft
Write producer time reference box (PRFT) with a specified time source for the
NTP field in the PRFT box. Set value as @samp{wallclock} to specify timesource
as wallclock time and @samp{pts} to specify timesource as input packets' PTS
Setting value to @samp{pts} is applicable only for a live encoding use case,
where PTS values are set as wallclock time at the source. For example, an
encoding use case with decklink capture source where @option{video_pts} and
@option{audio_pts} are set to @samp{abs_wallclock}.
@item empty_hdlr_name @var{bool}
Enable to skip writing the name inside a @code{hdlr} box.
Default is @code{false}.
@item movie_timescale @var{scale}
Set the timescale written in the movie header box (@code{mvhd}).
Range is 1 to INT_MAX. Default is 1000.
@item video_track_timescale @var{scale}
Set the timescale used for video tracks. Range is 0 to INT_MAX.
If set to @code{0}, the timescale is automatically set based on
the native stream time base. Default is 0.
@end table
@subsection Example
@@ -1909,10 +1730,6 @@ Reemit PAT and PMT at each video frame.
Conform to System B (DVB) instead of System A (ATSC).
@item initial_discontinuity
Mark the initial packet of each stream as discontinuity.
@item nit
Emit NIT table.
@item omit_rai
Disable writing of random access indicator.
@end table
@item mpegts_copyts @var{boolean}
@@ -1934,11 +1751,8 @@ Maximum time in seconds between PAT/PMT tables. Default is @code{0.1}.
@item sdt_period @var{duration}
Maximum time in seconds between SDT tables. Default is @code{0.5}.
@item nit_period @var{duration}
Maximum time in seconds between NIT tables. Default is @code{0.5}.
@item tables_version @var{integer}
Set PAT, PMT, SDT and NIT version (default @code{0}, valid values are from 0 to 31, inclusively).
Set PAT, PMT and SDT version (default @code{0}, valid values are from 0 to 31, inclusively).
This option allows updating stream structure so that standard consumer may
detect the change. To do so, reopen output @code{AVFormatContext} (in case of API
usage) or restart @command{ffmpeg} instance, cyclically changing
@@ -2050,188 +1864,6 @@ ogg files can be safely chained.
@end table
@anchor{raw muxers}
@section raw muxers
Raw muxers accept a single stream matching the designated codec. They do not store timestamps or metadata.
The recognized extension is the same as the muxer name unless indicated otherwise.
@subsection ac3
Dolby Digital, also known as AC-3, audio.
@subsection adx
CRI Middleware ADX audio.
This muxer will write out the total sample count near the start of the first packet
when the output is seekable and the count can be stored in 32 bits.
@subsection aptx
aptX (Audio Processing Technology for Bluetooth) audio.
@subsection aptx_hd
aptX HD (Audio Processing Technology for Bluetooth) audio.
Extensions: aptxhd
@subsection avs2
AVS2-P2/IEEE1857.4 video.
Extensions: avs, avs2
@subsection cavsvideo
Chinese AVS (Audio Video Standard) video.
Extensions: cavs
@subsection codec2raw
Codec 2 audio.
No extension is registered, so the format name has to be supplied, e.g. with the ffmpeg CLI tool @code{-f codec2raw}.
@subsection data
Data muxer accepts a single stream with any codec of any type.
The input stream has to be selected using the @code{-map} option with the ffmpeg CLI tool.
No extension is registered, so the format name has to be supplied, e.g. with the ffmpeg CLI tool @code{-f data}.
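As an illustrative sketch, dumping the raw packet payload of the first audio
stream of a file could look like:
@example
ffmpeg -i input.mkv -map 0:a:0 -c copy -f data output.bin
@end example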
@subsection dirac
BBC Dirac video. The Dirac Pro codec is a subset and is standardized as SMPTE VC-2.
Extensions: drc, vc2
@subsection dnxhd
Avid DNxHD video. It is standardized as SMPTE VC-3. Accepts DNxHR streams.
Extensions: dnxhd, dnxhr
@subsection dts
DTS Coherent Acoustics (DCA) audio.
@subsection eac3
Dolby Digital Plus, also known as Enhanced AC-3, audio.
@subsection evc
MPEG-5 Essential Video Coding (EVC) / EVC / MPEG-5 Part 1 EVC video.
Extensions: evc
@subsection g722
ITU-T G.722 audio.
@subsection g723_1
ITU-T G.723.1 audio.
Extensions: tco, rco
@subsection g726
ITU-T G.726 big-endian ("left-justified") audio.
No extension is registered, so the format name has to be supplied, e.g. with the ffmpeg CLI tool @code{-f g726}.
@subsection g726le
ITU-T G.726 little-endian ("right-justified") audio.
No extension is registered, so the format name has to be supplied, e.g. with the ffmpeg CLI tool @code{-f g726le}.
@subsection gsm
Global System for Mobile Communications audio.
@subsection h261
ITU-T H.261 video.
@subsection h263
ITU-T H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 video.
@subsection h264
ITU-T H.264 / MPEG-4 Part 10 AVC video. Bitstream shall be converted to Annex B syntax if it's in length-prefixed mode.
Extensions: h264, 264
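For instance, a sketch that extracts an Annex B elementary stream from an MP4
file (explicitly applying the bitstream filter that converts from
length-prefixed mode) might be:
@example
ffmpeg -i input.mp4 -map 0:v:0 -c:v copy -bsf:v h264_mp4toannexb -f h264 output.264
@end example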
@subsection hevc
ITU-T H.265 / MPEG-H Part 2 HEVC video. Bitstream shall be converted to Annex B syntax if it's in length-prefixed mode.
Extensions: hevc, h265, 265
@subsection m4v
MPEG-4 Part 2 video.
@subsection mjpeg
Motion JPEG video.
Extensions: mjpg, mjpeg
@subsection mlp
Meridian Lossless Packing, also known as Packed PCM, audio.
@subsection mp2
MPEG-1 Audio Layer II audio.
Extensions: mp2, m2a, mpa
@subsection mpeg1video
MPEG-1 Part 2 video.
Extensions: mpg, mpeg, m1v
@subsection mpeg2video
ITU-T H.262 / MPEG-2 Part 2 video.
Extensions: m2v
@subsection obu
AV1 low overhead Open Bitstream Units muxer. Temporal delimiter OBUs will be inserted in all temporal units of the stream.
@subsection rawvideo
Raw uncompressed video.
Extensions: yuv, rgb
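For example, a minimal sketch that decodes an input video to planar YUV 4:2:0
raw frames could be:
@example
ffmpeg -i input.mp4 -pix_fmt yuv420p -f rawvideo output.yuv
@end example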
@subsection sbc
Bluetooth SIG low-complexity subband codec audio.
Extensions: sbc, msbc
@subsection truehd
Dolby TrueHD audio.
Extensions: thd
@subsection vc1
SMPTE 421M / VC-1 video.
@anchor{segment}
@section segment, stream_segment, ssegment
@@ -2371,11 +2003,6 @@ Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.
@item min_seg_duration @var{time}
Set minimum segment duration to @var{time}; the value must be a duration
specification. This prevents the muxer from ending segments at a duration below
this value. Only effective with @code{segment_time}. Default value is "0".
@item segment_atclocktime @var{1|0}
If set to "1" split at regular clock time intervals starting from 00:00
o'clock. The @var{time} value specified in @option{segment_time} is
@@ -2605,6 +2232,106 @@ ffmpeg -i INPUT -f streamhash -hash md5 -
See also the @ref{hash} and @ref{framehash} muxers.
@anchor{fifo}
@section fifo
The fifo pseudo-muxer allows the separation of encoding and muxing by using a
first-in-first-out queue and running the actual muxer in a separate thread. This
is especially useful in combination with the @ref{tee} muxer and can be used to
send data to several destinations with different reliability/writing speed/latency.
API users should be aware that callback functions (interrupt_callback,
io_open and io_close) used within its AVFormatContext must be thread-safe.
The behavior of the fifo muxer if the queue fills up or if the output fails is
selectable:
@itemize @bullet
@item
output can be transparently restarted with configurable delay between retries
based on real time or time of the processed stream.
@item
encoding can be blocked during temporary failure, or continue transparently
dropping packets in case fifo queue fills up.
@end itemize
@table @option
@item fifo_format
Specify the format name. Useful if it cannot be guessed from the
output name suffix.
@item queue_size
Specify size of the queue (number of packets). Default value is 60.
@item format_opts
Specify format options for the underlying muxer. Muxer options can be specified
as a list of @var{key}=@var{value} pairs separated by ':'.
@item drop_pkts_on_overflow @var{bool}
If set to 1 (true), in case the fifo queue fills up, packets will be dropped
rather than blocking the encoder. This makes it possible to continue streaming without
delaying the input, at the cost of omitting part of the stream. By default
this option is set to 0 (false), so in such cases the encoder will be blocked
until the muxer processes some of the packets and none of them is lost.
@item attempt_recovery @var{bool}
If failure occurs, attempt to recover the output. This is especially useful
when used with network output, since it makes it possible to restart streaming transparently.
By default this option is set to 0 (false).
@item max_recovery_attempts
Sets maximum number of successive unsuccessful recovery attempts after which
the output fails permanently. By default this option is set to 0 (unlimited).
@item recovery_wait_time @var{duration}
Waiting time before the next recovery attempt after previous unsuccessful
recovery attempt. Default value is 5 seconds.
@item recovery_wait_streamtime @var{bool}
If set to 0 (false), the real time is used when waiting for the recovery
attempt (i.e. the recovery will be attempted after at least
recovery_wait_time seconds).
If set to 1 (true), the time of the processed stream is taken into account
instead (i.e. the recovery will be attempted after at least @var{recovery_wait_time}
seconds of the stream have been omitted).
By default, this option is set to 0 (false).
@item recover_any_error @var{bool}
If set to 1 (true), recovery will be attempted regardless of type of the error
causing the failure. By default this option is set to 0 (false) and in case of
certain (usually permanent) errors the recovery is not attempted even when
@var{attempt_recovery} is set to 1.
@item restart_with_keyframe @var{bool}
Specify whether to wait for the keyframe after recovering from
queue overflow or failure. This option is set to 0 (false) by default.
@item timeshift @var{duration}
Buffer the specified amount of packets and delay writing the output. Note that
@var{queue_size} must be big enough to store the packets for timeshift. At the
end of the input the fifo buffer is flushed at realtime speed.
@end table
@subsection Examples
@itemize
@item
Stream something to an RTMP server, continue processing the stream at real-time
rate even in case of temporary failure (network outage) and attempt to recover
streaming every second indefinitely.
@example
ffmpeg -re -i ... -c:v libx264 -c:a aac -f fifo -fifo_format flv -map 0:v -map 0:a
-drop_pkts_on_overflow 1 -attempt_recovery 1 -recovery_wait_time 1 rtmp://example.com/live/stream_name
@end example
@end itemize
@anchor{tee}
@section tee
@@ -2737,49 +2464,6 @@ ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac
@end example
@end itemize
@section webm_chunk
WebM Live Chunk Muxer.
This muxer writes out WebM headers and chunks as separate files which can be
consumed by clients that support WebM Live streams via DASH.
@subsection Options
This muxer supports the following options:
@table @option
@item chunk_start_index
Index of the first chunk (defaults to 0).
@item header
Filename of the header where the initialization data will be written.
@item audio_chunk_duration
Duration of each audio chunk in milliseconds (defaults to 5000).
@end table
@subsection Example
@example
ffmpeg -f v4l2 -i /dev/video0 \
-f alsa -i hw:0 \
-map 0:0 \
-c:v libvpx-vp9 \
-s 640x360 -keyint_min 30 -g 30 \
-f webm_chunk \
-header webm_live_video_360.hdr \
-chunk_start_index 1 \
webm_live_video_360_%d.chk \
-map 1:0 \
-c:a libvorbis \
-b:a 128k \
-f webm_chunk \
-header webm_live_audio_128.hdr \
-chunk_start_index 1 \
-audio_chunk_duration 1000 \
webm_live_audio_128_%d.chk
@end example
@section webm_dash_manifest
WebM DASH Manifest muxer.
@@ -2846,4 +2530,47 @@ ffmpeg -f webm_dash_manifest -i video1.webm \
manifest.xml
@end example
@section webm_chunk
WebM Live Chunk Muxer.
This muxer writes out WebM headers and chunks as separate files which can be
consumed by clients that support WebM Live streams via DASH.
@subsection Options
This muxer supports the following options:
@table @option
@item chunk_start_index
Index of the first chunk (defaults to 0).
@item header
Filename of the header where the initialization data will be written.
@item audio_chunk_duration
Duration of each audio chunk in milliseconds (defaults to 5000).
@end table
@subsection Example
@example
ffmpeg -f v4l2 -i /dev/video0 \
-f alsa -i hw:0 \
-map 0:0 \
-c:v libvpx-vp9 \
-s 640x360 -keyint_min 30 -g 30 \
-f webm_chunk \
-header webm_live_video_360.hdr \
-chunk_start_index 1 \
webm_live_video_360_%d.chk \
-map 1:0 \
-c:a libvorbis \
-b:a 128k \
-f webm_chunk \
-header webm_live_audio_128.hdr \
-chunk_start_index 1 \
-audio_chunk_duration 1000 \
webm_live_audio_128_%d.chk
@end example
@c man end MUXERS


@@ -267,11 +267,6 @@ CELL/SPU:
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E/$file/Language_Extensions_for_CBEA_2.4.pdf
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/9F820A5FFA3ECE8C8725716A0062585F/$file/CBE_Handbook_v1.1_24APR2007_pub.pdf
RISC-V-specific:
----------------
The RISC-V Instruction Set Manual, Volume 1, Unprivileged ISA:
https://riscv.org/technical/specifications/
GCC asm links:
--------------
official doc but quite ugly


@@ -198,48 +198,13 @@ Amount of time to preroll video in seconds.
Defaults to @option{0.5}.
@item duplex_mode
Sets the decklink device duplex/profile mode. Must be @samp{unset}, @samp{half}, @samp{full},
@samp{one_sub_device_full}, @samp{one_sub_device_half}, @samp{two_sub_device_full},
@samp{four_sub_device_half}
Sets the decklink device duplex mode. Must be @samp{unset}, @samp{half} or @samp{full}.
Defaults to @samp{unset}.
Note: DeckLink SDK 11.0 has replaced the duplex property with a profile property.
For the DeckLink Duo 2 and DeckLink Quad 2, a profile is shared between any 2
sub-devices that utilize the same connectors. For the DeckLink 8K Pro, a profile
is shared between all 4 sub-devices. So the DeckLink 8K Pro supports four profiles.
Valid profile modes for the DeckLink 8K Pro (with DeckLink SDK >= 11.0):
@samp{one_sub_device_full}, @samp{one_sub_device_half}, @samp{two_sub_device_full},
@samp{four_sub_device_half}
Valid profile modes for DeckLink Quad 2 and DeckLink Duo 2:
@samp{half}, @samp{full}
@item timing_offset
Sets the genlock timing pixel offset on the used output.
Defaults to @samp{unset}.
@item link
Sets the SDI video link configuration on the used output. Must be
@samp{unset}, @samp{single} link SDI, @samp{dual} link SDI or @samp{quad} link
SDI.
Defaults to @samp{unset}.
@item sqd
Enable Square Division Quad Split mode for Quad-link SDI output.
Must be @samp{unset}, @samp{true} or @samp{false}.
Defaults to @option{unset}.
@item level_a
Enable SMPTE Level A mode on the used output.
Must be @samp{unset}, @samp{true} or @samp{false}.
Defaults to @option{unset}.
@item vanc_queue_size
Sets maximum output buffer size in bytes for VANC data. If the buffering reaches this value,
outgoing VANC data will be dropped.
Defaults to @samp{1048576}.
@end table
@subsection Examples
@@ -426,18 +391,13 @@ For more information about SDL, check:
@table @option
@item window_borderless
Set SDL window border off.
Default value is 0 (enable window border).
@item window_title
Set the SDL window title; if not specified it defaults to the filename
specified for the output device.
@item window_enable_quit
Enable quit action (using window button or keyboard key)
when non-zero value is provided.
Default value is 1 (enable quit action).
@item window_fullscreen
Set fullscreen mode when non-zero value is provided.
Default value is zero.
@item icon_title
Set the name of the iconified SDL window; if not specified it is set
to the same value as @var{window_title}.
@item window_size
Set the SDL window size, can be a string of the form
@@ -445,13 +405,18 @@ Set the SDL window size, can be a string of the form
If not specified it defaults to the size of the input video,
downscaled according to the aspect ratio.
@item window_title
Set the SDL window title; if not specified it defaults to the filename
specified for the output device.
@item window_x
@item window_y
Set the position of the window on the screen.
@item window_fullscreen
Set fullscreen mode when non-zero value is provided.
Default value is zero.
@item window_enable_quit
Enable quit action (using window button or keyboard key)
when non-zero value is provided.
Default value is 1 (enable quit action)
@end table
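As a quick sketch (the input name is only illustrative), an input video can be
previewed in a 640x360 borderless SDL window with:
@example
ffmpeg -i input.mp4 -c:v rawvideo -pix_fmt yuv420p -window_size 640x360 -window_borderless 1 -f sdl "SDL preview"
@end example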
@subsection Interactive commands


@@ -92,6 +92,9 @@ For information about compiling FFmpeg on OS/2 see
@chapter Windows
To get help and instructions for building FFmpeg under Windows, check out
the FFmpeg Windows Help Forum at @url{http://ffmpeg.zeranoe.com/forum/}.
@section Native Windows compilation using MinGW or MinGW-w64
FFmpeg can be built to run natively on Windows using the MinGW-w64


@@ -215,38 +215,6 @@ ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
Note that you may need to escape the character "|" which is special for
many shells.
@section concatf
Physical concatenation protocol using a line break delimited list of
resources.
Read and seek from many resources in sequence as if they were
a unique resource.
A URL accepted by this protocol has the syntax:
@example
concatf:@var{URL}
@end example
where @var{URL} is the url containing a line break delimited list of
resources to be concatenated, each one possibly specifying a distinct
protocol. Special characters must be escaped with backslash or single
quotes. See @ref{quoting_and_escaping,,the "Quoting and escaping"
section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} listed in separate lines within
a file @file{split.txt} with @command{ffplay} use the command:
@example
ffplay concatf:split.txt
@end example
Where @file{split.txt} contains the lines:
@example
split1.mpeg
split2.mpeg
split3.mpeg
@end example
@section crypto
AES-encrypted stream reading protocol.
@@ -275,33 +243,6 @@ For example, to convert a GIF file given inline with @command{ffmpeg}:
ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png
@end example
@section fd
File descriptor access protocol.
The accepted syntax is:
@example
fd: -fd @var{file_descriptor}
@end example
If @option{fd} is not specified, by default the stdout file descriptor will be
used for writing, stdin for reading. Unlike the pipe protocol, the fd protocol has
seek support if it corresponds to a regular file. The fd protocol doesn't support
passing a file descriptor via the URL, for security reasons.
This protocol accepts the following options:
@table @option
@item blocksize
Set I/O operation maximum block size, in bytes. Default value is
@code{INT_MAX}, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable if data transmission is slow.
@item fd
Set file descriptor.
@end table
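For instance, a sketch that writes Matroska output to file descriptor 3 (opened
here via shell redirection; file names are only illustrative) might look like:
@example
ffmpeg -i input.mp4 -c copy -f matroska -fd 3 fd: 3>output.mkv
@end example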
@section file
File access protocol.
@@ -465,6 +406,9 @@ Set the Referer header. Include 'Referer: URL' header in HTTP request.
Override the User-Agent header. If not specified the protocol will use a
string describing the libavformat build. ("Lavf/<version>")
@item user-agent
This is a deprecated option; you can use user_agent instead.
@item reconnect_at_eof
If set then eof is treated like an error and causes reconnection, this is useful
for live / endless streams.
@@ -641,35 +585,6 @@ Establish a TLS (HTTPS) connection to Icecast.
icecast://[@var{username}[:@var{password}]@@]@var{server}:@var{port}/@var{mountpoint}
@end example
@section ipfs
InterPlanetary File System (IPFS) protocol support. One can access files stored
on the IPFS network through so-called gateways. These are http(s) endpoints.
This protocol wraps the IPFS native protocols (ipfs:// and ipns://) to be sent
to such a gateway. Users can (and should) host their own node which means this
protocol will use one's local gateway to access files on the IPFS network.
This protocol accepts the following options:
@table @option
@item gateway
Defines the gateway to use. When not set, the protocol will first try
locating the local gateway by looking at @code{$IPFS_GATEWAY}, @code{$IPFS_PATH}
and @code{$HOME/.ipfs/}, in that order.
@end table
One can use this protocol in 2 ways. Using IPFS:
@example
ffplay ipfs://<hash>
@end example
Or the IPNS protocol (IPNS is mutable IPFS):
@example
ffplay ipns://<hash>
@end example
@section mmst
MMS (Microsoft Media Server) protocol over TCP.
@@ -714,7 +629,7 @@ The accepted syntax is:
pipe:[@var{number}]
@end example
If @option{fd} isn't specified, @var{number} is the number corresponding to the file descriptor of the
@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.
@@ -741,8 +656,6 @@ Set I/O operation maximum block size, in bytes. Default value is
@code{INT_MAX}, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable if data transmission is slow.
@item fd
Set file descriptor.
@end table
Note that some formats (typically MOV), require the output protocol to
@@ -803,14 +716,6 @@ Set internal RIST buffer size in milliseconds for retransmission of data.
Default value is 0 which means the librist default (1 sec). Maximum value is 30
seconds.
@item fifo_size
Size of the librist receiver output fifo in number of packets. This must be a
power of 2.
Defaults to 8192 (vs the librist default of 1024).
@item overrun_nonfatal=@var{1|0}
Survive in case of librist fifo buffer overrun. Default value is 0.
@item pkt_size
Set maximum packet size for sending data. 1316 by default.
@@ -896,13 +801,6 @@ be named, by prefixing the type with 'N' and specifying the name before
the value (i.e. @code{NB:myFlag:1}). This option may be used multiple
times to construct arbitrary AMF sequences.
@item rtmp_enhanced_codecs
Specify the list of codecs the client advertises to support in an
enhanced RTMP stream. This option should be set to a comma separated
list of fourcc values, like @code{hvc1,av01,vp09} for multiple codecs
or @code{hvc1} for only one codec. The specified list will be presented
in the "fourCcLive" property of the Connect Command Message.
@item rtmp_flashver
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible;
@@ -948,11 +846,6 @@ URL to player swf file, compute hash/size automatically.
@item rtmp_tcurl
URL of the target stream. Defaults to proto://host[:port]/app.
@item tcp_nodelay=@var{1|0}
Set TCP_NODELAY to disable Nagle's algorithm. Default value is 0.
@emph{Remark: Writing to the socket is currently not optimized to minimize system calls and reduces the efficiency / effect of TCP_NODELAY.}
@end table
For example to read with @command{ffplay} a multimedia resource named
@@ -1160,10 +1053,6 @@ set to 1) or to a default remote address (if set to 0).
@item localport=@var{n}
Set the local RTP port to @var{n}.
@item localaddr=@var{addr}
Local IP address of a network interface used for sending packets or joining
multicast groups.
@item timeout=@var{n}
Set timeout (in microseconds) of socket I/O operations to @var{n}.
@@ -1211,59 +1100,6 @@ Options can be set on the @command{ffmpeg}/@command{ffplay} command
line, or set in code via @code{AVOption}s or in
@code{avformat_open_input}.
@subsection Muxer
The following options are supported.
@table @option
@item rtsp_transport
Set RTSP transport protocols.
It accepts the following values:
@table @samp
@item udp
Use UDP as lower transport protocol.
@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.
@end table
Default value is @samp{0}.
@item rtsp_flags
Set RTSP flags.
The following values are accepted:
@table @samp
@item latm
Use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC.
@item rfc2190
Use RFC 2190 packetization instead of RFC 4629 for H.263.
@item skip_rtcp
Don't send RTCP sender reports.
@item h264_mode0
Use mode 0 for H.264 in RTP.
@item send_bye
Send RTCP BYE packets when finishing.
@end table
Default value is @samp{0}.
@item min_port
Set minimum local UDP port. Default value is 5000.
@item max_port
Set maximum local UDP port. Default value is 65000.
@item buffer_size
Set the maximum socket buffer size in bytes.
@item pkt_size
Set max send packet size (in bytes). Default value is 1472.
@end table
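For example, a sketch of publishing a stream to an RTSP server while forcing
TCP interleaving (the server URL is hypothetical):
@example
ffmpeg -re -i input.mp4 -c copy -rtsp_transport tcp -f rtsp rtsp://server.example.com:8554/live
@end example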
@subsection Demuxer
The following options are supported.
@table @option
@@ -1289,10 +1125,6 @@ Use UDP multicast as lower transport protocol.
@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.
@item https
Use HTTPS tunneling as lower transport protocol, which is useful for
passing proxies and is widely used for security reasons.
@end table
Multiple lower transport protocols may be specified, in that case they are
@@ -1310,9 +1142,6 @@ Accept packets only from negotiated peer address and port.
Act as a server, listening for an incoming connection.
@item prefer_tcp
Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.
@item satip_raw
Export raw MPEG-TS stream instead of demuxing. The flag will simply write out
the raw stream, with the original PAT/PMT/PIDs intact.
@end table
Default value is @samp{none}.
@@ -1325,7 +1154,6 @@ The following flags are accepted:
@item video
@item audio
@item data
@item subtitle
@end table
By default it accepts all media types.
@@ -1336,23 +1164,21 @@ Set minimum local UDP port. Default value is 5000.
@item max_port
Set maximum local UDP port. Default value is 65000.
@item listen_timeout
Set maximum timeout (in seconds) to establish an initial connection. Setting
@option{listen_timeout} > 0 sets @option{rtsp_flags} to @samp{listen}. Default is -1
which means an infinite timeout when @samp{listen} mode is set.
@item timeout
Set maximum timeout (in seconds) to wait for incoming connections.
A value of -1 means infinite (default). This option implies the
@option{rtsp_flags} set to @samp{listen}.
@item reorder_queue_size
Set number of packets to buffer for handling of reordered packets.
@item timeout
@item stimeout
Set socket TCP I/O timeout in microseconds.
@item user_agent
@item user-agent
Override User-Agent header. If not specified, it defaults to the
libavformat identifier string.
@item buffer_size
Set the maximum socket buffer size in bytes.
@end table
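A corresponding receive-side sketch, again forcing TCP transport and setting a
socket I/O timeout (the values are purely illustrative):
@example
ffplay -rtsp_transport tcp -timeout 5000000 rtsp://server.example.com:8554/live
@end example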
When receiving data over UDP, the demuxer tries to reorder received packets
@@ -1637,12 +1463,6 @@ when the old encryption key is decommissioned. Default is -1.
-1 means auto (0x1000 in srt library). The range for
this option is integers in the 0 - @code{INT_MAX}.
@item snddropdelay=@var{microseconds}
The sender's extra delay before dropping packets. This delay is
added to the default drop delay time interval value.
Special value -1: Do not drop packets on the sender at all.
@item payload_size=@var{bytes}
Sets the maximum declared size of a packet transferred
during the single call to the sending function in Live
@@ -1742,9 +1562,6 @@ This option doesn't make sense in a Rendezvous connection; the result
might be that one side will simply override the value from the other
side, and it's a matter of luck which one wins
@item srt_streamid=@var{string}
Alias for @samp{streamid} to avoid conflict with ffmpeg command line option.
@item smoother=@var{live|file}
The type of Smoother used for the transmission for that socket, which
is responsible for the transmission and congestion control. The Smoother
@@ -1794,11 +1611,6 @@ Default is -1. -1 means auto (off with 0 seconds in live mode, on with 180
seconds in file mode). The range for this option is integers in the
0 - @code{INT_MAX}.
@item tsbpd=@var{1|0}
When true, use Timestamp-based Packet Delivery mode. The default behavior
depends on the transmission type: enabled in live mode, disabled in file
mode.
@end table
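As an illustrative transmit-side sketch using two of the options above (the
host and stream id are hypothetical):
@example
ffmpeg -re -i input.mp4 -c copy -f mpegts -payload_size 1316 -srt_streamid mystream srt://receiver.example.com:9000
@end example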
For more information see: @url{https://github.com/Haivision/srt}.
@@ -1889,12 +1701,6 @@ The list of supported options follows.
Listen for an incoming connection. 0 disables listen, 1 enables listen in
single client mode, 2 enables listen in multi-client mode. Default value is 0.
@item local_addr=@var{addr}
Local IP address of a network interface used for tcp socket connect.
@item local_port=@var{port}
Local port used for tcp socket connect.
@item timeout=@var{microseconds}
Set raise error timeout, expressed in microseconds.
@@ -1913,8 +1719,6 @@ Set send buffer size, expressed bytes.
@item tcp_nodelay=@var{1|0}
Set TCP_NODELAY to disable Nagle's algorithm. Default value is 0.
@emph{Remark: Writing to the socket is currently not optimized to minimize system calls and reduces the efficiency / effect of TCP_NODELAY.}
@item tcp_mss=@var{bytes}
Set maximum segment size for outgoing TCP packets, expressed in bytes.
@end table
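A minimal listener/sender sketch over a local TCP socket (the port is chosen
arbitrarily):
@example
ffmpeg -i input.mkv -f mpegts tcp://127.0.0.1:1234?listen=1
ffplay tcp://127.0.0.1:1234
@end example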
@@ -2168,4 +1972,5 @@ decoding errors.
@end table
@c man end PROTOCOLS

View File

@@ -11,8 +11,18 @@ programmatic use.
@table @option
@item uchl, used_chlayout
Set used input channel layout. Default is unset. This option is
@item ich, in_channel_count
Set the number of input channels. Default value is 0. Setting this
value is not mandatory if the corresponding channel layout
@option{in_channel_layout} is set.
@item och, out_channel_count
Set the number of output channels. Default value is 0. Setting this
value is not mandatory if the corresponding channel layout
@option{out_channel_layout} is set.
@item uch, used_channel_count
Set the number of used input channels. Default value is 0. This option is
only used for special remapping.
@item isr, in_sample_rate
@@ -31,8 +41,8 @@ Specify the output sample format. It is set by default to @code{none}.
Set the internal sample format. Default value is @code{none}.
This will automatically be chosen when it is not explicitly set.
@item ichl, in_chlayout
@item ochl, out_chlayout
@item icl, in_channel_layout
@item ocl, out_channel_layout
Set the input/output channel layout.
See @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}
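As a hedged illustration (assuming the @code{aresample} filter forwards these
libswresample options), a conversion to stereo could look like:
@example
ffmpeg -i input.wav -af aresample=out_chlayout=stereo output.wav
@end example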

View File

@@ -20,45 +20,8 @@
# License along with FFmpeg; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# Texinfo 7.0 changed the syntax of various functions.
# Provide a shim for older versions.
sub ff_set_from_init_file($$) {
my $key = shift;
my $value = shift;
if (exists &{'texinfo_set_from_init_file'}) {
texinfo_set_from_init_file($key, $value);
} else {
set_from_init_file($key, $value);
}
}
sub ff_get_conf($) {
my $key = shift;
if (exists &{'texinfo_get_conf'}) {
texinfo_get_conf($key);
} else {
get_conf($key);
}
}
sub get_formatting_function($$) {
my $obj = shift;
my $func = shift;
my $sub = $obj->can('formatting_function');
if ($sub) {
return $obj->formatting_function($func);
} else {
return $obj->{$func};
}
}
# determine texinfo version
my $program_version_num = version->declare(ff_get_conf('PACKAGE_VERSION'))->numify;
my $program_version_6_8 = $program_version_num >= 6.008000;
# no navigation elements
ff_set_from_init_file('HEADERS', 0);
set_from_init_file('HEADERS', 0);
sub ffmpeg_heading_command($$$$$)
{
@@ -92,7 +55,7 @@ sub ffmpeg_heading_command($$$$$)
$element = $command->{'parent'};
}
if ($element) {
$result .= &{get_formatting_function($self, 'format_element_header')}($self, $cmdname,
$result .= &{$self->{'format_element_header'}}($self, $cmdname,
$command, $element);
}
@@ -149,11 +112,7 @@ sub ffmpeg_heading_command($$$$$)
$cmdname
= $Texinfo::Common::level_to_structuring_command{$cmdname}->[$heading_level];
}
# format_heading_text expects an array of headings for texinfo >= 7.0
if ($program_version_num >= 7.000000) {
$heading = [$heading];
}
$result .= &{get_formatting_function($self,'format_heading_text')}(
$result .= &{$self->{'format_heading_text'}}(
$self, $cmdname, $heading,
$heading_level +
$self->get_conf('CHAPTER_HEADER_LEVEL') - 1, $command);
@@ -168,18 +127,14 @@ foreach my $command (keys(%Texinfo::Common::sectioning_commands), 'node') {
}
# print the TOC where @contents is used
if ($program_version_6_8) {
ff_set_from_init_file('CONTENTS_OUTPUT_LOCATION', 'inline');
} else {
ff_set_from_init_file('INLINE_CONTENTS', 1);
}
set_from_init_file('INLINE_CONTENTS', 1);
# make chapters <h2>
ff_set_from_init_file('CHAPTER_HEADER_LEVEL', 2);
set_from_init_file('CHAPTER_HEADER_LEVEL', 2);
# Do not add <hr>
ff_set_from_init_file('DEFAULT_RULE', '');
ff_set_from_init_file('BIG_RULE', '');
set_from_init_file('DEFAULT_RULE', '');
set_from_init_file('BIG_RULE', '');
# Customized file beginning
sub ffmpeg_begin_file($$$)
@@ -196,18 +151,7 @@ sub ffmpeg_begin_file($$$)
my ($title, $description, $encoding, $date, $css_lines,
$doctype, $bodytext, $copying_comment, $after_body_open,
$extra_head, $program_and_version, $program_homepage,
$program, $generator);
if ($program_version_num >= 7.000000) {
($title, $description, $encoding, $date, $css_lines,
$doctype, $bodytext, $copying_comment, $after_body_open,
$extra_head, $program_and_version, $program_homepage,
$program, $generator) = $self->_file_header_information($command);
} else {
($title, $description, $encoding, $date, $css_lines,
$doctype, $bodytext, $copying_comment, $after_body_open,
$extra_head, $program_and_version, $program_homepage,
$program, $generator) = $self->_file_header_informations($command);
}
$program, $generator) = $self->_file_header_informations($command);
my $links = $self->_get_links ($filename, $element);
@@ -240,11 +184,7 @@ EOT
return $head1 . $head_title . $head2 . $head_title . $head3;
}
if ($program_version_6_8) {
texinfo_register_formatting_function('format_begin_file', \&ffmpeg_begin_file);
} else {
texinfo_register_formatting_function('begin_file', \&ffmpeg_begin_file);
}
texinfo_register_formatting_function('begin_file', \&ffmpeg_begin_file);
sub ffmpeg_program_string($)
{
@@ -261,17 +201,13 @@ sub ffmpeg_program_string($)
$self->gdt('This document was generated automatically.'));
}
}
if ($program_version_6_8) {
texinfo_register_formatting_function('format_program_string', \&ffmpeg_program_string);
} else {
texinfo_register_formatting_function('program_string', \&ffmpeg_program_string);
}
texinfo_register_formatting_function('program_string', \&ffmpeg_program_string);
# Customized file ending
sub ffmpeg_end_file($)
{
my $self = shift;
my $program_string = &{get_formatting_function($self,'format_program_string')}($self);
my $program_string = &{$self->{'format_program_string'}}($self);
my $program_text = <<EOT;
<p style="font-size: small;">
$program_string
@@ -284,15 +220,11 @@ EOT
EOT
return $program_text . $footer;
}
if ($program_version_6_8) {
texinfo_register_formatting_function('format_end_file', \&ffmpeg_end_file);
} else {
texinfo_register_formatting_function('end_file', \&ffmpeg_end_file);
}
texinfo_register_formatting_function('end_file', \&ffmpeg_end_file);
# Dummy title command
# Ignore title. Title is handled through ffmpeg_begin_file().
ff_set_from_init_file('USE_TITLEPAGE_FOR_TITLE', 1);
set_from_init_file('USE_TITLEPAGE_FOR_TITLE', 1);
sub ffmpeg_title($$$$)
{
return '';
@@ -310,14 +242,8 @@ sub ffmpeg_float($$$$$)
my $args = shift;
my $content = shift;
my ($caption, $prepended);
if ($program_version_num >= 7.000000) {
($caption, $prepended) = Texinfo::Convert::Converter::float_name_caption($self,
$command);
} else {
($caption, $prepended) = Texinfo::Common::float_name_caption($self,
$command);
}
my ($caption, $prepended) = Texinfo::Common::float_name_caption($self,
$command);
my $caption_text = '';
my $prepended_text;
my $prepended_save = '';
@@ -389,13 +315,8 @@ sub ffmpeg_float($$$$$)
$caption->{'args'}->[0], 'float caption');
}
if ($prepended_text.$caption_text ne '') {
if ($program_version_num >= 7.000000) {
$prepended_text = $self->html_attribute_class('div',['float-caption']). '>'
. $prepended_text;
} else {
$prepended_text = $self->_attribute_class('div','float-caption'). '>'
. $prepended_text;
}
$prepended_text = $self->_attribute_class('div','float-caption'). '>'
. $prepended_text;
$caption_text .= '</div>';
}
my $html_class = '';
@@ -408,13 +329,8 @@ sub ffmpeg_float($$$$$)
$prepended_text = '';
$caption_text = '';
}
if ($program_version_num >= 7.000000) {
return $self->html_attribute_class('div', [$html_class]). '>' . "\n" .
$prepended_text . $caption_text . $content . '</div>';
} else {
return $self->_attribute_class('div', $html_class). '>' . "\n" .
$prepended_text . $caption_text . $content . '</div>';
}
return $self->_attribute_class('div', $html_class). '>' . "\n" .
$prepended_text . $caption_text . $content . '</div>';
}
texinfo_register_command_formatting('float',

File diff suppressed because it is too large.

View File

@@ -695,8 +695,6 @@ FL+FR+FC+LFE+SL+SR
FL+FR+FC+BC+SL+SR
@item 6.0(front)
FL+FR+FLC+FRC+SL+SR
@item 3.1.2
FL+FR+FC+LFE+TFL+TFR
@item hexagonal
FL+FR+FC+BL+BR+BC
@item 6.1
@@ -715,48 +713,28 @@ FL+FR+FC+LFE+BL+BR+SL+SR
FL+FR+FC+LFE+BL+BR+FLC+FRC
@item 7.1(wide-side)
FL+FR+FC+LFE+FLC+FRC+SL+SR
@item 5.1.2
FL+FR+FC+LFE+BL+BR+TFL+TFR
@item octagonal
FL+FR+FC+BL+BR+BC+SL+SR
@item cube
FL+FR+BL+BR+TFL+TFR+TBL+TBR
@item 5.1.4
FL+FR+FC+LFE+BL+BR+TFL+TFR+TBL+TBR
@item 7.1.2
FL+FR+FC+LFE+BL+BR+SL+SR+TFL+TFR
@item 7.1.4
FL+FR+FC+LFE+BL+BR+SL+SR+TFL+TFR+TBL+TBR
@item hexadecagonal
FL+FR+FC+BL+BR+BC+SL+SR+WL+WR+TBL+TBR+TBC+TFC+TFL+TFR
@item downmix
DL+DR
@item 22.2
FL+FR+FC+LFE+BL+BR+FLC+FRC+BC+SL+SR+TC+TFL+TFC+TFR+TBL+TBC+TBR+LFE2+TSL+TSR+BFC+BFL+BFR
@end table
A custom channel layout can be specified as a sequence of terms, separated by '+'.
Each term can be:
A custom channel layout can be specified as a sequence of terms, separated by
'+' or '|'. Each term can be:
@itemize
@item
the name of a single channel (e.g. @samp{FL}, @samp{FR}, @samp{FC}, @samp{LFE}, etc.),
each optionally containing a custom name after a '@@', (e.g. @samp{FL@@Left},
@samp{FR@@Right}, @samp{FC@@Center}, @samp{LFE@@Low_Frequency}, etc.)
@end itemize
A standard channel layout can be specified by the following:
@itemize
@item
the name of a single channel (e.g. @samp{FL}, @samp{FR}, @samp{FC}, @samp{LFE}, etc.)
@item
the name of a standard channel layout (e.g. @samp{mono},
@samp{stereo}, @samp{4.0}, @samp{quad}, @samp{5.0}, etc.)
@item
the name of a single channel (e.g. @samp{FL}, @samp{FR}, @samp{FC}, @samp{LFE}, etc.)
@item
a number of channels, in decimal, followed by 'c', yielding the default channel
layout for that number of channels (see the function
@code{av_channel_layout_default}). Note that not all channel counts have a
@code{av_get_default_channel_layout}). Note that not all channel counts have a
default layout.
@item
@@ -773,7 +751,7 @@ Before libavutil version 53 the trailing character "c" to specify a number of
channels was optional, but now it is required, while a channel layout mask can
also be specified as a decimal number (if and only if not followed by "c" or "C").
See also the function @code{av_channel_layout_from_string} defined in
See also the function @code{av_get_channel_layout} defined in
@file{libavutil/channel_layout.h}.
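For instance, each of the following strings denotes a valid channel layout
under the rules above (the last one being a decimal channel mask,
63 = FL+FR+FC+LFE+BL+BR):
@example
FL+FR+LFE
5.0
6c
63
@end example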
@c man end SYNTAX
@@ -1085,13 +1063,13 @@ indication of the corresponding powers of 10 and of 2.
@item T
10^12 / 2^40
@item P
10^15 / 2^50
10^15 / 2^40
@item E
10^18 / 2^60
10^18 / 2^50
@item Z
10^21 / 2^70
10^21 / 2^60
@item Y
10^24 / 2^80
10^24 / 2^70
@end table
@c man end EXPRESSION EVALUATION

View File

@@ -418,4 +418,4 @@ done:
When all of this is done, you can submit your patch to the ffmpeg-devel
mailing-list for review. If you need any help, feel free to come on our IRC
channel, #ffmpeg-devel on irc.libera.chat.
channel, #ffmpeg-devel on irc.freenode.net.

ffbuild/.gitignore
View File

@@ -1,6 +1,4 @@
/.config
/bin2c
/bin2c.exe
/config.fate
/config.log
/config.mak

View File

@@ -8,15 +8,10 @@ OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes)
OBJS-$(HAVE_MIPSDSP) += $(MIPSDSP-OBJS) $(MIPSDSP-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)
OBJS-$(HAVE_MSA) += $(MSA-OBJS) $(MSA-OBJS-yes)
OBJS-$(HAVE_MMI) += $(MMI-OBJS) $(MMI-OBJS-yes)
OBJS-$(HAVE_LSX) += $(LSX-OBJS) $(LSX-OBJS-yes)
OBJS-$(HAVE_LASX) += $(LASX-OBJS) $(LASX-OBJS-yes)
OBJS-$(HAVE_MMI) += $(MMI-OBJS) $(MMI-OBJS-yes)
OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
OBJS-$(HAVE_VSX) += $(VSX-OBJS) $(VSX-OBJS-yes)
OBJS-$(HAVE_RV) += $(RV-OBJS) $(RV-OBJS-yes)
OBJS-$(HAVE_RVV) += $(RVV-OBJS) $(RVV-OBJS-yes)
OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
OBJS-$(HAVE_X86ASM) += $(X86ASM-OBJS) $(X86ASM-OBJS-yes)

View File

@@ -1,76 +0,0 @@
/*
* This file is part of FFmpeg.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <string.h>
#include <stdio.h>
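/*
 * Build helper: reads a binary file and writes a C source file defining an
 * unsigned char array with the file's contents plus an unsigned int length,
 * so the data can be linked into the binaries (the build system uses it for
 * compiled CUDA PTX and Metal libraries).
 * Usage: bin2c <input> <output> [identifier]
 */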
int main(int argc, char **argv)
{
const char *name;
FILE *input, *output;
unsigned int length = 0;
unsigned char data;
if (argc < 3 || argc > 4)
return 1;
input = fopen(argv[1], "rb");
if (!input)
return -1;
output = fopen(argv[2], "wb");
if (!output)
return -1;
if (argc == 4) {
name = argv[3];
} else {
size_t arglen = strlen(argv[1]);
name = argv[1];
for (int i = 0; i < arglen; i++) {
if (argv[1][i] == '.')
argv[1][i] = '_';
else if (argv[1][i] == '/')
name = &argv[1][i+1];
}
}
fprintf(output, "const unsigned char ff_%s_data[] = { ", name);
while (fread(&data, 1, 1, input) > 0) {
fprintf(output, "0x%02x, ", data);
length++;
}
fprintf(output, "0x00 };\n");
fprintf(output, "const unsigned int ff_%s_len = %u;\n", name, length);
fclose(output);
if (ferror(input) || !feof(input))
return -1;
fclose(input);
return 0;
}

View File

@@ -12,13 +12,10 @@ endif
ifndef SUBDIR
BIN2CEXE = ffbuild/bin2c$(HOSTEXESUF)
BIN2C = $(BIN2CEXE)
ifndef V
Q = @
ECHO = printf "$(1)\t%s\n" $(2)
BRIEF = CC CXX OBJCC HOSTCC HOSTLD AS X86ASM AR LD STRIP CP WINDRES NVCC BIN2C
BRIEF = CC CXX OBJCC HOSTCC HOSTLD AS X86ASM AR LD STRIP CP WINDRES NVCC
SILENT = DEPCC DEPHOSTCC DEPAS DEPX86ASM RANLIB RM
MSG = $@
@@ -29,8 +26,7 @@ $(foreach VAR,$(SILENT),$(eval override $(VAR) = @$($(VAR))))
$(eval INSTALL = @$(call ECHO,INSTALL,$$(^:$(SRC_DIR)/%=%)); $(INSTALL))
endif
# Prepend to a recursively expanded variable without making it simply expanded.
PREPEND = $(eval $(1) = $(patsubst %,$$(%), $(2)) $(value $(1)))
ALLFFLIBS = avcodec avdevice avfilter avformat avresample avutil postproc swscale swresample
# NASM requires -I path terminated with /
IFLAGS := -I. -I$(SRC_LINK)/
@@ -40,9 +36,7 @@ CCFLAGS = $(CPPFLAGS) $(CFLAGS)
OBJCFLAGS += $(EOBJCFLAGS)
OBJCCFLAGS = $(CPPFLAGS) $(CFLAGS) $(OBJCFLAGS)
ASFLAGS := $(CPPFLAGS) $(ASFLAGS)
# Use PREPEND here so that later (target-dependent) additions to CPPFLAGS
# end up in CXXFLAGS.
$(call PREPEND,CXXFLAGS, CPPFLAGS CFLAGS)
CXXFLAGS := $(CPPFLAGS) $(CFLAGS) $(CXXFLAGS)
X86ASMFLAGS += $(IFLAGS:%=%/) -I$(<D)/ -Pconfig.asm
HOSTCCFLAGS = $(IFLAGS) $(HOSTCPPFLAGS) $(HOSTCFLAGS)
@@ -62,8 +56,6 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
COMPILE_NVCC = $(call COMPILE,NVCC)
COMPILE_MMI = $(call COMPILE,CC,MMIFLAGS)
COMPILE_MSA = $(call COMPILE,CC,MSAFLAGS)
COMPILE_LSX = $(call COMPILE,CC,LSXFLAGS)
COMPILE_LASX = $(call COMPILE,CC,LASXFLAGS)
%_mmi.o: %_mmi.c
$(COMPILE_MMI)
@@ -71,12 +63,6 @@ COMPILE_LASX = $(call COMPILE,CC,LASXFLAGS)
%_msa.o: %_msa.c
$(COMPILE_MSA)
%_lsx.o: %_lsx.c
$(COMPILE_LSX)
%_lasx.o: %_lasx.c
$(COMPILE_LASX)
%.o: %.c
$(COMPILE_C)
@@ -104,7 +90,7 @@ COMPILE_LASX = $(call COMPILE,CC,LASXFLAGS)
-$(if $(ASMSTRIPFLAGS), $(STRIP) $(ASMSTRIPFLAGS) $@)
%.o: %.rc
$(WINDRES) $(IFLAGS) $(foreach ARG,$(CC_DEPFLAGS),--preprocessor-arg "$(ARG)") -o $@ $<
$(WINDRES) $(IFLAGS) --preprocessor "$(DEPWINDRES) -E -xc-header -DRC_INVOKED $(CC_DEPFLAGS)" -o $@ $<
%.i: %.c
$(CC) $(CCFLAGS) $(CC_E) $<
@@ -112,35 +98,11 @@ COMPILE_LASX = $(call COMPILE,CC,LASXFLAGS)
%.h.c:
$(Q)echo '#include "$*.h"' >$@
$(BIN2CEXE): ffbuild/bin2c_host.o
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTEXTRALIBS)
%.metal.air: %.metal
$(METALCC) $< -o $@
%.metallib: %.metal.air
$(METALLIB) --split-module-without-linking $< -o $@
%.metallib.c: %.metallib $(BIN2CEXE)
$(BIN2C) $< $@ $(subst .,_,$(basename $(notdir $@)))
%.ptx: %.cu $(SRC_PATH)/compat/cuda/cuda_runtime.h
$(COMPILE_NVCC)
ifdef CONFIG_PTX_COMPRESSION
%.ptx.gz: TAG = GZIP
%.ptx.gz: %.ptx
$(M)gzip -c9 $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<) >$@
%.ptx.c: %.ptx.gz $(BIN2CEXE)
$(BIN2C) $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<) $@ $(subst .,_,$(basename $(notdir $@)))
else
%.ptx.c: %.ptx $(BIN2CEXE)
$(BIN2C) $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<) $@ $(subst .,_,$(basename $(notdir $@)))
endif
clean::
$(RM) $(BIN2CEXE)
%.ptx.c: %.ptx
$(Q)sh $(SRC_PATH)/compat/cuda/ptx2c.sh $@ $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<)
%.c %.h %.pc %.ver %.version: TAG = GEN
@@ -160,8 +122,6 @@ include $(SRC_PATH)/ffbuild/arch.mak
OBJS += $(OBJS-yes)
SLIBOBJS += $(SLIBOBJS-yes)
SHLIBOBJS += $(SHLIBOBJS-yes)
STLIBOBJS += $(STLIBOBJS-yes)
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS)
TESTPROGS += $(TESTPROGS-yes)
@@ -170,8 +130,6 @@ FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(foreach lib,EXTRALIBS-$(NAME) $(FFLIBS:%=
OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
SHLIBOBJS := $(sort $(SHLIBOBJS:%=$(SUBDIR)%))
STLIBOBJS := $(sort $(STLIBOBJS:%=$(SUBDIR)%))
TESTOBJS := $(TESTOBJS:%=$(SUBDIR)tests/%) $(TESTPROGS:%=$(SUBDIR)tests/%.o)
TESTPROGS := $(TESTPROGS:%=$(SUBDIR)tests/%$(EXESUF))
HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o)
@@ -193,7 +151,7 @@ HOBJS = $(filter-out $(SKIPHEADERS:.h=.h.o),$(ALLHEADERS:.h=.h.o))
PTXOBJS = $(filter %.ptx.o,$(OBJS))
$(HOBJS): CCFLAGS += $(CFLAGS_HEADERS)
checkheaders: $(HOBJS)
.SECONDARY: $(HOBJS:.o=.c) $(PTXOBJS:.o=.c) $(PTXOBJS:.o=.gz) $(PTXOBJS:.o=)
.SECONDARY: $(HOBJS:.o=.c) $(PTXOBJS:.o=.c) $(PTXOBJS:.o=)
alltools: $(TOOLS)
@@ -207,14 +165,12 @@ $(OBJS): | $(sort $(dir $(OBJS)))
$(HOBJS): | $(sort $(dir $(HOBJS)))
$(HOSTOBJS): | $(sort $(dir $(HOSTOBJS)))
$(SLIBOBJS): | $(sort $(dir $(SLIBOBJS)))
$(SHLIBOBJS): | $(sort $(dir $(SHLIBOBJS)))
$(STLIBOBJS): | $(sort $(dir $(STLIBOBJS)))
$(TESTOBJS): | $(sort $(dir $(TESTOBJS)))
$(TOOLOBJS): | tools
OUTDIRS := $(OUTDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(SHLIBOBJS) $(STLIBOBJS) $(TESTOBJS))
OUTDIRS := $(OUTDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
CLEANSUFFIXES = *.d *.gcda *.gcno *.h.c *.ho *.map *.o *.pc *.ptx *.ptx.gz *.ptx.c *.ver *.version *$(DEFAULT_X86ASMD).asm *~ *.ilk *.pdb
CLEANSUFFIXES = *.d *.gcda *.gcno *.h.c *.ho *.map *.o *.pc *.ptx *.ptx.c *.ver *.version *$(DEFAULT_X86ASMD).asm *~ *.ilk *.pdb
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
define RULES
@@ -224,4 +180,4 @@ endef
$(eval $(RULES))
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SHLIBOBJS:.o=.d) $(STLIBOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_X86ASMD).d)
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_X86ASMD).d)

View File

@@ -14,26 +14,10 @@ INSTHEADERS := $(INSTHEADERS) $(HEADERS:%=$(SUBDIR)%)
all-$(CONFIG_STATIC): $(SUBDIR)$(LIBNAME) $(SUBDIR)lib$(FULLNAME).pc
all-$(CONFIG_SHARED): $(SUBDIR)$(SLIBNAME) $(SUBDIR)lib$(FULLNAME).pc
LIBOBJS := $(OBJS) $(SHLIBOBJS) $(STLIBOBJS) $(SUBDIR)%.h.o $(TESTOBJS)
LIBOBJS := $(OBJS) $(SUBDIR)%.h.o $(TESTOBJS)
$(LIBOBJS) $(LIBOBJS:.o=.s) $(LIBOBJS:.o=.i): CPPFLAGS += -DHAVE_AV_CONFIG_H
ifdef CONFIG_SHARED
# In case both shared libs and static libs are enabled, it can happen
# that a user might want to link e.g. libavformat statically, but
# libavcodec and the other libs dynamically. In this case
# libavformat won't be able to access libavcodec's internal symbols,
# so that they have to be duplicated into the archive just like
# for purely shared builds.
# Test programs are always statically linked against their library
# to be able to access their library's internals, even with shared builds.
# Yet linking against dependent libraries still uses dynamic linking.
# This means that we are in the scenario described above.
# In case only static libs are used, the linker will only use
# one of these copies; this depends on the duplicated object files
# containing exactly the same symbols.
OBJS += $(SHLIBOBJS)
endif
$(SUBDIR)$(LIBNAME): $(OBJS) $(STLIBOBJS)
$(SUBDIR)$(LIBNAME): $(OBJS)
$(RM) $@
$(AR) $(ARFLAGS) $(AR_O) $^
$(RANLIB) $@
@@ -52,8 +36,8 @@ $(LIBOBJS): CPPFLAGS += -DBUILDING_$(NAME)
$(TESTPROGS) $(TOOLS): %$(EXESUF): %.o
$$(LD) $(LDFLAGS) $(LDEXEFLAGS) $$(LD_O) $$(filter %.o,$$^) $$(THISLIB) $(FFEXTRALIBS) $$(EXTRALIBS-$$(*F)) $$(ELIBS)
$(SUBDIR)lib$(NAME).version: $(SUBDIR)version.h $(SUBDIR)version_major.h | $(SUBDIR)
$$(M) $$(SRC_PATH)/ffbuild/libversion.sh $(NAME) $$^ > $$@
$(SUBDIR)lib$(NAME).version: $(SUBDIR)version.h | $(SUBDIR)
$$(M) $$(SRC_PATH)/ffbuild/libversion.sh $(NAME) $$< > $$@
$(SUBDIR)lib$(FULLNAME).pc: $(SUBDIR)version.h ffbuild/config.sh | $(SUBDIR)
$$(M) $$(SRC_PATH)/ffbuild/pkgconfig_generate.sh $(NAME) "$(DESC)"
@@ -64,7 +48,7 @@ $(SUBDIR)lib$(NAME).ver: $(SUBDIR)lib$(NAME).v $(OBJS)
$(SUBDIR)$(SLIBNAME): $(SUBDIR)$(SLIBNAME_WITH_MAJOR)
$(Q)cd ./$(SUBDIR) && $(LN_S) $(SLIBNAME_WITH_MAJOR) $(SLIBNAME)
$(SUBDIR)$(SLIBNAME_WITH_MAJOR): $(OBJS) $(SHLIBOBJS) $(SLIBOBJS) $(SUBDIR)lib$(NAME).ver
$(SUBDIR)$(SLIBNAME_WITH_MAJOR): $(OBJS) $(SLIBOBJS) $(SUBDIR)lib$(NAME).ver
$(SLIB_CREATE_DEF_CMD)
$$(LD) $(SHFLAGS) $(LDFLAGS) $(LDSOFLAGS) $$(LD_O) $$(filter %.o,$$^) $(FFEXTRALIBS)
$(SLIB_EXTRA_CMD)

View File

@@ -5,12 +5,8 @@ toupper(){
name=lib$1
ucname=$(toupper ${name})
file=$2
file2=$3
eval $(awk "/#define ${ucname}_VERSION_M/ { print \$2 \"=\" \$3 }" "$file")
if [ -f "$file2" ]; then
eval $(awk "/#define ${ucname}_VERSION_M/ { print \$2 \"=\" \$3 }" "$file2")
fi
eval ${ucname}_VERSION=\$${ucname}_VERSION_MAJOR.\$${ucname}_VERSION_MINOR.\$${ucname}_VERSION_MICRO
eval echo "${name}_VERSION=\$${ucname}_VERSION"
eval echo "${name}_VERSION_MAJOR=\$${ucname}_VERSION_MAJOR"

View File

@@ -9,24 +9,15 @@ AVBASENAMES = ffmpeg ffplay ffprobe
ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
OBJS-ffmpeg += \
fftools/ffmpeg_dec.o \
fftools/ffmpeg_demux.o \
fftools/ffmpeg_enc.o \
fftools/ffmpeg_filter.o \
fftools/ffmpeg_hw.o \
fftools/ffmpeg_mux.o \
fftools/ffmpeg_mux_init.o \
fftools/ffmpeg_opt.o \
fftools/objpool.o \
fftools/sync_queue.o \
fftools/thread_queue.o \
OBJS-ffmpeg += fftools/ffmpeg_opt.o fftools/ffmpeg_filter.o fftools/ffmpeg_hw.o
OBJS-ffmpeg-$(CONFIG_LIBMFX) += fftools/ffmpeg_qsv.o
ifndef CONFIG_VIDEOTOOLBOX
OBJS-ffmpeg-$(CONFIG_VDA) += fftools/ffmpeg_videotoolbox.o
endif
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += fftools/ffmpeg_videotoolbox.o
define DOFFTOOL
OBJS-$(1) += fftools/cmdutils.o fftools/opt_common.o fftools/$(1).o $(OBJS-$(1)-yes)
ifdef HAVE_GNU_WINDRES
OBJS-$(1) += fftools/fftoolsres.o
endif
OBJS-$(1) += fftools/cmdutils.o fftools/$(1).o $(OBJS-$(1)-yes)
$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
$$(OBJS-$(1)): | fftools
$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1))

File diff suppressed because it is too large.

View File

@@ -44,16 +44,33 @@ extern const char program_name[];
*/
extern const int program_birth_year;
extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB];
extern AVFormatContext *avformat_opts;
extern AVDictionary *sws_dict;
extern AVDictionary *swr_opts;
extern AVDictionary *format_opts, *codec_opts;
extern AVDictionary *format_opts, *codec_opts, *resample_opts;
extern int hide_banner;
/**
* Register a program-specific cleanup routine.
*/
void register_exit(void (*cb)(int ret));
/**
* Wraps exit with a program-specific cleanup routine.
*/
void exit_program(int ret) av_noreturn;
/**
* Initialize dynamic library loading
*/
void init_dynload(void);
/**
* Initialize the cmdutils option system, in particular
* allocate the *_opts contexts.
*/
void init_opts(void);
/**
* Uninitialize the cmdutils option system, in particular
* free the *_opts contexts and their contents.
@@ -66,12 +83,28 @@ void uninit_opts(void);
*/
void log_callback_help(void* ptr, int level, const char* fmt, va_list vl);
/**
* Override the cpuflags.
*/
int opt_cpuflags(void *optctx, const char *opt, const char *arg);
/**
* Fallback for options that are not explicitly handled, these will be
* parsed through AVOptions.
*/
int opt_default(void *optctx, const char *opt, const char *arg);
/**
* Set the libav* libraries log level.
*/
int opt_loglevel(void *optctx, const char *opt, const char *arg);
int opt_report(void *optctx, const char *opt, const char *arg);
int opt_max_alloc(void *optctx, const char *opt, const char *arg);
int opt_codec_debug(void *optctx, const char *opt, const char *arg);
/**
* Limit the execution time.
*/
@@ -79,6 +112,8 @@ int opt_timelimit(void *optctx, const char *opt, const char *arg);
/**
* Parse a string and return its corresponding value as a double.
* Exit from the application if the string cannot be correctly
* parsed or the corresponding value is invalid.
*
* @param context the context of the value to be set (e.g. the
* corresponding command line option name)
@@ -88,8 +123,25 @@ int opt_timelimit(void *optctx, const char *opt, const char *arg);
* @param min the minimum valid accepted value
* @param max the maximum valid accepted value
*/
int parse_number(const char *context, const char *numstr, int type,
double min, double max, double *dst);
double parse_number_or_die(const char *context, const char *numstr, int type,
double min, double max);
/**
* Parse a string specifying a time and return its corresponding
* value as a number of microseconds. Exit from the application if
* the string cannot be correctly parsed.
*
* @param context the context of the value to be set (e.g. the
* corresponding command line option name)
* @param timestr the string to be parsed
* @param is_duration a flag which tells how to interpret timestr, if
* not zero timestr is interpreted as a duration, otherwise as a
* date
*
* @see av_parse_time()
*/
int64_t parse_time_or_die(const char *context, const char *timestr,
int is_duration);
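/* Illustrative example: parse_time_or_die("-t", "00:01:30", 1) returns
 * 90000000, i.e. 90 seconds expressed in microseconds. */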
typedef struct SpecifierOpt {
char *specifier; /**< stream/chapter/program/... specifier */
@@ -149,6 +201,47 @@ typedef struct OptionDef {
void show_help_options(const OptionDef *options, const char *msg, int req_flags,
int rej_flags, int alt_flags);
#if CONFIG_AVDEVICE
#define CMDUTILS_COMMON_OPTIONS_AVDEVICE \
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources }, \
"list sources of the input device", "device" }, \
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks }, \
"list sinks of the output device", "device" }, \
#else
#define CMDUTILS_COMMON_OPTIONS_AVDEVICE
#endif
#define CMDUTILS_COMMON_OPTIONS \
{ "L", OPT_EXIT, { .func_arg = show_license }, "show license" }, \
{ "h", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
{ "?", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
{ "help", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
{ "-help", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
{ "version", OPT_EXIT, { .func_arg = show_version }, "show version" }, \
{ "buildconf", OPT_EXIT, { .func_arg = show_buildconf }, "show build configuration" }, \
{ "formats", OPT_EXIT, { .func_arg = show_formats }, "show available formats" }, \
{ "muxers", OPT_EXIT, { .func_arg = show_muxers }, "show available muxers" }, \
{ "demuxers", OPT_EXIT, { .func_arg = show_demuxers }, "show available demuxers" }, \
{ "devices", OPT_EXIT, { .func_arg = show_devices }, "show available devices" }, \
{ "codecs", OPT_EXIT, { .func_arg = show_codecs }, "show available codecs" }, \
{ "decoders", OPT_EXIT, { .func_arg = show_decoders }, "show available decoders" }, \
{ "encoders", OPT_EXIT, { .func_arg = show_encoders }, "show available encoders" }, \
{ "bsfs", OPT_EXIT, { .func_arg = show_bsfs }, "show available bit stream filters" }, \
{ "protocols", OPT_EXIT, { .func_arg = show_protocols }, "show available protocols" }, \
{ "filters", OPT_EXIT, { .func_arg = show_filters }, "show available filters" }, \
{ "pix_fmts", OPT_EXIT, { .func_arg = show_pix_fmts }, "show available pixel formats" }, \
{ "layouts", OPT_EXIT, { .func_arg = show_layouts }, "show standard channel layouts" }, \
{ "sample_fmts", OPT_EXIT, { .func_arg = show_sample_fmts }, "show available audio sample formats" }, \
{ "colors", OPT_EXIT, { .func_arg = show_colors }, "show available color names" }, \
{ "loglevel", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
{ "v", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
{ "report", 0, { .func_arg = opt_report }, "generate a report" }, \
{ "max_alloc", HAS_ARG, { .func_arg = opt_max_alloc }, "set maximum size of a single allocated block", "bytes" }, \
{ "cpuflags", HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" }, \
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" }, \
CMDUTILS_COMMON_OPTIONS_AVDEVICE \
/**
* Show help for all options with given flags in class and all its
* children.
@@ -161,6 +254,11 @@ void show_help_children(const AVClass *class, int flags);
*/
void show_help_default(const char *opt, const char *arg);
/**
* Generic -h handler common to all fftools.
*/
int show_help(void *optctx, const char *opt, const char *arg);
/**
* Parse the command line arguments.
*
@@ -173,8 +271,8 @@ void show_help_default(const char *opt, const char *arg);
* argument without a leading option name flag. NULL if such arguments do
* not have to be processed.
*/
int parse_options(void *optctx, int argc, char **argv, const OptionDef *options,
int (* parse_arg_function)(void *optctx, const char*));
void parse_options(void *optctx, int argc, char **argv, const OptionDef *options,
void (* parse_arg_function)(void *optctx, const char*));
/**
* Parse one given option.
@@ -219,6 +317,7 @@ typedef struct OptionGroup {
AVDictionary *codec_opts;
AVDictionary *format_opts;
AVDictionary *resample_opts;
AVDictionary *sws_dict;
AVDictionary *swr_opts;
} OptionGroup;
@@ -312,12 +411,10 @@ int check_stream_specifier(AVFormatContext *s, AVStream *st, const char *spec);
* @param st A stream from s for which the options should be filtered.
* @param codec The particular codec for which the options should be filtered.
* If null, the default one is looked up according to the codec id.
* @param dst a pointer to the created dictionary
* @return a non-negative number on success, a negative error code on failure
* @return a pointer to the created dictionary
*/
int filter_codec_opts(const AVDictionary *opts, enum AVCodecID codec_id,
AVFormatContext *s, AVStream *st, const AVCodec *codec,
AVDictionary **dst);
AVDictionary *filter_codec_opts(AVDictionary *opts, enum AVCodecID codec_id,
AVFormatContext *s, AVStream *st, const AVCodec *codec);
/**
* Setup AVCodecContext options for avformat_find_stream_info().
@@ -326,10 +423,12 @@ int filter_codec_opts(const AVDictionary *opts, enum AVCodecID codec_id,
* contained in s.
* Each dictionary will contain the options from codec_opts which can
* be applied to the corresponding stream codec context.
*
* @return pointer to the created array of dictionaries, NULL if it
* cannot be created
*/
int setup_find_stream_info_opts(AVFormatContext *s,
AVDictionary *codec_opts,
AVDictionary ***dst);
AVDictionary **setup_find_stream_info_opts(AVFormatContext *s,
AVDictionary *codec_opts);
/**
* Print an error message to stderr, indicating filename and a human
@@ -349,6 +448,136 @@ void print_error(const char *filename, int err);
*/
void show_banner(int argc, char **argv, const OptionDef *options);
/**
* Print the version of the program to stdout. The version message
* depends on the current versions of the repository and of the libav*
* libraries.
* This option processing function does not utilize the arguments.
*/
int show_version(void *optctx, const char *opt, const char *arg);
/**
* Print the build configuration of the program to stdout. The contents
* depend on the definition of FFMPEG_CONFIGURATION.
* This option processing function does not utilize the arguments.
*/
int show_buildconf(void *optctx, const char *opt, const char *arg);
/**
* Print the license of the program to stdout. The license depends on
* the license of the libraries compiled into the program.
* This option processing function does not utilize the arguments.
*/
int show_license(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the formats supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_formats(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the muxers supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_muxers(void *optctx, const char *opt, const char *arg);
/**
Print a listing containing all the demuxers supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_demuxers(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the devices supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_devices(void *optctx, const char *opt, const char *arg);
#if CONFIG_AVDEVICE
/**
* Print a listing containing autodetected sinks of the output device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sinks(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing autodetected sources of the input device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sources(void *optctx, const char *opt, const char *arg);
#endif
/**
* Print a listing containing all the codecs supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_codecs(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the decoders supported by the
* program.
*/
int show_decoders(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the encoders supported by the
* program.
*/
int show_encoders(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the filters supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_filters(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the bit stream filters supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_bsfs(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the protocols supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_protocols(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the pixel formats supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_pix_fmts(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the standard channel layouts supported by
* the program.
* This option processing function does not utilize the arguments.
*/
int show_layouts(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the sample formats supported by the
* program.
*/
int show_sample_fmts(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the color names and values recognized
* by the program.
*/
int show_colors(void *optctx, const char *opt, const char *arg);
/**
* Return a positive value if a line read from standard input
* starts with [yY], otherwise return 0.
@@ -378,31 +607,20 @@ FILE *get_preset_file(char *filename, size_t filename_size,
/**
* Realloc array to hold new_size elements of elem_size.
* Calls exit() on failure.
*
* @param array pointer to the array to reallocate, will be updated
* with a new pointer on success
* @param array array to reallocate
* @param elem_size size in bytes of each element
* @param size new element count will be written here
* @param new_size number of elements to place in reallocated array
* @return a non-negative number on success, a negative error code on failure
* @return reallocated array
*/
int grow_array(void **array, int elem_size, int *size, int new_size);
void *grow_array(void *array, int elem_size, int *size, int new_size);
/**
* Atomically add a new element to an array of pointers, i.e. allocate
* a new entry, reallocate the array of pointers and make the new last
* member of this array point to the newly allocated buffer.
*
* @param array array of pointers to reallocate
* @param elem_size size of the new element to allocate
@param nb_elems pointer to the number of elements of the array @p array;
* *nb_elems will be incremented by one by this function.
* @return pointer to the newly allocated entry or NULL on failure
*/
void *allocate_array_elem(void *array, size_t elem_size, int *nb_elems);
#define media_type_string av_get_media_type_string
#define GROW_ARRAY(array, nb_elems)\
grow_array((void**)&array, sizeof(*array), &nb_elems, nb_elems + 1)
array = grow_array(array, sizeof(*array), &nb_elems, nb_elems + 1)
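/* Illustrative use: GROW_ARRAY(output_streams, nb_output_streams); */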
#define GET_PIX_FMT_NAME(pix_fmt)\
const char *name = av_get_pix_fmt_name(pix_fmt);
@@ -417,6 +635,14 @@ void *allocate_array_elem(void *array, size_t elem_size, int *nb_elems);
char name[16];\
snprintf(name, sizeof(name), "%d", rate);
double get_rotation(const int32_t *displaymatrix);
#define GET_CH_LAYOUT_NAME(ch_layout)\
char name[16];\
snprintf(name, sizeof(name), "0x%"PRIx64, ch_layout);
#define GET_CH_LAYOUT_DESC(ch_layout)\
char name[128];\
av_get_channel_layout_string(name, sizeof(name), 0, ch_layout);
double get_rotation(AVStream *st);
#endif /* FFTOOLS_CMDUTILS_H */

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -1,888 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <math.h>
#include <stdint.h>
#include "ffmpeg.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
#include "libavutil/avutil.h"
#include "libavutil/dict.h"
#include "libavutil/display.h"
#include "libavutil/eval.h"
#include "libavutil/frame.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/log.h"
#include "libavutil/pixdesc.h"
#include "libavutil/rational.h"
#include "libavutil/timestamp.h"
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
struct Encoder {
AVFrame *sq_frame;
// packet for receiving encoded output
AVPacket *pkt;
// combined size of all the packets received from the encoder
uint64_t data_size;
// number of packets received from the encoder
uint64_t packets_encoded;
int opened;
};
void enc_free(Encoder **penc)
{
Encoder *enc = *penc;
if (!enc)
return;
av_frame_free(&enc->sq_frame);
av_packet_free(&enc->pkt);
av_freep(penc);
}
int enc_alloc(Encoder **penc, const AVCodec *codec)
{
Encoder *enc;
*penc = NULL;
enc = av_mallocz(sizeof(*enc));
if (!enc)
return AVERROR(ENOMEM);
enc->pkt = av_packet_alloc();
if (!enc->pkt)
goto fail;
*penc = enc;
return 0;
fail:
enc_free(&enc);
return AVERROR(ENOMEM);
}
static int hw_device_setup_for_encode(OutputStream *ost, AVBufferRef *frames_ref)
{
const AVCodecHWConfig *config;
HWDevice *dev = NULL;
int i;
if (frames_ref &&
((AVHWFramesContext*)frames_ref->data)->format ==
ost->enc_ctx->pix_fmt) {
// Matching format, will try to use hw_frames_ctx.
} else {
frames_ref = NULL;
}
for (i = 0;; i++) {
config = avcodec_get_hw_config(ost->enc_ctx->codec, i);
if (!config)
break;
if (frames_ref &&
config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX &&
(config->pix_fmt == AV_PIX_FMT_NONE ||
config->pix_fmt == ost->enc_ctx->pix_fmt)) {
av_log(ost->enc_ctx, AV_LOG_VERBOSE, "Using input "
"frames context (format %s) with %s encoder.\n",
av_get_pix_fmt_name(ost->enc_ctx->pix_fmt),
ost->enc_ctx->codec->name);
ost->enc_ctx->hw_frames_ctx = av_buffer_ref(frames_ref);
if (!ost->enc_ctx->hw_frames_ctx)
return AVERROR(ENOMEM);
return 0;
}
if (!dev &&
config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
dev = hw_device_get_by_type(config->device_type);
}
if (dev) {
av_log(ost->enc_ctx, AV_LOG_VERBOSE, "Using device %s "
"(type %s) with %s encoder.\n", dev->name,
av_hwdevice_get_type_name(dev->type), ost->enc_ctx->codec->name);
ost->enc_ctx->hw_device_ctx = av_buffer_ref(dev->device_ref);
if (!ost->enc_ctx->hw_device_ctx)
return AVERROR(ENOMEM);
} else {
// No device required, or no device available.
}
return 0;
}
static int set_encoder_id(OutputFile *of, OutputStream *ost)
{
const char *cname = ost->enc_ctx->codec->name;
uint8_t *encoder_string;
int encoder_string_len;
if (av_dict_get(ost->st->metadata, "encoder", NULL, 0))
return 0;
encoder_string_len = sizeof(LIBAVCODEC_IDENT) + strlen(cname) + 2;
encoder_string = av_mallocz(encoder_string_len);
if (!encoder_string)
return AVERROR(ENOMEM);
if (!of->bitexact && !ost->bitexact)
av_strlcpy(encoder_string, LIBAVCODEC_IDENT " ", encoder_string_len);
else
av_strlcpy(encoder_string, "Lavc ", encoder_string_len);
av_strlcat(encoder_string, cname, encoder_string_len);
av_dict_set(&ost->st->metadata, "encoder", encoder_string,
AV_DICT_DONT_STRDUP_VAL | AV_DICT_DONT_OVERWRITE);
return 0;
}
int enc_open(OutputStream *ost, const AVFrame *frame)
{
InputStream *ist = ost->ist;
Encoder *e = ost->enc;
AVCodecContext *enc_ctx = ost->enc_ctx;
AVCodecContext *dec_ctx = NULL;
const AVCodec *enc = enc_ctx->codec;
OutputFile *of = output_files[ost->file_index];
FrameData *fd;
int ret;
if (e->opened)
return 0;
// frame is always non-NULL for audio and video
av_assert0(frame || (enc->type != AVMEDIA_TYPE_VIDEO && enc->type != AVMEDIA_TYPE_AUDIO));
if (frame) {
av_assert0(frame->opaque_ref);
fd = (FrameData*)frame->opaque_ref->data;
}
ret = set_encoder_id(output_files[ost->file_index], ost);
if (ret < 0)
return ret;
if (ist) {
dec_ctx = ist->dec_ctx;
}
// the timebase is chosen by filtering code
if (ost->type == AVMEDIA_TYPE_AUDIO || ost->type == AVMEDIA_TYPE_VIDEO) {
enc_ctx->time_base = frame->time_base;
enc_ctx->framerate = fd->frame_rate_filter;
ost->st->avg_frame_rate = fd->frame_rate_filter;
}
switch (enc_ctx->codec_type) {
case AVMEDIA_TYPE_AUDIO:
enc_ctx->sample_fmt = frame->format;
enc_ctx->sample_rate = frame->sample_rate;
ret = av_channel_layout_copy(&enc_ctx->ch_layout, &frame->ch_layout);
if (ret < 0)
return ret;
if (ost->bits_per_raw_sample)
enc_ctx->bits_per_raw_sample = ost->bits_per_raw_sample;
else
enc_ctx->bits_per_raw_sample = FFMIN(fd->bits_per_raw_sample,
av_get_bytes_per_sample(enc_ctx->sample_fmt) << 3);
break;
case AVMEDIA_TYPE_VIDEO: {
enc_ctx->width = frame->width;
enc_ctx->height = frame->height;
enc_ctx->sample_aspect_ratio = ost->st->sample_aspect_ratio =
ost->frame_aspect_ratio.num ? // overridden by the -aspect cli option
av_mul_q(ost->frame_aspect_ratio, (AVRational){ enc_ctx->height, enc_ctx->width }) :
frame->sample_aspect_ratio;
enc_ctx->pix_fmt = frame->format;
if (ost->bits_per_raw_sample)
enc_ctx->bits_per_raw_sample = ost->bits_per_raw_sample;
else
enc_ctx->bits_per_raw_sample = FFMIN(fd->bits_per_raw_sample,
av_pix_fmt_desc_get(enc_ctx->pix_fmt)->comp[0].depth);
enc_ctx->color_range = frame->color_range;
enc_ctx->color_primaries = frame->color_primaries;
enc_ctx->color_trc = frame->color_trc;
enc_ctx->colorspace = frame->colorspace;
enc_ctx->chroma_sample_location = frame->chroma_location;
if (enc_ctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME) ||
(frame->flags & AV_FRAME_FLAG_INTERLACED)
#if FFMPEG_OPT_TOP
|| ost->top_field_first >= 0
#endif
) {
int top_field_first =
#if FFMPEG_OPT_TOP
ost->top_field_first >= 0 ?
ost->top_field_first :
#endif
!!(frame->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST);
if (enc->id == AV_CODEC_ID_MJPEG)
enc_ctx->field_order = top_field_first ? AV_FIELD_TT : AV_FIELD_BB;
else
enc_ctx->field_order = top_field_first ? AV_FIELD_TB : AV_FIELD_BT;
} else
enc_ctx->field_order = AV_FIELD_PROGRESSIVE;
break;
}
case AVMEDIA_TYPE_SUBTITLE:
if (ost->enc_timebase.num)
av_log(ost, AV_LOG_WARNING,
"-enc_time_base not supported for subtitles, ignoring\n");
enc_ctx->time_base = AV_TIME_BASE_Q;
if (!enc_ctx->width) {
enc_ctx->width = ost->ist->par->width;
enc_ctx->height = ost->ist->par->height;
}
if (dec_ctx && dec_ctx->subtitle_header) {
/* ASS code assumes this buffer is null terminated so add extra byte. */
enc_ctx->subtitle_header = av_mallocz(dec_ctx->subtitle_header_size + 1);
if (!enc_ctx->subtitle_header)
return AVERROR(ENOMEM);
memcpy(enc_ctx->subtitle_header, dec_ctx->subtitle_header,
dec_ctx->subtitle_header_size);
enc_ctx->subtitle_header_size = dec_ctx->subtitle_header_size;
}
break;
default:
av_assert0(0);
break;
}
if (ost->bitexact)
enc_ctx->flags |= AV_CODEC_FLAG_BITEXACT;
if (!av_dict_get(ost->encoder_opts, "threads", NULL, 0))
av_dict_set(&ost->encoder_opts, "threads", "auto", 0);
if (enc->capabilities & AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE) {
ret = av_dict_set(&ost->encoder_opts, "flags", "+copy_opaque", AV_DICT_MULTIKEY);
if (ret < 0)
return ret;
}
av_dict_set(&ost->encoder_opts, "flags", "+frame_duration", AV_DICT_MULTIKEY);
ret = hw_device_setup_for_encode(ost, frame ? frame->hw_frames_ctx : NULL);
if (ret < 0) {
av_log(ost, AV_LOG_ERROR,
"Encoding hardware device setup failed: %s\n", av_err2str(ret));
return ret;
}
if ((ret = avcodec_open2(ost->enc_ctx, enc, &ost->encoder_opts)) < 0) {
if (ret != AVERROR_EXPERIMENTAL)
av_log(ost, AV_LOG_ERROR, "Error while opening encoder - maybe "
"incorrect parameters such as bit_rate, rate, width or height.\n");
return ret;
}
e->opened = 1;
if (ost->sq_idx_encode >= 0) {
e->sq_frame = av_frame_alloc();
if (!e->sq_frame)
return AVERROR(ENOMEM);
}
if (ost->enc_ctx->frame_size) {
av_assert0(ost->sq_idx_encode >= 0);
sq_frame_samples(output_files[ost->file_index]->sq_encode,
ost->sq_idx_encode, ost->enc_ctx->frame_size);
}
ret = check_avoptions(ost->encoder_opts);
if (ret < 0)
return ret;
if (ost->enc_ctx->bit_rate && ost->enc_ctx->bit_rate < 1000 &&
ost->enc_ctx->codec_id != AV_CODEC_ID_CODEC2 /* don't complain about 700 bit/s modes */)
av_log(ost, AV_LOG_WARNING, "The bitrate parameter is set too low."
" It takes bits/s as argument, not kbits/s\n");
ret = avcodec_parameters_from_context(ost->par_in, ost->enc_ctx);
if (ret < 0) {
av_log(ost, AV_LOG_FATAL,
"Error initializing the output stream codec context.\n");
return ret;
}
/*
* Add global input side data. For now this is naive and simply copies it
* from the input stream's global side data. All side data should really
* be funneled through AVFrame and libavfilter, then added back to packet
* side data, potentially using the first packet for global side data.
*/
if (ist) {
int i;
for (i = 0; i < ist->st->codecpar->nb_coded_side_data; i++) {
AVPacketSideData *sd_src = &ist->st->codecpar->coded_side_data[i];
if (sd_src->type != AV_PKT_DATA_CPB_PROPERTIES) {
AVPacketSideData *sd_dst = av_packet_side_data_new(&ost->par_in->coded_side_data,
&ost->par_in->nb_coded_side_data,
sd_src->type, sd_src->size, 0);
if (!sd_dst)
return AVERROR(ENOMEM);
memcpy(sd_dst->data, sd_src->data, sd_src->size);
if (ist->autorotate && sd_src->type == AV_PKT_DATA_DISPLAYMATRIX)
av_display_rotation_set((int32_t *)sd_dst->data, 0);
}
}
}
// copy timebase while removing common factors
if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
ret = of_stream_init(of, ost);
if (ret < 0)
return ret;
return 0;
}
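/* Return 1 while ts is still inside the -t recording window; otherwise
close the output stream and return 0. */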
static int check_recording_time(OutputStream *ost, int64_t ts, AVRational tb)
{
OutputFile *of = output_files[ost->file_index];
if (of->recording_time != INT64_MAX &&
av_compare_ts(ts, tb, of->recording_time, AV_TIME_BASE_Q) >= 0) {
close_output_stream(ost);
return 0;
}
return 1;
}
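/* Encode one AVSubtitle and send the resulting packet(s) to the muxer.
DVB subtitles are emitted as two packets (draw + clear), ASS as one
packet per rectangle. */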
int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
{
Encoder *e = ost->enc;
int subtitle_out_max_size = 1024 * 1024;
int subtitle_out_size, nb, i, ret;
AVCodecContext *enc;
AVPacket *pkt = e->pkt;
int64_t pts;
if (sub->pts == AV_NOPTS_VALUE) {
av_log(ost, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
return exit_on_error ? AVERROR(EINVAL) : 0;
}
if (ost->finished ||
(of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
return 0;
enc = ost->enc_ctx;
/* Note: DVB subtitles need one packet to draw them and another
packet to clear them */
/* XXX: signal it in the codec context? */
if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE)
nb = 2;
else if (enc->codec_id == AV_CODEC_ID_ASS)
nb = FFMAX(sub->num_rects, 1);
else
nb = 1;
/* shift timestamp to honor -ss and make check_recording_time() work with -t */
pts = sub->pts;
if (output_files[ost->file_index]->start_time != AV_NOPTS_VALUE)
pts -= output_files[ost->file_index]->start_time;
for (i = 0; i < nb; i++) {
AVSubtitle local_sub = *sub;
if (!check_recording_time(ost, pts, AV_TIME_BASE_Q))
return 0;
ret = av_new_packet(pkt, subtitle_out_max_size);
if (ret < 0)
return AVERROR(ENOMEM);
local_sub.pts = pts;
// start_display_time is required to be 0
local_sub.pts += av_rescale_q(sub->start_display_time, (AVRational){ 1, 1000 }, AV_TIME_BASE_Q);
local_sub.end_display_time -= sub->start_display_time;
local_sub.start_display_time = 0;
if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE && i == 1)
local_sub.num_rects = 0;
else if (enc->codec_id == AV_CODEC_ID_ASS && sub->num_rects > 0) {
local_sub.num_rects = 1;
local_sub.rects += i;
}
ost->frames_encoded++;
subtitle_out_size = avcodec_encode_subtitle(enc, pkt->data, pkt->size, &local_sub);
if (subtitle_out_size < 0) {
av_log(ost, AV_LOG_FATAL, "Subtitle encoding failed\n");
return subtitle_out_size;
}
av_shrink_packet(pkt, subtitle_out_size);
pkt->time_base = AV_TIME_BASE_Q;
pkt->pts = sub->pts;
pkt->duration = av_rescale_q(sub->end_display_time, (AVRational){ 1, 1000 }, pkt->time_base);
if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE) {
/* XXX: the pts correction is handled here. Maybe handling
it in the codec would be better */
if (i == 0)
pkt->pts += av_rescale_q(sub->start_display_time, (AVRational){ 1, 1000 }, pkt->time_base);
else
pkt->pts += av_rescale_q(sub->end_display_time, (AVRational){ 1, 1000 }, pkt->time_base);
}
pkt->dts = pkt->pts;
ret = of_output_packet(of, ost, pkt);
if (ret < 0)
return ret;
}
return 0;
}
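/* Write one line of encoding statistics (pre- or post-encode) to the stats
output, formatted according to the EncStats components configured for this
stream. */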
void enc_stats_write(OutputStream *ost, EncStats *es,
const AVFrame *frame, const AVPacket *pkt,
uint64_t frame_num)
{
Encoder *e = ost->enc;
AVIOContext *io = es->io;
AVRational tb = frame ? frame->time_base : pkt->time_base;
int64_t pts = frame ? frame->pts : pkt->pts;
AVRational tbi = (AVRational){ 0, 1};
int64_t ptsi = INT64_MAX;
const FrameData *fd = NULL;
if (frame ? frame->opaque_ref : pkt->opaque_ref) {
fd = (const FrameData*)(frame ? frame->opaque_ref->data : pkt->opaque_ref->data);
tbi = fd->dec.tb;
ptsi = fd->dec.pts;
}
for (size_t i = 0; i < es->nb_components; i++) {
const EncStatsComponent *c = &es->components[i];
switch (c->type) {
case ENC_STATS_LITERAL: avio_write (io, c->str, c->str_len); continue;
case ENC_STATS_FILE_IDX: avio_printf(io, "%d", ost->file_index); continue;
case ENC_STATS_STREAM_IDX: avio_printf(io, "%d", ost->index); continue;
case ENC_STATS_TIMEBASE: avio_printf(io, "%d/%d", tb.num, tb.den); continue;
case ENC_STATS_TIMEBASE_IN: avio_printf(io, "%d/%d", tbi.num, tbi.den); continue;
case ENC_STATS_PTS: avio_printf(io, "%"PRId64, pts); continue;
case ENC_STATS_PTS_IN: avio_printf(io, "%"PRId64, ptsi); continue;
case ENC_STATS_PTS_TIME: avio_printf(io, "%g", pts * av_q2d(tb)); continue;
case ENC_STATS_PTS_TIME_IN: avio_printf(io, "%g", ptsi == INT64_MAX ?
INFINITY : ptsi * av_q2d(tbi)); continue;
case ENC_STATS_FRAME_NUM: avio_printf(io, "%"PRIu64, frame_num); continue;
case ENC_STATS_FRAME_NUM_IN: avio_printf(io, "%"PRIu64, fd ? fd->dec.frame_num : -1); continue;
}
if (frame) {
switch (c->type) {
case ENC_STATS_SAMPLE_NUM: avio_printf(io, "%"PRIu64, ost->samples_encoded); continue;
case ENC_STATS_NB_SAMPLES: avio_printf(io, "%d", frame->nb_samples); continue;
default: av_assert0(0);
}
} else {
switch (c->type) {
case ENC_STATS_DTS: avio_printf(io, "%"PRId64, pkt->dts); continue;
case ENC_STATS_DTS_TIME: avio_printf(io, "%g", pkt->dts * av_q2d(tb)); continue;
case ENC_STATS_PKT_SIZE: avio_printf(io, "%d", pkt->size); continue;
case ENC_STATS_BITRATE: {
double duration = FFMAX(pkt->duration, 1) * av_q2d(tb);
avio_printf(io, "%g", 8.0 * pkt->size / duration);
continue;
}
case ENC_STATS_AVG_BITRATE: {
double duration = pkt->dts * av_q2d(tb);
avio_printf(io, "%g", duration > 0 ? 8.0 * e->data_size / duration : -1.);
continue;
}
default: av_assert0(0);
}
}
}
avio_w8(io, '\n');
avio_flush(io);
}
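/* Convert a normalized mean squared error (0..1) into PSNR in dB. */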
static inline double psnr(double d)
{
return -10.0 * log10(d);
}
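/* Read quality/PSNR side data from an encoded packet and, when -vstats is
in use, append a line describing the packet to the vstats file. */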
static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_vstats)
{
Encoder *e = ost->enc;
const uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_STATS,
NULL);
AVCodecContext *enc = ost->enc_ctx;
enum AVPictureType pict_type;
int64_t frame_number;
double ti1, bitrate, avg_bitrate;
double psnr_val = -1;
ost->quality = sd ? AV_RL32(sd) : -1;
pict_type = sd ? sd[4] : AV_PICTURE_TYPE_NONE;
if ((enc->flags & AV_CODEC_FLAG_PSNR) && sd && sd[5]) {
// FIXME: the scaling assumes 8-bit samples
double error = AV_RL64(sd + 8) / (enc->width * enc->height * 255.0 * 255.0);
if (error >= 0 && error <= 1)
psnr_val = psnr(error);
}
if (!write_vstats)
return 0;
/* this is executed just the first time update_video_stats is called */
if (!vstats_file) {
vstats_file = fopen(vstats_filename, "w");
if (!vstats_file) {
perror("fopen");
return AVERROR(errno);
}
}
frame_number = e->packets_encoded;
if (vstats_version <= 1) {
fprintf(vstats_file, "frame= %5"PRId64" q= %2.1f ", frame_number,
ost->quality / (float)FF_QP2LAMBDA);
} else {
fprintf(vstats_file, "out= %2d st= %2d frame= %5"PRId64" q= %2.1f ", ost->file_index, ost->index, frame_number,
ost->quality / (float)FF_QP2LAMBDA);
}
if (psnr_val >= 0)
fprintf(vstats_file, "PSNR= %6.2f ", psnr_val);
fprintf(vstats_file,"f_size= %6d ", pkt->size);
/* compute pts value */
ti1 = pkt->dts * av_q2d(pkt->time_base);
if (ti1 < 0.01)
ti1 = 0.01;
bitrate = (pkt->size * 8) / av_q2d(enc->time_base) / 1000.0;
avg_bitrate = (double)(e->data_size * 8) / ti1 / 1000.0;
fprintf(vstats_file, "s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s ",
(double)e->data_size / 1024, ti1, bitrate, avg_bitrate);
fprintf(vstats_file, "type= %c\n", av_get_picture_type_char(pict_type));
return 0;
}
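/* Send one frame (or NULL to flush) to the encoder and drain every packet it
produces, forwarding them to the muxer. Returns AVERROR_EOF once the encoder
is fully flushed, 0 when it merely wants more input. */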
static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame)
{
Encoder *e = ost->enc;
AVCodecContext *enc = ost->enc_ctx;
AVPacket *pkt = e->pkt;
const char *type_desc = av_get_media_type_string(enc->codec_type);
const char *action = frame ? "encode" : "flush";
int ret;
if (frame) {
if (ost->enc_stats_pre.io)
enc_stats_write(ost, &ost->enc_stats_pre, frame, NULL,
ost->frames_encoded);
ost->frames_encoded++;
ost->samples_encoded += frame->nb_samples;
if (debug_ts) {
av_log(ost, AV_LOG_INFO, "encoder <- type:%s "
"frame_pts:%s frame_pts_time:%s time_base:%d/%d\n",
type_desc,
av_ts2str(frame->pts), av_ts2timestr(frame->pts, &enc->time_base),
enc->time_base.num, enc->time_base.den);
}
if (frame->sample_aspect_ratio.num && !ost->frame_aspect_ratio.num)
enc->sample_aspect_ratio = frame->sample_aspect_ratio;
}
update_benchmark(NULL);
ret = avcodec_send_frame(enc, frame);
if (ret < 0 && !(ret == AVERROR_EOF && !frame)) {
av_log(ost, AV_LOG_ERROR, "Error submitting %s frame to the encoder\n",
type_desc);
return ret;
}
while (1) {
av_packet_unref(pkt);
ret = avcodec_receive_packet(enc, pkt);
update_benchmark("%s_%s %d.%d", action, type_desc,
ost->file_index, ost->index);
pkt->time_base = enc->time_base;
/* in two-pass mode, write the stats log on success and at EOF */
if ((ret >= 0 || ret == AVERROR_EOF) && ost->logfile && enc->stats_out)
fprintf(ost->logfile, "%s", enc->stats_out);
if (ret == AVERROR(EAGAIN)) {
av_assert0(frame); // should never happen during flushing
return 0;
} else if (ret == AVERROR_EOF) {
ret = of_output_packet(of, ost, NULL);
return ret < 0 ? ret : AVERROR_EOF;
} else if (ret < 0) {
av_log(ost, AV_LOG_ERROR, "%s encoding failed\n", type_desc);
return ret;
}
if (enc->codec_type == AVMEDIA_TYPE_VIDEO) {
ret = update_video_stats(ost, pkt, !!vstats_filename);
if (ret < 0)
return ret;
}
if (ost->enc_stats_post.io)
enc_stats_write(ost, &ost->enc_stats_post, NULL, pkt,
e->packets_encoded);
if (debug_ts) {
av_log(ost, AV_LOG_INFO, "encoder -> type:%s "
"pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s "
"duration:%s duration_time:%s\n",
type_desc,
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &enc->time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &enc->time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, &enc->time_base));
}
if ((ret = trigger_fix_sub_duration_heartbeat(ost, pkt)) < 0) {
av_log(NULL, AV_LOG_ERROR,
"Subtitle heartbeat logic failed in %s! (%s)\n",
__func__, av_err2str(ret));
return ret;
}
e->data_size += pkt->size;
e->packets_encoded++;
ret = of_output_packet(of, ost, pkt);
if (ret < 0)
return ret;
}
av_assert0(0);
}
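/* Hand a frame to the encoder, going through the inter-stream sync queue when
this stream uses one (sq_idx_encode >= 0); otherwise encode directly. */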
static int submit_encode_frame(OutputFile *of, OutputStream *ost,
AVFrame *frame)
{
Encoder *e = ost->enc;
int ret;
if (ost->sq_idx_encode < 0)
return encode_frame(of, ost, frame);
if (frame) {
ret = av_frame_ref(e->sq_frame, frame);
if (ret < 0)
return ret;
frame = e->sq_frame;
}
ret = sq_send(of->sq_encode, ost->sq_idx_encode,
SQFRAME(frame));
if (ret < 0) {
if (frame)
av_frame_unref(frame);
if (ret != AVERROR_EOF)
return ret;
}
while (1) {
AVFrame *enc_frame = e->sq_frame;
ret = sq_receive(of->sq_encode, ost->sq_idx_encode,
SQFRAME(enc_frame));
if (ret == AVERROR_EOF) {
enc_frame = NULL;
} else if (ret < 0) {
return (ret == AVERROR(EAGAIN)) ? 0 : ret;
}
ret = encode_frame(of, ost, enc_frame);
if (enc_frame)
av_frame_unref(enc_frame);
if (ret < 0) {
if (ret == AVERROR_EOF)
close_output_stream(ost);
return ret;
}
}
}
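/* Encode one audio frame, dropping it if the channel count changed and the
encoder cannot handle parameter changes, or if it falls outside the -t window. */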
static int do_audio_out(OutputFile *of, OutputStream *ost,
AVFrame *frame)
{
AVCodecContext *enc = ost->enc_ctx;
int ret;
if (!(enc->codec->capabilities & AV_CODEC_CAP_PARAM_CHANGE) &&
enc->ch_layout.nb_channels != frame->ch_layout.nb_channels) {
av_log(ost, AV_LOG_ERROR,
"Audio channel count changed and encoder does not support parameter changes\n");
return 0;
}
if (!check_recording_time(ost, frame->pts, frame->time_base))
return 0;
ret = submit_encode_frame(of, ost, frame);
return (ret < 0 && ret != AVERROR_EOF) ? ret : 0;
}
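/* Decide whether the next video frame should be forced to a keyframe, based on
the -force_key_frames timestamp list, expression, or source keyframes. */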
static enum AVPictureType forced_kf_apply(void *logctx, KeyframeForceCtx *kf,
AVRational tb, const AVFrame *in_picture)
{
double pts_time;
if (kf->ref_pts == AV_NOPTS_VALUE)
kf->ref_pts = in_picture->pts;
pts_time = (in_picture->pts - kf->ref_pts) * av_q2d(tb);
if (kf->index < kf->nb_pts &&
av_compare_ts(in_picture->pts, tb, kf->pts[kf->index], AV_TIME_BASE_Q) >= 0) {
kf->index++;
goto force_keyframe;
} else if (kf->pexpr) {
double res;
kf->expr_const_values[FKF_T] = pts_time;
res = av_expr_eval(kf->pexpr,
kf->expr_const_values, NULL);
av_log(logctx, AV_LOG_TRACE,
"force_key_frame: n:%f n_forced:%f prev_forced_n:%f t:%f prev_forced_t:%f -> res:%f\n",
kf->expr_const_values[FKF_N],
kf->expr_const_values[FKF_N_FORCED],
kf->expr_const_values[FKF_PREV_FORCED_N],
kf->expr_const_values[FKF_T],
kf->expr_const_values[FKF_PREV_FORCED_T],
res);
kf->expr_const_values[FKF_N] += 1;
if (res) {
kf->expr_const_values[FKF_PREV_FORCED_N] = kf->expr_const_values[FKF_N] - 1;
kf->expr_const_values[FKF_PREV_FORCED_T] = kf->expr_const_values[FKF_T];
kf->expr_const_values[FKF_N_FORCED] += 1;
goto force_keyframe;
}
} else if (kf->type == KF_FORCE_SOURCE && (in_picture->flags & AV_FRAME_FLAG_KEY)) {
goto force_keyframe;
}
return AV_PICTURE_TYPE_NONE;
force_keyframe:
av_log(logctx, AV_LOG_DEBUG, "Forced keyframe at time %f\n", pts_time);
return AV_PICTURE_TYPE_I;
}
/* May modify/reset frame */
static int do_video_out(OutputFile *of, OutputStream *ost, AVFrame *in_picture)
{
int ret;
AVCodecContext *enc = ost->enc_ctx;
if (!check_recording_time(ost, in_picture->pts, ost->enc_ctx->time_base))
return 0;
in_picture->quality = enc->global_quality;
in_picture->pict_type = forced_kf_apply(ost, &ost->kf, enc->time_base, in_picture);
#if FFMPEG_OPT_TOP
if (ost->top_field_first >= 0) {
in_picture->flags &= ~AV_FRAME_FLAG_TOP_FIELD_FIRST;
in_picture->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * (!!ost->top_field_first);
}
#endif
ret = submit_encode_frame(of, ost, in_picture);
return (ret == AVERROR_EOF) ? 0 : ret;
}
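/* Entry point for encoding a single frame: make sure the encoder is opened
with parameters taken from this frame, then dispatch to the video or audio
path. */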
int enc_frame(OutputStream *ost, AVFrame *frame)
{
OutputFile *of = output_files[ost->file_index];
int ret;
ret = enc_open(ost, frame);
if (ret < 0)
return ret;
return ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO ?
do_video_out(of, ost, frame) : do_audio_out(of, ost, frame);
}
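/* Flush all encoders at end of input: first signal EOF to any sync queues,
then send a NULL frame to every opened audio/video encoder. */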
int enc_flush(void)
{
int ret;
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
OutputFile *of = output_files[ost->file_index];
if (ost->sq_idx_encode >= 0)
sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
}
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
Encoder *e = ost->enc;
AVCodecContext *enc = ost->enc_ctx;
OutputFile *of = output_files[ost->file_index];
if (!enc || !e->opened ||
(enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO))
continue;
ret = submit_encode_frame(of, ost, NULL);
if (ret != AVERROR_EOF)
return ret;
}
return 0;
}
