Compare commits


146 Commits
n4.3 ... n4.1.3

Author SHA1 Message Date
Michael Niedermayer
4154f89678 Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-04-01 10:33:02 +02:00
Michael Niedermayer
6c75df556f avcodec/rscc: Check that the to be uncompressed input is large enough
Fixes: Out of array access
Fixes: 13984/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RSCC_fuzzer-5734128093233152

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3a0ec1511e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-04-01 10:32:08 +02:00
James Almer
58cd70201e avformat/movenc: free eac3 private data only when closing the stream
This makes sure the data is available when writing the moov atom during the
second pass triggered by the faststart movflag.

Fixes ticket #7780

Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 27c94c57dc)
2019-03-31 20:36:41 -03:00
Michael Niedermayer
1d720b37f0 Update for 4.1.3
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:31:47 +02:00
Michael Niedermayer
f1ecebcdb7 avcodec/hevcdec: Avoid only partly skipping duplicate first slices
Fixes: NULL pointer dereference and out of array access
Fixes: 13871/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5746167087890432
Fixes: 13845/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HEVC_fuzzer-5650370728034304

This also fixes the return code for explode mode

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 54655623a8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:30:09 +02:00
Carl Eugen Hoyos
daca529112 lavc/bmp: Avoid a heap buffer overwrite for 1bpp input.
Found by Mingi Cho, Seoyoung Kim, and Taekyoung Kwon
of the Information Security Lab, Yonsei University.

(cherry picked from commit 1e34014010)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:30:09 +02:00
Michael Niedermayer
65f94b732a avcodec/mpegpicture: Check size of edge_emu_buffer
Fixes: OOM
Fixes: 13710/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5633152942342144

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 635067b75f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:30:09 +02:00
Michael Niedermayer
ad0f4a7d10 avformat/mov: Fix potential integer overflow in entry check in mov_read_trun()
No testcase

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ff13a92a6f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:30:09 +02:00
Michael Niedermayer
cb4768e7f2 avcodec/truemotion2: Fix integer overflow in tm2_null_res_block()
Fixes: signed integer overflow: 1111638592 - -2122219136 cannot be represented in type 'int'
Fixes: 13441/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TRUEMOTION2_fuzzer-5732769815068672

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1223696c72)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-31 23:30:09 +02:00
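
As an aside, the overflow class fixed here recurs in several commits below: a subtraction of two in-range 32-bit values whose mathematical result does not fit in a signed int. A minimal, hypothetical illustration of the problem and of the usual fix of doing the arithmetic on unsigned operands (the values come from the sanitizer report above; this is not the actual tm2_null_res_block() code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a =  1111638592;           /* values from the sanitizer report */
        int32_t b = -2122219136;

        /* "a - b" would be 3233857728, which exceeds INT32_MAX, so evaluating
         * it in signed arithmetic is undefined behaviour.  Unsigned arithmetic
         * wraps modulo 2^32 by definition, and converting the wrapped result
         * back keeps the expected two's-complement bit pattern on the
         * platforms FFmpeg targets. */
        int32_t diff = (int32_t)((uint32_t)a - (uint32_t)b);

        printf("%" PRId32 "\n", diff);
        return 0;
    }
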
James Almer
6972b353b4 avcodec/cbs_av1: fix range of values for Mastering Display Color Volume Metadata OBUs
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 40490b3a63)
2019-03-25 19:59:28 -03:00
James Almer
abf36b76de avcodec/av1_parser: don't abort parsing the first frame if extradata parsing fails
The first frame contains the sequence header, which is needed to parse every
following frame.

This fixes parsing streams with broken extradata but correct packet data.

Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 699d0c2a30)
2019-03-25 19:59:22 -03:00
Michael Niedermayer
a7cb7a2e43 Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:02:44 +01:00
Michael Niedermayer
b429df281d avcodec/dfa: Check the chunk header is not truncated
Fixes: Timeout (11sec -> 3sec)
Fixes: 13218/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DFA_fuzzer-5661074316066816

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f20760fadb)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:01:42 +01:00
Michael Niedermayer
7ce56329e7 avcodec/clearvideo: Check remaining data in P frames
Fixes: Timeout (19sec -> 419msec)
Fixes: 13411/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CLEARVIDEO_fuzzer-5733153811988480

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 41f93f9411)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:01:42 +01:00
James Almer
dbef08b60f avcodec/hevcdec: decode at most one slice reporting being the first in the picture
Fixes deadlocks when decoding packets containing more than one of the aforementioned
slices when using frame threads.

Tested-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 70c8c8a818)
2019-03-20 20:28:04 -03:00
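
A hedged sketch of the guard this commit describes, written as a standalone helper rather than the real hevcdec slice loop (all names here are illustrative):

    #include <stdbool.h>

    /* Decide whether a slice should be decoded.  Once one slice in the current
     * packet has claimed to be the first slice of a picture, any later slice
     * making the same claim is skipped, so a single packet can never start a
     * second picture, which is what could deadlock frame threading. */
    static bool should_decode_slice(bool first_slice_in_pic_flag,
                                    int *first_slices_seen)
    {
        if (first_slice_in_pic_flag) {
            if (*first_slices_seen > 0)
                return false;              /* duplicate "first" slice */
            (*first_slices_seen)++;
        }
        return true;
    }
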
Michael Niedermayer
77d244e7a9 Update for 4.1.2
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 17:31:54 +01:00
Michael Niedermayer
8cee4190f3 avcodec/dvbsubdec: Check object position
Reference: ETSI EN 300 743 V1.2.1  7.2.2 Region composition segment

Fixes: Timeout
Fixes: 13325/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DVBSUB_fuzzer-5143979392237568

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit a8c5ae4511)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:54:31 +01:00
Michael Niedermayer
04ce4cc072 avcodec/cdgraphics: Use ff_set_dimensions()
Fixes: Timeout (17 sec -> 65 milli sec)
Fixes: 13264/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CDGRAPHICS_fuzzer-5711167941509120

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9a9f0e239c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:54:10 +01:00
Michael Niedermayer
5d208aac52 avformat/gdv: Check fps
Fixes: Division by 0
Fixes: ffmpeg_zero_division.bin

Found-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 38381400fc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:53:57 +01:00
Guo, Yejun
83bfd4f3b5 configure: use vpx_codec_vp8_dx/cx for libvpx-vp8 checking
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit d9b2668766)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 11:51:09 +01:00
Guo, Yejun
9bf40978c6 configure: add missing pthreads extralibs dependency for libvpx-vp9
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 402bf26237)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 11:49:55 +01:00
Michael Niedermayer
1e50a327c6 avcodec/mpeg4videodec: Check idx in mpeg4_decode_studio_block()
Fixes: Out of array access
Fixes: 13500/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5769760178962432

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Kieran Kunhya <kierank@obe.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d227ed5d59)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
ad12d9df1e avcodec/dxv: Correct integer overflow in get_opcodes()
Fixes: 13099/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXV_fuzzer-5665598896340992
Fixes: signed integer overflow: 2147483647 + 7 cannot be represented in type 'int'

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6e0b5d3a20)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
67d030787e avcodec/scpr: Fix use of uninitialized variable
Fixes: Undefined shift
Fixes: 12911/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SCPR_fuzzer-5677102915911680

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 53248acfb3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
c90836cc3d avcodec/qpeg: Limit copy in qpeg_decode_intra() to the available bytes
Fixes: Timeout (27 sec -> 39 milli sec)
Fixes: 13151/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_QPEG_fuzzer-5717536023248896

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b819472995)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
6c0124d392 avcodec/aic: Check remaining bits in aic_decode_coeffs()
Fixes: Timeout (78 seconds -> 2 seconds)
Fixes: 13186/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_AIC_fuzzer-5639516533030912

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 951bb7632f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
29619a8ac2 avcodec/gdv: Check for truncated tags in decompress_5()
Testcase: 13169/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5666354038833152

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 5cf42f65b6)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
09683e1f4e avcodec/bethsoftvideo: Check block_type
Fixes: Timeout (17 seconds -> 1 second)
Fixes: 13184/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BETHSOFTVID_fuzzer-5711446296494080

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b8ecadec05)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
662b6351c8 avcodec/jpeg2000dwt: Fix integer overflow in dwt_decode97_int()
Fixes: runtime error: signed integer overflow: 2147483598 + 128 cannot be represented in type 'int'
Fixes: 12926/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JPEG2000_fuzzer-5705100733972480

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4801eea0d4)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
b8dd1d2d4b avcodec/error_resilience: Use a symmetric check for skipping MV estimation
This speeds up the testcase by a factor of 4

Fixes: Timeout
Fixes: 13100/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV2_fuzzer-5767533905313792

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e4289cb253)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
92335fc02b avcodec/mlpdec: Insuffient typo
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit fc32e08941)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
ff491b1544 avcodec/zmbv: obtain frame later
The frame is not needed that early so obtaining it later avoids
the costly operation in case other checks fail.

Fixes: Timeout (14sec -> 4sec)
Fixes: 13140/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ZMBV_fuzzer-5738330308739072

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 177b40890c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
4e624c89fd avcodec/jvdec: Check available input space before decode8x8()
Fixes: Timeout (78 sec -> 15 millisec)
Fixes: 13147/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JV_fuzzer-5727107827630080

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 61523683c5)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
9495228df0 avcodec/h264_direct: Fix overflow in POC comparison
Fixes: runtime error: signed integer overflow: 2147421862 - -33624063 cannot be represented in type 'int'
Fixes: 12885/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5733516975800320

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 5ccf296e74)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
339f40f618 avformat/webmdashenc: Check id in adaption_sets
Fixes: out of array access

Found-by: Wenxiang Qian
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b687b549aa)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Wenxiang Qian
ec22b46a4d avformat/http: Fix Out-of-Bounds access in process_line()
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 85f91ed760)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Wenxiang Qian
11375cd101 avformat/ftp: Fix Out-of-Bounds Access and Information Leak in ftp.c:393
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit a142ffdcae)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Kevin Backhouse via RT
f7f3937494 avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for handling braces
Fixes: [Semmle Security Reports #19439]
Fixes: dos_sscanf2.mkv

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 894995c41e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Kevin Backhouse via RT
cc5361ed18 avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for tag scanning
Fixes: [Semmle Security Reports #19438]
Fixes: dos_sscanf1.mkv

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f00c97bc3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
4d1fcd734e avformat/matroskadec: Do not leak queued packets on sync errors
Fixes: memleak
Fixes: clusterfuzz-testcase-minimized-audio_decoder_fuzzer-5649187601121280

Reported-by: Chris Cunningham <chcunningham@google.com>
Tested-by: Chris Cunningham <chcunningham@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d1afa7284c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
8066cb3556 avcodec/mpeg4videodec: Clear interlaced_dct for studio profile
Fixes: Out of array access
Fixes: 13090/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5408668986638336

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Kieran Kunhya <kierank@obe.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f686d023b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
d25f388584 avformat/mov: Do not use reference stream in mov_read_sidx() if there is no reference stream
Fixes: NULL pointer dereference
Fixes: clusterfuzz-testcase-minimized-audio_decoder_fuzzer-5634316373721088

Reported-by: Chris Cunningham <chcunningham@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b0d8b7cb8e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
1a82246cae avcodec/sbrdsp_fixed.c: remove input value limit for sbr_sum_square_c()
Fixes: 1377/clusterfuzz-testcase-minimized-5487049807233024
Fixes: assertion failure in sbr_sum_square_c()

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4cde7e62db)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Alex Mogurenko
7e204f7260 avcodec/prores_ks: Fix luma quantization if q >= MAX_STORED_Q
The problem occurs in slice quant estimation and slice encoding:

If the slice quant is larger than MAX_STORED_Q we don't use the pre-calculated
quant matrices but generate a new one; however, qmat and qmat_chroma both
point to the same table, so the luma table ends up holding the chroma table
values.

Add custom_chroma_q the same way as custom_q.

Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
(cherry picked from commit e4788ae31b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
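
A hedged, minimal reconstruction of the aliasing bug pattern described above (the struct and threshold are illustrative, not the real ProRes encoder context):

    #include <stdint.h>

    #define MAX_STORED_Q 16             /* illustrative threshold */

    typedef struct {
        int16_t stored_luma[MAX_STORED_Q][64];
        int16_t stored_chroma[MAX_STORED_Q][64];
        int16_t custom_q[64];           /* scratch matrix for large quants */
        int16_t custom_chroma_q[64];    /* the fix: separate chroma scratch */
    } ToyEncContext;

    static void select_qmats(ToyEncContext *c, int q,
                             const int16_t **qmat, const int16_t **qmat_chroma)
    {
        if (q < MAX_STORED_Q) {
            *qmat        = c->stored_luma[q];
            *qmat_chroma = c->stored_chroma[q];
        } else {
            /* Before the fix, both pointers referred to the single custom_q
             * table, so filling the chroma matrix overwrote the luma one. */
            *qmat        = c->custom_q;
            *qmat_chroma = c->custom_chroma_q;
        }
    }
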
Charles Liu
53f3f5233f avformat/mov: fix hang while seek on a kind of fragmented mp4
Binary searching would hang if the fragment items do NOT have a timestamp for
the specified stream.

For example, an fmp4 may consist of separate 'moof' boxes for each track and
separate 'sidx' boxes for each segment, but no 'mfra' box.  Then every fragment
item only has a timestamp for one of its tracks.

Example:
ffmpeg -f lavfi -i testsrc -f lavfi -i sine -movflags dash+frag_keyframe+skip_trailer+separate_moof -t 1 out.mp4
ffmpeg -ss 0.5 -i out.mp4 -f null none

Also fixes the hang in ticket #7572, but not the reason for having
AV_NOPTS_VALUE timestamps there.

Signed-off-by: Charles Liu <liuchh83@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit aa25198f1b)
2019-02-11 22:07:54 +01:00
Marton Balint
110eff79ca avformat/async: fix assertion condition when draining buffer
Fixes some random assertion failures with

ffprobe -show_packets async:samples/ffmpeg-bugs/trac/ticket6132/Samsung_HDR_-_Chasing_the_Light.ts > /dev/null

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 4b46d1ee46)
2019-02-11 22:07:06 +01:00
James Almer
33c8009773 avcodec/cbs_av1: don't call cbs_av1_read_trailing_bits() when no bits remain in the OBU
Reviewed-by: jkqxz
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 3e8b8b6b50)
2019-02-10 21:02:06 -03:00
Michael Niedermayer
74700e50bf Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-09 18:33:21 +01:00
chcunningham
00cdf4e4e5 avformat/mov: validate chunk_count vs stsc_data
Bad content may contain stsc boxes with a first_chunk index that
exceeds stco.entries (chunk_count). This ammends the existing check to
include cases where chunk_count == 0. It also patches up the case
when stsc refers to unknown chunks, but stts has no samples (so we
can simply ignore stsc).

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1c15449ca9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-08 12:22:37 +01:00
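
A hedged sketch of the validation rule described above, with simplified names rather than the actual mov.c fields:

    /* Return 1 if an stsc entry's first_chunk (1-based) is usable. */
    static int stsc_first_chunk_is_valid(unsigned first_chunk,
                                         unsigned chunk_count,   /* from stco */
                                         unsigned sample_count)  /* from stts */
    {
        if (sample_count == 0)
            return 1;   /* no samples: the stsc table can simply be ignored */
        /* Rejects indices past the chunk table, including chunk_count == 0. */
        return first_chunk >= 1 && first_chunk <= chunk_count;
    }
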
chcunningham
bcc71f30ad avformat/mov.c: require tfhd to begin parsing trun
Detecting missing tfhd avoids re-using tfhd track info from the previous
moof. For files with multiple tracks, this may make a mess of the
avindex and fragindex, which can later trigger av_assert0 in
mov_read_trun().

Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3ea87e5d9e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-08 12:22:13 +01:00
Michael Niedermayer
31a1d2aa83 Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-04 00:51:42 +01:00
Michael Niedermayer
7816497ba0 avcodec/pgssubdec: Check for duplicate display segments
In such a duplication the previous segment gets overwritten and leaks

Fixes: memleak
Fixes: 12510/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PGSSUB_fuzzer-5694439226343424

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e35c3d887b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-04 00:32:09 +01:00
Michael Niedermayer
953f97979f avformat/rtsp: Check number of streams in sdp_parse_line()
Fixes: OOM

Found-by: Michael Hanselmann <public@hansmi.ch>
Reviewed-by: Michael Hanselmann <public@hansmi.ch>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 497c9b0cce)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 18:03:35 +01:00
Michael Niedermayer
e75a73d629 avformat/rtsp: Clear reply in every iteration in ff_rtsp_connect()
Fixes: Infinite loop

Found-by: Michael Hanselmann <public@hansmi.ch>
Reviewed-by: Michael Hanselmann <public@hansmi.ch>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 0b50f27635)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:29:41 +01:00
Michael Niedermayer
b482e94e59 avcodec/rasc: Move ff_get_buffer() after frame checks
If the frame1/2 checks fail, this avoids allocating a new frame

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9f4af97aff)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:29:05 +01:00
Michael Niedermayer
0f1332309a avcodec/rasc: Check uncompressed dlta size
We assume that the data is invalid if the compressed size is bigger than it
would be if each byte were encoded as a single raw packet.

Fixes: Out of memory
Fixes: 12208/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5648916473708544

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f4079d5174)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:28:23 +01:00
Michael Niedermayer
f5c9753bfd avcodec/fic: Check that there is input left in fic_decode_block()
Fixes: Timeout
Fixes: 12450/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FIC_fuzzer-5661984622641152

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit db1c4acd02)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:23:01 +01:00
Michael Niedermayer
d8b8b27dc3 avcodec/ilbcdec: Fix undefined integer overflow in lsf2poly()
The addition is moved up into the context where the variable is unsigned,
avoiding the undefined behavior.

Fixes: runtime error: signed integer overflow: 2147481972 + 4096 cannot be represented in type 'int'
Fixes: 12444/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5755706244857856

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4523cc5e75)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:20:38 +01:00
Michael Niedermayer
62f5325ca3 avcodec/ilbcdec: Fix integer overflow in construct_vector()
webrtc contains explicit code to ignore the undefined behavior (RTC_NO_SANITIZE / OverflowingAddS32S32ToS32())

Probably fixes: Integer overflow (unreproducable here)
Probably fixes: 12215/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5767142427852800

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c95d0fb239)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:20:24 +01:00
Michael Niedermayer
bcfd82b0be Update for 4.1.1
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 08:34:57 +01:00
Michael Niedermayer
31fa50f3d9 avcodec/prosumer: Error out if decompress() stops reading data
If 0 is encountered in the LUT then decompress() will continue to output 0 bytes but never read more data.
Without a specification it is impossible to say whether this is invalid or a feature.
None of the valid prosumer files tested cause a 0 to be read, so it is likely
not an intended feature.

Fixes: Timeout
Fixes: 11266/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PROSUMER_fuzzer-5681827423977472

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 62f8d27ef1)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
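
A hedged sketch of the kind of progress check this commit adds: if a decompression step emits output without consuming any input, the loop errors out instead of spinning until the timeout (the reader type and step callback are illustrative, not the prosumer decoder's own):

    #include <stddef.h>

    typedef struct {
        const unsigned char *ptr, *end;
    } ByteReader;

    /* A step returns the number of output bytes written, or < 0 on error. */
    typedef int (*decompress_step_fn)(ByteReader *br,
                                      unsigned char *dst, size_t dst_size);

    static int decompress_all(ByteReader *br, unsigned char *dst,
                              size_t dst_size, decompress_step_fn step)
    {
        while (dst_size) {
            const unsigned char *before = br->ptr;
            int written = step(br, dst, dst_size);

            if (written <= 0)
                return -1;
            if (br->ptr == before)
                return -1;   /* output produced but no input consumed: invalid */

            dst      += written;
            dst_size -= (size_t)written;
        }
        return 0;
    }
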
Michael Niedermayer
552733d48b avcodec/tiff: Check for 12bit gray fax
Fixes: Assertion failure
Fixes: 11898/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-5759794191794176

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ec28a85107)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
a8b5990f45 avutil/imgutils: Optimize memset_bytes() by using av_memcpy_backptr()
This is strongly based on code by Marton Balint, and depends on the previous commit

Fixes: Timeout
Fixes: 11502/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920
Before: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920 in 11209 ms
After:  Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920 in  4104 ms

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f64c0dffa1)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
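
For context, av_memcpy_backptr() performs a deliberately overlapping copy from `back` bytes behind the destination, so seeding one element and then copying forward replicates a multi-byte pattern in a single call. A sketch of that idiom, assuming the libavutil API but not reproducing the exact memset_bytes() code:

    #include <stdint.h>
    #include <string.h>
    #include <libavutil/mem.h>

    /* Fill 'count' pixels, each 'pixel_size' bytes, with one pixel value. */
    static void fill_pixels(uint8_t *dst, const uint8_t *pixel,
                            int pixel_size, int count)
    {
        if (count <= 0)
            return;
        memcpy(dst, pixel, pixel_size);        /* seed the first pixel */
        if (count > 1)
            /* Overlapping copy from pixel_size bytes back: each byte written
             * becomes the source for the byte pixel_size positions later, so
             * the seeded pattern propagates across the whole buffer. */
            av_memcpy_backptr(dst + pixel_size, pixel_size,
                              (count - 1) * pixel_size);
    }
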
Michael Niedermayer
cb6af7dfa1 avutil/mem: Optimize fill32() by unrolling and using 64bit
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 12b1338be3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
James Almer
29d978c91e configure: bump year
Happy new year!

(cherry picked from commit 3209d7b393)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
3a52cae2c7 avcodec/tests/rangecoder: initialize array to avoid valgrind warning
Found-by: jamrial
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c15972f0af)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
792df36f42 avcodec/gdv: Optimize and factorize scaling loops
Fixes: Timeout
Fixes: 11067/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768

Before change: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768 in 34386 ms
After  change: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768 in 24327 ms

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6e23736aef)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
c694273feb avcodec/h264_slice: Fix integer overflow in implicit_weight_table()
Fixes: signed integer overflow: 2 * 2132811760 cannot be represented in type 'int'
Fixes: 11156/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-6237685933408256

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 77e56d74f9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
9239d58b36 avcodec/exr: set layer_match in all branches
Otherwise it is left to the value from the previous iteration

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 433d2ae435)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
1623f42d99 avcodec/exr: Check for duplicate channel index
Fixes: Out of memory
Fixes: 11582/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_EXR_fuzzer-5730204559867904

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f9728feaf9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
99576bf034 avfilter/vf_tonemap_opencl: Make static tables const
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 47c3a10b16)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e385fc45dd doc/indevs: fix upto typo
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b33de55747)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
15857674c5 avcodec/4xm: Fix returned error codes
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 07607a1db8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
6b6c854658 avformat/libopenmpt: Fix successfull typo
Reviewed-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 571af98a59)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
41ee513c81 avcodec/v4l2_m2m: fix cant typo
Reviewed-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 062bf56393)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
33b4aba5bd avcodec/mjpegbdec: Fix some misplaced {} and spaces
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 11a8d2ccab)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
David Bryant
ea279bd160 avformat/wvdec: detect and error out on WavPack DSD files
Not currently supported.

(cherry picked from commit db109373d8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
gxw
929b5519d8 avcodec/mips: Fix failed case: hevc-conformance-AMP_A_Samsung_* when enable msa
The AV_INPUT_BUFFER_PADDING_SIZE has been increased to 64, but the value is still 32
in function ff_hevc_sao_edge_filter_8_msa. So, use AV_INPUT_BUFFER_PADDING_SIZE directly.
Also, use MAX_PB_SIZE directly instead of 64. Fate tests passed.

Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f652c7a45c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
5ed024e40b avcodec/fic: Fail on invalid slice size/off
Fixes: Timeout
Fixes: 11486/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FIC_fuzzer-5677133863583744

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 30a7a81cdc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
5550946ff4 avcodec/ilbcdec: fix integer overflow in energy
webrtc uses an int32_t like the existing code in ilbcdec

Fixes: signed integer overflow: 2080245063 + 257939661 cannot be represented in type 'int'
Fixes: 11037/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5682976612941824

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit fbf409cd91)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
daef9d4382 postproc/postprocess_template: remove FF_REG_sp from clobber list
Future gcc may no longer support this

Tested-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1cbeb87db)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
69f50eb915 postproc/postprocess_template: Avoid using %4 for the threshold compare
This avoids problems if %4 is the stack pointer.
The constraints do not allow %4 to be the stack pointer, but gcc 9 may
no longer support specifying such constraints.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4325527e1c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Jacob Trimble
73c90818b1 libavformat/mov: Fix NULL-dereference read for some encrypted content.
When reading frames, we need to use the fragment for the correct
stream.  Sometimes the "current" fragment is not the same as the one
the frame is for.

Found by Chromium's ClusterFuzz:
https://crbug.com/906392 and https://crbug.com/915524

Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 555f332e7a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
c22b67feaa avcodec/rpza: Check that there is enough data for all the blocks
Fixes: Timeout
Fixes: 11547/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RPZA_fuzzer-5678435842654208

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e63517e00a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
4c0be3a60c avcodec/rpza: Move frame allocation to a later point
This will allow performing some fast checks before the slow allocation

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 8a708aa99c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
42357b37cb avcodec/avcodec: Document the data type for AV_PKT_DATA_MPEGTS_STREAM_ID
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 68e011e410)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e3fbbb7d18 avformat/mpegts: Fix side data type for stream id
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ab1319d82f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
2f75965c47 tests/fate/filter-video: increase fuzz for fate-filter-refcmp-psnr-rgb
Fixes: test failure on powerpc

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f8f762c300)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e1f40f0dae avcodec/mjpegdec: Fix indention of ljpeg_decode_yuv_scan()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ea30ac1e40)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
chcunningham
45f5f2086e lavf/id3v2: fail read_apic on EOF reading mimetype
avio_read may return EOF, leaving the mimetype array uninitialized. Fail
early when this occurs to avoid using the array in an uninitialized state.

Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ee1e39a576)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
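
A hedged sketch of the pattern the fix describes: check avio_read()'s return value and bail out on a short read, so the mimetype buffer is never inspected while partly uninitialized (the helper is illustrative, not the id3v2.c code):

    #include <libavformat/avio.h>
    #include <libavutil/error.h>

    static int read_mimetype(AVIOContext *pb, char *mimetype, int len)
    {
        int ret = avio_read(pb, (unsigned char *)mimetype, len);

        if (ret != len)                        /* EOF, error or short read */
            return ret < 0 ? ret : AVERROR_INVALIDDATA;
        return 0;
    }
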
Michael Niedermayer
321c418b87 avcodec/rasc: Check that the number of moves is less than or equal the number of pixels
Fixes: OOM
Fixes: 10307/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5393974559244288

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 092cb17983)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
f5859d4a8e avformat/nutenc: Document trailer index assert better
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3a95b73abc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
chcunningham
54fbdacc37 lavf/mov: ensure only one tkhd per trak
Chromium fuzzing produced a whacky file with extra tkhds. This caused
an AVStream that was already in use to be corrupted by assigning it a
new id, which blows up later in mov_read_trun because the
MOVFragmentStreamInfo.index_entry now points OOB.

Reviewed-by: Baptiste Coudurier <baptiste.coudurier@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c9f7b6f7a9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
228f17ced3 avcodec/clearvideo: Check remaining input bits in P macro block loop
Fixes: Timeout
Fixes: 11083/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CLEARVIDEO_fuzzer-5657180351496192

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7aaab127be)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
9b5a6bb67b avcodec/rasc: Check input space before reading chunk
Fixes: Timeout
Fixes: 11118/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5652564066959360

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 52ba824c65)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
219cbc5527 avcodec/dxv: Check that there is enough data to decompress
Fixes: Timeout
Fixes: 10979/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXV_fuzzer-6178582203203584

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2bc3811c0d)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
55c36d2498 avcodec/ppc/hevcdsp: Fix build failures with powerpc-linux-gnu-gcc-4.8 with --disable-optimizations
The affected functions could also be changed into macros; this is the smaller
change to fix it, though, and avoids (probably) less readable macros.
The extra code should be optimized out when optimizations are enabled, as all
values are known at build time after inlining.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2c64a6bcd2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
558ba71de5 avcodec/msvideo1: Check for too small dimensions
Such a low resolution would result in empty output, as a minimum of 4x4 is needed.
We could also check for multiple-of-4 dimensions, but that is not needed.

Fixes: Timeout
Fixes: 11191/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MSVIDEO1_fuzzer-5739529588178944

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 953bd58861)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
1a5db666ac avcodec/wmv2dec: Skip I frame if it's smaller than 1/8 of the minimal size
Frames that small are not valid and of limited use for error concealment, while
being very computationally intensive to process.

Fixes: Timeout
Fixes: 11168/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV2_fuzzer-5733782032744448

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d6f4341522)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
eee0cf487a avcodec/msmpeg4dec: Skip frame if it's smaller than 1/8 of the minimal size
Frames that small are not valid and of limited use for error concealment, while
being very computationally intensive to process.

Fixes: Timeout
Fixes: 11318/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MSMPEG4V1_fuzzer-5710884555456512

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 09ec182864)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
90db1e441f avcodec/truemotion2rt: Fix rounding in input size check
Fixes: Timeout
Fixes: 11332/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TRUEMOTION2RT_fuzzer-5678456612847616

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7f22a4ebc9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
4fe90900d8 avcodec/diracdec: Check component quant
Fixes: Timeout
Fixes: 10708/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DIRAC_fuzzer-5730140957442048

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 28c96c2ce2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
ee349bd0fd avcodec/tiff: Limit filtering to decoded data
Fixes: Timeout
Fixes: 11068/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-5698456681709568

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 90ac0e5f29)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
ab744447e1 avcodec/truemotion2: fix integer overflows in tm2_low_chroma()
Fixes: 11295/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TRUEMOTION2_fuzzer-4888953459572736

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2ae39d7956)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
89d65915cf avcodec/pngdec: Check compression method
Method 0 (inflate/deflate) is the only one specified in the specification and the only one supported.

Fixes: Timeout
Fixes: 10976/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PNG_fuzzer-5729372588736512

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f99674ddd)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
e69bb0fb05 fftools/ffmpeg: Repair reinit_filter feature
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3504004879)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
98a9d868d1 avcodec/shorten: Fix integer overflow with offset
Fixes: signed integer overflow: -1625810908 - 582229060 cannot be represented in type 'int'
Fixes: 10977/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SHORTEN_fuzzer-5732602018267136

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2f888771cd)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
b66152a4e5 avcodec/imm4: Use ff_set_dimensions()
Fixes: Out of memory
Fixes: 10970/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_IMM4_fuzzer-5698750043914240

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c305e134ce)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Andreas Rheinhardt
ac50246cc4 h264_redundant_pps: Fix logging context
The first element of H264RedundantPPSContext is not a pointer to an
AVClass as required.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6dafcb6fdb)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Marton Balint
ddc284300e avfilter/af_asetnsamples: fix last frame props
Frame properties were not copied, so e.g. PTS was not set for the last frame.

Regression since ef3babb2c7.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit f9e947845f)
2019-01-01 20:39:44 +01:00
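
The usual libavutil idiom for this is av_frame_copy_props(), which copies timing and metadata fields without touching the data planes. A hedged sketch of that idea, not the exact af_asetnsamples.c change:

    #include <libavutil/frame.h>

    /* When flushing the final, shorter-than-requested frame, carry over the
     * properties (PTS, metadata, ...) from the frame its samples came from. */
    static int finish_last_frame(AVFrame *out, const AVFrame *last_in)
    {
        int ret = av_frame_copy_props(out, last_in);
        if (ret < 0)
            return ret;
        return 0;
    }
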
Mark Thompson
b420f23566 cbs_av1: Fix reading of overlong uvlc codes
The specification allows 2^32-1 to be encoded as any number of zeroes
greater than 31, followed by a one.  This previously failed because the
trace code would overflow the array containing the string representation
of the bits if there were more than 63 zeroes.  Fix that by splitting the
trace output into batches, and at the same time move it out of the default
path.

(While this seems likely to be a specification error, libaom does support
it so we probably should as well.)

From a test case by keval shah <skeval65@gmail.com>.

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b97a4b6588)
2018-12-22 18:28:41 +00:00
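
For reference, the uvlc read described above counts leading zero bits up to the terminating one, then reads that many value bits; any run of 32 or more zeroes encodes 2^32-1. A standalone hedged sketch (hypothetical bit-reader callback, not the CBS trace code):

    #include <stdint.h>

    typedef int (*read_bit_fn)(void *ctx);   /* returns the next bit, 0 or 1 */

    static uint32_t read_uvlc(read_bit_fn read_bit, void *ctx)
    {
        uint32_t leading_zeros = 0;

        /* A real reader must also stop at the end of the buffer. */
        while (!read_bit(ctx))
            leading_zeros++;

        /* 32 or more zeroes (however many) encode UINT32_MAX; the old trace
         * code assumed the run would fit a fixed 63-character buffer. */
        if (leading_zeros >= 32)
            return UINT32_MAX;

        uint32_t value = 0;
        for (uint32_t i = 0; i < leading_zeros; i++)
            value = (value << 1) | (uint32_t)read_bit(ctx);

        return value + ((UINT32_C(1) << leading_zeros) - 1);
    }
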
James Almer
5356e61001 avcodec/cbs_av1: fix parsing delta_frame_id_minus1
delta_frame_id_minus1 is not a single value in the bitstream, and can
store values up to 17 bits wide.

Fixes parsing files with frame ids.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 064f9505f4)
2018-12-20 18:29:42 -03:00
Paul B Mahol
a4ddc3c9fc avfilter/vf_overlay: fix filtering with negative y
(cherry picked from commit 8440835dbe)
2018-12-14 23:56:21 +01:00
Paul B Mahol
59e30c05d7 avformat/movenc: get number of written bytes from bitstream writer
Update fate test.

(cherry picked from commit 97d1ee437b)
2018-11-26 15:36:12 +01:00
Paul B Mahol
fcffed470a avformat/movenc: fix size calculation in mov_write_eac3_tag()
Otherwise it would assert when flushing bits.

(cherry picked from commit 027f032bbc)
2018-11-26 15:36:05 +01:00
Paul B Mahol
9efc591cb7 avfilter/vf_overlay: fix crash with negative y
(cherry picked from commit 57815cfad5)
2018-11-25 12:46:56 +01:00
Marton Balint
d4c5f515f0 avcodec/mpeg_er: fix clearing chroma blocks for 422 and 444
Fixes ticket #7494.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit e3a9630982)
2018-11-19 23:29:30 +01:00
Marton Balint
bb01cd3cc0 avfilter/af_afade: fix duration maximum
Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit aecd63b926)
2018-11-15 22:34:53 +01:00
Mark Harris
fed94c2f22 avfilter/vf_fade: fix start/duration max value
A fade out (usually at the end of a video) can easily start beyond
INT32_MAX (about 36 minutes).  Regression since d40dc64173.

(cherry picked from commit ae4323548a)
2018-11-15 22:34:34 +01:00
James Almer
a9e9303f26 avcodec/cbs_av1: fix parsing signed integer values
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit f0f2832a5c)
2018-11-14 20:53:44 -03:00
James Almer
49bc641e89 avcodec/cbs_av1: fix storage size for segmentation_params feature_value fields
The valid range is -255 to 255.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 79831f4531)
2018-11-14 20:53:40 -03:00
Mark Thompson
4f1e07090a configure: Add missing xlib dependency for VAAPI X11 code
Fixes #7538.

(cherry picked from commit 2ce3a48f30)
2018-11-14 23:24:51 +00:00
Mark Wu
11dff170ef avcodec/hevcdec: fix non-ref frame judgement
After inspecting the source code of x265, mpv and ffmpeg, I've found that
ffmpeg mistakenly regards HEVC_NAL_BLA_N_LP and HEVC_NAL_IDR_N_LP as non-
reference frames, which are actually reference frames according to the
specification in x265, and drops them.

This patch should address the problem. I have tested it with mpv.

Signed-off-by: Mark Wu <wfwf1997@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 10bc4c3a7d)
2018-11-10 14:38:25 -03:00
Mark Thompson
10506de9ad cbs_av1: Support redundant frame headers
(cherry picked from commit f5894178fb)
2018-11-05 23:11:03 +00:00
Mark Thompson
af3fccfeff cbs_av1: Fix header writing when already aligned
(cherry picked from commit 6bdb7712ae)
2018-11-05 23:10:57 +00:00
Mark Thompson
ec1b5216fc configure: Add missing V4L2 M2M decoder BSF dependencies
(cherry picked from commit e9d2e3fdaa)
2018-11-05 23:10:49 +00:00
Mark Thompson
066ff02621 configure: Add missing IVF muxer BSF dependency
(cherry picked from commit a4fb2b1150)
2018-11-05 23:10:41 +00:00
James Almer
398a70309e avcodec/cbs_av1: fix decoder/encoder_buffer_delay variable types
buffer_delay_length_minus_1 is five bits long, meaning decoder_buffer_delay and
encoder_buffer_delay can have values up to 32 bits long.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 89a0d33e3a)
2018-11-04 22:06:20 -03:00
Mark Thompson
acd13f1255 configure: Fix av1_metadata BSF dependency
(cherry picked from commit 34429182b9)
2018-11-04 22:06:11 -03:00
James Almer
1c98cf4ddd avformat/ivfenc: use the av1_metadata bsf to insert Temporal Delimiter OBUs if needed
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 2d2af23349)
2018-11-04 22:06:08 -03:00
Marton Balint
63c1e291ef avformat/ftp: allow nonstandard 202 reply to OPTS UTF8
Fixes ticket #7481.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 8e5a2495a8)
2018-11-04 22:55:09 +01:00
Michael Niedermayer
7ebc27e1fa avcodec/cavsdec: Propagate error codes inside decode_mb_i()
Fixes: Timeout
Fixes: 10702/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CAVS_fuzzer-5669940938407936

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1cee05656)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
bc5777bdab avcodec/mpeg4videodec: Clear partitioned frame in decode_studio_vop_header()
partitioned_frame is also set/cleared in decode_vop_header()

Fixes: out of array read
Fixes: 9789/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5638681627983872

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 074187d599)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
7d23ccac8d avcodec/mpegaudio_parser: Consume more than 0 bytes in case of the unsupported mp3adu case
Fixes: Timeout
Fixes: 10966/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MP3ADU_fuzzer-5348695024336896
Fixes: 10969/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MP3ADUFLOAT_fuzzer-5691669402877952

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit df91af140c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
2f04b78b95 avcodec/prosumer: Simplify bit juggling of the c variable in decompress()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 66425add27)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
fd05e20650 avcodec/prosumer: Remove always true check in decompress()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1dfa0b6f36)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
a163384467 avcodec/prosumer: Remove unneeded ()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 506839a3e9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
b9875b7583 avcodec/prosumer: Check for bytestream eof in decompress()
Fixes: Infinite loop
Fixes: 10685/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PROSUMER_fuzzer-5652236881887232

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9acdf17b2c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Philip Langdale
ebc1c49e41 avfilter/vf_cuda_yadif: Avoid new syntax for vector initialisation
This requires a newer version of CUDA than we want to require.

(cherry picked from commit 8e50215b5e)
2018-11-03 15:50:31 -07:00
Philip Langdale
6feec11e48 avcodec/nvdec: Increase frame pool size to help deinterlacing
With the cuda yadif filter in use, the number of mapped decoder
frames could increase by two, as the filter holds on to additional
frames.

(cherry picked from commit 1b41115ef7)
2018-11-03 15:50:25 -07:00
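The sizing argument can be shown as simple arithmetic (hedged: the helper and the constants below are illustrative, not the values nvdec actually uses): every frame a downstream filter keeps referenced stays mapped in the decoder's surface pool, so the pool has to cover the codec's reference frames plus whatever the filter chain is holding:

    /* Illustrative only. */
    static int decoder_pool_size_sketch(int dpb_size)
    {
        return dpb_size   /* reference frames required by the codec          */
               + 2        /* prev + next frames held by a deinterlacer such
                             as yadif_cuda                                   */
               + 1;       /* the frame currently being handed downstream     */
    }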
Philip Langdale
67126555fc avfilter/vf_yadif_cuda: CUDA accelerated yadif deinterlacer
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.

(cherry picked from commit d5272e94ab)
2018-11-03 15:50:12 -07:00
Philip Langdale
041231fcd6 libavfilter/vf_yadif: Make frame management logic and options shareable
I'm writing a cuda implementation of yadif, and while this
obviously has a very different implementation of the actual
filtering, all the frame management is unchanged. To avoid
duplicating that logic, let's make it shareable.

From the perspective of the existing filter, the only real change
is introducing a function pointer for the filter() function so it
can be specified for the specific filter.

(cherry picked from commit 598f0f3927)
2018-11-03 15:45:55 -07:00
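A minimal sketch of that split (hedged: field names are abbreviated and illustrative, not the exact contents of yadif.h): the shared context owns the prev/cur/next bookkeeping and the common options, and each backend only plugs in its own filter() callback:

    #include "libavfilter/avfilter.h"

    /* Hedged sketch of the shared-context idea. */
    typedef struct YadifSharedSketch {
        AVFrame *prev, *cur, *next;   /* frame history, managed by common code */
        int      mode, parity, deint; /* options shared by all backends        */
        void   (*filter)(AVFilterContext *ctx, AVFrame *dst,
                         int parity, int tff); /* CPU yadif or yadif_cuda      */
    } YadifSharedSketch;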
Josh de Kock
765fb1f224 fate/api-h264-slice-test: use cleaner error handling
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 1052578dad)
2018-11-03 12:57:51 -03:00
Josh de Kock
5060a615c7 fate/api-h264-slice-test: don't use ssize_t
Fixes ticket #7521

Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 8096f52049)
2018-11-03 12:57:37 -03:00
Michael Niedermayer
1665ac6a44 RELEASE_NOTES: Based on the version from 4.0
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-02 01:36:21 +01:00
Michael Niedermayer
3c7e973430 Update for 4.1
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-02 01:33:08 +01:00
2499 changed files with 44620 additions and 149810 deletions

1
.gitignore vendored

@@ -36,4 +36,3 @@
/lcov/
/src
/mapfile
/tools/python/__pycache__/


@@ -1,21 +0,0 @@
<james.darnley@gmail.com> <jdarnley@obe.tv>
<jeebjp@gmail.com> <jan.ekstrom@aminocom.com>
<sw@jkqxz.net> <mrt@jkqxz.net>
<u@pkh.me> <cboesch@gopro.com>
<zhilizhao@tencent.com> <quinkblack@foxmail.com>
<zhilizhao@tencent.com> <wantlamy@gmail.com>
<modmaker@google.com> <modmaker-at-google.com@ffmpeg.org>
<stebbins@jetheaddev.com> <jstebbins@jetheaddev.com>
<barryjzhao@tencent.com> <mypopydev@gmail.com>
<barryjzhao@tencent.com> <jun.zhao@intel.com>
<josh@itanimul.li> <joshdk@obe.tv>
<michael@niedermayer.cc> <michaelni@gmx.at>
<linjie.fu@intel.com> <fulinjie@zju.edu.cn>
<ceffmpeg@gmail.com> <cehoyos@ag.or.at>
<ceffmpeg@gmail.com> <cehoyos@rainbow.studorg.tuwien.ac.at>
<ffmpeg@gyani.pro> <gyandoshi@gmail.com>
<atomnuker@gmail.com> <rpehlivanov@obe.tv>
<zhong.li@intel.com> <zhongli_dev@126.com>
<andreas.rheinhardt@gmail.com> <andreas.rheinhardt@googlemail.com>
rcombs <rcombs@rcombs.me> <rodger.combs@gmail.com>
<thilo.borgmann@mail.de> <thilo.borgmann@googlemail.com>


@@ -19,7 +19,7 @@ cache:
directories:
- ffmpeg-samples
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update; fi
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update --all; fi
install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install nasm; fi
script:

229
Changelog

@@ -1,118 +1,127 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version 4.3:
- v360 filter
- Intel QSV-accelerated MJPEG decoding
- Intel QSV-accelerated VP9 decoding
- Support for TrueHD in mp4
- Support AMD AMF encoder on Linux (via Vulkan)
- IMM5 video decoder
- ZeroMQ protocol
- support Sipro ACELP.KELVIN decoding
- streamhash muxer
- sierpinski video source
- scroll video filter
- photosensitivity filter
- anlms filter
- arnndn filter
- bilateral filter
- maskedmin and maskedmax filters
- VDPAU VP9 hwaccel
- median filter
- QSV-accelerated VP9 encoding
- AV1 encoding support via librav1e
- AV1 frame merge bitstream filter
- AV1 Annex B demuxer
- axcorrelate filter
- mvdv decoder
- mvha decoder
- MPEG-H 3D Audio support in mp4
- thistogram filter
- freezeframes filter
- Argonaut Games ADPCM decoder
- Argonaut Games ASF demuxer
- xfade video filter
- xfade_opencl filter
- afirsrc audio filter source
- pad_opencl filter
- Simon & Schuster Interactive ADPCM decoder
- Real War KVAG demuxer
- CDToons video decoder
- siren audio decoder
- Rayman 2 ADPCM decoder
- Rayman 2 APM demuxer
- cas video filter
- High Voltage Software ADPCM decoder
- LEGO Racers ALP (.tun & .pcm) demuxer
- AMQP 0-9-1 protocol (RabbitMQ)
- Vulkan support
- avgblur_vulkan, overlay_vulkan, scale_vulkan and chromaber_vulkan filters
- ADPCM IMA MTF decoder
- FWSE demuxer
- DERF DPCM decoder
- DERF demuxer
- CRI HCA decoder
- CRI HCA demuxer
- overlay_cuda filter
- switch from AvxSynth to AviSynth+ on Linux
- mv30 decoder
- Expanded styling support for 3GPP Timed Text Subtitles (movtext)
- WebP parser
- tmedian filter
- maskedthreshold filter
- Support for muxing pcm and pgs in m2ts
- Cunning Developments ADPCM decoder
- asubboost filter
- Pro Pinball Series Soundbank demuxer
- pcm_rechunk bitstream filter
- scdet filter
- NotchLC decoder
- gradients source video filter
- MediaFoundation encoder wrapper
- untile filter
- Simon & Schuster Interactive ADPCM encoder
- PFM decoder
- dblur video filter
- Real War KVAG muxer
version 4.1.3:
- avcodec/rscc: Check that the to be uncompressed input is large enough
- avformat/movenc: free eac3 private data only when closing the stream
- avcodec/hevcdec: Avoid only partly skiping duplicate first slices
- lavc/bmp: Avoid a heap buffer overwrite for 1bpp input.
- avcodec/mpegpicture: Check size of edge_emu_buffer
- avformat/mov: Fix potential integer overflow in entry check in mov_read_trun()
- avcodec/truemotion2: Fix integer overflow in tm2_null_res_block()
- avcodec/cbs_av1: fix range of values for Mastering Display Color Volume Metadata OBUs
- avcodec/av1_parser: don't abort parsing the first frame if extradata parsing fails
version 4.1.2:
- avcodec/dfa: Check the chunk header is not truncated
- avcodec/clearvideo: Check remaining data in P frames
- avcodec/hevcdec: decode at most one slice reporting being the first in the picture
- avcodec/dvbsubdec: Check object position
- avcodec/cdgraphics: Use ff_set_dimensions()
- avformat/gdv: Check fps
- configure: use vpx_codec_vp8_dx/cx for libvpx-vp8 checking
- configure: add missing pthreads extralibs dependency for libvpx-vp9
- avcodec/mpeg4videodec: Check idx in mpeg4_decode_studio_block()
- avcodec/dxv: Correct integer overflow in get_opcodes()
- avcodec/scpr: Fix use of uninitialized variable
- avcodec/qpeg: Limit copy in qpeg_decode_intra() to the available bytes
- avcodec/aic: Check remaining bits in aic_decode_coeffs()
- avcodec/gdv: Check for truncated tags in decompress_5()
- avcodec/bethsoftvideo: Check block_type
- avcodec/jpeg2000dwt: Fix integer overflow in dwt_decode97_int()
- avcodec/error_resilience: Use a symmetric check for skipping MV estimation
- avcodec/mlpdec: Insuffient typo
- avcodec/zmbv: obtain frame later
- avcodec/jvdec: Check available input space before decode8x8()
- avcodec/h264_direct: Fix overflow in POC comparission
- avformat/webmdashenc: Check id in adaption_sets
- avformat/http: Fix Out-of-Bounds access in process_line()
- avformat/ftp: Fix Out-of-Bounds Access and Information Leak in ftp.c:393
- avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for handling braces
- avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for tag scaning
- avformat/matroskadec: Do not leak queued packets on sync errors
- avcodec/mpeg4videodec: Clear interlaced_dct for studio profile
- avformat/mov: Do not use reference stream in mov_read_sidx() if there is no reference stream
- avcodec/sbrdsp_fixed.c: remove input value limit for sbr_sum_square_c()
- avcodec/prores_ks: Fix luma quantization if q >= MAX_STORED_Q
- avformat/mov: fix hang while seek on a kind of fragmented mp4
- avformat/async: fix assertion condition when draining buffer
- avcodec/cbs_av1: don't call cbs_av1_read_trailing_bits() when no bits remain in the OBU
version 4.2:
- tpad filter
- AV1 decoding support through libdav1d
- dedot filter
- chromashift and rgbashift filters
- freezedetect filter
- truehd_core bitstream filter
- dhav demuxer
- PCM-DVD encoder
- GIF parser
- vividas demuxer
- hymt decoder
- anlmdn filter
- maskfun filter
- hcom demuxer and decoder
- ARBC decoder
- libaribb24 based ARIB STD-B24 caption support (profiles A and C)
- Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
- removed libndi-newtek
- agm decoder
- KUX demuxer
- AV1 frame split bitstream filter
- lscr decoder
- lagfun filter
- asoftclip filter
- Support decoding of HEVC 4:4:4 content in vdpau
- colorhold filter
- xmedian filter
- asr filter
- showspatial multimedia filter
- VP4 video decoder
- IFV demuxer
- derain filter
- deesser filter
- mov muxer writes tracks with unspecified language instead of English by default
- add support for using clang to compile CUDA kernels
version 4.1.1:
- avformat/mov: validate chunk_count vs stsc_data
- avformat/mov: require tfhd to begin parsing trun
- avcodec/pgssubdec: Check for duplicate display segments
- avformat/rtsp: Check number of streams in sdp_parse_line()
- avformat/rtsp: Clear reply in every iteration in ff_rtsp_connect()
- avcodec/rasc: Move ff_get_buffer() after frame checks
- avcodec/rasc: Check uncompressed dlta size
- avcodec/fic: Check that there is input left in fic_decode_block()
- avcodec/ilbcdec: Fix undefined integer overflow lsf2poly()
- avcodec/ilbcdec: Fix integer overflow in construct_vector()
- avcodec/prosumer: Error out if decompress() stops reading data
- avcodec/tiff: Check for 12bit gray fax
- avutil/imgutils: Optimize memset_bytes() by using av_memcpy_backptr()
- avutil/mem: Optimize fill32() by unrolling and using 64bit
- configure: bump year
- avcodec/tests/rangecoder: initialize array to avoid valgrind warning
- avcodec/gdv: Optimize and factorize scaling loops
- avcodec/h264_slice: Fix integer overflow in implicit_weight_table()
- avcodec/exr: set layer_match in all branches
- avcodec/exr: Check for duplicate channel index
- avfilter/vf_tonemap_opencl: Make static tables const
- doc/indevs: fix upto typo
- avcodec/4xm: Fix returned error codes
- avformat/libopenmpt: Fix successfull typo
- avcodec/v4l2_m2m: fix cant typo
- avcodec/mjpegbdec: Fix some misplaced {} and spaces
- avformat/wvdec: detect and error out on WavPack DSD files
- avcodec/mips: Fix failed case: hevc-conformance-AMP_A_Samsung_* when enable msa
- avcodec/fic: Fail on invalid slice size/off
- avcodec/ilbcdec: fix integer overflow in energy
- postproc/postprocess_template: remove FF_REG_sp from clobber list
- postproc/postprocess_template: Avoid using %4 for the threshold compare
- libavformat/mov: Fix NULL-dereference read for some encrypted content.
- avcodec/rpza: Check that there is enough data for all the blocks
- avcodec/rpza: Move frame allocation to a later point
- avcodec/avcodec: Document the data type for AV_PKT_DATA_MPEGTS_STREAM_ID
- avformat/mpegts: Fix side data type for stream id
- tests/fate/filter-video: increase fuzz for fate-filter-refcmp-psnr-rgb
- avcodec/mjpegdec: Fix indention of ljpeg_decode_yuv_scan()
- lavf/id3v2: fail read_apic on EOF reading mimetype
- avcodec/rasc: Check that the number of moves is less than or equal the number of pixels
- avformat/nutenc: Document trailer index assert better
- lavf/mov: ensure only one tkhd per trak
- avcodec/clearvideo: Check remaining input bits in P macro block loop
- avcodec/rasc: Check input space before reading chunk
- avcodec/dxv: Check that there is enough data to decompress
- avcodec/ppc/hevcdsp: Fix build failures with powerpc-linux-gnu-gcc-4.8 with --disable-optimizations
- avcodec/msvideo1: Check for too small dimensions
- avcodec/wmv2dec: Skip I frame if its smaller than 1/8 of the minimal size
- avcodec/msmpeg4dec: Skip frame if its smaller than 1/8 of the minimal size
- avcodec/truemotion2rt: Fix rounding in input size check
- avcodec/diracdec: Check component quant
- avcodec/tiff: Limit filtering to decoded data
- avcodec/truemotion2: fix integer overflows in tm2_low_chroma()
- avcodec/pngdec: Check compression method
- fftools/ffmpeg: Repair reinit_filter feature
- avcodec/shorten: Fix integer overflow with offset
- avcodec/imm4: Use ff_set_dimensions()
- h264_redundant_pps: Fix logging context
- avfilter/af_asetnsamples: fix last frame props
- cbs_av1: Fix reading of overlong uvlc codes
- avcodec/cbs_av1: fix parsing delta_frame_id_minus1
- avfilter/vf_overlay: fix filtering with negative y
- avformat/movenc: get number of written bytes from bitstream writer
- avformat/movenc: fix size calculation in mov_write_eac3_tag()
- avfilter/vf_overlay: fix crash with negative y
- avcodec/mpeg_er: fix clearing chroma blocks for 422 and 444
- avfilter/af_afade: fix duration maximum
- avfilter/vf_fade: fix start/duration max value
- avcodec/cbs_av1: fix parsing signed integer values
- avcodec/cbs_av1: fix storage size for segmentation_params feature_value fields
- configure: Add missing xlib dependency for VAAPI X11 code
- avcodec/hevcdec: fix non-ref frame judgement
version 4.1:


@@ -1,4 +1,4 @@
## Installing FFmpeg
#Installing FFmpeg:
1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.


@@ -21,11 +21,10 @@ Specifically, the GPL parts of FFmpeg are:
- `compat/solaris/make_sunver.pl`
- `doc/t2h.pm`
- `doc/texi2pod.pl`
- `libswresample/tests/swresample.c`
- `libswresample/swresample-test.c`
- `tests/checkasm/*`
- `tests/tiny_ssim.c`
- the following filters in libavfilter:
- `signature_lookup.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
@@ -35,13 +34,13 @@ Specifically, the GPL parts of FFmpeg are:
- `vf_eq.c`
- `vf_find_rect.c`
- `vf_fspp.c`
- `vf_geq.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_interlace.c`
- `vf_kerndeint.c`
- `vf_lensfun.c` (GPL version 3 or later)
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_nnedi.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
@@ -50,14 +49,12 @@ Specifically, the GPL parts of FFmpeg are:
- `vf_pullup.c`
- `vf_repeatfields.c`
- `vf_sab.c`
- `vf_signature.c`
- `vf_smartblur.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vf_vaguedenoiser.c`
- `vsrc_mptestsrc.c`
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
@@ -83,47 +80,41 @@ affect the licensing of binaries resulting from the combination.
### Compatible libraries
The following libraries are under GPL version 2:
- avisynth
The following libraries are under GPL:
- frei0r
- libcdio
- libdavs2
- librubberband
- libvidstab
- libx264
- libx265
- libxavs
- libxavs2
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
The following libraries are under LGPL version 3:
- gmp
- libaribb24
- liblensfun
When combining them with FFmpeg, use the configure option `--enable-version3` to
upgrade FFmpeg to the LGPL v3.
The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License
2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with
The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
The smbclient library is under the GPL v3, to combine it with FFmpeg,
the options `--enable-gpl` and `--enable-version3` have to be passed to
configure to upgrade FFmpeg to the GPL v3.
### Incompatible libraries
There are certain libraries you can combine with FFmpeg whose licenses are not
compatible with the GPL and/or the LGPL. If you wish to enable these
libraries, even in circumstances that their license may be incompatible, pass
`--enable-nonfree` to configure. This will cause the resulting binary to be
`--enable-nonfree` to configure. But note that if you enable any of these
libraries the resulting binary will be under a complex license mix that is
more restrictive than the LGPL and that may result in additional obligations.
It is possible that these restrictions cause the resulting binary to be
unredistributable.
The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are
incompatible with the GPLv2 and v3. To the best of our knowledge, they are
compatible with the LGPL.
The NVENC library, while its header file is licensed under the compatible MIT
license, requires a proprietary binary blob at run time, and is deemed to be
incompatible with the GPL. We are not certain if it is compatible with the
LGPL, but we require `--enable-nonfree` even with LGPL configurations in case
it is not.


@@ -39,7 +39,7 @@ QuickTime faststart:
Miscellaneous Areas
===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Gyan Doshi
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan, Gyan Doshi
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov
presets Robert Swain
metadata subsystem Aurelien Jacobs
@@ -52,9 +52,9 @@ Communication
website Deby Barbara Lepage
fate.ffmpeg.org Timothy Gu
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos
Patchwork Andriy Gelman
mailing lists Baptiste Coudurier
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan, Reynaldo H. Verdejo Pinochet
Launchpad Timothy Gu
ffmpeg-security Andreas Cadhalpun, Carl Eugen Hoyos, Clément Bœsch, Michael Niedermayer, Reimar Doeffinger, Rodger Combs, wm4
@@ -78,7 +78,6 @@ Other:
float_dsp Loren Merritt
hash Reimar Doeffinger
hwcontext_cuda* Timo Rothenpieler
hwcontext_vulkan* Lynne
intfloat* Michael Niedermayer
integer.c, integer.h Michael Niedermayer
lzo Reimar Doeffinger
@@ -89,7 +88,6 @@ Other:
rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer
tx* Lynne
libavcodec
@@ -145,7 +143,6 @@ Codecs:
asv* Michael Niedermayer
atrac3plus* Maxim Poliakovski
audiotoolbox* Rodger Combs
avs2* Huiwen Ren
bgmc.c, bgmc.h Thilo Borgmann
binkaudio.c Peter Ross
cavs* Stefan Gehrer
@@ -170,6 +167,7 @@ Codecs:
eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann
exr.c Martin Vignali
ffv1* Michael Niedermayer
ffwavesynth.c Nicolas George
fifo.c Jan Sebechlebsky
@@ -191,17 +189,14 @@ Codecs:
libcelt_dec.c Nicolas George
libcodec2.c Tomas Härdin
libdirac* David Conrad
libdavs2.c Huiwen Ren
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenh264enc.c Martin Storsjo, Linjie Fu
libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw
libtheoraenc.c David Conrad
libvorbis.c David Conrad
libvpx* James Zern
libxavs.c Stefan Gehrer
libxavs2.c Huiwen Ren
libzvbi-teletextdec.c Marton Balint
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
@@ -217,7 +212,6 @@ Codecs:
msvideo1.c Mike Melanson
nuv.c Reimar Doeffinger
nvdec*, nvenc* Timo Rothenpieler
omx.c Martin Storsjo, Aman Gupta
opus* Rostislav Pehlivanov
paf.* Paul B Mahol
pcx.c Ivo van Poorten
@@ -366,15 +360,12 @@ Filters:
vf_ssim.c Paul B Mahol
vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol
vf_tonemap_opencl.c Ruiling Song
vf_yadif.c Michael Niedermayer
vf_zoompan.c Paul B Mahol
Sources:
vsrc_mandelbrot.c Michael Niedermayer
dnn Yejun Guo
libavformat
===========
@@ -436,9 +427,9 @@ Muxers/Demuxers:
lmlm4.c Ivo van Poorten
lvfdec.c Paul B Mahol
lxfdec.c Tomas Härdin
matroska.c Aurelien Jacobs, Andreas Rheinhardt
matroskadec.c Aurelien Jacobs, Andreas Rheinhardt
matroskaenc.c David Conrad, Andreas Rheinhardt
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
@@ -453,7 +444,7 @@ Muxers/Demuxers:
mpegtsenc.c Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier, Tomas Härdin
mxf* Baptiste Coudurier
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
@@ -461,6 +452,7 @@ Muxers/Demuxers:
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
oggparse*.c David Conrad
oggparsedaala* Rostislav Pehlivanov
oma.c Maxim Poliakovski
paf.c Paul B Mahol
psxstr.c Mike Melanson
@@ -508,7 +500,6 @@ Protocols:
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libssh.c Lukasz Marek
libzmq.c Andriy Gelman
mms*.c Ronald S. Bultje
udp.c Luca Abeni
icecast.c Marvin Scholz
@@ -535,7 +526,6 @@ Alpha Falk Hueffner
MIPS Manojkumar Bhosale, Shiyou Yin
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC Lauri Kasanen
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver, Hendrik Leppkes
@@ -564,7 +554,6 @@ Joakim Plate
Jun Zhao
Kieran Kunhya
Kirill Gavrilov
Limin Wang
Martin Storsjö
Panagiotis Issaris
Pedro Arthur
@@ -608,14 +597,12 @@ James Almer 7751 2E8C FD94 A169 57E6 9A7A 1463 01AD 7376 59E0
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lou Logan (llogan) 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
Lynne FE50 139C 6805 72CA FD52 1F8D A2FE A5F0 3F03 4464
Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Nikolay Aleksandrov 8978 1D8C FB71 588E 4B27 EAA8 C4F0 B5FC E011 13B1
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E
Ramiro Polla 7859 C65B 751B 1179 792E DAE8 8E95 8B2F 9B6C 5700
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
@@ -624,7 +611,6 @@ Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Steinar H. Gunderson C2E9 004F F028 C18E 4EAD DB83 7F61 7561 7797 8F76
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Thilo Borgmann (thilo) CE1D B7F4 4D20 FC3A DD9F FE5A 257C 5B8F 1D20 B92F
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin (thardin) A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551


@@ -50,12 +50,6 @@ $(TOOLS): %$(EXESUF): %.o
target_dec_%_fuzzer$(EXESUF): target_dec_%_fuzzer.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
tools/target_bsf_%_fuzzer$(EXESUF): tools/target_bsf_%_fuzzer.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
tools/target_dem_fuzzer$(EXESUF): tools/target_dem_fuzzer.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
tools/sofa2wavs$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)
@@ -141,7 +135,7 @@ uninstall-data:
clean::
$(RM) $(CLEANSUFFIXES)
$(RM) $(addprefix compat/,$(CLEANSUFFIXES)) $(addprefix compat/*/,$(CLEANSUFFIXES)) $(addprefix compat/*/*/,$(CLEANSUFFIXES))
$(RM) $(addprefix compat/,$(CLEANSUFFIXES)) $(addprefix compat/*/,$(CLEANSUFFIXES))
$(RM) -r coverage-html
$(RM) -rf coverage.info coverage.info.in lcov
@@ -151,7 +145,6 @@ distclean:: clean
version.h libavutil/ffversion.h libavcodec/codec_names.h \
libavcodec/bsf_list.c libavformat/protocol_list.c \
libavcodec/codec_list.c libavcodec/parser_list.c \
libavfilter/filter_list.c libavdevice/indev_list.c libavdevice/outdev_list.c \
libavformat/muxer_list.c libavformat/demuxer_list.c
ifeq ($(SRC_LINK),src)
$(RM) src
@@ -166,7 +159,7 @@ check: all alltools examples testprogs fate
include $(SRC_PATH)/tests/Makefile
$(sort $(OUTDIRS)):
$(sort $(OBJDIRS)):
$(Q)mkdir -p $@
# Dummy rule to stop make trying to rebuild removed or renamed headers


@@ -1 +1 @@
4.3
4.1.3


@@ -1,10 +1,10 @@
┌────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 4.3 "4:3" │
└────────────────────────────────────┘
┌─────────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 4.1 "al-Khwarizmi" │
└─────────────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 4.3 "4:3", about 10
months after the release of FFmpeg 4.2.
The FFmpeg Project proudly presents FFmpeg 4.1 "al-Khwarizmi", about 6
months after the release of FFmpeg 4.0.
A complete Changelog is available at the root of the project, and the
complete Git history on https://git.ffmpeg.org/gitweb/ffmpeg.git

1064
compat/avisynth/avisynth_c.h Normal file

File diff suppressed because it is too large


@@ -0,0 +1,62 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CAPI_H
#define AVS_CAPI_H
#ifdef __cplusplus
# define EXTERN_C extern "C"
#else
# define EXTERN_C
#endif
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef BUILDING_AVSCORE
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
#endif //AVS_CAPI_H


@@ -0,0 +1,55 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CONFIG_H
#define AVS_CONFIG_H
// Undefine this to get cdecl calling convention
#define AVSC_USE_STDCALL 1
// NOTE TO PLUGIN AUTHORS:
// Because FRAME_ALIGN can be substantially higher than the alignment
// a plugin actually needs, plugins should not use FRAME_ALIGN to check for
// alignment. They should always request the exact alignment value they need.
// This is to make sure that plugins work over the widest range of AviSynth
// builds possible.
#define FRAME_ALIGN 32
#if defined(_M_AMD64) || defined(__x86_64)
# define X86_64
#elif defined(_M_IX86) || defined(__i386__)
# define X86_32
#else
# error Unsupported CPU architecture.
#endif
#endif //AVS_CONFIG_H


@@ -0,0 +1,51 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_TYPES_H
#define AVS_TYPES_H
// Define all types necessary for interfacing with avisynth.dll
// Raster types used by VirtualDub & Avisynth
typedef unsigned int Pixel32;
typedef unsigned char BYTE;
// Audio Sample information
typedef float SFLOAT;
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
#endif //AVS_TYPES_H


@@ -0,0 +1,728 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
// MA 02110-1301 USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef __AVXSYNTH_C__
#define __AVXSYNTH_C__
#include "windowsPorts/windows2linux.h"
#include <stdarg.h>
#ifdef __cplusplus
# define EXTERN_C extern "C"
#else
# define EXTERN_C
#endif
#define AVSC_USE_STDCALL 1
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef AVISYNTH_C_EXPORTS
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
/////////////////////////////////////////////////////////////////////
//
// Constants
//
#ifndef __AVXSYNTH_H__
enum { AVISYNTH_INTERFACE_VERSION = 3 };
#endif
enum {AVS_SAMPLE_INT8 = 1<<0,
AVS_SAMPLE_INT16 = 1<<1,
AVS_SAMPLE_INT24 = 1<<2,
AVS_SAMPLE_INT32 = 1<<3,
AVS_SAMPLE_FLOAT = 1<<4};
enum {AVS_PLANAR_Y=1<<0,
AVS_PLANAR_U=1<<1,
AVS_PLANAR_V=1<<2,
AVS_PLANAR_ALIGNED=1<<3,
AVS_PLANAR_Y_ALIGNED=AVS_PLANAR_Y|AVS_PLANAR_ALIGNED,
AVS_PLANAR_U_ALIGNED=AVS_PLANAR_U|AVS_PLANAR_ALIGNED,
AVS_PLANAR_V_ALIGNED=AVS_PLANAR_V|AVS_PLANAR_ALIGNED};
// Colorspace properties.
enum {AVS_CS_BGR = 1<<28,
AVS_CS_YUV = 1<<29,
AVS_CS_INTERLEAVED = 1<<30,
AVS_CS_PLANAR = 1<<31};
// Specific colorformats
enum {
AVS_CS_UNKNOWN = 0,
AVS_CS_BGR24 = 1<<0 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
AVS_CS_BGR32 = 1<<1 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
AVS_CS_YUY2 = 1<<2 | AVS_CS_YUV | AVS_CS_INTERLEAVED,
AVS_CS_YV12 = 1<<3 | AVS_CS_YUV | AVS_CS_PLANAR, // y-v-u, planar
AVS_CS_I420 = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR, // y-u-v, planar
AVS_CS_IYUV = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR // same as above
};
enum {
AVS_IT_BFF = 1<<0,
AVS_IT_TFF = 1<<1,
AVS_IT_FIELDBASED = 1<<2};
enum {
AVS_FILTER_TYPE=1,
AVS_FILTER_INPUT_COLORSPACE=2,
AVS_FILTER_OUTPUT_TYPE=9,
AVS_FILTER_NAME=4,
AVS_FILTER_AUTHOR=5,
AVS_FILTER_VERSION=6,
AVS_FILTER_ARGS=7,
AVS_FILTER_ARGS_INFO=8,
AVS_FILTER_ARGS_DESCRIPTION=10,
AVS_FILTER_DESCRIPTION=11};
enum { //SUBTYPES
AVS_FILTER_TYPE_AUDIO=1,
AVS_FILTER_TYPE_VIDEO=2,
AVS_FILTER_OUTPUT_TYPE_SAME=3,
AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4};
enum {
AVS_CACHE_NOTHING=0,
AVS_CACHE_RANGE=1,
AVS_CACHE_ALL=2,
AVS_CACHE_AUDIO=3,
AVS_CACHE_AUDIO_NONE=4,
AVS_CACHE_AUDIO_AUTO=5
};
#define AVS_FRAME_ALIGN 16
typedef struct AVS_Clip AVS_Clip;
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
/////////////////////////////////////////////////////////////////////
//
// AVS_VideoInfo
//
// AVS_VideoInfo is layed out identicly to VideoInfo
typedef struct AVS_VideoInfo {
int width, height; // width=0 means no video
unsigned fps_numerator, fps_denominator;
int num_frames;
int pixel_type;
int audio_samples_per_second; // 0 means no audio
int sample_type;
INT64 num_audio_samples;
int nchannels;
// Imagetype properties
int image_type;
} AVS_VideoInfo;
// useful functions of the above
AVSC_INLINE int avs_has_video(const AVS_VideoInfo * p)
{ return (p->width!=0); }
AVSC_INLINE int avs_has_audio(const AVS_VideoInfo * p)
{ return (p->audio_samples_per_second!=0); }
AVSC_INLINE int avs_is_rgb(const AVS_VideoInfo * p)
{ return !!(p->pixel_type&AVS_CS_BGR); }
AVSC_INLINE int avs_is_rgb24(const AVS_VideoInfo * p)
{ return (p->pixel_type&AVS_CS_BGR24)==AVS_CS_BGR24; } // Clear out additional properties
AVSC_INLINE int avs_is_rgb32(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_BGR32) == AVS_CS_BGR32 ; }
AVSC_INLINE int avs_is_yuv(const AVS_VideoInfo * p)
{ return !!(p->pixel_type&AVS_CS_YUV ); }
AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; }
AVSC_INLINE int avs_is_yv12(const AVS_VideoInfo * p)
{ return ((p->pixel_type & AVS_CS_YV12) == AVS_CS_YV12)||((p->pixel_type & AVS_CS_I420) == AVS_CS_I420); }
AVSC_INLINE int avs_is_color_space(const AVS_VideoInfo * p, int c_space)
{ return ((p->pixel_type & c_space) == c_space); }
AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property)
{ return ((p->pixel_type & property)==property ); }
AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p)
{ return !!(p->pixel_type & AVS_CS_PLANAR); }
AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_FIELDBASED); }
AVSC_INLINE int avs_is_parity_known(const AVS_VideoInfo * p)
{ return ((p->image_type & AVS_IT_FIELDBASED)&&(p->image_type & (AVS_IT_BFF | AVS_IT_TFF))); }
AVSC_INLINE int avs_is_bff(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_BFF); }
AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_TFF); }
AVSC_INLINE int avs_bits_per_pixel(const AVS_VideoInfo * p)
{
switch (p->pixel_type) {
case AVS_CS_BGR24: return 24;
case AVS_CS_BGR32: return 32;
case AVS_CS_YUY2: return 16;
case AVS_CS_YV12:
case AVS_CS_I420: return 12;
default: return 0;
}
}
AVSC_INLINE int avs_bytes_from_pixels(const AVS_VideoInfo * p, int pixels)
{ return pixels * (avs_bits_per_pixel(p)>>3); } // Will work on planar images, but will return only luma planes
AVSC_INLINE int avs_row_size(const AVS_VideoInfo * p)
{ return avs_bytes_from_pixels(p,p->width); } // Also only returns first plane on planar images
AVSC_INLINE int avs_bmp_size(const AVS_VideoInfo * vi)
{ if (avs_is_planar(vi)) {int p = vi->height * ((avs_row_size(vi)+3) & ~3); p+=p>>1; return p; } return vi->height * ((avs_row_size(vi)+3) & ~3); }
AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p)
{ return p->audio_samples_per_second; }
AVSC_INLINE int avs_bytes_per_channel_sample(const AVS_VideoInfo * p)
{
switch (p->sample_type) {
case AVS_SAMPLE_INT8: return sizeof(signed char);
case AVS_SAMPLE_INT16: return sizeof(signed short);
case AVS_SAMPLE_INT24: return 3;
case AVS_SAMPLE_INT32: return sizeof(signed int);
case AVS_SAMPLE_FLOAT: return sizeof(float);
default: return 0;
}
}
AVSC_INLINE int avs_bytes_per_audio_sample(const AVS_VideoInfo * p)
{ return p->nchannels*avs_bytes_per_channel_sample(p);}
AVSC_INLINE INT64 avs_audio_samples_from_frames(const AVS_VideoInfo * p, INT64 frames)
{ return ((INT64)(frames) * p->audio_samples_per_second * p->fps_denominator / p->fps_numerator); }
AVSC_INLINE int avs_frames_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
{ return (int)(samples * (INT64)p->fps_numerator / (INT64)p->fps_denominator / (INT64)p->audio_samples_per_second); }
AVSC_INLINE INT64 avs_audio_samples_from_bytes(const AVS_VideoInfo * p, INT64 bytes)
{ return bytes / avs_bytes_per_audio_sample(p); }
AVSC_INLINE INT64 avs_bytes_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
{ return samples * avs_bytes_per_audio_sample(p); }
AVSC_INLINE int avs_audio_channels(const AVS_VideoInfo * p)
{ return p->nchannels; }
AVSC_INLINE int avs_sample_type(const AVS_VideoInfo * p)
{ return p->sample_type;}
// useful mutator
AVSC_INLINE void avs_set_property(AVS_VideoInfo * p, int property)
{ p->image_type|=property; }
AVSC_INLINE void avs_clear_property(AVS_VideoInfo * p, int property)
{ p->image_type&=~property; }
AVSC_INLINE void avs_set_field_based(AVS_VideoInfo * p, int isfieldbased)
{ if (isfieldbased) p->image_type|=AVS_IT_FIELDBASED; else p->image_type&=~AVS_IT_FIELDBASED; }
AVSC_INLINE void avs_set_fps(AVS_VideoInfo * p, unsigned numerator, unsigned denominator)
{
unsigned x=numerator, y=denominator;
while (y) { // find gcd
unsigned t = x%y; x = y; y = t;
}
p->fps_numerator = numerator/x;
p->fps_denominator = denominator/x;
}
AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y)
{
return (x->pixel_type == y->pixel_type)
|| (avs_is_yv12(x) && avs_is_yv12(y));
}
/////////////////////////////////////////////////////////////////////
//
// AVS_VideoFrame
//
// VideoFrameBuffer holds information about a memory block which is used
// for video data. For efficiency, instances of this class are not deleted
// when the refcount reaches zero; instead they're stored in a linked list
// to be reused. The instances are deleted when the corresponding AVS
// file is closed.
// AVS_VideoFrameBuffer is layed out identicly to VideoFrameBuffer
// DO NOT USE THIS STRUCTURE DIRECTLY
typedef struct AVS_VideoFrameBuffer {
unsigned char * data;
int data_size;
// sequence_number is incremented every time the buffer is changed, so
// that stale views can tell they're no longer valid.
long sequence_number;
long refcount;
} AVS_VideoFrameBuffer;
// VideoFrame holds a "window" into a VideoFrameBuffer.
// AVS_VideoFrame is layed out identicly to IVideoFrame
// DO NOT USE THIS STRUCTURE DIRECTLY
typedef struct AVS_VideoFrame {
int refcount;
AVS_VideoFrameBuffer * vfb;
int offset, pitch, row_size, height, offsetU, offsetV, pitchUV; // U&V offsets are from top of picture.
} AVS_VideoFrame;
// Access functions for AVS_VideoFrame
AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) {
return p->pitch;}
AVSC_INLINE int avs_get_pitch_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V: return p->pitchUV;}
return p->pitch;}
AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) {
return p->row_size; }
AVSC_INLINE int avs_get_row_size_p(const AVS_VideoFrame * p, int plane) {
int r;
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->row_size>>1;
else return 0;
case AVS_PLANAR_U_ALIGNED: case AVS_PLANAR_V_ALIGNED:
if (p->pitchUV) {
r = ((p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)) )>>1; // Aligned rowsize
if (r < p->pitchUV)
return r;
return p->row_size>>1;
} else return 0;
case AVS_PLANAR_Y_ALIGNED:
r = (p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
if (r <= p->pitch)
return r;
return p->row_size;
}
return p->row_size;
}
AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) {
return p->height;}
AVSC_INLINE int avs_get_height_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->height>>1;
return 0;
}
return p->height;}
AVSC_INLINE const unsigned char* avs_get_read_ptr(const AVS_VideoFrame * p) {
return p->vfb->data + p->offset;}
AVSC_INLINE const unsigned char* avs_get_read_ptr_p(const AVS_VideoFrame * p, int plane)
{
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;}
}
AVSC_INLINE int avs_is_writable(const AVS_VideoFrame * p) {
return (p->refcount == 1 && p->vfb->refcount == 1);}
AVSC_INLINE unsigned char* avs_get_write_ptr(const AVS_VideoFrame * p)
{
if (avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else
return 0;
}
AVSC_INLINE unsigned char* avs_get_write_ptr_p(const AVS_VideoFrame * p, int plane)
{
if (plane==AVS_PLANAR_Y && avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else if (plane==AVS_PLANAR_Y) {
return 0;
} else {
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;
}
}
}
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *);
// makes a shallow copy of a video frame
AVSC_API(AVS_VideoFrame *, avs_copy_video_frame)(AVS_VideoFrame *);
#if defined __cplusplus
}
#endif // __cplusplus
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE void avs_release_frame(AVS_VideoFrame * f)
{avs_release_video_frame(f);}
AVSC_INLINE AVS_VideoFrame * avs_copy_frame(AVS_VideoFrame * f)
{return avs_copy_video_frame(f);}
#endif
/////////////////////////////////////////////////////////////////////
//
// AVS_Value
//
// Treat AVS_Value as a fat pointer. That is use avs_copy_value
// and avs_release_value appropiaty as you would if AVS_Value was
// a pointer.
// To maintain source code compatibility with future versions of the
// avisynth_c API don't use the AVS_Value directly. Use the helper
// functions below.
// AVS_Value is layed out identicly to AVSValue
typedef struct AVS_Value AVS_Value;
struct AVS_Value {
short type; // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid, or 'l'ong
// for some function e'rror
short array_size;
union {
void * clip; // do not use directly, use avs_take_clip
char boolean;
int integer;
INT64 integer64; // match addition of __int64 to avxplugin.h
float floating_pt;
const char * string;
const AVS_Value * array;
} d;
};
// AVS_Value should be initilized with avs_void.
// Should also set to avs_void after the value is released
// with avs_copy_value. Consider it the equalvent of setting
// a pointer to NULL
static const AVS_Value avs_void = {'v'};
AVSC_API(void, avs_copy_value)(AVS_Value * dest, AVS_Value src);
AVSC_API(void, avs_release_value)(AVS_Value);
AVSC_INLINE int avs_defined(AVS_Value v) { return v.type != 'v'; }
AVSC_INLINE int avs_is_clip(AVS_Value v) { return v.type == 'c'; }
AVSC_INLINE int avs_is_bool(AVS_Value v) { return v.type == 'b'; }
AVSC_INLINE int avs_is_int(AVS_Value v) { return v.type == 'i'; }
AVSC_INLINE int avs_is_float(AVS_Value v) { return v.type == 'f' || v.type == 'i'; }
AVSC_INLINE int avs_is_string(AVS_Value v) { return v.type == 's'; }
AVSC_INLINE int avs_is_array(AVS_Value v) { return v.type == 'a'; }
AVSC_INLINE int avs_is_error(AVS_Value v) { return v.type == 'e'; }
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_take_clip)(AVS_Value, AVS_ScriptEnvironment *);
AVSC_API(void, avs_set_to_clip)(AVS_Value *, AVS_Clip *);
#if defined __cplusplus
}
#endif // __cplusplus
AVSC_INLINE int avs_as_bool(AVS_Value v)
{ return v.d.boolean; }
AVSC_INLINE int avs_as_int(AVS_Value v)
{ return v.d.integer; }
AVSC_INLINE const char * avs_as_string(AVS_Value v)
{ return avs_is_error(v) || avs_is_string(v) ? v.d.string : 0; }
AVSC_INLINE double avs_as_float(AVS_Value v)
{ return avs_is_int(v) ? v.d.integer : v.d.floating_pt; }
AVSC_INLINE const char * avs_as_error(AVS_Value v)
{ return avs_is_error(v) ? v.d.string : 0; }
AVSC_INLINE const AVS_Value * avs_as_array(AVS_Value v)
{ return v.d.array; }
AVSC_INLINE int avs_array_size(AVS_Value v)
{ return avs_is_array(v) ? v.array_size : 1; }
AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
{ return avs_is_array(v) ? v.d.array[index] : v; }
// only use these functions on am AVS_Value that does not already have
// an active value. Remember, treat AVS_Value as a fat pointer.
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
{ AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
{ AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
{ AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
{ AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v;}
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
{ AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
{ AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
#endif
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
{ AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
/////////////////////////////////////////////////////////////////////
//
// AVS_Clip
//
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_clip)(AVS_Clip *);
AVSC_API(AVS_Clip *, avs_copy_clip)(AVS_Clip *);
AVSC_API(const char *, avs_clip_get_error)(AVS_Clip *); // return 0 if no error
AVSC_API(const AVS_VideoInfo *, avs_get_video_info)(AVS_Clip *);
AVSC_API(int, avs_get_version)(AVS_Clip *);
AVSC_API(AVS_VideoFrame *, avs_get_frame)(AVS_Clip *, int n);
// The returned video frame must be released with avs_release_video_frame
AVSC_API(int, avs_get_parity)(AVS_Clip *, int n);
// return field parity if field_based, else parity of first field in frame
AVSC_API(int, avs_get_audio)(AVS_Clip *, void * buf,
INT64 start, INT64 count);
// start and count are in samples
AVSC_API(int, avs_set_cache_hints)(AVS_Clip *,
int cachehints, size_t frame_range);
#if defined __cplusplus
}
#endif // __cplusplus
// This is the callback type used by avs_add_function
typedef AVS_Value (AVSC_CC * AVS_ApplyFunc)
(AVS_ScriptEnvironment *, AVS_Value args, void * user_data);
typedef struct AVS_FilterInfo AVS_FilterInfo;
struct AVS_FilterInfo
{
// these members should not be modified outside of the AVS_ApplyFunc callback
AVS_Clip * child;
AVS_VideoInfo vi;
AVS_ScriptEnvironment * env;
AVS_VideoFrame * (AVSC_CC * get_frame)(AVS_FilterInfo *, int n);
int (AVSC_CC * get_parity)(AVS_FilterInfo *, int n);
int (AVSC_CC * get_audio)(AVS_FilterInfo *, void * buf,
INT64 start, INT64 count);
int (AVSC_CC * set_cache_hints)(AVS_FilterInfo *, int cachehints,
int frame_range);
void (AVSC_CC * free_filter)(AVS_FilterInfo *);
// Should be set when ever there is an error to report.
// It is cleared before any of the above methods are called
const char * error;
// this is to store whatever and may be modified at will
void * user_data;
};
// Create a new filter
// fi is set to point to the AVS_FilterInfo so that you can
// modify it once it is initilized.
// store_child should generally be set to true. If it is not
// set than ALL methods (the function pointers) must be defined
// If it is set than you do not need to worry about freeing the child
// clip.
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_new_c_filter)(AVS_ScriptEnvironment * e,
AVS_FilterInfo * * fi,
AVS_Value child, int store_child);
#if defined __cplusplus
}
#endif // __cplusplus
/////////////////////////////////////////////////////////////////////
//
// AVS_ScriptEnvironment
//
// For GetCPUFlags. These are backwards-compatible with those in VirtualDub.
enum {
/* slowest CPU to support extension */
AVS_CPU_FORCE = 0x01, // N/A
AVS_CPU_FPU = 0x02, // 386/486DX
AVS_CPU_MMX = 0x04, // P55C, K6, PII
AVS_CPU_INTEGER_SSE = 0x08, // PIII, Athlon
AVS_CPU_SSE = 0x10, // PIII, Athlon XP/MP
AVS_CPU_SSE2 = 0x20, // PIV, Hammer
AVS_CPU_3DNOW = 0x40, // K6-2
AVS_CPU_3DNOW_EXT = 0x80, // Athlon
AVS_CPU_X86_64 = 0xA0, // Hammer (note: equiv. to 3DNow + SSE2,
// which only Hammer will have anyway)
};
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error
AVSC_API(long, avs_get_cpu_flags)(AVS_ScriptEnvironment *);
AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version);
AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length);
AVSC_API(char *, avs_sprintf)(AVS_ScriptEnvironment *, const char * fmt, ...);
AVSC_API(char *, avs_vsprintf)(AVS_ScriptEnvironment *, const char * fmt, va_list val);
// note: val is really a va_list; I hope everyone typedefs va_list to a pointer
AVSC_API(int, avs_add_function)(AVS_ScriptEnvironment *,
const char * name, const char * params,
AVS_ApplyFunc apply, void * user_data);
AVSC_API(int, avs_function_exists)(AVS_ScriptEnvironment *, const char * name);
AVSC_API(AVS_Value, avs_invoke)(AVS_ScriptEnvironment *, const char * name,
AVS_Value args, const char** arg_names);
// The returned value must be be released with avs_release_value
AVSC_API(AVS_Value, avs_get_var)(AVS_ScriptEnvironment *, const char* name);
// The returned value must be be released with avs_release_value
AVSC_API(int, avs_set_var)(AVS_ScriptEnvironment *, const char* name, AVS_Value val);
AVSC_API(int, avs_set_global_var)(AVS_ScriptEnvironment *, const char* name, const AVS_Value val);
//void avs_push_context(AVS_ScriptEnvironment *, int level=0);
//void avs_pop_context(AVS_ScriptEnvironment *);
AVSC_API(AVS_VideoFrame *, avs_new_video_frame_a)(AVS_ScriptEnvironment *,
const AVS_VideoInfo * vi, int align);
// align should be at least 16
#if defined __cplusplus
}
#endif // __cplusplus
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE
AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
AVSC_INLINE
AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
#endif
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(int, avs_make_writable)(AVS_ScriptEnvironment *, AVS_VideoFrame * * pvf);
AVSC_API(void, avs_bit_blt)(AVS_ScriptEnvironment *, unsigned char* dstp, int dst_pitch, const unsigned char* srcp, int src_pitch, int row_size, int height);
typedef void (AVSC_CC *AVS_ShutdownFunc)(void* user_data, AVS_ScriptEnvironment * env);
AVSC_API(void, avs_at_exit)(AVS_ScriptEnvironment *, AVS_ShutdownFunc function, void * user_data);
AVSC_API(AVS_VideoFrame *, avs_subframe)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height);
// The returned video frame must be be released
AVSC_API(int, avs_set_memory_max)(AVS_ScriptEnvironment *, int mem);
AVSC_API(int, avs_set_working_dir)(AVS_ScriptEnvironment *, const char * newdir);
// avisynth.dll exports this; it's a way to use it as a library, without
// writing an AVS script or without going through AVIFile.
AVSC_API(AVS_ScriptEnvironment *, avs_create_script_environment)(int version);
#if defined __cplusplus
}
#endif // __cplusplus
// this symbol is the entry point for the plugin and must
// be defined
AVSC_EXPORT
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment* env);
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_delete_script_environment)(AVS_ScriptEnvironment *);
AVSC_API(AVS_VideoFrame *, avs_subframe_planar)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height, int rel_offsetU, int rel_offsetV, int new_pitchUV);
// The returned video frame must be be released
#if defined __cplusplus
}
#endif // __cplusplus
#endif //__AVXSYNTH_C__
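For orientation, the header above is complete enough that a minimal consumer can be sketched from its declarations alone. Hedged sketch: the include path and script name are assumptions, error handling is abbreviated, and the symbols resolve against the AviSynth/AvxSynth runtime library at link time:

    #include <stdio.h>
    #include "compat/avisynth/avxsynth_c.h"   /* assumed include path */

    int main(void)
    {
        AVS_ScriptEnvironment *env =
            avs_create_script_environment(AVISYNTH_INTERFACE_VERSION);
        AVS_Value arg = avs_new_value_string("input.avs");    /* made-up script */
        AVS_Value res = avs_invoke(env, "Import", arg, NULL); /* run the script */

        if (!avs_is_error(res)) {
            AVS_Clip *clip = avs_take_clip(res, env);
            const AVS_VideoInfo *vi = avs_get_video_info(clip);
            AVS_VideoFrame *frame = avs_get_frame(clip, 0);   /* frame 0 */
            printf("%dx%d, pitch %d\n", vi->width, vi->height,
                   avs_get_pitch(frame));
            avs_release_video_frame(frame);
            avs_release_clip(clip);
        }
        avs_release_value(res);
        avs_delete_script_environment(env);
        return 0;
    }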


@@ -0,0 +1,85 @@
#ifndef __DATA_TYPE_CONVERSIONS_H__
#define __DATA_TYPE_CONVERSIONS_H__
#include <stdint.h>
#include <wchar.h>
#ifdef __cplusplus
namespace avxsynth {
#endif // __cplusplus
typedef int64_t __int64;
typedef int32_t __int32;
#ifdef __cplusplus
typedef bool BOOL;
#else
typedef uint32_t BOOL;
#endif // __cplusplus
typedef void* HMODULE;
typedef void* LPVOID;
typedef void* PVOID;
typedef PVOID HANDLE;
typedef HANDLE HWND;
typedef HANDLE HINSTANCE;
typedef void* HDC;
typedef void* HBITMAP;
typedef void* HICON;
typedef void* HFONT;
typedef void* HGDIOBJ;
typedef void* HBRUSH;
typedef void* HMMIO;
typedef void* HACMSTREAM;
typedef void* HACMDRIVER;
typedef void* HIC;
typedef void* HACMOBJ;
typedef HACMSTREAM* LPHACMSTREAM;
typedef void* HACMDRIVERID;
typedef void* LPHACMDRIVER;
typedef unsigned char BYTE;
typedef BYTE* LPBYTE;
typedef char TCHAR;
typedef TCHAR* LPTSTR;
typedef const TCHAR* LPCTSTR;
typedef char* LPSTR;
typedef LPSTR LPOLESTR;
typedef const char* LPCSTR;
typedef LPCSTR LPCOLESTR;
typedef wchar_t WCHAR;
typedef unsigned short WORD;
typedef unsigned int UINT;
typedef UINT MMRESULT;
typedef uint32_t DWORD;
typedef DWORD COLORREF;
typedef DWORD FOURCC;
typedef DWORD HRESULT;
typedef DWORD* LPDWORD;
typedef DWORD* DWORD_PTR;
typedef int32_t LONG;
typedef int32_t* LONG_PTR;
typedef LONG_PTR LRESULT;
typedef uint32_t ULONG;
typedef uint32_t* ULONG_PTR;
//typedef __int64_t intptr_t;
typedef uint64_t _fsize_t;
//
// Structures
//
typedef struct _GUID {
DWORD Data1;
WORD Data2;
WORD Data3;
BYTE Data4[8];
} GUID;
typedef GUID REFIID;
typedef GUID CLSID;
typedef CLSID* LPCLSID;
typedef GUID IID;
#ifdef __cplusplus
}; // namespace avxsynth
#endif // __cplusplus
#endif // __DATA_TYPE_CONVERSIONS_H__
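These aliases exist so that Windows-flavoured declarations compile unchanged on Linux; a small sketch, assuming the header is included under the name used by the windows2linux.h header that follows:
#include <stdio.h>
#include "basicDataTypeConversions.h"
#ifdef __cplusplus
using namespace avxsynth;                   /* the typedefs live in this namespace in C++ */
#endif
int main(void)
{
    /* Windows-style types resolve to plain C types from <stdint.h>. */
    DWORD   flags = 0x80000000u;            /* uint32_t */
    LPCSTR  name  = "clip";                 /* const char * */
    __int64 big   = 1LL << 40;              /* int64_t */
    GUID    id    = { 0x12345678, 0x9abc, 0xdef0,
                      { 0, 1, 2, 3, 4, 5, 6, 7 } };
    printf("%s: flags=0x%08x big=%lld Data1=0x%08x\n",
           name, (unsigned)flags, (long long)big, (unsigned)id.Data1);
    return 0;
}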


@@ -0,0 +1,77 @@
#ifndef __WINDOWS2LINUX_H__
#define __WINDOWS2LINUX_H__
/*
* LINUX SPECIFIC DEFINITIONS
*/
//
// Data types conversions
//
#include <stdlib.h>
#include <string.h>
#include "basicDataTypeConversions.h"
#ifdef __cplusplus
namespace avxsynth {
#endif // __cplusplus
//
// purposefully define the following MSFT definitions
// to mean nothing (as they do not mean anything on Linux)
//
#define __stdcall
#define __cdecl
#define noreturn
#define __declspec(x)
#define STDAPI extern "C" HRESULT
#define STDMETHODIMP HRESULT __stdcall
#define STDMETHODIMP_(x) x __stdcall
#define STDMETHOD(x) virtual HRESULT x
#define STDMETHOD_(a, x) virtual a x
#ifndef TRUE
#define TRUE true
#endif
#ifndef FALSE
#define FALSE false
#endif
#define S_OK (0x00000000)
#define S_FALSE (0x00000001)
#define E_NOINTERFACE (0X80004002)
#define E_POINTER (0x80004003)
#define E_FAIL (0x80004005)
#define E_OUTOFMEMORY (0x8007000E)
#define INVALID_HANDLE_VALUE ((HANDLE)((LONG_PTR)-1))
#define FAILED(hr) ((hr) & 0x80000000)
#define SUCCEEDED(hr) (!FAILED(hr))
//
// Functions
//
#define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
#define MAKEWORD(a,b) (((a) << 8) | (b))
#define lstrlen strlen
#define lstrcpy strcpy
#define lstrcmpi strcasecmp
#define _stricmp strcasecmp
#define InterlockedIncrement(x) __sync_fetch_and_add((x), 1)
#define InterlockedDecrement(x) __sync_fetch_and_sub((x), 1)
// Windows uses (new, old) ordering but GCC has (old, new)
#define InterlockedCompareExchange(x,y,z) __sync_val_compare_and_swap(x,z,y)
#define UInt32x32To64(a, b) ( (uint64_t) ( ((uint64_t)((uint32_t)(a))) * ((uint32_t)(b)) ) )
#define Int64ShrlMod32(a, b) ( (uint64_t) ( (uint64_t)(a) >> (b) ) )
#define Int32x32To64(a, b) ((__int64)(((__int64)((long)(a))) * ((long)(b))))
#define MulDiv(nNumber, nNumerator, nDenominator) (int32_t) (((int64_t) (nNumber) * (int64_t) (nNumerator) + (int64_t) ((nDenominator)/2)) / (int64_t) (nDenominator))
#ifdef __cplusplus
}; // namespace avxsynth
#endif // __cplusplus
#endif // __WINDOWS2LINUX_H__
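The two comments above (argument reordering for InterlockedCompareExchange and the rounding in MulDiv) are easiest to see with concrete values; a small sketch using the underlying GCC builtins and the same arithmetic directly, so no include path has to be assumed:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    /* InterlockedCompareExchange(&v, 9, 5) expands to
     * __sync_val_compare_and_swap(&v, 5, 9): the comparand and the new
     * value swap places, and both APIs return the previous value. */
    int32_t v   = 5;
    int32_t old = __sync_val_compare_and_swap(&v, 5, 9);
    printf("old=%d new=%d\n", (int)old, (int)v);           /* old=5 new=9 */
    /* MulDiv(3, 10000, 7) computes (3*10000 + 7/2) / 7 in 64-bit
     * arithmetic, i.e. a rounded rather than truncated 32-bit result. */
    int32_t r = (int32_t)(((int64_t)3 * 10000 + 7 / 2) / 7);
    printf("MulDiv(3, 10000, 7) = %d\n", (int)r);          /* 4286, not 4285 */
    return 0;
}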


@@ -1,131 +0,0 @@
/*
* Minimum CUDA compatibility definitions header
*
* Copyright (c) 2019 Rodger Combs
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_CUDA_CUDA_RUNTIME_H
#define COMPAT_CUDA_CUDA_RUNTIME_H
// Common macros
#define __global__ __attribute__((global))
#define __device__ __attribute__((device))
#define __device_builtin__ __attribute__((device_builtin))
#define __align__(N) __attribute__((aligned(N)))
#define __inline__ __inline__ __attribute__((always_inline))
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min(a, b) ((a) < (b) ? (a) : (b))
#define abs(x) ((x) < 0 ? -(x) : (x))
#define atomicAdd(a, b) (__atomic_fetch_add(a, b, __ATOMIC_SEQ_CST))
// Basic typedefs
typedef __device_builtin__ unsigned long long cudaTextureObject_t;
typedef struct __device_builtin__ __align__(2) uchar2
{
unsigned char x, y;
} uchar2;
typedef struct __device_builtin__ __align__(4) ushort2
{
unsigned short x, y;
} ushort2;
typedef struct __device_builtin__ uint3
{
unsigned int x, y, z;
} uint3;
typedef struct uint3 dim3;
typedef struct __device_builtin__ __align__(8) int2
{
int x, y;
} int2;
typedef struct __device_builtin__ __align__(4) uchar4
{
unsigned char x, y, z, w;
} uchar4;
typedef struct __device_builtin__ __align__(8) ushort4
{
unsigned short x, y, z, w;
} ushort4;
typedef struct __device_builtin__ __align__(16) int4
{
int x, y, z, w;
} int4;
// Accessors for special registers
#define GETCOMP(reg, comp) \
asm("mov.u32 %0, %%" #reg "." #comp ";" : "=r"(tmp)); \
ret.comp = tmp;
#define GET(name, reg) static inline __device__ uint3 name() {\
uint3 ret; \
unsigned tmp; \
GETCOMP(reg, x) \
GETCOMP(reg, y) \
GETCOMP(reg, z) \
return ret; \
}
GET(getBlockIdx, ctaid)
GET(getBlockDim, ntid)
GET(getThreadIdx, tid)
// Instead of externs for these registers, we turn access to them into calls into trivial ASM
#define blockIdx (getBlockIdx())
#define blockDim (getBlockDim())
#define threadIdx (getThreadIdx())
// Basic initializers (simple macros rather than inline functions)
#define make_uchar2(a, b) ((uchar2){.x = a, .y = b})
#define make_ushort2(a, b) ((ushort2){.x = a, .y = b})
#define make_uchar4(a, b, c, d) ((uchar4){.x = a, .y = b, .z = c, .w = d})
#define make_ushort4(a, b, c, d) ((ushort4){.x = a, .y = b, .z = c, .w = d})
// Conversions from the tex instruction's 4-register output to various types
#define TEX2D(type, ret) static inline __device__ void conv(type* out, unsigned a, unsigned b, unsigned c, unsigned d) {*out = (ret);}
TEX2D(unsigned char, a & 0xFF)
TEX2D(unsigned short, a & 0xFFFF)
TEX2D(uchar2, make_uchar2(a & 0xFF, b & 0xFF))
TEX2D(ushort2, make_ushort2(a & 0xFFFF, b & 0xFFFF))
TEX2D(uchar4, make_uchar4(a & 0xFF, b & 0xFF, c & 0xFF, d & 0xFF))
TEX2D(ushort4, make_ushort4(a & 0xFFFF, b & 0xFFFF, c & 0xFFFF, d & 0xFFFF))
// Template calling tex instruction and converting the output to the selected type
template <class T>
static inline __device__ T tex2D(cudaTextureObject_t texObject, float x, float y)
{
T ret;
unsigned ret1, ret2, ret3, ret4;
asm("tex.2d.v4.u32.f32 {%0, %1, %2, %3}, [%4, {%5, %6}];" :
"=r"(ret1), "=r"(ret2), "=r"(ret3), "=r"(ret4) :
"l"(texObject), "f"(x), "f"(y));
conv(&ret, ret1, ret2, ret3, ret4);
return ret;
}
#endif /* COMPAT_CUDA_CUDA_RUNTIME_H */
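A sketch of how a device-only source can be written against the shims above instead of NVIDIA's own headers; the include is assumed to resolve to this compat header via the compiler's include path, and the source would be compiled in CUDA device-only mode (for example with clang's --cuda-device-only -x cuda -S to emit PTX):
// Hypothetical kernel source; "cuda_runtime.h" is the compat header above.
#include "cuda_runtime.h"
__global__ void fill_gray(unsigned char *dst, int pitch, int width, int height)
{
    // blockIdx/blockDim/threadIdx are macros expanding to the
    // getBlockIdx()/getBlockDim()/getThreadIdx() inline-PTX accessors.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        dst[y * pitch + x] = 128;   // mid-gray
}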


@@ -16,8 +16,8 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_CUDA_DYNLINK_LOADER_H
#define COMPAT_CUDA_DYNLINK_LOADER_H
#ifndef AV_COMPAT_CUDA_DYNLINK_LOADER_H
#define AV_COMPAT_CUDA_DYNLINK_LOADER_H
#include "libavutil/log.h"
#include "compat/w32dlfcn.h"
@@ -30,4 +30,4 @@
#include <ffnvcodec/dynlink_loader.h>
#endif /* COMPAT_CUDA_DYNLINK_LOADER_H */
#endif


@@ -27,8 +27,10 @@ IN="$2"
NAME="$(basename "$IN" | sed 's/\..*//')"
printf "const char %s_ptx[] = \\" "$NAME" > "$OUT"
echo >> "$OUT"
sed -e "$(printf 's/\r//g')" -e 's/["\\]/\\&/g' -e "$(printf 's/^/\t"/')" -e 's/$/\\n"/' < "$IN" >> "$OUT"
echo ";" >> "$OUT"
while read LINE
do
printf "\n\t\"%s\\\n\"" "$(printf "%s" "$LINE" | sed -e 's/\r//g' -e 's/["\\]/\\&/g')" >> "$OUT"
done < "$IN"
printf ";\n" >> "$OUT"
exit 0
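Both variants of the script (the sed pipeline on one side, the while-read loop on the other) wrap a PTX text file into a C string constant; for a hypothetical input named scaler.ptx the generated file looks roughly like this:
const char scaler_ptx[] = \
	".version 6.0\n"
	".target sm_30\n"
	".address_size 64\n"
;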


@@ -1,47 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <math.h>
#define FUN(name, type, op) \
type name(type x, type y) \
{ \
if (fpclassify(x) == FP_NAN) return y; \
if (fpclassify(y) == FP_NAN) return x; \
return x op y ? x : y; \
}
FUN(fmin, double, <)
FUN(fmax, double, >)
FUN(fminf, float, <)
FUN(fmaxf, float, >)
long double fmodl(long double x, long double y)
{
return fmod(x, y);
}
long double scalbnl(long double x, int exp)
{
return scalbn(x, exp);
}
long double copysignl(long double x, long double y)
{
return copysign(x, y);
}
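The FUN macro above encodes the C99 rule that fmin/fmax return the non-NaN operand when exactly one argument is NaN; a short sketch of the behaviour these replacements are meant to match, written against the standard functions:
#include <math.h>
#include <stdio.h>
int main(void)
{
    /* With one NaN argument the other operand is returned; with two
     * ordinary numbers the usual minimum/maximum applies. */
    printf("fmin(NAN, 3.0)  = %g\n", fmin(NAN, 3.0));      /* 3 */
    printf("fmax(2.0, NAN)  = %g\n", fmax(2.0, NAN));      /* 2 */
    printf("fminf(1.5, 2.5) = %g\n", fminf(1.5f, 2.5f));   /* 1.5 */
    return 0;
}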


@@ -1,25 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
double fmin(double, double);
double fmax(double, double);
float fminf(float, float);
float fmaxf(float, float);
long double fmodl(long double, long double);
long double scalbnl(long double, int);
long double copysignl(long double, long double);


@@ -27,19 +27,15 @@
#define COMPAT_OS2THREADS_H
#define INCL_DOS
#define INCL_DOSERRORS
#include <os2.h>
#undef __STRICT_ANSI__ /* for _beginthread() */
#include <stdlib.h>
#include <time.h>
#include <sys/builtin.h>
#include <sys/fmutex.h>
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/time.h"
typedef struct {
TID tid;
@@ -167,28 +163,6 @@ static av_always_inline int pthread_cond_broadcast(pthread_cond_t *cond)
return 0;
}
static av_always_inline int pthread_cond_timedwait(pthread_cond_t *cond,
pthread_mutex_t *mutex,
const struct timespec *abstime)
{
int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
ULONG t = av_clip64(abs_milli - av_gettime() / 1000, 0, ULONG_MAX);
__atomic_increment(&cond->wait_count);
pthread_mutex_unlock(mutex);
APIRET ret = DosWaitEventSem(cond->event_sem, t);
__atomic_decrement(&cond->wait_count);
DosPostEventSem(cond->ack_sem);
pthread_mutex_lock(mutex);
return (ret == ERROR_TIMEOUT) ? ETIMEDOUT : 0;
}
static av_always_inline int pthread_cond_wait(pthread_cond_t *cond,
pthread_mutex_t *mutex)
{


@@ -38,13 +38,11 @@
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <process.h>
#include <time.h>
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/internal.h"
#include "libavutil/mem.h"
#include "libavutil/time.h"
typedef struct pthread_t {
void *handle;
@@ -63,9 +61,6 @@ typedef CONDITION_VARIABLE pthread_cond_t;
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
#define PTHREAD_CANCEL_ENABLE 1
#define PTHREAD_CANCEL_DISABLE 0
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
pthread_t *h = (pthread_t*)arg;
@@ -161,31 +156,10 @@ static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex
return 0;
}
static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
const struct timespec *abstime)
{
int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
DWORD t = av_clip64(abs_milli - av_gettime() / 1000, 0, UINT32_MAX);
if (!SleepConditionVariableSRW(cond, mutex, t, 0)) {
DWORD err = GetLastError();
if (err == ERROR_TIMEOUT)
return ETIMEDOUT;
else
return EINVAL;
}
return 0;
}
static inline int pthread_cond_signal(pthread_cond_t *cond)
{
WakeConditionVariable(cond);
return 0;
}
static inline int pthread_setcancelstate(int state, int *oldstate)
{
return 0;
}
#endif /* COMPAT_W32PTHREADS_H */
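pthread_cond_timedwait() above takes an absolute wall-clock deadline in a struct timespec and converts it into a relative SleepConditionVariableSRW timeout via av_gettime(); a caller sketch (the helper name is illustrative, and it assumes this header has already been included so that the pthread types and libavutil/time.h are available):
#include <stdint.h>
#include <time.h>                           /* struct timespec */
/* Wait on cond/mutex for at most rel_ms milliseconds. */
static int wait_with_timeout(pthread_cond_t *cond, pthread_mutex_t *mutex,
                             int rel_ms)
{
    /* av_gettime() is the same microsecond wall clock used inside
     * pthread_cond_timedwait() to turn the deadline into a relative wait. */
    int64_t deadline_us = av_gettime() + rel_ms * 1000LL;
    struct timespec ts;
    ts.tv_sec  = deadline_us / 1000000;
    ts.tv_nsec = (deadline_us % 1000000) * 1000;
    return pthread_cond_timedwait(cond, mutex, &ts);  /* 0 or ETIMEDOUT/EINVAL */
}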


@@ -48,7 +48,7 @@ trap 'rm -f -- $libname' EXIT
if [ -n "$AR" ]; then
$AR rcs ${libname} $@ >/dev/null
else
lib.exe -out:${libname} $@ >/dev/null
lib -out:${libname} $@ >/dev/null
fi
if [ $? != 0 ]; then
echo "Could not create temporary library." >&2
@@ -108,7 +108,7 @@ if [ -n "$NM" ]; then
cut -d' ' -f3 |
sed -e "s/^${prefix}//")
else
dump=$(dumpbin.exe -linkermember:1 ${libname} |
dump=$(dumpbin -linkermember:1 ${libname} |
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
tail -n +2 |
cut -d' ' -f3)


@@ -4,6 +4,6 @@ LINK_EXE_PATH=$(dirname "$(command -v cl)")/link
if [ -x "$LINK_EXE_PATH" ]; then
"$LINK_EXE_PATH" $@
else
link.exe $@
link $@
fi
exit $?

configure (vendored): 538 changed lines; the file diff is suppressed because it is too large.


@@ -15,151 +15,6 @@ libavutil: 2017-10-21
API changes, most recent first:
2020-06-05 - ec39c2276a - lavu 56.50.100 - buffer.h
Passing NULL as alloc argument to av_buffer_pool_init2() is now allowed.
2020-05-27 - ba6cada92e - lavc 58.88.100 - avcodec.h codec.h
Move AVCodec-related public API to new header codec.h.
2020-05-23 - 064b875e89 - lavu 56.49.100 - video_enc_params.h
Add AV_VIDEO_ENC_PARAMS_H264.
2020-05-23 - 2e08b39444 - lavu 56.48.100 - hwcontext.h
Add av_hwdevice_ctx_create_derived_opts.
2020-05-23 - 6b65c4ec54 - lavu 56.47.100 - rational.h
Add av_gcd_q().
2020-05-22 - af9e622776 - lavu 56.46.101 - opt.h
Add AV_OPT_FLAG_CHILD_CONSTS.
2020-05-22 - 9d443c3e68 - lavc 58.87.100 - avcodec.h codec_par.h
Move AVBitstreamFilter-related public API to new header bsf.h.
Move AVCodecParameters-related public API to new header codec_par.h.
2020-05-21 - 13b1bbff0b - lavc 58.86.101 - avcodec.h
Deprecated AV_CODEC_CAP_INTRA_ONLY and AV_CODEC_CAP_LOSSLESS.
2020-05-17 - 84af196c65 - lavu 56.46.100 - common.h
Add av_sat_add64() and av_sat_sub64()
2020-05-12 - 991d417692 - lavu 56.45.100 - video_enc_params.h
lavc 58.84.100 - avcodec.h
Add a new API for exporting video encoding information.
Replaces the deprecated API for exporting QP tables from decoders.
Add AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS to request this information from
decoders.
2020-05-10 - dccd07f66d - lavu 56.44.100 - hwcontext_vulkan.h
Add enabled_inst_extensions, num_enabled_inst_extensions, enabled_dev_extensions
and num_enabled_dev_extensions fields to AVVulkanDeviceContext
2020-04-22 - 0e1db79e37 - lavc 58.81.100 - packet.h
- lavu 56.43.100 - dovi_meta.h
Add AV_PKT_DATA_DOVI_CONF and AVDOVIDecoderConfigurationRecord.
2020-04-15 - 22b25b3ea5 - lavc 58.79.100 - avcodec.h
Add formal support for calling avcodec_flush_buffers() on encoders.
Encoders that set the cap AV_CODEC_CAP_ENCODER_FLUSH will be flushed.
For all other encoders, the call is now a no-op rather than undefined
behaviour.
2020-04-10 - 672946c7fe - lavc 58.78.100 - avcodec.h codec_desc.h codec_id.h packet.h
Move AVCodecDesc-related public API to new header codec_desc.h.
Move AVCodecID enum to new header codec_id.h.
Move AVPacket-related public API to new header packet.h.
2020-03-29 - 4cb0dda555 - lavf 58.42.100 - avformat.h
av_read_frame() now guarantees to handle uninitialized input packets
and to return refcounted packets on success.
2020-03-27 - c52ec0367d - lavc 58.77.100 - avcodec.h
av_packet_ref() now guarantees to return the destination packet
in a blank state on error.
2020-03-10 - 05d27f342b - lavc 58.75.100 - avcodec.h
Add AV_PKT_DATA_ICC_PROFILE.
2020-02-21 - d005a7cdfd - lavc 58.73.101 - avcodec.h
Add AV_CODEC_EXPORT_DATA_PRFT.
2020-02-21 - c666689491 - lavc 58.73.100 - avcodec.h
Add AVCodecContext.export_side_data and AV_CODEC_EXPORT_DATA_MVS.
2020-02-13 - e8f054b095 - lavu 56.41.100 - tx.h
Add AV_TX_INT32_FFT and AV_TX_INT32_MDCT
2020-02-12 - 3182114f88 - lavu 56.40.100 - log.h
Add av_log_once().
2020-02-04 - a88449ffb2 - lavu 56.39.100 - hwcontext.h
Add AV_PIX_FMT_VULKAN
Add AV_HWDEVICE_TYPE_VULKAN and implementation.
2020-01-30 - 27529eeb27 - lavf 58.37.100 - avio.h
Add avio_protocol_get_class().
2020-01-15 - 717b2074ec - lavc 58.66.100 - avcodec.h
Add AV_PKT_DATA_PRFT and AVProducerReferenceTime.
2019-12-27 - 45259a0ee4 - lavu 56.38.100 - eval.h
Add av_expr_count_func().
2019-12-26 - 16685114d5 - lavu 56.37.100 - buffer.h
Add av_buffer_pool_buffer_get_opaque().
2019-11-17 - 1c23abc88f - lavu 56.36.100 - eval API
Add av_expr_count_vars().
2019-10-14 - f3746d31f9 - lavu 56.35.101 - opt.h
Add AV_OPT_FLAG_RUNTIME_PARAM.
2019-09-25 - f8406ab4b9 - lavc 58.59.100 - avcodec.h
Add max_samples
2019-09-04 - 2a9d461abc - lavu 56.35.100 - hwcontext_videotoolbox.h
Add av_map_videotoolbox_format_from_pixfmt2() for full range pixfmt
2019-09-01 - 8821d1f56e - lavu 56.34.100 - pixfmt.h
Add EBU Tech. 3213-E AVColorPrimaries value
2019-08-17 - 95fa73a2b4 - lavf 58.31.101 - avio.h
4K limit removed from avio_printf.
2019-08-17 - a82f8f2f10 - lavf 58.31.100 - avio.h
Add avio_print_string_array and avio_print.
2019-07-27 - 42e2319ba9 - lavu 56.33.100 - tx.h
Add AV_TX_DOUBLE_FFT and AV_TX_DOUBLE_MDCT
-------- 8< --------- FFmpeg 4.2 was cut here -------- 8< ---------
2019-06-21 - a30e44098a - lavu 56.30.100 - frame.h
Add FF_DECODE_ERROR_DECODE_SLICES
2019-06-14 - edfced8c04 - lavu 56.29.100 - frame.h
Add FF_DECODE_ERROR_CONCEALMENT_ACTIVE
2019-05-15 - b79b29ddb1 - lavu 56.28.100 - tx.h
Add av_tx_init(), av_tx_uninit() and related definitions.
2019-04-20 - 3153a6502a - lavc 58.52.100 - avcodec.h
Add AV_CODEC_FLAG_DROPCHANGED to allow avcodec_receive_frame to drop
frames whose parameters differ from first decoded frame in stream.
2019-04-12 - abfeba9724 - lavf 58.27.102
Rename hls,applehttp demuxer to hls
2019-01-27 - 5bcefceec8 - lavc 58.46.100 - avcodec.h
Add discard_damaged_percentage
2019-01-08 - 1ef4828276 - lavu 56.26.100 - frame.h
Add AV_FRAME_DATA_REGIONS_OF_INTEREST
2018-12-21 - 2744d6b364 - lavu 56.25.100 - hdr_dynamic_metadata.h
Add AV_FRAME_DATA_DYNAMIC_HDR_PLUS enum value, av_dynamic_hdr_plus_alloc(),
av_dynamic_hdr_plus_create_side_data() functions, and related structs.
-------- 8< --------- FFmpeg 4.1 was cut here -------- 8< ---------
2018-10-27 - 718044dc19 - lavu 56.21.100 - pixdesc.h


@@ -38,7 +38,7 @@ PROJECT_NAME = FFmpeg
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 4.3
PROJECT_NUMBER = 4.1.3
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a


@@ -87,9 +87,6 @@ the timing info in the sequence header.
Set the number of ticks in each picture, to indicate that the stream
has a fixed framerate. Ignored if @option{tick_rate} is not also set.
@item delete_padding
Deletes Padding OBUs.
@end table
@section chomp
@@ -103,9 +100,7 @@ DTS-HD.
@section dump_extra
Add extradata to the beginning of the filtered packets except when
said packets already exactly begin with the extradata that is intended
to be added.
Add extradata to the beginning of the filtered packets.
@table @option
@item freq
@@ -122,7 +117,7 @@ add extradata to all packets
@end table
@end table
If not specified it is assumed @samp{k}.
If not specified it is assumed @samp{e}.
For example the following @command{ffmpeg} command forces a global
header (thus disabling individual packet headers) in the H.264 packets
@@ -224,10 +219,6 @@ Insert or remove AUD NAL units in all access units of the stream.
@item sample_aspect_ratio
Set the sample aspect ratio of the stream in the VUI parameters.
@item overscan_appropriate_flag
Set whether the stream is suitable for display using overscan
or not (see H.264 section E.2.1).
@item video_format
@item video_full_range_flag
Set the video format in the stream (see H.264 section E.2.1 and
@@ -367,15 +358,6 @@ will replace the current ones if the stream is already cropped.
These fields are set in pixels. Note that some sizes may not be
representable if the chroma is subsampled (H.265 section 7.4.3.2.1).
@item level
Set the level in the VPS and SPS. See H.265 section A.4 and tables
A.6 and A.7.
The argument must be the name of a level (for example, @samp{5.1}), a
@emph{general_level_idc} value (for example, @samp{153} for level 5.1),
or the special name @samp{auto} indicating that the filter should
attempt to guess the level from the input stream properties.
@end table
@section hevc_mp4toannexb
@@ -548,111 +530,6 @@ ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@section null
This bitstream filter passes the packets through unchanged.
@section pcm_rechunk
Repacketize PCM audio to a fixed number of samples per packet or a fixed packet
rate per second. This is similar to the @ref{asetnsamples,,asetnsamples audio
filter,ffmpeg-filters} but works on audio packets instead of audio frames.
@table @option
@item nb_out_samples, n
Set the number of samples per each output audio packet. The number is intended
as the number of samples @emph{per each channel}. Default value is 1024.
@item pad, p
If set to 1, the filter will pad the last audio packet with silence, so that it
will contain the same number of samples (or roughly the same number of samples,
see @option{frame_rate}) as the previous ones. Default value is 1.
@item frame_rate, r
This option makes the filter output a fixed number of packets per second instead
of a fixed number of samples per packet. If the audio sample rate is not
divisible by the frame rate then the number of samples will not be constant but
will vary slightly so that each packet will start as close to the frame
boundary as possible. Using this option has precedence over @option{nb_out_samples}.
@end table
You can generate the well known 1602-1601-1602-1601-1602 pattern of 48kHz audio
for NTSC frame rate using the @option{frame_rate} option.
@example
ffmpeg -f lavfi -i sine=r=48000:d=1 -c pcm_s16le -bsf pcm_rechunk=r=30000/1001 -f framecrc -
@end example
@section prores_metadata
Modify color property metadata embedded in prores stream.
@table @option
@item color_primaries
Set the color primaries.
Available values are:
@table @samp
@item auto
Keep the same color primaries property (default).
@item unknown
@item bt709
@item bt470bg
BT601 625
@item smpte170m
BT601 525
@item bt2020
@item smpte431
DCI P3
@item smpte432
P3 D65
@end table
@item transfer_characteristics
Set the color transfer.
Available values are:
@table @samp
@item auto
Keep the same transfer characteristics property (default).
@item unknown
@item bt709
BT 601, BT 709, BT 2020
@item smpte2084
SMPTE ST 2084
@item arib-std-b67
ARIB STD-B67
@end table
@item matrix_coefficients
Set the matrix coefficient.
Available values are:
@table @samp
@item auto
Keep the same colorspace property (default).
@item unknown
@item bt709
@item smpte170m
BT 601
@item bt2020nc
@end table
@end table
Set Rec709 colorspace for each frame of the file
@example
ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt709:color_trc=bt709:colorspace=bt709 output.mov
@end example
Set Hybrid Log-Gamma parameters for each frame of the file
@example
ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt2020:color_trc=arib-std-b67:colorspace=bt2020nc output.mov
@end example
@section remove_extra
Remove extradata from packets.
@@ -689,12 +566,7 @@ Log trace output containing all syntax elements in the coded stream
headers (everything above the level of individual coded blocks).
This can be useful for debugging low-level stream issues.
Supports AV1, H.264, H.265, (M)JPEG, MPEG-2 and VP9, but depending
on the build only a subset of these may be available.
@section truehd_core
Extract the core from a TrueHD stream, dropping ATMOS data.
Supports H.264, H.265, MPEG-2 and VP9.
@section vp9_metadata
@@ -702,9 +574,7 @@ Modify metadata embedded in a VP9 stream.
@table @option
@item color_space
Set the color space value in the frame header. Note that any frame
set to RGB will be implicitly set to PC range and that RGB is
incompatible with profiles 0 and 2.
Set the color space value in the frame header.
@table @samp
@item unknown
@item bt601
@@ -716,8 +586,8 @@ incompatible with profiles 0 and 2.
@end table
@item color_range
Set the color range value in the frame header. Note that any value
imposed by the color space will take precedence over this value.
Set the color range value in the frame header. Note that this cannot
be set in RGB streams.
@table @samp
@item tv
@item pc


@@ -36,11 +36,11 @@ install
examples
Build all examples located in doc/examples.
checkheaders
Check headers dependencies.
libavformat/output-example
Build the libavformat basic example.
alltools
Build all tools in tools directory.
libswscale/swscale-test
Build the swscale self-test (useful also as an example).
config
Reconfigure the project with the current configuration.
@@ -48,8 +48,6 @@ config
tools/target_dec_<decoder>_fuzzer
Build fuzzer to fuzz the specified decoder.
tools/target_bsf_<filter>_fuzzer
Build fuzzer to fuzz the specified bitstream filter.
Useful standard make commands:
make -t <target>


@@ -55,10 +55,6 @@ Do not draw edges.
@item psnr
Set error[?] variables during encoding.
@item truncated
Input bitstream might be randomly truncated.
@item drop_changed
Don't output frames whose parameters differ from first decoded frame in stream.
Error AVERROR_INPUT_CHANGED is returned when a frame is dropped.
@item ildct
Use interlaced DCT.
@@ -80,8 +76,6 @@ Deprecated, use mpegvideo private options instead.
Apply interlaced motion estimation.
@item cgop
Use closed gop.
@item output_corrupt
Output even potentially corrupted frames.
@end table
@item me_method @var{integer} (@emph{encoding,video})
@@ -646,24 +640,6 @@ noise preserving sum of squared differences
@item dia_size @var{integer} (@emph{encoding,video})
Set diamond type & size for motion estimation.
@table @samp
@item (1024, INT_MAX)
full motion estimation(slowest)
@item (768, 1024]
umh motion estimation
@item (512, 768]
hex motion estimation
@item (256, 512]
l2s diamond motion estimation
@item [2,256]
var diamond motion estimation
@item (-1, 2)
small diamond motion estimation
@item -1
funny diamond motion estimation
@item (INT_MIN, -1)
sab diamond motion estimation
@end table
@item last_pred @var{integer} (@emph{encoding,video})
Set amount of motion predictors from the previous frame.
@@ -781,12 +757,14 @@ Set noise reduction.
Set number of bits which should be loaded into the rc buffer before
decoding starts.
@item flags2 @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
@item flags2 @var{flags} (@emph{decoding/encoding,audio,video})
Possible values:
@table @samp
@item fast
Allow non spec compliant speedup tricks.
@item sgop
Deprecated, use mpegvideo private options instead.
@item noout
Skip bitstream encoding.
@item ignorecrop
@@ -797,25 +775,11 @@ Place global headers at every keyframe instead of in extradata.
Frame data might be split into multiple chunks.
@item showall
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@item skip_manual
Do not skip samples and export skip information as frame side data.
@item ass_ro_flush_noop
Do not reset ASS ReadOrder field on flush.
@end table
@item export_side_data @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
Possible values:
@table @samp
@item mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@item prft
Export encoder Producer Reference Time into packet side-data (see @code{AV_PKT_DATA_PRFT})
for codecs that support it.
@end table
@item error @var{integer} (@emph{encoding,video})
@@ -855,8 +819,49 @@ Set number of macroblock rows at the bottom which are skipped.
@item profile @var{integer} (@emph{encoding,audio,video})
Set encoder codec profile. Default value is @samp{unknown}. Encoder specific
profiles are documented in the relevant encoder documentation.
Possible values:
@table @samp
@item unknown
@item aac_main
@item aac_low
@item aac_ssr
@item aac_ltp
@item aac_he
@item aac_he_v2
@item aac_ld
@item aac_eld
@item mpeg2_aac_low
@item mpeg2_aac_he
@item mpeg4_sp
@item mpeg4_core
@item mpeg4_main
@item mpeg4_asp
@item dts
@item dts_es
@item dts_96_24
@item dts_hd_hra
@item dts_hd_ma
@end table
@item level @var{integer} (@emph{encoding,audio,video})
@@ -957,9 +962,6 @@ Discard all bidirectional frames.
@item nokey
Discard all frames excepts keyframes.
@item nointra
Discard all frames except I frames.
@item all
Discard all frames.
@end table
@@ -1230,7 +1232,7 @@ instead of alpha. Default is 0.
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example, to separate the fields with newlines and indentation:
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg

View File

@@ -47,39 +47,6 @@ top-field-first is assumed
@end table
@section libdav1d
dav1d AV1 decoder.
libdav1d allows libavcodec to decode the AOMedia Video 1 (AV1) codec.
Requires the presence of the libdav1d headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libdav1d}.
@subsection Options
The following options are supported by the libdav1d wrapper.
@table @option
@item framethreads
Set amount of frame threads to use during decoding. The default value is 0 (autodetect).
@item tilethreads
Set amount of tile threads to use during decoding. The default value is 0 (autodetect).
@item filmgrain
Apply film grain to the decoded video if present in the bitstream. Defaults to the
internal default of the library.
@item oppoint
Select an operating point of a scalable AV1 bitstream (0 - 31). Defaults to the
internal default of the library.
@item alllayers
Output all spatial layers of a scalable AV1 bitstream. The default value is false.
@end table
@section libdavs2
AVS2-P2/IEEE1857.4 video decoder wrapper.
@@ -227,31 +194,6 @@ without this library.
@chapter Subtitles Decoders
@c man begin SUBTILES DECODERS
@section libaribb24
ARIB STD-B24 caption decoder.
Implements profiles A and C of the ARIB STD-B24 standard.
@subsection libaribb24 Decoder Options
@table @option
@item -aribb24-base-path @var{path}
Sets the base path for the libaribb24 library. This is utilized for reading of
configuration files (for custom unicode conversions), and for dumping of
non-text symbols as images under that location.
Unset by default.
@item -aribb24-skip-ruby-text @var{boolean}
Tells the decoder wrapper to skip text blocks that contain half-height ruby
text.
Enabled by default.
@end table
@section dvbsub
@subsection Options
@@ -287,7 +229,7 @@ palette is stored in the IFO file, and therefore not available when reading
from dumped VOB files.
The format for this option is a string containing 16 24-bits hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
numbers (without 0x prefix) separated by comas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@@ -316,11 +258,6 @@ List of teletext page numbers to decode. Pages that do not match the specified
list are dropped. You may use the special @code{*} string to match all pages,
or @code{subtitle} to match all subtitle pages.
Default value is *.
@item txt_default_region
Set default character set used for decoding, a value between 0 and 87 (see
ETS 300 706, Section 15, Table 32). Default value is -1, which does not
override the libzvbi default. This option is needed for some legacy level 1.0
transmissions which cannot signal the proper charset.
@item txt_chop_top
Discards the top teletext line. Default value is 1.
@item txt_format


@@ -25,6 +25,17 @@ Audible Format 2, 3, and 4 demuxer.
This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.
@section applehttp
Apple HTTP Live Streaming demuxer.
This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing 'a' or 'v' in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
@section apng
Animated Portable Network Graphics demuxer.
@@ -309,15 +320,6 @@ infinitely.
HLS demuxer
Apple HTTP Live Streaming demuxer.
This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing 'a' or 'v' in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
It accepts the following options:
@table @option
@@ -331,10 +333,6 @@ segment index to start live streams at (negative values are from the end).
Maximum number of times an insufficient list is attempted to be reloaded.
Default value is 1000.
@item m3u8_hold_counters
The maximum number of times to load m3u8 when it refreshes without new segments.
Default value is 1000.
@item http_persistent
Use persistent HTTP connections. Applicable only for HTTP streams.
Enabled by default.
@@ -342,10 +340,6 @@ Enabled by default.
@item http_multiple
Use multiple HTTP connections for downloading HTTP segments.
Enabled by default for HTTP/1.1 servers.
@item http_seekable
Use HTTP partial requests for downloading HTTP segments.
0 = disable, 1 = enable, -1 = auto, Default is auto.
@end table
@section image2
@@ -456,17 +450,6 @@ nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@item export_path_metadata
If set to 1, will add two extra fields to the metadata found in input, making them
also available for other filters (see @var{drawtext} filter for examples). Default
value is 0. The extra fields are described below:
@table @option
@item lavf.image2dec.source_path
Corresponds to the full path to the input file being read.
@item lavf.image2dec.source_basename
Corresponds to the name of the file being read.
@end table
@end table
@subsection Examples
@@ -498,84 +481,14 @@ ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
The Game Music Emu library is a collection of video game music file emulators.
See @url{https://bitbucket.org/mpyne/game-music-emu/overview} for more information.
See @url{http://code.google.com/p/game-music-emu/} for more information.
It accepts the following options:
Some files have multiple tracks. The demuxer will pick the first track by
default. The @option{track_index} option can be used to select a different
track. Track indexes start at 0. The demuxer exports the number of tracks as
@var{tracks} meta data entry.
@table @option
@item track_index
Set the index of which track to demux. The demuxer can only export one track.
Track indexes start at 0. Default is to pick the first track. Number of tracks
is exported as @var{tracks} metadata entry.
@item sample_rate
Set the sampling rate of the exported track. Range is 1000 to 999999. Default is 44100.
@item max_size @emph{(bytes)}
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of files that can be read.
Default is 50 MiB.
@end table
@section libmodplug
ModPlug based module demuxer
See @url{https://github.com/Konstanty/libmodplug}
It will export one 2-channel 16-bit 44.1 kHz audio stream.
Optionally, a @code{pal8} 16-color video stream can be exported with or without printed metadata.
It accepts the following options:
@table @option
@item noise_reduction
Apply a simple low-pass filter. Can be 1 (on) or 0 (off). Default is 0.
@item reverb_depth
Set amount of reverb. Range 0-100. Default is 0.
@item reverb_delay
Set delay in ms, clamped to 40-250 ms. Default is 0.
@item bass_amount
Apply bass expansion a.k.a. XBass or megabass. Range is 0 (quiet) to 100 (loud). Default is 0.
@item bass_range
Set cutoff i.e. upper-bound for bass frequencies. Range is 10-100 Hz. Default is 0.
@item surround_depth
Apply a Dolby Pro-Logic surround effect. Range is 0 (quiet) to 100 (heavy). Default is 0.
@item surround_delay
Set surround delay in ms, clamped to 5-40 ms. Default is 0.
@item max_size
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of files that can be read. Range is 0 to 100 MiB.
0 removes buffer size limit (not recommended). Default is 5 MiB.
@item video_stream_expr
String which is evaluated using the eval API to assign colors to the generated video stream.
Variables which can be used are @code{x}, @code{y}, @code{w}, @code{h}, @code{t}, @code{speed},
@code{tempo}, @code{order}, @code{pattern} and @code{row}.
@item video_stream
Generate video stream. Can be 1 (on) or 0 (off). Default is 0.
@item video_stream_w
Set video frame width in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
@item video_stream_h
Set video frame height in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
@item video_stream_ptxt
Print metadata on video stream. Includes @code{speed}, @code{tempo}, @code{order}, @code{pattern},
@code{row} and @code{ts} (time in ms). Can be 1 (on) or 0 (off). Default is 1.
@end table
For very large files, the @option{max_size} option may have to be adjusted.
@section libopenmpt
@@ -604,13 +517,9 @@ Set the sample rate for libopenmpt to output.
Range is from 1000 to INT_MAX. The value default is 48000.
@end table
@section mov/mp4/3gp
@section mov/mp4/3gp/QuickTime
Demuxer for Quicktime File Format & ISO/IEC Base Media File Format (ISO/IEC 14496-12 or MPEG-4 Part 12, ISO/IEC 15444-12 or JPEG 2000 Part 12).
Registered extensions: mov, mp4, m4a, 3gp, 3g2, mj2, psp, m4b, ism, ismv, isma, f4v
@subsection Options
QuickTime / MP4 demuxer.
This demuxer accepts the following options:
@table @option
@@ -621,73 +530,10 @@ Enabling this can theoretically leak information in some use cases.
@item use_absolute_path
Allows loading of external tracks via absolute paths, disabled by default.
Enabling this poses a security risk. It should only be enabled if the source
is known to be non-malicious.
is known to be non malicious.
@item seek_streams_individually
When seeking, identify the closest point in each stream individually and demux packets in
that stream from identified point. This can lead to a different sequence of packets compared
to demuxing linearly from the beginning. Default is true.
@item ignore_editlist
Ignore any edit list atoms. The demuxer, by default, modifies the stream index to reflect the
timeline described by the edit list. Default is false.
@item advanced_editlist
Modify the stream index to reflect the timeline described by the edit list. @code{ignore_editlist}
must be set to false for this option to be effective.
If both @code{ignore_editlist} and this option are set to false, then only the
start of the stream index is modified to reflect initial dwell time or starting timestamp
described by the edit list. Default is true.
@item ignore_chapters
Don't parse chapters. This includes GoPro 'HiLight' tags/moments. Note that chapters are
only parsed when input is seekable. Default is false.
@item use_mfra_for
For seekable fragmented input, set fragment's starting timestamp from media fragment random access box, if present.
Following options are available:
@table @samp
@item auto
Auto-detect whether to set mfra timestamps as PTS or DTS @emph{(default)}
@item dts
Set mfra timestamps as DTS
@item pts
Set mfra timestamps as PTS
@item 0
Don't use mfra box to set timestamps
@end table
@item export_all
Export unrecognized boxes within the @var{udta} box as metadata entries. The first four
characters of the box type are set as the key. Default is false.
@item export_xmp
Export entire contents of @var{XMP_} box and @var{uuid} box as a string with key @code{xmp}. Note that
if @code{export_all} is set and this option isn't, the contents of @var{XMP_} box are still exported
but with key @code{XMP_}. Default is false.
@item activation_bytes
4-byte key required to decrypt Audible AAX and AAX+ files. See Audible AAX subsection below.
@item audible_fixed_key
Fixed key used for handling Audible AAX/AAX+ files. It has been pre-set so should not be necessary to
specify.
@item decryption_key
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
@end table
@subsection Audible AAX
Audible AAX files are encrypted M4B files, and they can be decrypted by specifying a 4 byte activation secret.
@example
ffmpeg -activation_bytes 1CEB00DA -i test.aax -vn -c:a copy output.mp4
@end example
@section mpegts
MPEG-2 transport stream demuxer.
@@ -816,20 +662,4 @@ Example: convert the captions to a format most players understand:
ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt
@end example
@section vapoursynth
Vapoursynth wrapper.
Due to security concerns, Vapoursynth scripts will not
be autodetected so the input format has to be forced. For ff* CLI tools,
add @code{-f vapoursynth} before the input @code{-i yourscript.vpy}.
This demuxer accepts the following option:
@table @option
@item max_script_size
The demuxer buffers the entire script into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of scripts that can be read.
Default is 1 MiB.
@end table
@c man end DEMUXERS


@@ -131,9 +131,6 @@ compound literals (@samp{x = (struct s) @{ 17, 23 @};}).
@item
for loops with variable definition (@samp{for (int i = 0; i < 8; i++)});
@item
Variadic macros (@samp{#define ARRAY(nb, ...) (int[nb + 1])@{ nb, __VA_ARGS__ @}});
@item
Implementation defined behavior for signed integers is assumed to match the
expected behavior for two's complement. Non representable values in integer
@@ -625,7 +622,7 @@ If the patch fixes a bug, did you provide a verbose analysis of the bug?
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Note please do not attach samples >100k to mails but rather provide a
URL, you can upload to @url{https://streams.videolan.org/upload/}.
URL, you can upload to ftp://upload.ffmpeg.org.
@item
Did you provide a verbose summary about what the patch does change?


@@ -30,7 +30,11 @@ follows.
Advanced Audio Coding (AAC) encoder.
This encoder is the default AAC encoder, natively implemented into FFmpeg.
This encoder is the default AAC encoder, natively implemented into FFmpeg. Its
quality is on par or better than libfdk_aac at the default bitrate of 128kbps.
This encoder also implements more options, profiles and samplerates than
other encoders (with only the AAC-HE profile pending to be implemented) so this
encoder has become the default and is the recommended choice.
@subsection Options
@@ -647,7 +651,10 @@ configuration. You need to explicitly configure the build with
so if you allow the use of GPL, you should configure with
@code{--enable-gpl --enable-nonfree --enable-libfdk-aac}.
This encoder has support for the AAC-HE profiles.
This encoder is considered to produce output on par or worse at 128kbps to the
@ref{aacenc,,the native FFmpeg AAC encoder} but can often produce better
sounding audio at identical or lower bitrates and has support for the
AAC-HE profiles.
VBR encoding, enabled through the @option{vbr} or @option{flags
+qscale} options, is experimental and only works with some
@@ -726,14 +733,6 @@ if set to 0.
Default value is 0.
@item eld_v2
Enable ELDv2 (LD-MPS extension for ELD stereo signals) for ELDv2 if set to 1,
disabled if set to 0.
Note that option is available when fdk-aac version (AACENCODER_LIB_VL0.AACENCODER_LIB_VL1.AACENCODER_LIB_VL2) > (4.0.0).
Default value is 0.
@item signaling
Set SBR/PS signaling style.
@@ -1371,236 +1370,6 @@ makes it possible to store non-rgb pix_fmts.
@end table
@section librav1e
rav1e AV1 encoder wrapper.
Requires the presence of the rav1e headers and library during configuration.
You need to explicitly configure the build with @code{--enable-librav1e}.
@subsection Options
@table @option
@item qmax
Sets the maximum quantizer to use when using bitrate mode.
@item qmin
Sets the minimum quantizer to use when using bitrate mode.
@item qp
Uses quantizer mode to encode at the given quantizer (0-255).
@item speed
Selects the speed preset (0-10) to encode with.
@item tiles
Selects how many tiles to encode with.
@item tile-rows
Selects how many rows of tiles to encode with.
@item tile-columns
Selects how many columns of tiles to encode with.
@item rav1e-params
Set rav1e options using a list of @var{key}=@var{value} pairs separated
by ":". See @command{rav1e --help} for a list of options.
For example to specify librav1e encoding options with @option{-rav1e-params}:
@example
ffmpeg -i input -c:v librav1e -b:v 500K -rav1e-params speed=5:low_latency=true output.mp4
@end example
@end table
@section libaom-av1
libaom AV1 encoder wrapper.
Requires the presence of the libaom headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libaom}.
@subsection Options
The wrapper supports the following standard libavcodec options:
@table @option
@item b
Set bitrate target in bits/second. By default this will use
variable-bitrate mode. If @option{maxrate} and @option{minrate} are
also set to the same value then it will use constant-bitrate mode,
otherwise if @option{crf} is set as well then it will use
constrained-quality mode.
@item g keyint_min
Set key frame placement. The GOP size sets the maximum distance between
key frames; if zero the output stream will be intra-only. The minimum
distance is ignored unless it is the same as the GOP size, in which case
key frames will always appear at a fixed interval. Not set by default,
so without this option the library has completely free choice about
where to place key frames.
@item qmin qmax
Set minimum/maximum quantisation values. Valid range is from 0 to 63
(warning: this does not match the quantiser values actually used by AV1
- divide by four to map real quantiser values to this range). Defaults
to min/max (no constraint).
@item minrate maxrate bufsize rc_init_occupancy
Set rate control buffering parameters. Not used if not set - defaults
to unconstrained variable bitrate.
@item threads
Set the number of threads to use while encoding. This may require the
@option{tiles} or @option{row-mt} options to also be set to actually
use the specified number of threads fully. Defaults to the number of
hardware threads supported by the host machine.
@item profile
Set the encoding profile. Defaults to using the profile which matches
the bit depth and chroma subsampling of the input.
@end table
The wrapper also has some specific options:
@table @option
@item cpu-used
Set the quality/encoding speed tradeoff. Valid range is from 0 to 8,
higher numbers indicating greater speed and lower quality. The default
value is 1, which will be slow and high quality.
@item auto-alt-ref
Enable use of alternate reference frames. Defaults to the internal
default of the library.
@item arnr-max-frames (@emph{frames})
Set altref noise reduction max frame count. Default is -1.
@item arnr-strength (@emph{strength})
Set altref noise reduction filter strength. Range is -1 to 6. Default is -1.
@item aq-mode (@emph{aq-mode})
Set adaptive quantization mode. Possible values:
@table @samp
@item none (@emph{0})
Disabled.
@item variance (@emph{1})
Variance-based.
@item complexity (@emph{2})
Complexity-based.
@item cyclic (@emph{3})
Cyclic refresh.
@end table
@item tune (@emph{tune})
Set the distortion metric the encoder is tuned with. Default is @code{psnr}.
@table @samp
@item psnr (@emph{0})
@item ssim (@emph{1})
@end table
@item lag-in-frames
Set the maximum number of frames which the encoder may keep in flight
at any one time for lookahead purposes. Defaults to the internal
default of the library.
@item error-resilience
Enable error resilience features:
@table @option
@item default
Improve resilience against losses of whole frames.
@end table
Not enabled by default.
@item crf
Set the quality/size tradeoff for constant-quality (no bitrate target)
and constrained-quality (with maximum bitrate target) modes. Valid
range is 0 to 63, higher numbers indicating lower quality and smaller
output size. Only used if set; by default only the bitrate target is
used.
@item static-thresh
Set a change threshold on blocks below which they will be skipped by
the encoder. Defined in arbitrary units as a nonnegative integer,
defaulting to zero (no blocks are skipped).
@item drop-threshold
Set a threshold for dropping frames when close to rate control bounds.
Defined as a percentage of the target buffer - when the rate control
buffer falls below this percentage, frames will be dropped until it
has refilled above the threshold. Defaults to zero (no frames are
dropped).
@item denoise-noise-level (@emph{level})
Amount of noise to be removed for grain synthesis. Grain synthesis is disabled if
this option is not set or set to 0.
@item denoise-block-size (@emph{pixels})
Block size used for denoising for grain synthesis. If not set, AV1 codec
uses the default value of 32.
@item undershoot-pct (@emph{pct})
Set datarate undershoot (min) percentage of the target bitrate. Range is -1 to 100.
Default is -1.
@item overshoot-pct (@emph{pct})
Set datarate overshoot (max) percentage of the target bitrate. Range is -1 to 1000.
Default is -1.
@item minsection-pct (@emph{pct})
Minimum percentage variation of the GOP bitrate from the target bitrate. If minsection-pct
is not set, the libaomenc wrapper computes it as follows: @code{(minrate * 100 / bitrate)}.
Range is -1 to 100. Default is -1 (unset).
@item maxsection-pct (@emph{pct})
Maximum percentage variation of the GOP bitrate from the target bitrate. If maxsection-pct
is not set, the libaomenc wrapper computes it as follows: @code{(maxrate * 100 / bitrate)}.
Range is -1 to 5000. Default is -1 (unset).
@item frame-parallel (@emph{boolean})
Enable frame parallel decodability features. Default is true.
@item tiles
Set the number of tiles to encode the input video with, as columns x
rows. Larger numbers allow greater parallelism in both encoding and
decoding, but may decrease coding efficiency. Defaults to the minimum
number of tiles required by the size of the input video (this is 1x1
(that is, a single tile) for sizes up to and including 4K).
@item tile-columns tile-rows
Set the number of tiles as log2 of the number of tile rows and columns.
Provided for compatibility with libvpx/VP9.
@item row-mt (Requires libaom >= 1.0.0-759-g90a15f4f2)
Enable row based multi-threading. Disabled by default.
@item enable-cdef (@emph{boolean})
Enable Constrained Directional Enhancement Filter. The libaom-av1
encoder enables CDEF by default.
@item enable-restoration (@emph{boolean})
Enable Loop Restoration Filter. Default is true for libaom-av1.
@item enable-global-motion (@emph{boolean})
Enable the use of global motion for block prediction. Default is true.
@item enable-intrabc (@emph{boolean})
Enable block copy mode for intra block prediction. This mode is
useful for screen content. Default is true.
@end table
@section libkvazaar
Kvazaar H.265/HEVC encoder.
@@ -1872,8 +1641,7 @@ means unlimited.
@table @option
@item auto-alt-ref
Enable use of alternate reference frames (2-pass only).
Values greater than 1 enable multi-layer alternate reference frames (VP9 only).
@item arnr-maxframes
@item arnr-max-frames
Set altref noise reduction max frame count.
@item arnr-type
Set altref noise reduction filter type: backward, forward, centered.
@@ -1886,61 +1654,6 @@ Set number of frames to look ahead for frametype and ratecontrol.
@item error-resilient
Enable error resiliency features.
@item sharpness @var{integer}
Increase sharpness at the expense of lower PSNR.
The valid range is [0, 7].
@item ts-parameters
Sets the temporal scalability configuration using a :-separated list of
key=value pairs. For example, to specify temporal scalability parameters
with @code{ffmpeg}:
@example
ffmpeg -i INPUT -c:v libvpx -ts-parameters ts_number_layers=3:\
ts_target_bitrate=250,500,1000:ts_rate_decimator=4,2,1:\
ts_periodicity=4:ts_layer_id=0,2,1,2:ts_layering_mode=3 OUTPUT
@end example
Below is a brief explanation of each of the parameters, please
refer to @code{struct vpx_codec_enc_cfg} in @code{vpx/vpx_encoder.h} for more
details.
@table @option
@item ts_number_layers
Number of temporal coding layers.
@item ts_target_bitrate
Target bitrate for each temporal layer (in kbps).
(bitrate should be inclusive of the lower temporal layer).
@item ts_rate_decimator
Frame rate decimation factor for each temporal layer.
@item ts_periodicity
Length of the sequence defining frame temporal layer membership.
@item ts_layer_id
Template defining the membership of frames to temporal layers.
@item ts_layering_mode
(optional) Selecting the temporal structure from a set of pre-defined temporal layering modes.
Currently supports the following options.
@table @option
@item 0
No temporal layering flags are provided internally,
relies on flags being passed in using @code{metadata} field in @code{AVFrame}
with following keys.
@table @option
@item vp8-flags
Sets the flags passed into the encoder to indicate the referencing scheme for
the current frame.
Refer to function @code{vpx_codec_encode} in @code{vpx/vpx_encoder.h} for more
details.
@item temporal_id
Explicitly sets the temporal id of the current frame to encode.
@end table
@item 2
Two temporal layers. 0-1...
@item 3
Three temporal layers. 0-2-1-2...; with single reference frame.
@item 4
Same as option "3", except there is a dependency between
the two temporal layer 2 frames within the temporal period.
@end table
@end table
@item VP9-specific options
@table @option
@item lossless
@@ -1979,8 +1692,6 @@ Corpus VBR mode is a variant of standard VBR where the complexity distribution
midpoint is passed in rather than calculated for a specific clip or chunk.
The valid range is [0, 10000]. 0 (default) uses standard VBR.
@item enable-tpl @var{boolean}
Enable temporal dependency model.
@end table
@end table
@@ -2442,20 +2153,6 @@ during configuration. You need to explicitly configure the build with
@subsection Options
@table @option
@item b
Sets target video bitrate.
@item bf
@item g
Set the GOP size.
@item keyint_min
Minimum GOP size.
@item refs
Number of reference frames each P-frame can use. The range is from @var{1-16}.
@item preset
Set the x265 preset.
@@ -2468,28 +2165,6 @@ Set profile restrictions.
@item crf
Set the quality for constant quality mode.
@item qp
Set constant quantization rate control method parameter.
@item qmin
Minimum quantizer scale.
@item qmax
Maximum quantizer scale.
@item qdiff
Maximum difference between quantizer scales.
@item qblur
Quantizer curve blur
@item qcomp
Quantizer curve compression factor
@item i_qfactor
@item b_qfactor
@item forced-idr
Normally, when forcing a I-frame type, the encoder can select any type
of I-frame. This option forces it to choose an IDR-frame.
@@ -2505,63 +2180,6 @@ ffmpeg -i input -c:v libx265 -x265-params crf=26:psy-rd=1 output.mp4
@end example
@end table
@section libxavs2
xavs2 AVS2-P2/IEEE1857.4 encoder wrapper.
This encoder requires the presence of the libxavs2 headers and library
during configuration. You need to explicitly configure the build with
@option{--enable-libxavs2}.
The following standard libavcodec options are used:
@itemize
@item
@option{b} / @option{bit_rate}
@item
@option{g} / @option{gop_size}
@item
@option{bf} / @option{max_b_frames}
@end itemize
The encoder also has its own specific options:
@subsection Options
@table @option
@item lcu_row_threads
Set the number of parallel threads for rows from 1 to 8 (default 5).
@item initial_qp
Set the xavs2 quantization parameter from 1 to 63 (default 34). This is
used to set the initial qp for the first frame.
@item qp
Set the xavs2 quantization parameter from 1 to 63 (default 34). This is
used to set the qp value under constant-QP mode.
@item max_qp
Set the max qp for rate control from 1 to 63 (default 55).
@item min_qp
Set the min qp for rate control from 1 to 63 (default 20).
@item speed_level
Set the Speed level from 0 to 9 (default 0). Higher is better but slower.
@item log_level
Set the log level from -1 to 3 (default 0). -1: none, 0: error,
1: warning, 2: info, 3: debug.
@item xavs2-params
Set xavs2 options using a list of @var{key}=@var{value} pairs separated
by ":".
For example to specify libxavs2 encoding options with @option{-xavs2-params}:
@example
ffmpeg -i input -c:v libxavs2 -xavs2-params RdoqLevel=0 output.avs2
@end example
@end table
@section libxvid
Xvid MPEG-4 Part 2 encoder wrapper.
@@ -2725,14 +2343,6 @@ fastest.
@end table
@section MediaFoundation
This provides wrappers to encoders (both audio and video) in the
MediaFoundation framework. It can access both SW and HW encoders.
Video encoders can take input in either nv12 or yuv420p form
(some encoders support both, some only one; in practice,
nv12 is the safer choice, especially among HW encoders).
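As a rough sketch (assuming the H.264 MediaFoundation encoder is exposed as
@code{h264_mf}, as in recent builds), forcing the safer nv12 input format could look like:
@example
ffmpeg -i INPUT -pix_fmt nv12 -c:v h264_mf OUTPUT.mp4
@end example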
@section mpeg2
MPEG-2 video encoder.
@@ -2740,20 +2350,6 @@ MPEG-2 video encoder.
@subsection Options
@table @option
@item profile @var{integer}
Select the mpeg2 profile to encode:
@table @samp
@item 422
@item main
@item ss
Spatially Scalable
@item snr
SNR Scalable
@item high
@item simple
@end table
@item seq_disp_ext @var{integer}
Specifies whether the encoder should write a sequence_display_extension to the
output.
@@ -2774,9 +2370,6 @@ Specifies the video_format written into the sequence display extension
indicating the source of the video pictures. The default is @samp{unspecified},
can be @samp{component}, @samp{pal}, @samp{ntsc}, @samp{secam} or @samp{mac}.
For maximum compatibility, use @samp{component}.
@item a53cc @var{boolean}
Import closed captions (which must be in ATSC-compatible format) into the output.
Default is 1 (on).
@end table
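For example, selecting the main profile and marking the source as PAL in the sequence
display extension might be done along these lines:
@example
ffmpeg -i INPUT -c:v mpeg2video -profile:v main -video_format pal output.mpg
@end example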
@section png
@@ -2862,7 +2455,7 @@ recommended value) and do not set a size constraint.
@section QSV encoders
The family of Intel QuickSync Video encoders (MPEG-2, H.264, HEVC, JPEG/MJPEG and VP9)
The family of Intel QuickSync Video encoders (MPEG-2, H.264 and HEVC)
The ratecontrol method is selected as follows:
@@ -3010,48 +2603,15 @@ Size / quality tradeoff: higher values are smaller / worse quality.
@end itemize
All encoders support the following options:
@table @option
@item low_power
@itemize
@item
@option{low_power}
Some drivers/platforms offer a second encoder for some codecs intended to use
less power than the default encoder; setting this option will attempt to use
that encoder. Note that it may support a reduced feature set, so some other
options may not be available in this mode.
@item idr_interval
Set the number of normal intra frames between full-refresh (IDR) frames in
open-GOP mode. The intra frames are still IRAPs, but will not include global
headers and may have non-decodable leading pictures.
@item b_depth
Set the B-frame reference depth. When set to one (the default), all B-frames
will refer only to P- or I-frames. When set to greater values multiple layers
of B-frames will be present, frames in each layer only referring to frames in
higher layers.
@item rc_mode
Set the rate control mode to use. A given driver may only support a subset of
modes.
Possible modes:
@table @option
@item auto
Choose the mode automatically based on driver support and the other options.
This is the default.
@item CQP
Constant-quality.
@item CBR
Constant-bitrate.
@item VBR
Variable-bitrate.
@item ICQ
Intelligent constant-quality.
@item QVBR
Quality-defined variable-bitrate.
@item AVBR
Average variable bitrate.
@end table
@end table
@end itemize
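As an illustrative sketch (assuming these wrappers are the VAAPI encoders, e.g.
@code{h264_vaapi}), selecting an explicit rate control mode could look like:
@example
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
    -i INPUT -c:v h264_vaapi -rc_mode VBR -b:v 2M OUTPUT.mp4
@end example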
Each encoder also has its own specific options:
@table @option
@@ -3231,6 +2791,52 @@ Reduces detail but attempts to preserve color at extremely low bitrates.
@end table
@section libxavs2
xavs2 AVS2-P2/IEEE1857.4 encoder wrapper.
This encoder requires the presence of the libxavs2 headers and library
during configuration. You need to explicitly configure the build with
@option{--enable-libxavs2}.
@subsection Options
@table @option
@item lcu_row_threads
Set the number of parallel threads for rows from 1 to 8 (default 5).
@item initial_qp
Set the xavs2 quantization parameter from 1 to 63 (default 34). This is
used to set the initial qp for the first frame.
@item qp
Set the xavs2 quantization parameter from 1 to 63 (default 34). This is
used to set the qp value under constant-QP mode.
@item max_qp
Set the max qp for rate control from 1 to 63 (default 55).
@item min_qp
Set the min qp for rate control from 1 to 63 (default 20).
@item speed_level
Set the Speed level from 0 to 9 (default 0). Higher is better but slower.
@item log_level
Set the log level from -1 to 3 (default 0). -1: none, 0: error,
1: warning, 2: info, 3: debug.
@item xavs2-params
Set xavs2 options using a list of @var{key}=@var{value} pairs separated
by ":".
For example to specify libxavs2 encoding options with @option{-xavs2-params}:
@example
ffmpeg -i input -c:v libxavs2 -xavs2-params preset_level=5 output.avs2
@end example
@end table
@c man end VIDEO ENCODERS
@chapter Subtitles Encoders
@@ -3245,14 +2851,6 @@ and they can also be used in Matroska files.
@subsection Options
@table @option
@item palette
Specify the global palette used by the bitmaps.
The format for this option is a string containing 16 24-bit hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@item even_rows_fix
When set to 1, enable a work-around that makes the number of pixel rows
even in all subtitles. This fixes a problem with some players that
View File
@@ -1,4 +1,4 @@
/avio_list_dir
/avio_dir_cmd
/avio_reading
/decode_audio
/decode_video
View File
@@ -1,4 +1,4 @@
EXAMPLES-$(CONFIG_AVIO_LIST_DIR_EXAMPLE) += avio_list_dir
EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
EXAMPLES-$(CONFIG_DECODE_AUDIO_EXAMPLE) += decode_audio
EXAMPLES-$(CONFIG_DECODE_VIDEO_EXAMPLE) += decode_video
@@ -37,7 +37,7 @@ $(EXAMPLES_G): %$(PROGSSUF)_g$(EXESUF): %.o
examples: $(EXAMPLES)
$(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.o): | doc/examples
OUTDIRS += doc/examples
OBJDIRS += doc/examples
DOXY_INPUT += $(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.c)
View File
@@ -11,7 +11,7 @@ CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
EXAMPLES= avio_list_dir \
EXAMPLES= avio_dir_cmd \
avio_reading \
decode_audio \
decode_video \
View File
@@ -102,15 +102,38 @@ static int list_op(const char *input_dir)
return ret;
}
static int del_op(const char *url)
{
int ret = avpriv_io_delete(url);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot delete '%s': %s.\n", url, av_err2str(ret));
return ret;
}
static int move_op(const char *src, const char *dst)
{
int ret = avpriv_io_move(src, dst);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot move '%s' into '%s': %s.\n", src, dst, av_err2str(ret));
return ret;
}
static void usage(const char *program_name)
{
fprintf(stderr, "usage: %s input_dir\n"
"API example program to show how to list files in directory "
"accessed through AVIOContext.\n", program_name);
fprintf(stderr, "usage: %s OPERATION entry1 [entry2]\n"
"API example program to show how to manipulate resources "
"accessed through AVIOContext.\n"
"OPERATIONS:\n"
"list list content of the directory\n"
"move rename content in directory\n"
"del delete content in directory\n",
program_name);
}
int main(int argc, char *argv[])
{
const char *op = NULL;
int ret;
av_log_set_level(AV_LOG_DEBUG);
@@ -122,7 +145,32 @@ int main(int argc, char *argv[])
avformat_network_init();
ret = list_op(argv[1]);
op = argv[1];
if (strcmp(op, "list") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for list operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = list_op(argv[2]);
}
} else if (strcmp(op, "del") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for del operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = del_op(argv[2]);
}
} else if (strcmp(op, "move") == 0) {
if (argc < 4) {
av_log(NULL, AV_LOG_INFO, "Missing argument for move operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = move_op(argv[2], argv[3]);
}
} else {
av_log(NULL, AV_LOG_INFO, "Invalid operation %s\n", op);
ret = AVERROR(EINVAL);
}
avformat_network_deinit();
View File
@@ -117,12 +117,11 @@ int main(int argc, char *argv[])
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx)
if (avio_ctx) {
av_freep(&avio_ctx->buffer);
avio_context_free(&avio_ctx);
av_freep(&avio_ctx);
}
av_file_unmap(buffer, buffer_size);
if (ret < 0) {
View File
@@ -39,35 +39,6 @@
#define AUDIO_INBUF_SIZE 20480
#define AUDIO_REFILL_THRESH 4096
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"sample format %s is not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return -1;
}
static void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,
FILE *outfile)
{
@@ -115,9 +86,6 @@ int main(int argc, char **argv)
size_t data_size;
AVPacket *pkt;
AVFrame *decoded_frame = NULL;
enum AVSampleFormat sfmt;
int n_channels = 0;
const char *fmt;
if (argc <= 2) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
@@ -204,26 +172,6 @@ int main(int argc, char **argv)
pkt->size = 0;
decode(c, pkt, decoded_frame, outfile);
/* print output PCM information, because raw PCM has no metadata */
sfmt = c->sample_fmt;
if (av_sample_fmt_is_planar(sfmt)) {
const char *packed = av_get_sample_fmt_name(sfmt);
printf("Warning: the sample format the decoder produced is planar "
"(%s). This example will output the first channel only.\n",
packed ? packed : "?");
sfmt = av_get_packed_sample_fmt(sfmt);
}
n_channels = c->channels;
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
goto end;
printf("Play the output audio file with the command:\n"
"ffplay -f %s -ac %d -ar %d %s\n",
fmt, n_channels, c->sample_rate,
outfilename);
end:
fclose(outfile);
fclose(f);
View File
@@ -95,8 +95,7 @@ int main(int argc, char **argv)
AVPacket *pkt;
if (argc <= 2) {
fprintf(stderr, "Usage: %s <input file> <output file>\n"
"And check your input file is encoded by mpeg1video please.\n", argv[0]);
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
exit(0);
}
filename = argv[1];
View File
@@ -55,93 +55,95 @@ static AVPacket pkt;
static int video_frame_count = 0;
static int audio_frame_count = 0;
static int output_video_frame(AVFrame *frame)
{
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
frame->width, frame->height,
av_get_pix_fmt_name(frame->format));
return -1;
}
/* Enable or disable frame reference counting. You are not supposed to support
* both paths in your application but pick the one most appropriate to your
* needs. Look for the use of refcount in this example to see what are the
* differences of API usage between them. */
static int refcount = 0;
printf("video_frame n:%d coded_n:%d\n",
video_frame_count++, frame->coded_picture_number);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non aligned data */
av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
return 0;
}
static int output_audio_frame(AVFrame *frame)
{
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
printf("audio_frame n:%d nb_samples:%d pts:%s\n",
audio_frame_count++, frame->nb_samples,
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
return 0;
}
static int decode_packet(AVCodecContext *dec, const AVPacket *pkt)
static int decode_packet(int *got_frame, int cached)
{
int ret = 0;
int decoded = pkt.size;
// submit the packet to the decoder
ret = avcodec_send_packet(dec, pkt);
if (ret < 0) {
fprintf(stderr, "Error submitting a packet for decoding (%s)\n", av_err2str(ret));
return ret;
}
*got_frame = 0;
// get all the available frames from the decoder
while (ret >= 0) {
ret = avcodec_receive_frame(dec, frame);
if (pkt.stream_index == video_stream_idx) {
/* decode video frame */
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
// those two return values are special and mean there is no output
// frame available, but there were no errors during decoding
if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
return 0;
fprintf(stderr, "Error during decoding (%s)\n", av_err2str(ret));
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
// write the frame data to output file
if (dec->codec->type == AVMEDIA_TYPE_VIDEO)
ret = output_video_frame(frame);
else
ret = output_audio_frame(frame);
if (*got_frame) {
av_frame_unref(frame);
if (ret < 0)
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
frame->width, frame->height,
av_get_pix_fmt_name(frame->format));
return -1;
}
printf("video_frame%s n:%d coded_n:%d\n",
cached ? "(cached)" : "",
video_frame_count++, frame->coded_picture_number);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non aligned data */
av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
}
} else if (pkt.stream_index == audio_stream_idx) {
/* decode audio frame */
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
return ret;
}
/* Some audio decoders decode only part of the packet, and have to be
* called again with the remainder of the packet data.
* Sample: fate-suite/lossless-audio/luckynight-partial.shn
* Also, some decoders might over-read the packet. */
decoded = FFMIN(ret, pkt.size);
if (*got_frame) {
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
cached ? "(cached)" : "",
audio_frame_count++, frame->nb_samples,
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
}
}
return 0;
/* If we use frame reference counting, we own the data and need
* to de-reference it when we don't use it anymore */
if (*got_frame && refcount)
av_frame_unref(frame);
return decoded;
}
static int open_codec_context(int *stream_idx,
@@ -184,7 +186,8 @@ static int open_codec_context(int *stream_idx,
return ret;
}
/* Init the decoders */
/* Init the decoders, with or without reference counting */
av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);
if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
@@ -227,17 +230,24 @@ static int get_format_from_sample_fmt(const char **fmt,
int main (int argc, char **argv)
{
int ret = 0;
int ret = 0, got_frame;
if (argc != 4) {
fprintf(stderr, "usage: %s input_file video_output_file audio_output_file\n"
if (argc != 4 && argc != 5) {
fprintf(stderr, "usage: %s [-refcount] input_file video_output_file audio_output_file\n"
"API example program to show how to read frames from an input file.\n"
"This program reads frames from a file, decodes them, and writes decoded\n"
"video frames to a rawvideo file named video_output_file, and decoded\n"
"audio frames to a rawaudio file named audio_output_file.\n",
argv[0]);
"audio frames to a rawaudio file named audio_output_file.\n\n"
"If the -refcount option is specified, the program use the\n"
"reference counting frame system which allows keeping a copy of\n"
"the data for longer than one decode call.\n"
"\n", argv[0]);
exit(1);
}
if (argc == 5 && !strcmp(argv[1], "-refcount")) {
refcount = 1;
argv++;
}
src_filename = argv[1];
video_dst_filename = argv[2];
audio_dst_filename = argv[3];
@@ -315,22 +325,23 @@ int main (int argc, char **argv)
/* read frames from the file */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
// check if the packet belongs to a stream we are interested in, otherwise
// skip it
if (pkt.stream_index == video_stream_idx)
ret = decode_packet(video_dec_ctx, &pkt);
else if (pkt.stream_index == audio_stream_idx)
ret = decode_packet(audio_dec_ctx, &pkt);
av_packet_unref(&pkt);
if (ret < 0)
break;
AVPacket orig_pkt = pkt;
do {
ret = decode_packet(&got_frame, 0);
if (ret < 0)
break;
pkt.data += ret;
pkt.size -= ret;
} while (pkt.size > 0);
av_packet_unref(&orig_pkt);
}
/* flush the decoders */
if (video_dec_ctx)
decode_packet(video_dec_ctx, NULL);
if (audio_dec_ctx)
decode_packet(audio_dec_ctx, NULL);
/* flush cached frames */
pkt.data = NULL;
pkt.size = 0;
do {
decode_packet(&got_frame, 1);
} while (got_frame);
printf("Demuxing succeeded.\n");
View File
@@ -145,7 +145,7 @@ int main(int argc, char **argv)
frame->width = c->width;
frame->height = c->height;
ret = av_frame_get_buffer(frame, 0);
ret = av_frame_get_buffer(frame, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate the video frame data\n");
exit(1);
@@ -186,8 +186,7 @@ int main(int argc, char **argv)
encode(c, NULL, pkt, f);
/* add sequence end code to have a real MPEG file */
if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO)
fwrite(endcode, 1, sizeof(endcode), f);
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
avcodec_free_context(&c);
View File
@@ -47,11 +47,6 @@ int main (int argc, char **argv)
if ((ret = avformat_open_input(&fmt_ctx, argv[1], NULL, NULL)))
return ret;
if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
while ((tag = av_dict_get(fmt_ctx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX)))
printf("%s=%s\n", tag->key, tag->value);
View File
@@ -78,45 +78,15 @@ static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
AVStream *st, AVFrame *frame)
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
int ret;
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
// send the frame to the encoder
ret = avcodec_send_frame(c, frame);
if (ret < 0) {
fprintf(stderr, "Error sending a frame to the encoder: %s\n",
av_err2str(ret));
exit(1);
}
while (ret >= 0) {
AVPacket pkt = { 0 };
ret = avcodec_receive_packet(c, &pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
else if (ret < 0) {
fprintf(stderr, "Error encoding a frame: %s\n", av_err2str(ret));
exit(1);
}
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, &pkt);
ret = av_interleaved_write_frame(fmt_ctx, &pkt);
av_packet_unref(&pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret));
exit(1);
}
}
return ret == AVERROR_EOF ? 1 : 0;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
return av_interleaved_write_frame(fmt_ctx, pkt);
}
/* Add an output stream. */
@@ -315,7 +285,7 @@ static AVFrame *get_audio_frame(OutputStream *ost)
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, ost->enc->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
for (j = 0; j <frame->nb_samples; j++) {
@@ -339,10 +309,13 @@ static AVFrame *get_audio_frame(OutputStream *ost)
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0;
AVFrame *frame;
int ret;
int got_packet;
int dst_nb_samples;
av_init_packet(&pkt);
c = ost->enc;
frame = get_audio_frame(ost);
@@ -376,7 +349,22 @@ static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
ost->samples_count += dst_nb_samples;
}
return write_frame(oc, c, ost->st, frame);
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
}
return (frame || got_packet) ? 0 : 1;
}
/**************************************************************/
@@ -396,7 +384,7 @@ static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 0);
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
@@ -476,7 +464,7 @@ static AVFrame *get_video_frame(OutputStream *ost)
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
/* when we pass a frame to the encoder, it may keep a reference to it
@@ -518,8 +506,37 @@ static AVFrame *get_video_frame(OutputStream *ost)
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
return write_frame(oc, ost->enc, ost->st, get_video_frame(ost));
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
AVPacket pkt = { 0 };
c = ost->enc;
frame = get_video_frame(ost);
av_init_packet(&pkt);
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
} else {
ret = 0;
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
return (frame || got_packet) ? 0 : 1;
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
View File
@@ -172,7 +172,7 @@ int main(int argc, char *argv[])
sw_frame->width = width;
sw_frame->height = height;
sw_frame->format = AV_PIX_FMT_NV12;
if ((err = av_frame_get_buffer(sw_frame, 0)) < 0)
if ((err = av_frame_get_buffer(sw_frame, 32)) < 0)
goto close;
if ((err = fread((uint8_t*)(sw_frame->data[0]), size, 1, fin)) <= 0)
break;
View File
@@ -76,7 +76,7 @@ the gcc developers. Note that we will not add workarounds for gcc bugs.
Also note that (some of) the gcc developers believe this is not a bug or
not a bug they should fix:
@url{https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
@url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
Then again, some of them do not know the difference between an undecidable
problem and an NP-hard problem...
@@ -257,13 +257,13 @@ default.
@section Which are good parameters for encoding high quality MPEG-4?
'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-mpv_flags qp_rd', '-mpv_flags mv0', '-mpv_flags skip_rd'.
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
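For instance, a two-pass encode built from those parameters could look like
(the target bitrate is chosen arbitrarily here):
@example
ffmpeg -i input.avi -c:v mpeg4 -b:v 1500k -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -bf 2 -pass 1 -an -f null /dev/null
ffmpeg -i input.avi -c:v mpeg4 -b:v 1500k -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -bf 2 -pass 2 output.avi
@end example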
@section Which are good parameters for encoding high quality MPEG-1/MPEG-2?
'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware the '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-mpv_flags qp_rd', '-mpv_flags mv0', '-mpv_flags skip_rd'.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd.
@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?
@@ -516,7 +516,7 @@ in the ffmpeg invocation. This is effective whether you run ffmpeg in a shell
or invoke ffmpeg in its own process via an operating system API.
As an alternative, when you are running ffmpeg in a shell, you can redirect
standard input to @code{/dev/null} (on Linux and macOS)
standard input to @code{/dev/null} (on Linux and Mac OS)
or @code{NUL} (on Windows). You can do this redirect either
on the ffmpeg invocation, or from a shell script which calls ffmpeg.
@@ -526,7 +526,7 @@ For example:
ffmpeg -nostdin -i INPUT OUTPUT
@end example
or (on Linux, macOS, and other UNIX-like shells):
or (on Linux, Mac OS, and other UNIX-like shells):
@example
ffmpeg -i INPUT OUTPUT </dev/null
@@ -601,7 +601,7 @@ No. These tools are too bloated and they complicate the build.
FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{https://web.archive.org/web/20111004021423/http://kernel.org/pub/linux/docs/lkml/#s15, "Programming Religion"}.
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.
@section Why are the ffmpeg programs devoid of debugging symbols?
View File
@@ -149,8 +149,6 @@ the synchronisation of the samples directory.
@chapter Uploading new samples to the fate suite
If you need a sample uploaded send a mail to samples-request.
This is for developers who have an account on the fate suite server.
If you upload new samples, please make sure they are as small as possible:
disk space on each client, network bandwidth and so on all benefit from smaller test cases.
@@ -159,8 +157,6 @@ practice generally do not replace, remove or overwrite files as it likely would
break older checkouts or releases.
Also, all samples needed for a commit should be uploaded, ideally 24 hours
before the push.
If you need an account for frequently uploading samples or you wish to help
others by doing that send a mail to ffmpeg-devel.
@example
#First update your local samples copy:
View File
@@ -523,9 +523,6 @@ The offset is added to the timestamps of the input files. Specifying
a positive offset means that the corresponding streams are delayed by
the time duration specified in @var{offset}.
@item -itsscale @var{scale} (@emph{input,per-stream})
Rescale input timestamps. @var{scale} should be a floating point number.
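For example, doubling the timestamps of the first input (which, with stream copy,
roughly halves the playback speed of the copied video stream) could be done with:
@example
ffmpeg -itsscale 2 -i INPUT -c copy OUTPUT
@end example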
@item -timestamp @var{date} (@emph{output})
Set the recording timestamp in the container.
@@ -617,13 +614,8 @@ they do not conflict with the standard, as in:
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
@end example
@item -dn (@emph{input/output})
As an input option, blocks all data streams of a file from being filtered or
being automatically selected or mapped for any output. See @code{-discard}
option to disable streams individually.
As an output option, disables data recording i.e. automatic selection or
mapping of any data stream. For full manual control see the @code{-map}
@item -dn (@emph{output})
Disable data recording. For full manual control see the @code{-map}
option.
@item -dframes @var{number} (@emph{output})
@@ -783,13 +775,8 @@ If used together with @option{-vcodec copy}, it will affect the aspect ratio
stored at container level, but not the aspect ratio stored in encoded
frames, if it exists.
@item -vn (@emph{input/output})
As an input option, blocks all video streams of a file from being filtered or
being automatically selected or mapped for any output. See @code{-discard}
option to disable streams individually.
As an output option, disables video recording i.e. automatic selection or
mapping of any video stream. For full manual control see the @code{-map}
@item -vn (@emph{output})
Disable video recording. For full manual control see the @code{-map}
option.
@item -vcodec @var{codec} (@emph{output})
@@ -879,19 +866,12 @@ Deprecated see -bsf
@item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
@item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
@item -force_key_frames[:@var{stream_specifier}] source (@emph{output,per-stream})
Force key frames at the specified timestamps, more precisely at the first
frames after each specified time.
@var{force_key_frames} can take arguments of the following form:
@table @option
@item @var{time}[,@var{time}...]
If the argument consists of timestamps, ffmpeg will round the specified times to the nearest
output timestamp as per the encoder time base and force a keyframe at the first frame having
a timestamp equal to or greater than the computed timestamp. Note that if the encoder time base is too
coarse, then the keyframes may be forced on frames with timestamps lower than the specified time.
The default encoder time base is the inverse of the output framerate but may be set otherwise
via @code{-enc_time_base}.
If the argument is prefixed with @code{expr:}, the string @var{expr}
is interpreted like an expression and is evaluated for each frame. A
key frame is forced in case the evaluation is non-zero.
If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
the time of the beginning of all chapters in the file, shifted by
@@ -905,11 +885,6 @@ before the beginning of every chapter:
-force_key_frames 0:05:00,chapters-0.1
@end example
@item expr:@var{expr}
If the argument is prefixed with @code{expr:}, the string @var{expr}
is interpreted like an expression and is evaluated for each frame. A
key frame is forced in case the evaluation is non-zero.
The expression in @var{expr} can contain the following constants:
@table @option
@item n
@@ -937,12 +912,6 @@ starting from second 13:
-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
@end example
@item source
If the argument is @code{source}, ffmpeg will force a key frame if
the current frame being encoded is marked as a key frame in its source.
@end table
Note that forcing too many keyframes is very harmful for the lookahead
algorithms of certain encoders: using fixed-GOP options or similar
would be more efficient.
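For instance, the @code{source} form above can be combined with a transcode to simply
mirror the key frame placement of the input, e.g.:
@example
ffmpeg -i INPUT -force_key_frames source -c:v libx264 OUTPUT
@end example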
@@ -1029,35 +998,6 @@ Choose the GPU device on the second platform supporting the @emph{cl_khr_fp16}
extension.
@end table
@item vulkan
If @var{device} is an integer, it selects the device by its index in a
system-dependent list of devices. If @var{device} is any other string, it
selects the first device with a name containing that string as a substring.
The following options are recognized:
@table @option
@item debug
If set to 1, enables the validation layer, if installed.
@item linear_images
If set to 1, images allocated by the hwcontext will be linear and locally mappable.
@item instance_extensions
A plus separated list of additional instance extensions to enable.
@item device_extensions
A plus separated list of additional device extensions to enable.
@end table
Examples:
@table @emph
@item -init_hw_device vulkan:1
Choose the second device on the system.
@item -init_hw_device vulkan:RADV
Choose the first device with a name containing the string @emph{RADV}.
@item -init_hw_device vulkan:0,instance_extensions=VK_KHR_wayland_surface+VK_KHR_xcb_surface
Choose the first device and enable the Wayland and XCB instance extensions.
@end table
@end table
@item -init_hw_device @var{type}[=@var{name}]@@@var{source}
@@ -1149,13 +1089,8 @@ Set the number of audio channels. For output streams it is set by
default to the number of input audio channels. For input streams
this option only makes sense for audio grabbing devices and raw demuxers
and is mapped to the corresponding demuxer options.
@item -an (@emph{input/output})
As an input option, blocks all audio streams of a file from being filtered or
being automatically selected or mapped for any output. See @code{-discard}
option to disable streams individually.
As an output option, disables audio recording i.e. automatic selection or
mapping of any audio stream. For full manual control see the @code{-map}
@item -an (@emph{output})
Disable audio recording. For full manual control see the @code{-map}
option.
@item -acodec @var{codec} (@emph{input/output})
Set the audio codec. This is an alias for @code{-codec:a}.
@@ -1190,13 +1125,8 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
@table @option
@item -scodec @var{codec} (@emph{input/output})
Set the subtitle codec. This is an alias for @code{-codec:s}.
@item -sn (@emph{input/output})
As an input option, blocks all subtitle streams of a file from being filtered or
being automatically selected or mapped for any output. See @code{-discard}
option to disable streams individually.
As an output option, disables subtitle recording i.e. automatic selection or
mapping of any subtitle stream. For full manual control see the @code{-map}
@item -sn (@emph{output})
Disable subtitle recording. For full manual control see the @code{-map}
option.
@item -sbsf @var{bitstream_filter}
Deprecated, see -bsf
@@ -1430,7 +1360,7 @@ it will usually display as 0 if not supported.
Show benchmarking information during the encode.
Shows real, system and user time used in various steps (audio/video encode/decode).
@item -timelimit @var{duration} (@emph{global})
Exit after ffmpeg has been running for @var{duration} seconds in CPU user time.
Exit after ffmpeg has been running for @var{duration} seconds.
@item -dump (@emph{global})
Dump each input packet to stderr.
@item -hex (@emph{global})
@@ -1443,6 +1373,10 @@ loss).
By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
This option will slow down the reading of the input(s) to the native frame rate
of the input(s). It is useful for real-time output (e.g. live streaming).
@item -loop_output @var{number_of_times}
Repeatedly loop output for formats that support looping such as animated GIF
(0 will loop the output infinitely).
This option is deprecated, use -loop.
@item -vsync @var{parameter}
Video sync method.
For compatibility reasons old values can be specified as numbers.
@@ -1562,13 +1496,9 @@ Enable bitexact mode for (de)muxer and (de/en)coder
Finish encoding when the shortest input stream ends.
@item -dts_delta_threshold
Timestamp discontinuity delta threshold.
@item -dts_error_threshold @var{seconds}
Timestamp error delta threshold. This threshold is used to discard crazy/damaged
timestamps; the default is 30 hours, which is arbitrarily picked and quite
conservative.
@item -muxdelay @var{seconds} (@emph{output})
@item -muxdelay @var{seconds} (@emph{input})
Set the maximum demux-decode delay.
@item -muxpreload @var{seconds} (@emph{output})
@item -muxpreload @var{seconds} (@emph{input})
Set the initial demux-decode delay.
@item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
Assign a new stream-id value to an output stream. This option should be
@@ -1690,10 +1620,8 @@ This allows dumping sdp information when at least one output isn't an
rtp stream. (Requires at least one of the output formats to be rtp).
@item -discard (@emph{input})
Allows discarding specific streams or frames from streams.
Any input stream can be fully discarded, using the value @code{all}, whereas
selective discarding of frames from a stream occurs at the demuxer
and is not supported by all demuxers.
Allows discarding specific streams or frames of streams at the demuxer.
Not all demuxers support this.
@table @option
@item none
@@ -1721,8 +1649,6 @@ Stop and abort on various conditions. The following flags are available:
@table @option
@item empty_output
No packets were passed to the muxer, the output is empty.
@item empty_output_stream
No packets were passed to the muxer in some of the output streams.
@end table
@item -xerror (@emph{global})
View File
@@ -66,8 +66,6 @@ Set custom interval, in seconds, for seeking using left/right keys. Default is 1
Disable graphical display.
@item -noborder
Borderless window.
@item -alwaysontop
Window always on top. Available on: X11 with SDL >= 2.0.5, Windows SDL >= 2.0.6.
@item -volume
Set the startup volume. 0 means silence, 100 means no volume reduction or
amplification. Negative values are treated as 0, values above 100 are treated
@@ -133,9 +131,8 @@ This option has been deprecated in favor of private options, try -pixel_format.
@item -stats
Print several playback statistics, in particular show the stream
duration, the codec parameters, the current position in the stream and
the audio/video synchronisation drift. It is shown by default, unless the
log level is lower than @code{info}. Its display can be forced by manually
specifying this option. To disable it, you need to specify @code{-nostats}.
the audio/video synchronisation drift. It is on by default, to
explicitly disable it you need to specify @code{-nostats}.
@item -fast
Non-spec-compliant optimizations.
@@ -198,12 +195,6 @@ input as soon as possible. Enabled by default for realtime streams, where data
may be dropped if not read in time. Use this option to enable infinite buffers
for all inputs, use @option{-noinfbuf} to disable it.
@item -filter_threads @var{nb_threads}
Defines how many threads are used to process a filter pipeline. Each pipeline
will produce a thread pool with this many threads available for parallel
processing. The default is 0 which means that the thread count will be
determined by the number of available CPUs.
@end table
@section While playing
View File
@@ -425,7 +425,7 @@ The @code{csv} writer is equivalent to @code{compact}, but supports
different defaults.
Each section is printed on a single line.
If no option is specified, the output has the form:
If no option is specifid, the output has the form:
@example
section|key1=val1| ... |keyN=valN
@end example
@@ -591,7 +591,7 @@ This option automatically sets @option{fully_qualified} to 1.
@end table
For more information about the XML format, see
@url{https://www.w3.org/XML/}.
@url{http://www.w3.org/XML/}.
@c man end WRITERS
@chapter Timecode
View File
@@ -147,25 +147,11 @@
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataType">
<xsd:sequence>
<xsd:element name="timecodes" type="ffprobe:frameSideDataTimecodeList" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
<xsd:attribute name="timecode" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeList">
<xsd:sequence>
<xsd:element name="timecode" type="ffprobe:frameSideDataTimecodeType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataTimecodeType">
<xsd:attribute name="value" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="subtitleType">
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
@@ -226,7 +212,6 @@
<xsd:attribute name="height" type="xsd:int"/>
<xsd:attribute name="coded_width" type="xsd:int"/>
<xsd:attribute name="coded_height" type="xsd:int"/>
<xsd:attribute name="closed_captions" type="xsd:boolean"/>
<xsd:attribute name="has_b_frames" type="xsd:int"/>
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
View File
@@ -34,24 +34,27 @@ Possible forms of stream specifiers are:
@table @option
@item @var{stream_index}
Matches the stream with this index. E.g. @code{-threads:1 4} would set the
thread count for the second stream to 4. If @var{stream_index} is used as an
additional stream specifier (see below), then it selects stream number
@var{stream_index} from the matching streams. Stream numbering is based on the
order of the streams as detected by libavformat except when a program ID is
also specified. In this case it is based on the ordering of the streams in the
program.
@item @var{stream_type}[:@var{additional_stream_specifier}]
thread count for the second stream to 4.
@item @var{stream_type}[:@var{stream_index}]
@var{stream_type} is one of following: 'v' or 'V' for video, 'a' for audio, 's'
for subtitle, 'd' for data, and 't' for attachments. 'v' matches all video
streams, 'V' only matches video streams which are not attached pictures, video
thumbnails or cover arts. If @var{additional_stream_specifier} is used, then
it matches streams which both have this type and match the
@var{additional_stream_specifier}. Otherwise, it matches all streams of the
specified type.
@item p:@var{program_id}[:@var{additional_stream_specifier}]
Matches streams which are in the program with the id @var{program_id}. If
@var{additional_stream_specifier} is used, then it matches streams which both
are part of the program and match the @var{additional_stream_specifier}.
thumbnails or cover arts. If @var{stream_index} is given, then it matches
stream number @var{stream_index} of this type. Otherwise, it matches all
streams of this type.
@item p:@var{program_id}[:@var{stream_index}] or p:@var{program_id}[:@var{stream_type}[:@var{stream_index}]] or
p:@var{program_id}:m:@var{key}[:@var{value}]
In first version, if @var{stream_index} is given, then it matches the stream with number @var{stream_index}
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
program. In the second version, @var{stream_type} is one of following: 'v' for video, 'a' for audio, 's'
for subtitle, 'd' for data. If @var{stream_index} is also given, then it matches
stream number @var{stream_index} of this type in the program with the id @var{program_id}.
Otherwise, if only @var{stream_type} is given, it matches all
streams of this type in the program with the id @var{program_id}.
In the third version matches streams in the program with the id @var{program_id} with the metadata
tag @var{key} having the specified value. If
@var{value} is not given, matches streams that contain the given tag with any
value.
@item #@var{stream_id} or i:@var{stream_id}
Match the stream by stream id (e.g. PID in MPEG-TS container).
@@ -109,10 +112,6 @@ Print detailed information about the muxer named @var{muxer_name}. Use the
@item filter=@var{filter_name}
Print detailed information about the filter name @var{filter_name}. Use the
@option{-filters} option to get a list of all filters.
@item bsf=@var{bitstream_filter_name}
Print detailed information about the bitstream filter name @var{bitstream_filter_name}.
Use the @option{-bsfs} option to get a list of all bitstream filters.
@end table
@item -version
@@ -236,15 +235,17 @@ ffmpeg [...] -loglevel +repeat
By default the program logs to stderr. If coloring is supported by the
terminal, colors are used to mark errors and warnings. Log coloring
can be disabled setting the environment variable
@env{AV_LOG_FORCE_NOCOLOR}, or can be forced setting
@env{AV_LOG_FORCE_NOCOLOR} or @env{NO_COLOR}, or can be forced setting
the environment variable @env{AV_LOG_FORCE_COLOR}.
The use of the environment variable @env{NO_COLOR} is deprecated and
will be dropped in a future FFmpeg version.
@item -report
Dump full command line and log output to a file named
Dump full command line and console output to a file named
@code{@var{program}-@var{YYYYMMDD}-@var{HHMMSS}.log} in the current
directory.
This file can be useful for bug reports.
It also implies @code{-loglevel debug}.
It also implies @code{-loglevel verbose}.
Setting the environment variable @env{FFREPORT} to any value has the
same effect. If the value is a ':'-separated key=value sequence, these
@@ -370,15 +371,7 @@ ffmpeg -i input.flac -id3v2_version 3 out.mp3
@end example
All codec AVOptions are per-stream, and thus a stream specifier
should be attached to them:
@example
ffmpeg -i multichannel.mxf -map 0:v:0 -map 0:a:0 -map 0:a:0 -c:a:0 ac3 -b:a:0 640k -ac:a:1 2 -c:a:1 aac -b:2 128k out.mp4
@end example
In the above example, a multichannel audio stream is mapped twice for output.
The first instance is encoded with codec ac3 and bitrate 640k.
The second instance is downmixed to 2 channels and encoded with codec aac. A bitrate of 128k is specified for it using
absolute index of the output stream.
should be attached to them.
Note: the @option{-nooption} syntax cannot be used for boolean
AVOptions, use @option{-option 0}/@option{-option 1}.
File diff suppressed because it is too large
View File
@@ -27,10 +27,6 @@ stream information. A higher value will enable detecting more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by default.
@item max_probe_packets @var{integer} (@emph{input})
Set the maximum number of buffered packets when probing a codec.
Default is 2500 packets.
@item packetsize @var{integer} (@emph{output})
Set packet size.
@@ -143,7 +139,7 @@ Consider things that a sane encoder should not do as an error.
@item max_interleave_delta @var{integer} (@emph{output})
Set maximum buffering duration for interleaving. The duration is
expressed in microseconds, and defaults to 10000000 (10 seconds).
expressed in microseconds, and defaults to 1000000 (1 second).
To ensure all the streams are interleaved correctly, libavformat will
wait until it has at least one packet for each stream before actually
@@ -215,7 +211,7 @@ is @code{0} (meaning that no offset is applied).
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example, to separate the fields with newlines and indentation:
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg
@@ -228,28 +224,6 @@ would require too many resources due to a large number of streams.
@item skip_estimate_duration_from_pts @var{bool} (@emph{input})
Skip estimation of input duration when calculated using PTS.
At present, applicable for MPEG-PS and MPEG-TS.
@item strict, f_strict @var{integer} (@emph{input/output})
Specify how strictly to follow the standards. @code{f_strict} is deprecated and
should be used only via the @command{ffmpeg} tool.
Possible values:
@table @samp
@item very
strictly conform to an older more strict version of the spec or reference software
@item strict
strictly conform to all the things in the spec no matter what consequences
@item normal
@item unofficial
allow unofficial extensions
@item experimental
allow non-standardized experimental things, i.e. experimental
(unfinished / work in progress / not well tested) decoders and encoders.
Note: experimental decoders can pose a security risk, do not use this for
decoding untrusted input.
@end table
@end table
@c man end FORMAT OPTIONS
@@ -260,10 +234,30 @@ decoding untrusted input.
Format stream specifiers allow selection of one or more streams that
match specific properties.
Possible forms of stream specifiers are:
@table @option
@item @var{stream_index}
Matches the stream with this index.
@item @var{stream_type}[:@var{stream_index}]
@var{stream_type} is one of following: 'v' for video, 'a' for audio,
's' for subtitle, 'd' for data, and 't' for attachments. If
@var{stream_index} is given, then it matches the stream number
@var{stream_index} of this type. Otherwise, it matches all streams of
this type.
@item p:@var{program_id}[:@var{stream_index}]
If @var{stream_index} is given, then it matches the stream with number
@var{stream_index} in the program with the id
@var{program_id}. Otherwise, it matches all streams in the program.
@item #@var{stream_id}
Matches the stream by a format-specific ID.
@end table
The exact semantics of stream specifiers is defined by the
@code{avformat_match_stream_specifier()} function declared in the
@file{libavformat/avformat.h} header and documented in the
@ref{Stream specifiers,,Stream specifiers section in the ffmpeg(1) manual,ffmpeg}.
@file{libavformat/avformat.h} header.
@ifclear config-writeonly
@include demuxers.texi

View File

@@ -17,107 +17,21 @@ for more formats. None of them are used by default, their use has to be
explicitly requested by passing the appropriate flags to
@command{./configure}.
@section Alliance for Open Media (AOM)
@section libxavs2
FFmpeg can make use of the AOM library for AV1 decoding and encoding.
FFmpeg can make use of the xavs2 library for AVS2-P2/IEEE1857.4 video encoding.
Go to @url{http://aomedia.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libaom} to configure to
Go to @url{https://github.com/pkuvcl/xavs2} and follow the instructions for
installing the library. Then pass @code{--enable-libxavs2} to configure to
enable it.
@section AMD AMF/VCE
FFmpeg can use the AMD Advanced Media Framework library
for accelerated H.264 and HEVC (Windows only) encoding on hardware with Video Coding Engine (VCE).
To enable support you must obtain the AMF framework header files (version 1.4.9+) from
@url{https://github.com/GPUOpen-LibrariesAndSDKs/AMF.git}.
Create an @code{AMF/} directory in the system include path.
Copy the contents of @code{AMF/amf/public/include/} into that directory.
Then configure FFmpeg with @code{--enable-amf}.
Initialization of the AMF encoder is attempted in this order:
1) through DX11 (Windows only)
2) through DX9 (Windows only)
3) through Vulkan
To use the H.264 (AMD VCE) encoder on Linux, the amdgpu-pro driver version 19.20+ and the
amf-amdgpu-pro package (which amdgpu-pro contains, but does not install automatically) are required.
This driver can be installed using the amdgpu-pro-install script in the official AMD driver archive.
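A minimal sketch of using the wrapper once support is compiled in (assuming the encoder
is exposed as @code{h264_amf}):
@example
ffmpeg -i INPUT -c:v h264_amf -b:v 4M OUTPUT.mp4
@end example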
@section AviSynth
FFmpeg can read AviSynth scripts as input. To enable support, pass
@code{--enable-avisynth} to configure after installing the headers
provided by @url{https://github.com/AviSynth/AviSynthPlus, AviSynth+}.
AviSynth+ can be configured to install only the headers by either
passing @code{-DHEADERS_ONLY:bool=on} to the normal CMake-based build
system, or by using the supplied @code{GNUmakefile}.
For Windows, supported AviSynth variants are
@url{http://avisynth.nl, AviSynth 2.6 RC1 or higher} for 32-bit builds and
@url{http://avisynth.nl/index.php/AviSynth+, AviSynth+ r1718 or higher} for 32-bit and 64-bit builds.
For Linux, macOS, and BSD, the only supported AviSynth variant is
@url{https://github.com/AviSynth/AviSynthPlus, AviSynth+}, starting with version 3.5.
@float NOTE
In 2016, AviSynth+ added support for building with GCC. However, due to
the eccentricities of Windows' calling conventions, 32-bit GCC builds
of AviSynth+ are not compatible with typical 32-bit builds of FFmpeg.
By default, FFmpeg assumes compatibility with 32-bit MSVC builds of
AviSynth+ since that is the most widely-used and entrenched build
configuration. Users can override this and enable support for 32-bit
GCC builds of AviSynth+ by passing @code{-DAVSC_WIN32_GCC32} to
@code{--extra-cflags} when configuring FFmpeg.
64-bit builds of FFmpeg are not affected, and can use either MSVC or
GCC builds of AviSynth+ without any special flags.
libxavs2 is under the GNU Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@float NOTE
AviSynth(+) is loaded dynamically. Distributors can build FFmpeg
with @code{--enable-avisynth}, and the binaries will work regardless
of the end user having AviSynth installed. If/when an end user
would like to use AviSynth scripts, then they can install AviSynth(+)
and FFmpeg will be able to find and use it to open scripts.
@end float
@section Chromaprint
FFmpeg can make use of the Chromaprint library for generating audio fingerprints.
Pass @code{--enable-chromaprint} to configure to
enable it. See @url{https://acoustid.org/chromaprint}.
@section codec2
FFmpeg can make use of the codec2 library for codec2 decoding and encoding.
There is currently no native decoder, so libcodec2 must be used for decoding.
Go to @url{http://freedv.org/}, download "Codec 2 source archive".
Build and install using CMake. Debian users can install the libcodec2-dev package instead.
Once libcodec2 is installed you can pass @code{--enable-libcodec2} to configure to enable it.
The easiest way to use codec2 is with .c2 files, since they contain the mode information required for decoding.
To encode such a file, use a .c2 file extension and give the libcodec2 encoder the -mode option:
@code{ffmpeg -i input.wav -mode 700C output.c2}.
Playback is as simple as @code{ffplay output.c2}.
For a list of supported modes, run @code{ffmpeg -h encoder=libcodec2}.
Raw codec2 files are also supported.
To make sense of them the mode in use needs to be specified as a format option:
@code{ffmpeg -f codec2raw -mode 1300 -i input.raw output.wav}.
@section dav1d
FFmpeg can make use of the dav1d library for AV1 video decoding.
Go to @url{https://code.videolan.org/videolan/dav1d} and follow the instructions for
installing the library. Then pass @code{--enable-libdav1d} to configure to enable it.
@section davs2
@section libdavs2
FFmpeg can make use of the davs2 library for AVS2-P2/IEEE1857.4 video decoding.
@@ -131,63 +45,21 @@ libdavs2 is under the GNU Public License Version 2 or later
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section Game Music Emu
@section Alliance for Open Media libaom
FFmpeg can make use of the Game Music Emu library to read audio from supported video game
music file formats. Pass @code{--enable-libgme} to configure to
enable it. See @url{https://bitbucket.org/mpyne/game-music-emu/overview}.
FFmpeg can make use of the libaom library for AV1 decoding.
@section Intel QuickSync Video
FFmpeg can use Intel QuickSync Video (QSV) for accelerated decoding and encoding
of multiple codecs. To use QSV, FFmpeg must be linked against the @code{libmfx}
dispatcher, which loads the actual decoding libraries.
The dispatcher is open source and can be downloaded from
@url{https://github.com/lu-zero/mfx_dispatch.git}. FFmpeg needs to be configured
with the @code{--enable-libmfx} option and @code{pkg-config} needs to be able to
locate the dispatcher's @code{.pc} files.
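A sketch of a configure line plus a simple transcode using the QSV H.264 codecs
(assuming they are exposed as @code{h264_qsv}) might look like:
@example
./configure --enable-libmfx
ffmpeg -hwaccel qsv -c:v h264_qsv -i INPUT -c:v h264_qsv -b:v 3M OUTPUT.mp4
@end example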
@section Kvazaar
FFmpeg can make use of the Kvazaar library for HEVC encoding.
Go to @url{https://github.com/ultravideo/kvazaar} and follow the
instructions for installing the library. Then pass
@code{--enable-libkvazaar} to configure to enable it.
@section LAME
FFmpeg can make use of the LAME library for MP3 encoding.
Go to @url{http://lame.sourceforge.net/} and follow the
instructions for installing the library.
Then pass @code{--enable-libmp3lame} to configure to enable it.
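For example (file names are placeholders), VBR MP3 encoding with libmp3lame could look like:
@example
ffmpeg -i input.wav -c:a libmp3lame -q:a 2 output.mp3
@end example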
@section libilbc
iLBC is a narrowband speech codec that has been made freely available
by Google as part of the WebRTC project. libilbc is a packaging friendly
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
iLBC decoding and encoding.
Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
Go to @url{http://aomedia.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libaom} to configure to
enable it.
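As a decode-only sketch (assuming a build with @code{--enable-libaom}; the file name is
a placeholder), the libaom decoder can be forced and the output discarded:
@example
ffmpeg -c:v libaom-av1 -i input.webm -f null -
@end example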
@section libvpx
@section OpenJPEG
FFmpeg can make use of the libvpx library for VP8/VP9 decoding and encoding.
FFmpeg can use the OpenJPEG libraries for encoding/decoding J2K videos. Go to
@url{http://www.openjpeg.org/} to get the libraries and follow the installation
instructions. To enable using OpenJPEG in FFmpeg, pass @code{--enable-libopenjpeg} to
@file{./configure}.
Go to @url{http://www.webmproject.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libvpx} to configure to
enable it.
@section ModPlug
FFmpeg can make use of this library, originating in Modplug-XMMS, to read from MOD-like music files.
See @url{https://github.com/Konstanty/libmodplug}. Pass @code{--enable-libmodplug} to configure to
enable it.
@section OpenCORE, VisualOn, and Fraunhofer libraries
@@ -234,9 +106,67 @@ Go to @url{http://sourceforge.net/projects/opencore-amr/} and follow the
instructions for installing the library.
Then pass @code{--enable-libfdk-aac} to configure to enable it.
@section LAME
FFmpeg can make use of the LAME library for MP3 encoding.
Go to @url{http://lame.sourceforge.net/} and follow the
instructions for installing the library.
Then pass @code{--enable-libmp3lame} to configure to enable it.
@section TwoLAME
FFmpeg can make use of the TwoLAME library for MP2 encoding.
Go to @url{http://www.twolame.org/} and follow the
instructions for installing the library.
Then pass @code{--enable-libtwolame} to configure to enable it.
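For example (file names are placeholders), MP2 encoding with libtwolame could look like:
@example
ffmpeg -i input.wav -c:a libtwolame -b:a 192k output.mp2
@end example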
@section libcodec2 / codec2 general
FFmpeg can make use of libcodec2 for codec2 encoding and decoding.
There is currently no native decoder, so libcodec2 must be used for decoding.
Go to @url{http://freedv.org/}, download "Codec 2 source archive".
Build and install using CMake. Debian users can install the libcodec2-dev package instead.
Once libcodec2 is installed you can pass @code{--enable-libcodec2} to configure to enable it.
The easiest way to use codec2 is with .c2 files, since they contain the mode information required for decoding.
To encode such a file, use a .c2 file extension and give the libcodec2 encoder the -mode option:
@code{ffmpeg -i input.wav -mode 700C output.c2}.
Playback is as simple as @code{ffplay output.c2}.
For a list of supported modes, run @code{ffmpeg -h encoder=libcodec2}.
Raw codec2 files are also supported.
To make sense of them the mode in use needs to be specified as a format option:
@code{ffmpeg -f codec2raw -mode 1300 -i input.raw output.wav}.
@section libvpx
FFmpeg can make use of the libvpx library for VP8/VP9 encoding.
Go to @url{http://www.webmproject.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libvpx} to configure to
enable it.
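A minimal VP9 encoding sketch (file names are placeholders; audio is dropped here to
keep the example self-contained):
@example
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 1M -an output.webm
@end example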
@section libwavpack
FFmpeg can make use of the libwavpack library for WavPack encoding.
Go to @url{http://www.wavpack.com/} and follow the instructions for
installing the library. Then pass @code{--enable-libwavpack} to configure to
enable it.
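For example (file names are placeholders):
@example
ffmpeg -i input.wav -c:a libwavpack output.wv
@end example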
@section libxavs
FFmpeg can make use of the libxavs library for Xavs encoding.
Go to @url{http://xavs.sf.net/} and follow the instructions for
installing the library. Then pass @code{--enable-libxavs} to configure to
enable it.
@section OpenH264
FFmpeg can make use of the OpenH264 library for H.264 encoding and decoding.
Go to @url{http://www.openh264.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libopenh264} to configure to
@@ -249,47 +179,6 @@ constrained baseline profile and CABAC.) Using it is mostly useful for
testing and for taking advantage of Cisco's patent portfolio license
(@url{http://www.openh264.org/BINARY_LICENSE.txt}).
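As a sketch (file names and the bitrate are placeholders):
@example
ffmpeg -i input.mp4 -c:v libopenh264 -b:v 2M output.mp4
@end example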
@section OpenJPEG
FFmpeg can use the OpenJPEG libraries for decoding/encoding J2K videos. Go to
@url{http://www.openjpeg.org/} to get the libraries and follow the installation
instructions. To enable using OpenJPEG in FFmpeg, pass @code{--enable-libopenjpeg} to
@file{./configure}.
@section rav1e
FFmpeg can make use of rav1e (Rust AV1 Encoder) via its C bindings to encode videos.
Go to @url{https://github.com/xiph/rav1e/} and follow the instructions to build
the C library. To enable using rav1e in FFmpeg, pass @code{--enable-librav1e}
to @file{./configure}.
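A minimal encoding sketch with default rav1e settings (file names are placeholders):
@example
ffmpeg -i input.mp4 -c:v librav1e output.mkv
@end example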
@section TwoLAME
FFmpeg can make use of the TwoLAME library for MP2 encoding.
Go to @url{http://www.twolame.org/} and follow the
instructions for installing the library.
Then pass @code{--enable-libtwolame} to configure to enable it.
@section VapourSynth
FFmpeg can read VapourSynth scripts as input. To enable support, pass
@code{--enable-vapoursynth} to configure. Vapoursynth is detected via
@code{pkg-config}. Versions 42 or greater are supported.
See @url{http://www.vapoursynth.com/}.
Due to security concerns, Vapoursynth scripts will not
be autodetected so the input format has to be forced. For ff* CLI tools,
add @code{-f vapoursynth} before the input @code{-i yourscript.vpy}.
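For example (the script name is a placeholder):
@example
ffmpeg -f vapoursynth -i script.vpy output.mp4
@end example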
@section WavPack
FFmpeg can make use of the libwavpack library for WavPack encoding.
Go to @url{http://www.wavpack.com/} and follow the instructions for
installing the library. Then pass @code{--enable-libwavpack} to configure to
enable it.
@section x264
FFmpeg can make use of the x264 library for H.264 encoding.
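A typical encoding sketch (file names and the quality settings are placeholders):
@example
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 22 -c:a copy output.mp4
@end example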
@@ -318,37 +207,92 @@ x265 is under the GNU Public License Version 2 or later
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section xavs
@section kvazaar
FFmpeg can make use of the xavs library for AVS encoding.
FFmpeg can make use of the kvazaar library for HEVC encoding.
Go to @url{http://xavs.sf.net/} and follow the instructions for
installing the library. Then pass @code{--enable-libxavs} to configure to
Go to @url{https://github.com/ultravideo/kvazaar} and follow the
instructions for installing the library. Then pass
@code{--enable-libkvazaar} to configure to enable it.
@section libilbc
iLBC is a narrowband speech codec that has been made freely available
by Google as part of the WebRTC project. libilbc is a packaging friendly
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
iLBC encoding and decoding.
Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
enable it.
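As a sketch (file names are placeholders; iLBC is mono at 8 kHz, and the @file{.lbc}
extension selects the raw iLBC muxer here):
@example
ffmpeg -i input.wav -ar 8000 -ac 1 -c:a libilbc output.lbc
@end example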
@section xavs2
@section libzvbi
FFmpeg can make use of the xavs2 library for AVS2-P2/IEEE1857.4 video encoding.
Go to @url{https://github.com/pkuvcl/xavs2} and follow the instructions for
installing the library. Then pass @code{--enable-libxavs2} to configure to
enable it.
@float NOTE
libxavs2 is under the GNU Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section ZVBI
ZVBI is a VBI decoding library which can be used by FFmpeg to decode DVB
teletext pages and DVB teletext subtitles.
Go to @url{http://sourceforge.net/projects/zapping/} and follow the instructions for
installing the library. Then pass @code{--enable-libzvbi} to configure to
enable it.
@section AviSynth
FFmpeg can read AviSynth scripts as input. To enable support, pass
@code{--enable-avisynth} to configure. The correct headers are
included in compat/avisynth/, which allows the user to enable support
without needing to search for these headers themselves.
For Windows, supported AviSynth variants are
@url{http://avisynth.nl, AviSynth 2.6 RC1 or higher} for 32-bit builds and
@url{http://avs-plus.net, AviSynth+ r1718 or higher} for 32-bit and 64-bit builds.
For Linux and OS X, the supported AviSynth variant is
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
@float NOTE
There is currently a regression in AviSynth+'s @code{capi.h} header as of
October 2016, which interferes with the ability for builds of FFmpeg to use
MSVC-built binaries of AviSynth. Until this is resolved, you can make sure
a known good version is installed by checking out a version from before
the regression occurred:
@code{git clone -b MT git://github.com/AviSynth/AviSynthPlus.git @*
cd AviSynthPlus @*
git checkout -b oldheader b4f292b4dbfad149697fb65c6a037bb3810813f9 @*
make install PREFIX=/install/prefix}
@end float
@float NOTE
AviSynth and AvxSynth are loaded dynamically. Distributors can build FFmpeg
with @code{--enable-avisynth}, and the binaries will work regardless of the
end user having AviSynth or AvxSynth installed - they'll only need to be
installed to use AviSynth scripts (obviously).
@end float
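Once support is compiled in, a script can simply be given as input (the script name is
a placeholder):
@example
ffmpeg -i script.avs output.mp4
@end example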
@section Intel QuickSync Video
FFmpeg can use Intel QuickSync Video (QSV) for accelerated encoding and decoding
of multiple codecs. To use QSV, FFmpeg must be linked against the @code{libmfx}
dispatcher, which loads the actual decoding libraries.
The dispatcher is open source and can be downloaded from
@url{https://github.com/lu-zero/mfx_dispatch.git}. FFmpeg needs to be configured
with the @code{--enable-libmfx} option and @code{pkg-config} needs to be able to
locate the dispatcher's @code{.pc} files.
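A transcoding sketch using the QSV H.264 decoder and encoder (file names and the
bitrate are placeholders):
@example
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v h264_qsv -b:v 2M output.mp4
@end example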
@section AMD VCE
FFmpeg can use the AMD Advanced Media Framework library for accelerated H.264
and HEVC encoding on VCE enabled hardware under Windows.
To enable support you must obtain the AMF framework header files from
@url{https://github.com/GPUOpen-LibrariesAndSDKs/AMF.git}.
Create an @code{AMF/} directory in the system include path.
Copy the contents of @code{AMF/amf/public/include/} into that directory.
Then configure FFmpeg with @code{--enable-amf}.
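For example (a sketch; file names are placeholders), H.264 encoding via AMF could look like:
@example
ffmpeg -i input.mp4 -c:v h264_amf output.mp4
@end example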
@chapter Supported File Formats, Codecs or Features
You can use the @code{-formats} and @code{-codecs} options to have an exhaustive list.
@@ -419,9 +363,6 @@ library:
@tab Contains header with version and mode info, simplifying playback.
@item CRI ADX @tab X @tab X
@tab Audio-only format used in console video games.
@item CRI AIX @tab @tab X
@item CRI HCA @tab @tab X
@tab Audio-only format used in console video games.
@item Discworld II BMV @tab @tab X
@item Interplay C93 @tab @tab X
@tab Used in the game Cyberia from Interplay.
@@ -491,8 +432,6 @@ library:
@item IEC61937 encapsulation @tab X @tab X
@item IFF @tab @tab X
@tab Interchange File Format
@item IFV @tab @tab X
@tab A format used by some old CCTV DVRs.
@item iLBC @tab X @tab X
@item Interplay MVE @tab @tab X
@tab Format used in various Interplay computer games.
@@ -577,6 +516,7 @@ library:
@item raw aptX @tab X @tab X
@item raw aptX HD @tab X @tab X
@item raw Chinese AVS video @tab X @tab X
@item raw CRI ADX @tab X @tab X
@item raw Dirac @tab X @tab X
@item raw DNxHD @tab X @tab X
@item raw DTS @tab X @tab X
@@ -710,7 +650,7 @@ library:
@item Psygnosis YOP @tab @tab X
@end multitable
@code{X} means that the feature in that column (encoding / decoding) is supported.
@section Image Formats
@@ -780,7 +720,7 @@ following image formats are supported:
@tab X Window Dump image format
@end multitable
@code{X} means that the feature in that column (encoding / decoding) is supported.
@code{E} means that support is provided through an external library.
@@ -818,14 +758,12 @@ following image formats are supported:
@item Autodesk Animator Flic video @tab @tab X
@item Autodesk RLE @tab @tab X
@tab fourcc: AASC
@item AV1 @tab E @tab E
@tab Supported through external libraries libaom, libdav1d and librav1e
@item AV1 @tab @tab E
@tab Supported through external library libaom
@item Avid 1:1 10-bit RGB Packer @tab X @tab X
@tab fourcc: AVrp
@item AVS (Audio Video Standard) video @tab @tab X
@tab Video encoding used by the Creature Shock game.
@item AVS2-P2/IEEE1857.4 @tab E @tab E
@tab Supported through external libraries libxavs2 and libdavs2
@item AYUV @tab X @tab X
@tab Microsoft uncompressed packed 4:4:4:4
@item Beam Software VB @tab @tab X
@@ -851,8 +789,6 @@ following image formats are supported:
@tab Codec used in Delphine Software International games.
@item Discworld II BMV Video @tab @tab X
@item Canopus Lossless Codec @tab @tab X
@item CDToons @tab @tab X
@tab Codec used in various Broderbund games.
@item Cinepak @tab @tab X
@item Cirrus Logic AccuPak @tab X @tab X
@tab fourcc: CLJR
@@ -972,8 +908,6 @@ following image formats are supported:
@tab Video encoding used in NuppelVideo files.
@item On2 VP3 @tab @tab X
@tab still experimental
@item On2 VP4 @tab @tab X
@tab fourcc: VP40
@item On2 VP5 @tab @tab X
@tab fourcc: VP50
@item On2 VP6 @tab @tab X
@@ -1068,7 +1002,7 @@ following image formats are supported:
@tab Encoder works only in PAL8.
@end multitable
@code{X} means that the feature in that column (encoding / decoding) is supported.
@code{E} means that support is provided through an external library.
@@ -1083,10 +1017,8 @@ following image formats are supported:
@item AAC+ @tab E @tab IX
@tab encoding supported through external library libfdk-aac
@item AC-3 @tab IX @tab IX
@item ACELP.KELVIN @tab @tab X
@item ADPCM 4X Movie @tab @tab X
@item ADPCM Yamaha AICA @tab @tab X
@item ADPCM Argonaut Games @tab @tab X
@item ADPCM CDROM XA @tab @tab X
@item ADPCM Creative Technology @tab @tab X
@tab 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
@@ -1102,14 +1034,10 @@ following image formats are supported:
@item ADPCM G.726 @tab X @tab X
@item ADPCM IMA AMV @tab @tab X
@tab Used in AMV files
@item ADPCM IMA Cunning Developments @tab @tab X
@item ADPCM IMA Electronic Arts EACS @tab @tab X
@item ADPCM IMA Electronic Arts SEAD @tab @tab X
@item ADPCM IMA Funcom @tab @tab X
@item ADPCM IMA High Voltage Software ALP @tab @tab X
@item ADPCM IMA QuickTime @tab X @tab X
@item ADPCM IMA Simon & Schuster Interactive @tab X @tab X
@item ADPCM IMA Ubisoft APM @tab @tab X
@item ADPCM IMA Loki SDL MJPEG @tab @tab X
@item ADPCM IMA WAV @tab X @tab X
@item ADPCM IMA Westwood @tab @tab X
@@ -1139,7 +1067,6 @@ following image formats are supported:
@item ADPCM Westwood Studios IMA @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer.
@item ADPCM Yamaha @tab X @tab X
@item ADPCM Zork @tab @tab X
@item AMR-NB @tab E @tab X
@tab encoding supported through external library libopencore-amrnb
@item AMR-WB @tab E @tab X
@@ -1161,7 +1088,6 @@ following image formats are supported:
@tab decoding supported through external library libcelt
@item codec2 @tab E @tab E
@tab en/decoding supported through external library libcodec2
@item CRI HCA @tab @tab X
@item Delphine Software International CIN audio @tab @tab X
@tab Codec used in Delphine Software International games.
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
@@ -1171,7 +1097,6 @@ following image formats are supported:
@item DCA (DTS Coherent Acoustics) @tab X @tab X
@tab supported extensions: XCh, XXCH, X96, XBR, XLL, LBR (partially)
@item Dolby E @tab @tab X
@item DPCM Gremlin @tab @tab X
@item DPCM id RoQ @tab X @tab X
@tab Used in Quake III, Jedi Knight 2 and other computer games.
@item DPCM Interplay @tab @tab X
@@ -1183,11 +1108,10 @@ following image formats are supported:
@item DPCM Sol @tab @tab X
@item DPCM Xan @tab @tab X
@tab Used in Origin's Wing Commander IV AVI files.
@item DPCM Xilam DERF @tab @tab X
@item DSD (Direct Stream Digital), least significant bit first @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first @tab @tab X
@item DSD (Direct Stream Digital), least significant bit first, planar @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first, planar @tab @tab X
@item DSP Group TrueSpeech @tab @tab X
@item DST (Direct Stream Transfer) @tab @tab X
@item DV audio @tab @tab X
@@ -1250,6 +1174,7 @@ following image formats are supported:
@item PCM unsigned 24-bit little-endian @tab X @tab X
@item PCM unsigned 32-bit big-endian @tab X @tab X
@item PCM unsigned 32-bit little-endian @tab X @tab X
@item PCM Zork @tab @tab X
@item QCELP / PureVoice @tab @tab X
@item QDesign Music Codec 1 @tab @tab X
@item QDesign Music Codec 2 @tab @tab X
@@ -1296,7 +1221,7 @@ following image formats are supported:
@item Xbox Media Audio 2 @tab @tab X
@end multitable
@code{X} means that the feature in that column (encoding / decoding) is supported.
@code{E} means that support is provided through an external library.
@@ -1340,7 +1265,6 @@ performance on systems without hardware floating point support).
@multitable @columnfractions .4 .1
@item Name @tab Support
@item AMQP @tab E
@item file @tab X
@item FTP @tab X
@item Gopher @tab X
@@ -1365,7 +1289,6 @@ performance on systems without hardware floating point support).
@item TCP @tab X
@item TLS @tab X
@item UDP @tab X
@item ZMQ @tab E
@end multitable
@code{X} means that the protocol is supported.

View File

@@ -178,9 +178,6 @@ Capture the mouse pointer. Default is 0.
@item -capture_mouse_clicks
Capture the screen mouse clicks. Default is 0.
@item -capture_raw_data
Capture the raw device data. Default is 0.
Using this option may result in receiving the underlying data delivered to the AVFoundation framework. E.g. for muxed devices that send raw DV data to the framework (like tape-based camcorders), setting this option to false results in extracted video frames captured in the designated pixel format only. Setting this option to true results in receiving the raw DV stream untouched.
@end table
@subsection Examples
@@ -211,13 +208,6 @@ Record video from the system default video device using the pixel format bgr0 an
$ ffmpeg -f avfoundation -pixel_format bgr0 -i "default:none" out.avi
@end example
@item
Record raw DV data from a suitable input device and write the output into out.dv:
@example
$ ffmpeg -f avfoundation -capture_raw_data true -i "zr100:none" out.dv
@end example
@end itemize
@section bktr
@@ -277,8 +267,8 @@ audio track.
@item list_devices
If set to @option{true}, print a list of devices and exit.
Defaults to @option{false}. This option is deprecated, please use the
@code{-sources} option of ffmpeg to list the available input devices.
Defaults to @option{false}. Alternatively you can use the @code{-sources}
option of ffmpeg to list the available input devices.
@item list_formats
If set to @option{true}, print a list of supported formats and exit.
@@ -292,6 +282,11 @@ as @option{pal} (3 letters).
Default behavior is autodetection of the input video format, if the hardware
supports it.
@item bm_v210
This is a deprecated option, you can use @option{raw_format} instead.
If set to @samp{1}, video is captured in 10 bit v210 instead
of uyvy422. Not all Blackmagic devices support this option.
@item raw_format
Set the pixel format of the captured video.
Available values are:
@@ -390,14 +385,6 @@ Either sync could go wrong by 1 frame or in a rarer case
@option{timestamp_align} seconds.
Defaults to @samp{0}.
@item wait_for_tc (@emph{bool})
Drop frames till a frame with timecode is received. Sometimes serial timecode
isn't received with the first input frame. If that happens, the stored stream
timecode will be inaccurate. If this option is set to @option{true}, input frames
are dropped till a frame with timecode is received.
Option @var{timecode_format} must be specified.
Defaults to @option{false}.
@end table
@subsection Examples
@@ -407,7 +394,7 @@ Defaults to @option{false}.
@item
List input devices:
@example
ffmpeg -sources decklink
ffmpeg -f decklink -list_devices 1 -i dummy
@end example
@item
@@ -425,7 +412,7 @@ ffmpeg -format_code Hi50 -f decklink -i 'Intensity Pro' -c:a copy -c:v copy outp
@item
Capture video clip at 1080i50 10 bit:
@example
ffmpeg -raw_format yuv422p10 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
ffmpeg -bm_v210 1 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
@end example
@item
@@ -800,7 +787,7 @@ ffplay -f iec61883 -i auto
Grab and record the input of a FireWire DV/HDV device,
using a packet buffer of 100000 packets if the source is HDV.
@example
ffmpeg -f iec61883 -i auto -dvbuffer 100000 out.mpg
ffmpeg -f iec61883 -i auto -hdvbuffer 100000 out.mpg
@end example
@end itemize
@@ -923,14 +910,6 @@ Capture from CRTC ID 42 at 60fps, map the result to VAAPI, convert to NV12 and e
ffmpeg -crtc_id 42 -framerate 60 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi output.mp4
@end example
@item
To capture only part of a plane the output can be cropped - this can be used to capture
a single window, as long as it has a known absolute position and size. For example, to
capture and encode the middle quarter of a 1920x1080 plane:
@example
ffmpeg -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,crop=960:540:480:270,scale_vaapi=960:540:nv12' -c:v h264_vaapi output.mp4
@end example
@end itemize
@section lavfi
@@ -1071,21 +1050,71 @@ IIDC1394 input device, based on libdc1394 and libraw1394.
Requires the configure option @code{--enable-libdc1394}.
@section libndi_newtek
The libndi_newtek input device provides capture capabilities for using NDI (Network
Device Interface, a standard created by NewTek).
The input filename is an NDI source name that can be found by passing -find_sources 1
on the command line - it has no specific syntax but is human-readable formatted.
To enable this input device, you need the NDI SDK and you
need to configure with the appropriate @code{--extra-cflags}
and @code{--extra-ldflags}.
@subsection Options
@table @option
@item framerate
Set the frame rate. Default is @code{ntsc}, corresponding to a frame
rate of @code{30000/1001}.
@item find_sources
If set to @option{true}, print a list of found/available NDI sources and exit.
Defaults to @option{false}.
@item pixel_format
Select the pixel format. Default is @code{uyvy422}.
@item wait_sources
Override the time to wait until the number of online sources has changed.
Defaults to @option{0.5}.
@item allow_video_fields
When this flag is @option{false}, all video that you receive will be progressive.
Defaults to @option{true}.
@item extra_ips
If set to a list of comma-separated IP addresses, scan for sources not only
using mDNS but also using the unicast IP addresses specified by this list.
@item video_size
Set the video size given as a string such as @code{640x480} or @code{hd720}.
Default is @code{qvga}.
@end table
@subsection Examples
@itemize
@item
List input devices:
@example
ffmpeg -f libndi_newtek -find_sources 1 -i dummy
@end example
@item
List local and remote input devices:
@example
ffmpeg -f libndi_newtek -extra_ips "192.168.10.10" -find_sources 1 -i dummy
@end example
@item
Restream to NDI:
@example
ffmpeg -f libndi_newtek -i "DEV-5.INTERNAL.M1STEREO.TV (NDI_SOURCE_NAME_1)" -f libndi_newtek -y NDI_SOURCE_NAME_2
@end example
@item
Restream remote NDI to local NDI:
@example
ffmpeg -f libndi_newtek -extra_ips "192.168.10.10" -i "DEV-5.REMOTE.M1STEREO.TV (NDI_SOURCE_NAME_1)" -f libndi_newtek -y NDI_SOURCE_NAME_2
@end example
@end itemize
@section openal
The OpenAL input device provides audio capture on all systems with a
@@ -1527,7 +1556,7 @@ ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_siz
@end example
@item video_size
Set the video frame size. Default is the full desktop.
Set the video frame size. Default value is @code{vga}.
@item grab_x
@item grab_y

View File

@@ -100,7 +100,6 @@ Stuff that didn't reach the codebase:
- 4de220d2e frame: allow align=0 (meaning automatic) for av_frame_get_buffer()
- Support recovery from an already present HLS playlist (see 16cb06bb30)
- Remove all output devices (see 8e7e042d41, 8d3db95f20, 6ce13070bd, d46cd24986 and https://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/216904.html)
- avcodec/libaomenc: export the Sequence Header OBU as extradata (See a024c3ce9a)
Collateral damage that needs work locally:
------------------------------------------

View File

@@ -64,6 +64,10 @@ Email @email{ffmpeg-devel@@ffmpeg.org} to send a message to the
ffmpeg-devel mailing list.
@end itemize
Note that the ffmpeg-devel mailing list does not require you to subscribe
to send a message or patch, but ffmpeg-user and libav-user do require
subscription.
@chapter Subscribing / Unsubscribing
@anchor{How do I subscribe?}
@@ -90,9 +94,6 @@ The process is the same for the other mailing lists.
Please avoid asking a mailing list admin to unsubscribe you unless you
are absolutely unable to do so by yourself. See @ref{Who do I contact if I have a problem with the mailing list?}
Note that it is possible to temporarily halt message delivery (vacation mode).
See @ref{How do I disable mail delivery without unsubscribing?}
@chapter Moderation Queue
@anchor{Why is my message awaiting moderator approval?}
@section Why is my message awaiting moderator approval?
@@ -115,8 +116,7 @@ or is abusive towards others).
@section How long does it take for my message in the moderation queue to be approved?
The queue is not checked on a regular basis. You can ask on the
@t{#ffmpeg-devel} IRC channel on Freenode for someone to approve your message.
The queue is usually checked daily to several times a week.
@anchor{How do I delete my message in the moderation queue?}
@section How do I delete my message in the moderation queue?
@@ -157,12 +157,11 @@ Perform a site search using your favorite search engine. Example:
You can ask for help in the official @t{#ffmpeg} IRC channel on Freenode.
Some users prefer the third-party @url{http://www.ffmpeg-archive.org/, Nabble}
interface which presents the mailing lists in a typical forum layout.
There are also numerous third-party help sites such as
@url{https://superuser.com/tags/ffmpeg, Super User} and
@url{https://www.reddit.com/r/ffmpeg/, r/ffmpeg on reddit}.
@anchor{What is top-posting?}
@section What is top-posting?
@@ -182,7 +181,7 @@ instead of attaching them.
Anywhere that is not too annoying for us to use.
Google Drive and Dropbox are acceptable if you need a file host, and
@url{https://0x0.st/, 0x0.st} is good for files under 256 MiB.
Small, short samples are preferred if possible.
@@ -229,54 +228,6 @@ or headers.
You can then filter the mailing list messages to their own folder.
@anchor{How do I disable mail delivery without unsubscribing?}
@section How do I disable mail delivery without unsubscribing?
Sometimes you may want to temporarily stop receiving all mailing list
messages. This "vacation mode" is simple to do:
@enumerate
@item
Go to the @url{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-user/, ffmpeg-user mailing list info page}
@item
Enter your email address in the box at very bottom of the page and click the
@emph{Unsubscribe or edit options} box.
@item
Enter your password and click the @emph{Log in} button.
@item
Look for the @emph{Mail delivery} option. Here you can disable/enable mail
delivery. If you check @emph{Set globally} it will apply your choice to all
other FFmpeg mailing lists you are subscribed to.
@end enumerate
Alternatively, from your subscribed address, send a message to @email{ffmpeg-user-request@@ffmpeg.org}
with the subject @emph{set delivery off}. To re-enable mail delivery send a
message to @email{ffmpeg-user-request@@ffmpeg.org} with the subject
@emph{set delivery on}.
@anchor{Why is the mailing list munging my address?}
@section Why is the mailing list munging my address?
This is due to subscribers that use an email service with a DMARC reject policy
which adds difficulties to mailing list operators.
The mailing list must re-write (munge) the @emph{From:} header for such users;
otherwise their email service will reject and bounce the message resulting in
automatic unsubscribing from the mailing list.
When sending a message these users will see @emph{via <mailing list name>}
added to their name and the @emph{From:} address munged to the address of
the particular mailing list.
If you want to avoid this then please use a different email service.
Note that ffmpeg-devel does not apply any munging as it causes issues with
patch authorship. As a result users with an email service with a DMARC reject
policy may be automatically unsubscribed due to rejected and bounced messages.
@chapter Rules and Etiquette
@section What are the rules and the proper etiquette?
@@ -375,15 +326,6 @@ form a multi-part message is recommended by email standards.
Check your spam folder.
@end itemize
@anchor{Why do I keep getting unsubscribed from ffmpeg-devel?}
@section Why do I keep getting unsubscribed from ffmpeg-devel?
Users with an email service that has a DMARC reject or quarantine policy may be
automatically unsubscribed from the ffmpeg-devel mailing list due to the mailing
list messages being continuously rejected and bounced back.
Consider using a different email service.
@anchor{Who do I contact if I have a problem with the mailing list?}
@section Who do I contact if I have a problem with the mailing list?

View File

@@ -33,7 +33,7 @@ At the beginning of a chapter section there may be an optional timebase to be
used for start/end values. It must be in form
@samp{TIMEBASE=@var{num}/@var{den}}, where @var{num} and @var{den} are
integers. If the timebase is missing then start/end times are assumed to
be in nanoseconds.
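Putting the timebase together with the start/end times described next, a chapter
section might look like this sketch (the times and title are made up):
@example
[CHAPTER]
TIMEBASE=1/1000
START=0
END=60000
title=Intro
@end example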
Next a chapter section must contain chapter start and end times in form
@samp{START=@var{num}}, @samp{END=@var{num}}, where @var{num} is a positive

View File

@@ -51,14 +51,16 @@ the decode process starts. Call ff_thread_finish_setup() afterwards. If
some code can't be moved, have update_thread_context() run it in the next
thread.
If the codec allocates writable tables in its init(), add an init_thread_copy()
which re-allocates them for other threads.
Add AV_CODEC_CAP_FRAME_THREADS to the codec capabilities. There will be very little
speed gain at this point but it should work.
If there are inter-frame dependencies, so the codec calls
ff_thread_report/await_progress(), set FF_CODEC_CAP_ALLOCATE_PROGRESS in
AVCodec.caps_internal and use ff_thread_get_buffer() to allocate frames. The
ff_thread_report/await_progress(), set AVCodecInternal.allocate_progress. The
frames must then be freed with ff_thread_release_buffer().
Otherwise decode directly into the user-supplied frames.
Otherwise leave it at zero and decode directly into the user-supplied frames.
Call ff_thread_report_progress() after some part of the current picture has decoded.
A good place to put this is where draw_horiz_band() is called - add this if it isn't

View File

@@ -94,25 +94,21 @@ compatibility with software that only supports a single audio stream in AVI
@anchor{chromaprint}
@section chromaprint
Chromaprint fingerprinter.
This muxer feeds audio data to the Chromaprint library,
which generates a fingerprint for the provided audio data. See @url{https://acoustid.org/chromaprint}
It takes a single signed native-endian 16-bit raw audio stream of at most 2 channels.
This muxer feeds audio data to the Chromaprint library, which generates
a fingerprint for the provided audio data. It takes a single signed
native-endian 16-bit raw audio stream.
@subsection Options
@table @option
@item silence_threshold
Threshold for detecting silence. Range is from -1 to 32767, where -1 disables
silence detection. Silence detection can only be used with version 3 of the
algorithm.
Silence detection must be disabled for use with the AcoustID service. Default is -1.
Threshold for detecting silence, ranges from 0 to 32767. -1 for default
(required for use with the AcoustID service).
@item algorithm
Version of algorithm to fingerprint with. Range is 0 to 4.
Version 3 enables silence detection. Default is 1.
Algorithm index to fingerprint with.
@item fp_format
Format to output the fingerprint as. Accepts the following options:
@@ -124,7 +120,7 @@ Binary raw fingerprint
Binary compressed fingerprint
@item base64
Base64 compressed fingerprint @emph{(default)}
@end table
@@ -218,79 +214,66 @@ It creates a MPD manifest file and segment files for each stream.
The segment filename might contain pre-defined identifiers used with SegmentTemplate
as defined in section 5.3.9.4.4 of the standard. Available identifiers are "$RepresentationID$",
"$Number$", "$Bandwidth$" and "$Time$".
In addition to the standard identifiers, an ffmpeg-specific "$ext$" identifier is also supported.
When specified, ffmpeg will replace $ext$ in the file name with the muxing format's extension, such as mp4, webm etc.
@example
ffmpeg -re -i <input> -map 0 -map 0 -c:a libfdk_aac -c:v libx264 \
-b:v:0 800k -b:v:1 300k -s:v:1 320x170 -profile:v:1 baseline \
-profile:v:0 main -bf 1 -keyint_min 120 -g 120 -sc_threshold 0 \
-b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 \
-window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" \
-f dash /path/to/out.mpd
@end example
@table @option
@item min_seg_duration @var{microseconds}
@item -min_seg_duration @var{microseconds}
This is a deprecated option to set the segment length in microseconds, use @var{seg_duration} instead.
@item seg_duration @var{duration}
@item -seg_duration @var{duration}
Set the segment length in seconds (fractional value can be set). The value is
treated as average segment duration when @var{use_template} is enabled and
@var{use_timeline} is disabled and as minimum segment duration for all the other
use cases.
@item frag_duration @var{duration}
Set the length in seconds of fragments within segments (fractional value can be set).
@item frag_type @var{type}
Set the type of interval for fragmentation.
@item window_size @var{size}
@item -window_size @var{size}
Set the maximum number of segments kept in the manifest.
@item extra_window_size @var{size}
@item -extra_window_size @var{size}
Set the maximum number of segments kept outside of the manifest before removing from disk.
@item remove_at_exit @var{remove}
@item -remove_at_exit @var{remove}
Enable (1) or disable (0) removal of all segments when finished.
@item use_template @var{template}
@item -use_template @var{template}
Enable (1) or disable (0) use of SegmentTemplate instead of SegmentList.
@item use_timeline @var{timeline}
@item -use_timeline @var{timeline}
Enable (1) or disable (0) use of SegmentTimeline in SegmentTemplate.
@item single_file @var{single_file}
@item -single_file @var{single_file}
Enable (1) or disable (0) storing all segments in one file, accessed using byte ranges.
@item single_file_name @var{file_name}
DASH-templated name to be used for baseURL. Implies @var{single_file} set to "1". In the template, "$ext$" is replaced with the file name extension specific for the segment format.
@item init_seg_name @var{init_name}
DASH-templated name to used for the initialization segment. Default is "init-stream$RepresentationID$.$ext$". "$ext$" is replaced with the file name extension specific for the segment format.
@item media_seg_name @var{segment_name}
DASH-templated name to used for the media segments. Default is "chunk-stream$RepresentationID$-$Number%05d$.$ext$". "$ext$" is replaced with the file name extension specific for the segment format.
@item utc_timing_url @var{utc_url}
@item -single_file_name @var{file_name}
DASH-templated name to be used for baseURL. Implies @var{single_file} set to "1".
@item -init_seg_name @var{init_name}
DASH-templated name to used for the initialization segment. Default is "init-stream$RepresentationID$.m4s"
@item -media_seg_name @var{segment_name}
DASH-templated name to used for the media segments. Default is "chunk-stream$RepresentationID$-$Number%05d$.m4s"
@item -utc_timing_url @var{utc_url}
URL of the page that will return the UTC timestamp in ISO format. Example: "https://time.akamai.com/?iso"
@item method @var{method}
Use the given HTTP method to create output files. Generally set to PUT or POST.
@item http_user_agent @var{user_agent}
@item -http_user_agent @var{user_agent}
Override User-Agent field in HTTP header. Applicable only for HTTP output.
@item http_persistent @var{http_persistent}
@item -http_persistent @var{http_persistent}
Use persistent HTTP connections. Applicable only for HTTP output.
@item hls_playlist @var{hls_playlist}
@item -hls_playlist @var{hls_playlist}
Generate HLS playlist files as well. The master playlist is generated with the filename master.m3u8.
One media playlist file is generated for each stream with filenames media_0.m3u8, media_1.m3u8, etc.
@item streaming @var{streaming}
@item -streaming @var{streaming}
Enable (1) or disable (0) chunk streaming mode of output. In chunk streaming
mode, each frame will be a moof fragment which forms a chunk.
@item adaptation_sets @var{adaptation_sets}
@item -adaptation_sets @var{adaptation_sets}
Assign streams to AdaptationSets. Syntax is "id=x,streams=a,b,c id=y,streams=d,e" with x and y being the IDs
of the adaptation sets and a,b,c,d and e are the indices of the mapped streams.
To map all video (or audio) streams to an AdaptationSet, "v" (or "a") can be used as stream identifier instead of IDs.
When no assignment is defined, this defaults to an AdaptationSet for each stream.
Optional syntax is "id=x,seg_duration=x,frag_duration=x,frag_type=type,descriptor=descriptor_string,streams=a,b,c id=y,seg_duration=y,frag_type=type,streams=d,e" and so on,
The descriptor is useful for the scheme defined by ISO/IEC 23009-1:2014/Amd.2:2015.
For example, -adaptation_sets "id=0,descriptor=<SupplementalProperty schemeIdUri=\"urn:mpeg:dash:srd:2014\" value=\"0,0,0,1,1,2,2\"/>,streams=v".
Please note that descriptor string should be a self-closing xml tag.
seg_duration, frag_duration and frag_type override the global option values for each adaptation set.
For example, -adaptation_sets "id=0,seg_duration=2,frag_duration=1,frag_type=duration,streams=v id=1,seg_duration=2,frag_type=none,streams=a"
trick_id marks an adaptation set as containing streams meant to be used for Trick Mode for the referenced adaptation set.
For example, -adaptation_sets "id=0,seg_duration=2,frag_type=none,streams=0 id=1,seg_duration=10,frag_type=none,trick_id=0,streams=1"
@item timeout @var{timeout}
@item -timeout @var{timeout}
Set timeout for socket I/O operations. Applicable only for HTTP output.
@item index_correction @var{index_correction}
@item -index_correction @var{index_correction}
Enable (1) or Disable (0) segment index correction logic. Applicable only when
@var{use_template} is enabled and @var{use_timeline} is disabled.
@@ -301,68 +284,18 @@ corrects that index value.
Typically this logic is needed in live streaming use cases. The network bandwidth
fluctuations are common during long run streaming. Each fluctuation can cause
the segment indexes to fall behind the expected real time position.
@item format_options @var{options_list}
@item -format_options @var{options_list}
Set container format (mp4/webm) options using a @code{:} separated list of
key=value parameters. Values containing @code{:} special characters must be
escaped.
@item global_sidx @var{global_sidx}
Write global SIDX atom. Applicable only for single file, mp4 output, non-streaming mode.
@item dash_segment_type @var{dash_segment_type}
Possible values:
@table @option
@item auto
If this flag is set, the dash segment files format will be selected based on the stream codec. This is the default mode.
@item mp4
If this flag is set, the dash segment files will be in ISOBMFF format.
@item webm
If this flag is set, the dash segment files will be in WebM format.
@end table
@item ignore_io_errors @var{ignore_io_errors}
Ignore IO errors during open and write. Useful for long-duration runs with network output.
@item lhls @var{lhls}
Enable Low-latency HLS(LHLS). Adds #EXT-X-PREFETCH tag with current segment's URI.
Apple doesn't have an official spec for LHLS. Meanwhile hls.js player folks are
trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md
This option will also try to comply with the above open spec, until Apple's spec officially supports it.
Applicable only when @var{streaming} and @var{hls_playlist} options are enabled.
This is an experimental feature.
@item ldash @var{ldash}
Enable Low-latency Dash by constraining the presence and values of some elements.
@item master_m3u8_publish_rate @var{master_m3u8_publish_rate}
Publish the master playlist repeatedly, every specified number of segment intervals.
@item write_prft @var{write_prft}
Write Producer Reference Time elements on supported streams. This also enables writing
prft boxes in the underlying muxer. Applicable only when the @var{utc_url} option is enabled.
It's set to auto by default, in which case the muxer will attempt to enable it only in modes
that require it.
@item mpd_profile @var{mpd_profile}
Set one or more manifest profiles.
@item http_opts @var{http_opts}
A :-separated list of key=value options to pass to the underlying HTTP
protocol. Applicable only for HTTP output.
@item target_latency @var{target_latency}
Set an intended target latency in seconds (fractional value can be set) for serving. Applicable only when @var{streaming} and @var{write_prft} options are enabled.
This is an informative field that clients can use to measure the latency of the service.
@item min_playback_rate @var{min_playback_rate}
Set the minimum playback rate indicated as appropriate for the purposes of automatically
adjusting playback latency and buffer occupancy during normal playback by clients.
@item max_playback_rate @var{max_playback_rate}
Set the maximum playback rate indicated as appropriate for the purposes of automatically
adjusting playback latency and buffer occupancy during normal playback by clients.
@end table
@@ -648,9 +581,6 @@ Set the starting sequence numbers according to @var{start_number} option value.
@item epoch
The start number will be the seconds since epoch (1970-01-01 00:00:00)
@item epoch_us
The start number will be the microseconds since epoch (1970-01-01 00:00:00)
@item datetime
The start number will be based on the current date/time as YYYYmmddHHMMSS. e.g. 20161231235759.
@@ -702,8 +632,7 @@ This example will produce the playlists segment file sets:
@file{file_1_000.ts}, @file{file_1_001.ts}, @file{file_1_002.ts}, etc.
The string "%v" may be present in the filename or in the last directory name
containing the file, but only in one of them. (Additionally, %v may appear multiple times in the last
sub-directory or filename.) If the string %v is present in the directory name, then
sub-directories are created after expanding the directory name pattern. This
enables creation of segments corresponding to different variant streams in
subdirectories.
@@ -847,9 +776,6 @@ fmp4 files may be used in HLS version 7 and above.
@item hls_fmp4_init_filename @var{filename}
Set filename to the fragment files header file, default filename is @file{init.mp4}.
@item hls_fmp4_init_resend
Resend init file after m3u8 file refresh every time, default is @var{0}.
When @code{var_stream_map} is set with two or more variant streams, the
@var{filename} pattern must contain the string "%v", this string specifies
the position of variant stream index in the generated init file names.
@@ -902,10 +828,6 @@ including the file containing the AES encryption key.
Add the @code{#EXT-X-INDEPENDENT-SEGMENTS} tag to playlists that have video segments
and when all the segments of that playlist are guaranteed to start with a keyframe.
@item iframes_only
Add the @code{#EXT-X-I-FRAMES-ONLY} tag to playlists that have video segments
and can play only I-frames in the @code{#EXT-X-BYTERANGE} mode.
@item split_by_time
Allow segments to start on frames other than keyframes. This improves
behavior on some players when the time between keyframes is inconsistent,
@@ -942,11 +864,7 @@ This will produce segments like this:
@item temp_file
Write segment data to filename.tmp and rename to filename only once the segment is complete. A webserver
serving up segments can be configured to reject requests to *.tmp to prevent access to in-progress segments
before they have been added to the m3u8 playlist. This flag also affects how m3u8 playlist files are created.
If this flag is set, all playlist files will be written into a temporary file and renamed after they are complete, similarly to how segments are handled.
But playlists with the @code{file} protocol and with type (@code{hls_playlist_type}) other than @code{vod}
are always written into a temporary file regardless of this flag. Master playlist files (@code{master_pl_name}), if any, with the @code{file} protocol,
are always written into a temporary file regardless of this flag if the @code{master_pl_publish_rate} value is other than zero.
@end table
@@ -997,21 +915,7 @@ This example creates two hls variant streams. The first variant stream will
contain video stream of bitrate 1000k and audio stream of bitrate 64k and the
second variant stream will contain video stream of bitrate 256k and audio
stream of bitrate 32k. Here, two media playlists with file names out_0.m3u8 and
out_1.m3u8 will be created. If you want meaningful text instead of indexes
in the resulting names, you may specify names for each or some of the variants
as in the following example.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k -b:a:1 32k \
-map 0:v -map 0:a -map 0:v -map 0:a -f hls -var_stream_map "v:0,a:0,name:my_hd v:1,a:1,name:my_sd" \
http://example.com/live/out_%v.m3u8
@end example
This example creates two hls variant streams as in the previous one.
But here, the two media playlists with file names out_my_hd.m3u8 and
out_my_sd.m3u8 will be created.
@example
ffmpeg -re -i in.ts -b:v:0 1000k -b:v:1 256k -b:a:0 64k \
-map 0:v -map 0:a -map 0:v -f hls -var_stream_map "v:0 a:0 v:1" \
@@ -1045,52 +949,6 @@ and they are mapped to the two video only variant streams with audio group names
By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
-map 0:a -map 0:a -map 0:v -f hls \
-var_stream_map "a:0,agroup:aud_low,default:yes a:1,agroup:aud_low v:0,agroup:aud_low" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
This example creates two audio only and one video only variant streams. In
addition to the #EXT-X-STREAM-INF tag for each variant stream in the master
playlist, #EXT-X-MEDIA tag is also added for the two audio only variant streams
and they are mapped to the one video only variant stream with audio group name
'aud_low', and the audio group's default status is set to NO or YES.
By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -re -i in.ts -b:a:0 32k -b:a:1 64k -b:v:0 1000k \
-map 0:a -map 0:a -map 0:v -f hls \
-var_stream_map "a:0,agroup:aud_low,default:yes,language:ENG a:1,agroup:aud_low,language:CHN v:0,agroup:aud_low" \
-master_pl_name master.m3u8 \
http://example.com/live/out_%v.m3u8
@end example
This example creates two audio only and one video only variant streams. In
addition to the #EXT-X-STREAM-INF tag for each variant stream in the master
playlist, #EXT-X-MEDIA tag is also added for the two audio only variant streams
and they are mapped to the one video only variant stream with audio group name
'aud_low'. The audio group's default status is set to NO or YES, one audio
stream's language is set to ENG and the other audio stream's language is set to CHN.
By default, a single hls variant containing all the encoded streams is created.
@example
ffmpeg -y -i input_with_subtitle.mkv \
-b:v:0 5250k -c:v h264 -pix_fmt yuv420p -profile:v main -level 4.1 \
-b:a:0 256k \
-c:s webvtt -c:a mp2 -ar 48000 -ac 2 -map 0:v -map 0:a:0 -map 0:s:0 \
-f hls -var_stream_map "v:0,a:0,s:0,sgroup:subtitle" \
-master_pl_name master.m3u8 -t 300 -hls_time 10 -hls_init_time 4 -hls_list_size \
10 -master_pl_publish_rate 10 -hls_flags \
delete_segments+discont_start+split_by_time ./tmp/video.m3u8
@end example
This example adds @code{#EXT-X-MEDIA} tag with @code{TYPE=SUBTITLES} in
the master playlist with webvtt subtitle group name 'subtitle'. Please make sure
the input file has at least one text subtitle stream.
@item cc_stream_map
Map string which specifies different closed captions groups and their
attributes. The closed captions stream groups are separated by space.
@@ -1111,7 +969,7 @@ ffmpeg -re -i in.ts -b:v 1000k -b:a 64k -a53cc 1 -f hls \
http://example.com/live/out.m3u8
@end example
This example adds @code{#EXT-X-MEDIA} tag with @code{TYPE=CLOSED-CAPTIONS} in
the master playlist with group name 'cc', language 'en' (english) and
INSTREAM-ID 'CC1'. Also, it adds @code{CLOSED-CAPTIONS} attribute with group
name 'cc' for the output variant stream.
@example
@@ -1154,12 +1012,6 @@ Use persistent HTTP connections. Applicable only for HTTP output.
@item timeout
Set timeout for socket I/O operations. Applicable only for HTTP output.
@item ignore_io_errors
Ignore IO errors during open, write and delete. Useful for long-duration runs with network output.
@item headers
Set custom HTTP headers, can override built in default headers. Applicable only for HTTP output.
@end table
@anchor{ico}
@@ -1225,37 +1077,6 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.
@subsection Options
@table @option
@item frame_pts
If set to 1, expand the filename with pts from pkt->pts.
Default value is 0.
@item start_number
Start the sequence from the specified number. Default value is 1.
@item update
If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
@item strftime
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@item protocol_opts @var{options_list}
Set protocol options as a :-separated list of key=value parameters. Values
containing the @code{:} special character must be escaped.
@end table
@subsection Examples
The following example shows how to use @command{ffmpeg} for creating a
@@ -1296,11 +1117,31 @@ You can set the file name with current frame's PTS:
ffmpeg -f v4l2 -r 1 -i /dev/video0 -copyts -f image2 -frame_pts true %d.jpg"
@end example
A more complex example is to publish contents of your desktop directly to a
WebDAV server every second:
@example
ffmpeg -f x11grab -framerate 1 -i :0.0 -q:v 6 -update 1 -protocol_opts method=PUT http://example.com/desktop.jpg
@end example
@subsection Options
@table @option
@item frame_pts
If set to 1, expand the filename with pts from pkt->pts.
Default value is 0.
@item start_number
Start the sequence from the specified number. Default value is 1.
@item update
If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
@item strftime
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@end table
The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.
@section matroska
@@ -1314,8 +1155,7 @@ The recognized metadata settings in this muxer are:
@table @option
@item title
Set title name provided to a single track. This gets mapped to
the FileDescription element for a stream written as attachment.
@item language
Specify the language of the track in the Matroska languages form.
@@ -1382,31 +1222,11 @@ index at the beginning of the file.
If this option is set to a non-zero value, the muxer will reserve a given amount
of space in the file header and then try to write the cues there when the muxing
finishes. If the reserved space does not suffice, no Cues will be written, the
file will be finalized and writing the trailer will return an error.
A safe size for most use cases should be about 50kB per hour of video.
finishes. If the available space does not suffice, muxing will fail. A safe size
for most use cases should be about 50kB per hour of video.
Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
@item default_mode
This option controls how the FlagDefault of the output tracks will be set.
It influences which tracks players should play by default. The default mode
is @samp{infer}.
@table @samp
@item infer
In this mode, for each type of track (audio, video or subtitle), if there is
a track with disposition default of this type, then the first such track
(i.e. the one with the lowest index) will be marked as default; if no such
track exists, the first track of this type will be marked as default instead
(if existing). This ensures that the default flag is set in a sensible way even
if the input originated from containers that lack the concept of default tracks.
@item infer_no_subs
This mode is the same as infer except that if no subtitle track with
disposition default exists, no subtitle track will be marked as default.
@item passthrough
In this mode the FlagDefault is set if and only if the AV_DISPOSITION_DEFAULT
flag is set in the disposition of the corresponding stream.
@end table
@end table
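For example (a sketch; stream copy is used so no re-encoding takes place):
@example
ffmpeg -i input.mkv -c copy -default_mode infer_no_subs output.mkv
@end example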
@anchor{md5}
@@ -1499,10 +1319,6 @@ more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags skip_sidx
Skip writing of sidx atom. When bitrate overhead due to sidx atom is high,
this option could be used for cases where sidx atom is not mandatory.
When global_sidx flag is enabled, this option will be ignored.
@item -movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
@@ -1556,6 +1372,13 @@ point on IIS with this muxer. Example:
ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
@subsection Audible AAX
Audible AAX files are encrypted M4B files, and they can be decrypted by specifying a 4 byte activation secret.
@example
ffmpeg -activation_bytes 1CEB00DA -i test.aax -vn -c:a copy output.mp4
@end example
@section mp3
The MP3 muxer writes a raw MP3 stream with the following optional features:
@@ -1644,7 +1467,7 @@ Set the program @samp{service_type}. Default is @code{digital_tv}.
Accepts the following options:
@table @samp
@item hex_value
Any hexadecimal value between @code{0x01} and @code{0xff} as defined in
Any hexdecimal value between @code{0x01} to @code{0xff} as defined in
ETSI 300 468.
@item digital_tv
Digital TV service.
@@ -1663,14 +1486,11 @@ Advanced Codec Digital HDTV service.
@end table
@item mpegts_pmt_start_pid @var{integer}
Set the first PID for PMTs. Default is @code{0x1000}, minimum is @code{0x0020},
maximum is @code{0x1ffa}. This option has no effect in m2ts mode where the PMT
PID is fixed @code{0x0100}.
Set the first PID for PMT. Default is @code{0x1000}. Max is @code{0x1f00}.
@item mpegts_start_pid @var{integer}
Set the first PID for elementary streams. Default is @code{0x0100}, minimum is
@code{0x0020}, maximum is @code{0x1ffa}. This option has no effect in m2ts mode
where the elementary stream PIDs are fixed.
Set the first PID for data packets. Default is @code{0x0100}. Max is
@code{0x0f00}.
@item mpegts_m2ts_mode @var{boolean}
Enable m2ts mode if set to @code{1}. Default value is @code{-1} which
@@ -1697,6 +1517,10 @@ Conform to System B (DVB) instead of System A (ATSC).
Mark the initial packet of each stream as discontinuity.
@end table
@item resend_headers @var{integer}
Reemit PAT/PMT before writing the next packet. This option is deprecated:
use @option{mpegts_flags} instead.
@item mpegts_copyts @var{boolean}
Preserve original timestamps, if value is set to @code{1}. Default value
is @code{-1}, which results in shifting timestamps so that they start from 0.
@@ -1705,16 +1529,14 @@ is @code{-1}, which results in shifting timestamps so that they start from 0.
Omit the PES packet length for video packets. Default is @code{1} (true).
@item pcr_period @var{integer}
Override the default PCR retransmission time in milliseconds. Default is
@code{-1} which means that the PCR interval will be determined automatically:
20 ms is used for CBR streams, the highest multiple of the frame duration which
is less than 100 ms is used for VBR streams.
Override the default PCR retransmission time in milliseconds. Ignored if
variable muxrate is selected. Default is @code{20}.
@item pat_period @var{duration}
Maximum time in seconds between PAT/PMT tables. Default is @code{0.1}.
@item pat_period @var{double}
Maximum time in seconds between PAT/PMT tables.
@item sdt_period @var{duration}
Maximum time in seconds between SDT tables. Default is @code{0.5}.
@item sdt_period @var{double}
Maximum time in seconds between SDT tables.
@item tables_version @var{integer}
Set PAT, PMT and SDT version (default @code{0}, valid values are from 0 to 31, inclusively).
@@ -1748,7 +1570,7 @@ ffmpeg -i file.mpg -c copy \
out.ts
@end example
@section mxf, mxf_d10, mxf_opatom
@section mxf, mxf_d10
MXF muxer.
@@ -1760,7 +1582,7 @@ The muxer options are:
@item store_user_comments @var{bool}
Set if user comments should be stored if available or never.
IRT D-10 does not allow user comments. The default is thus to write them for
mxf and mxf_opatom but not for mxf_d10
mxf but not for mxf_d10
@end table
@section null
@@ -2150,53 +1972,6 @@ Specify whether to remove all fragments when finished. Default 0 (do not remove)
@end table
@anchor{streamhash}
@section streamhash
Per stream hash testing format.
This muxer computes and prints a cryptographic hash of all the input frames,
on a per-stream basis. This can be used for equality checks without having
to do a complete binary comparison.
By default audio frames are converted to signed 16-bit raw audio and
video frames to raw video before computing the hash, but the output
of explicit conversions to other codecs can also be used. Timestamps
are ignored. It uses the SHA-256 cryptographic hash function by default,
but supports several other algorithms.
The output of the muxer consists of one line per stream of the form:
@var{streamindex},@var{streamtype},@var{algo}=@var{hash}, where
@var{streamindex} is the index of the mapped stream, @var{streamtype} is a
single character indicating the type of stream, @var{algo} is a short string
representing the hash function used, and @var{hash} is a hexadecimal number
representing the computed hash.
@table @option
@item hash @var{algorithm}
Use the cryptographic hash function specified by the string @var{algorithm}.
Supported values include @code{MD5}, @code{murmur3}, @code{RIPEMD128},
@code{RIPEMD160}, @code{RIPEMD256}, @code{RIPEMD320}, @code{SHA160},
@code{SHA224}, @code{SHA256} (default), @code{SHA512/224}, @code{SHA512/256},
@code{SHA384}, @code{SHA512}, @code{CRC32} and @code{adler32}.
@end table
@subsection Examples
To compute the SHA-256 hash of the input converted to raw audio and
video, and store it in the file @file{out.sha256}:
@example
ffmpeg -i INPUT -f streamhash out.sha256
@end example
To print an MD5 hash to stdout use the command:
@example
ffmpeg -i INPUT -f streamhash -hash md5 -
@end example
See also the @ref{hash} and @ref{framehash} muxers.
@anchor{fifo}
@section fifo

View File

@@ -140,8 +140,8 @@ device with @command{-list_formats 1}. Audio sample rate is always 48 kHz.
@item list_devices
If set to @option{true}, print a list of devices and exit.
Defaults to @option{false}. This option is deprecated, please use the
@code{-sinks} option of ffmpeg to list the available output devices.
Defaults to @option{false}. Alternatively you can use the @code{-sinks}
option of ffmpeg to list the available output devices.
@item list_formats
If set to @option{true}, print a list of supported formats and exit.
@@ -155,10 +155,6 @@ Defaults to @option{0.5}.
Sets the decklink device duplex mode. Must be @samp{unset}, @samp{half} or @samp{full}.
Defaults to @samp{unset}.
@item timing_offset
Sets the genlock timing pixel offset on the used output.
Defaults to @samp{unset}.
@end table
@subsection Examples
@@ -168,7 +164,7 @@ Defaults to @samp{unset}.
@item
List output devices:
@example
ffmpeg -sinks decklink
ffmpeg -i test.avi -f decklink -list_devices 1 dummy
@end example
@item
@@ -220,6 +216,51 @@ ffmpeg -re -i INPUT -c:v rawvideo -pix_fmt bgra -f fbdev /dev/fb0
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
@section libndi_newtek
The libndi_newtek output device provides playback capabilities for NDI (Network
Device Interface, a standard created by NewTek).
The output filename is an NDI name.
To enable this output device, you need the NDI SDK and you
need to configure with the appropriate @code{--extra-cflags}
and @code{--extra-ldflags}.
NDI uses uyvy422 pixel format natively, but also supports bgra, bgr0, rgba and
rgb0.
@subsection Options
@table @option
@item reference_level
The audio reference level in dB. This specifies how many dB above the
reference level (+4dBU) the full range of 16-bit audio sits.
Defaults to @option{0}.
@item clock_video
Specifies whether video frames should clock themselves.
Defaults to @option{false}.
@item clock_audio
Specifies whether audio samples should clock themselves.
Defaults to @option{false}.
@end table
@subsection Examples
@itemize
@item
Play video clip:
@example
ffmpeg -i "udp://@@239.1.1.1:10480?fifo_size=1000000&overrun_nonfatal=1" -vf "scale=720:576,fps=fps=25,setdar=dar=16/9,format=pix_fmts=uyvy422" -f libndi_newtek NEW_NDI1
@end example
@end itemize
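Assuming FFmpeg was built against the NDI SDK as described above, a hedged sketch
of sending a file to an NDI output while letting the video clock itself (the NDI
name is illustrative) could be:
@example
ffmpeg -re -i input.mov -f libndi_newtek -clock_video true NEW_NDI1
@end example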
@section opengl
OpenGL output device.
@@ -329,8 +370,6 @@ ffmpeg -i INPUT -f pulse "stream name"
SDL (Simple DirectMedia Layer) output device.
"sdl2" can be used as alias for "sdl".
This output device allows one to show a video stream in an SDL
window. Only one SDL window is allowed per application, so you can
have only one instance of this output device in an application.

View File

@@ -51,66 +51,6 @@ in microseconds.
A description of the currently available protocols follows.
@section amqp
Advanced Message Queueing Protocol (AMQP) version 0-9-1 is a broker based
publish-subscribe communication protocol.
FFmpeg must be compiled with --enable-librabbitmq to support AMQP. A separate
AMQP broker must also be run. An example open-source AMQP broker is RabbitMQ.
After starting the broker, an FFmpeg client may stream data to the broker using
the command:
@example
ffmpeg -re -i input -f mpegts amqp://[[user]:[password]@@]hostname[:port]
@end example
Where hostname and port (default is 5672) give the address of the broker. The
client may also set a user/password for authentication. The default for both
fields is "guest".
Multiple subscribers may stream from the broker using the command:
@example
ffplay amqp://[[user]:[password]@@]hostname[:port]
@end example
In RabbitMQ all data published to the broker flows through a specific exchange,
and each subscribing client has an assigned queue/buffer. When a packet arrives
at an exchange, it may be copied to a client's queue depending on the exchange
and routing_key fields.
The following options are supported:
@table @option
@item exchange
Sets the exchange to use on the broker. RabbitMQ has several predefined
exchanges: "amq.direct" is the default exchange, where the publisher and
subscriber must have a matching routing_key; "amq.fanout" is the same as a
broadcast operation (i.e. the data is forwarded to all queues on the fanout
exchange independent of the routing_key); and "amq.topic" is similar to
"amq.direct", but allows for more complex pattern matching (refer to the RabbitMQ
documentation).
@item routing_key
Sets the routing key. The default value is "amqp". The routing key is used on
the "amq.direct" and "amq.topic" exchanges to decide whether packets are written
to the queue of a subscriber.
@item pkt_size
Maximum size of each packet sent/received to the broker. Default is 131072.
Minimum is 4096 and max is any large value (representable by an int). When
receiving packets, this sets an internal buffer size in FFmpeg. It should be
equal to or greater than the size of the published packets to the broker. Otherwise
the received message may be truncated causing decoding errors.
@item connection_timeout
The timeout in seconds during the initial connection to the broker. The
default value is rw_timeout, or 5 seconds if rw_timeout is not set.
@end table
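As a sketch only (assuming the private options above can be supplied as regular
output options on the command line; broker address, credentials and routing key
are placeholders), publishing with a custom exchange might look like:
@example
ffmpeg -re -i input -f mpegts -exchange amq.topic -routing_key live.cam1 amqp://user:password@@broker-host:5672
@end example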
@section async
Asynchronous data filling wrapper for input stream.
@@ -253,20 +193,6 @@ Set I/O operation maximum block size, in bytes. Default value is
@code{INT_MAX}, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable for files on slow medium.
@item follow
If set to 1, the protocol will retry reading at the end of the file, allowing
reading files that still are being written. In order for this to terminate,
you either need to use the rw_timeout option, or use the interrupt callback
(for API users).
@item seekable
Controls if seekability is advertised on the file. 0 means non-seekable, -1
means auto (seekable for normal files, non-seekable for named pipes).
Many demuxers handle seekable and non-seekable resources differently,
overriding this might speed up opening certain files at the cost of losing some
features (e.g. accurate seeking).
@end table
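For example, a sketch of the @option{follow} option described above (the option
values are illustrative), reading a file that is still being written and giving
up after 10 seconds without new data:
@example
ffmpeg -follow 1 -rw_timeout 10000000 -i file:live.ts -c copy output.mkv
@end example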
@section ftp
@@ -288,14 +214,6 @@ Set timeout in microseconds of socket I/O operations used by the underlying low
operation. By default it is set to -1, which means that the timeout is
not specified.
@item ftp-user
Set a user to be used for authenticating to the FTP server. This is overridden by the
user in the FTP URL.
@item ftp-password
Set a password to be used for authenticating to the FTP server. This is overridden by
the password in the FTP URL, or by @option{ftp-anonymous-password} if no user is set.
@item ftp-anonymous-password
Password used when logging in as an anonymous user. Typically an e-mail address
should be used.
@@ -311,6 +229,17 @@ it, unless special care is taken (tests, customized server configuration
etc.). Different FTP servers behave in different ways during seek
operations. ff* tools may produce incomplete content due to server limitations.
This protocol accepts the following options:
@table @option
@item follow
If set to 1, the protocol will retry reading at the end of the file, allowing
reading files that still are being written. In order for this to terminate,
you either need to use the rw_timeout option, or use the interrupt callback
(for API users).
@end table
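A hedged sketch of the same idea over FTP (server, credentials, path and timeout
are placeholders):
@example
ffmpeg -follow 1 -rw_timeout 10000000 -i ftp://user:password@@server/path/live.ts -c copy output.mkv
@end example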
@section gopher
Gopher protocol.
@@ -461,11 +390,6 @@ ffmpeg -i somefile.ogg -chunked_post 0 -c copy -f ogg http://@var{server}:@var{p
wget --post-file=somefile.ogg http://@var{server}:@var{port}
@end example
@item send_expect_100
Send an Expect: 100-continue header for POST. If set to 1 it will send, if set
to 0 it won't, if set to -1 it will try to send if it is applicable. Default
value is -1.
@end table
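For instance, a sketch (server and port are placeholders) that explicitly disables
the Expect header while posting an Ogg stream:
@example
ffmpeg -re -i input.ogg -c copy -send_expect_100 0 -f ogg http://@var{server}:@var{port}
@end example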
@subsection HTTP Cookies
@@ -1255,7 +1179,7 @@ options.
This protocol accepts the following options.
@table @option
@item connect_timeout=@var{milliseconds}
@item connect_timeout
Connection timeout; SRT cannot connect for RTT > 1500 msec
(2 handshake exchanges) with the default connect timeout of
3 seconds. This option applies to the caller and rendezvous
@@ -1286,7 +1210,7 @@ IP Type of Service. Applies to sender only. Default value is 0xB8.
@item ipttl=@var{ttl}
IP Time To Live. Applies to sender only. Default value is 64.
@item latency=@var{microseconds}
@item latency
Timestamp-based Packet Delivery Delay.
Used to absorb bursts of missed packet retransmissions.
This flag sets both @option{rcvlatency} and @option{peerlatency}
@@ -1297,7 +1221,7 @@ when side is sender and @option{rcvlatency}
when side is receiver, and the bidirectional stream
sending is not supported.
@item listen_timeout=@var{microseconds}
@item listen_timeout
Set socket listen timeout.
@item maxbw=@var{bytes/seconds}
@@ -1342,26 +1266,6 @@ only if @option{pbkeylen} is non-zero. It is used on
the receiver only if the received data is encrypted.
The configured passphrase cannot be recovered (write-only).
@item enforced_encryption=@var{1|0}
If true, both connection parties must have the same password
set (including empty, that is, with no encryption). If the
password doesn't match or only one side is unencrypted,
the connection is rejected. Default is true.
@item kmrefreshrate=@var{packets}
The number of packets to be transmitted after which the
encryption key is switched to a new key. Default is -1.
-1 means auto (0x1000000 in srt library). The range for
this option is integers in the 0 - @code{INT_MAX}.
@item kmpreannounce=@var{packets}
The interval between when a new encryption key is sent and
when switchover occurs. This value also applies to the
subsequent interval between when switchover occurs and
when the old encryption key is decommissioned. Default is -1.
-1 means auto (0x1000 in srt library). The range for
this option is integers in the 0 - @code{INT_MAX}.
@item payload_size=@var{bytes}
Sets the maximum declared size of a packet transferred
during the single call to the sending function in Live
@@ -1377,7 +1281,7 @@ use a bigger maximum frame size, though not greater than
@item pkt_size=@var{bytes}
Alias for @samp{payload_size}.
@item peerlatency=@var{microseconds}
@item peerlatency
The latency value (as described in @option{rcvlatency}) that is
set by the sender side as a minimum value for the receiver.
@@ -1389,7 +1293,7 @@ Not required on receiver (set to 0),
key size obtained from sender in HaiCrypt handshake.
Default value is 0.
@item rcvlatency=@var{microseconds}
@item rcvlatency
The time that should elapse between the moment when the
packet was sent and the moment when it's delivered to
the receiver application in the receiving function.
@@ -1407,10 +1311,12 @@ Set UDP receive buffer size, expressed in bytes.
@item send_buffer_size=@var{bytes}
Set UDP send buffer size, expressed in bytes.
@item timeout=@var{microseconds}
Set raise error timeouts for read, write and connect operations. Note that the
SRT library has internal timeouts which can be controlled separately, the
value set here is only a cap on those.
@item rw_timeout
Set raise error timeout for read/write operations.
This option is only relevant in read mode:
if no data arrived in more than this time
interval, raise error.
@item tlpktdrop=@var{1|0}
Too-late Packet Drop. When enabled on receiver, it skips
@@ -1504,12 +1410,6 @@ the overhead transmission (retransmitted and control packets).
file: Set options as for non-live transmission. See @option{messageapi}
for further explanations
@item linger=@var{seconds}
The number of seconds that the socket waits for unsent data when closing.
Default is -1. -1 means auto (off with 0 seconds in live mode, on with 180
seconds in file mode). The range for this option is integers in the
0 - @code{INT_MAX}.
@end table
For more information see: @url{https://github.com/Haivision/srt}.
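As a sketch (the address and values are illustrative), most of the options above
can be appended to the URL as query parameters, e.g. a caller with a 200 ms
receive latency:
@example
ffmpeg -re -i input.mp4 -c copy -f mpegts "srt://192.168.1.10:9000?mode=caller&latency=200000"
@end example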
@@ -1711,7 +1611,7 @@ The list of supported options follows.
@item buffer_size=@var{size}
Set the UDP maximum socket buffer size in bytes. This is used to set either
the receive or send buffer size, depending on what the socket is used for.
Default is 32 KB for output, 384 KB for input. See also @var{fifo_size}.
Default is 64KB. See also @var{fifo_size}.
@item bitrate=@var{bitrate}
If set to nonzero, the output will have the specified constant bitrate if the
@@ -1820,51 +1720,4 @@ Timeout in ms.
Create the Unix socket in listening mode.
@end table
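As a very rough sketch (the socket path is illustrative, and it is assumed the
@option{listen} option can be given on the command line), streaming to a
listening Unix socket might look like:
@example
ffmpeg -re -i input.mp4 -f mpegts -listen 1 unix:///tmp/ffmpeg.sock
@end example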
@section zmq
ZeroMQ asynchronous messaging using the libzmq library.
This library supports unicast streaming to multiple clients without relying on
an external server.
The required syntax for streaming or connecting to a stream is:
@example
zmq:tcp://ip-address:port
@end example
Example:
Create a localhost stream on port 5555:
@example
ffmpeg -re -i input -f mpegts zmq:tcp://127.0.0.1:5555
@end example
Multiple clients may connect to the stream using:
@example
ffplay zmq:tcp://127.0.0.1:5555
@end example
Streaming to multiple clients is implemented using a ZeroMQ Pub-Sub pattern.
The server side binds to a port and publishes data. Clients connect to the
server (via IP address/port) and subscribe to the stream. The order in which
the server and client start generally does not matter.
ffmpeg must be compiled with the --enable-libzmq option to support
this protocol.
Options can be set on the @command{ffmpeg}/@command{ffplay} command
line. The following options are supported:
@table @option
@item pkt_size
Forces the maximum packet size for sending/receiving data. The default value is
131,072 bytes. On the server side, this sets the maximum size of sent packets
via ZeroMQ. On the clients, it sets an internal buffer size for receiving
packets. Note that pkt_size on the clients should be equal to or greater than
pkt_size on the server. Otherwise the received message may be truncated causing
decoding errors.
@end table
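For instance, a sketch (address and size are illustrative) that raises the packet
size on the publishing side:
@example
ffmpeg -re -i input -pkt_size 262144 -f mpegts zmq:tcp://127.0.0.1:5555
@end example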
@c man end PROTOCOLS

View File

@@ -5,8 +5,7 @@
The video scaler supports the following named options.
Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools, with a few API-only exceptions noted below.
For programmatic use, they can be set explicitly in the
FFmpeg tools. For programmatic use, they can be set explicitly in the
@code{SwsContext} options or through the @file{libavutil/opt.h} API.
@table @option
@@ -48,8 +47,7 @@ Select Gaussian rescaling algorithm.
Select sinc rescaling algorithm.
@item lanczos
Select Lanczos rescaling algorithm. The default width (alpha) is 3 and can be
changed by setting @code{param0}.
Select Lanczos rescaling algorithm.
@item spline
Select natural bicubic spline rescaling algorithm.
@@ -70,31 +68,29 @@ Select full chroma input.
Enable bitexact output.
@end table
@item srcw @var{(API only)}
@item srcw
Set source width.
@item srch @var{(API only)}
@item srch
Set source height.
@item dstw @var{(API only)}
@item dstw
Set destination width.
@item dsth @var{(API only)}
@item dsth
Set destination height.
@item src_format @var{(API only)}
@item src_format
Set source pixel format (must be expressed as an integer).
@item dst_format @var{(API only)}
@item dst_format
Set destination pixel format (must be expressed as an integer).
@item src_range @var{(boolean)}
If value is set to @code{1}, indicates source is full range. Default value is
@code{0}, which indicates source is limited range.
@item src_range
Select source range.
@item dst_range @var{(boolean)}
If value is set to @code{1}, enable full range for destination. Default value
is @code{0}, which enables limited range.
@item dst_range
Select destination range.
@anchor{sws_params}
@item param0, param1

View File

@@ -172,7 +172,7 @@ spatial_decomposition_count
FIXME
colorspace_type
0 unspecified YCbCr
0 unspecified YcbCr
1 Gray
2 Gray + Alpha
3 GBR
@@ -235,7 +235,7 @@ spatial_decomposition_type
stored as delta from last, last is reset to 0 if always_reset || keyframe
qlog
quality (logarithmic quantizer scale)
quality (logarthmic quantizer scale)
stored as delta from last, last is reset to 0 if always_reset || keyframe
mv_scale
@@ -251,11 +251,11 @@ block_max_depth
stored as delta from last, last is reset to 0 if always_reset || keyframe
quant_table
quantization table
quantiztation table
Highlevel bitstream structure:
==============================
=============================
--------------------------------------------
| Header |
--------------------------------------------
@@ -303,7 +303,7 @@ Decoding process:
| Intra DC | |
| | LL0 subband prediction
------------ |
\ Dequantization
\ Dequantizaton
------------------- \ |
| Reference frames | \ IDWT
| ------- ------- | Motion \ |
@@ -390,8 +390,8 @@ motion vector prediction
(mvx_diff, mvy_diff)*mv_scale
Intra DC Prediction:
====================
Intra DC Predicton:
======================
the luma and chroma values of the left block are used as predictors
the used luma and chroma is the sum of the predictor and y_diff, cb_diff, cr_diff
@@ -407,7 +407,7 @@ Motion Compensation:
Halfpel interpolation:
----------------------
Halfpel interpolation is done by convolution with the halfpel filter stored
halfpel interpolation is done by convolution with the halfpel filter stored
in the header:
horizontal halfpel samples are found by
@@ -463,8 +463,8 @@ to the closest available fullpel sample
Smaller pel interpolation:
--------------------------
if diag_mc is set then points which lie on a line between 2 vertically,
horizontally or diagonally adjacent halfpel points shall be interpolated
linearly with rounding to nearest and halfway values rounded up.
horiziontally or diagonally adjacent halfpel points shall be interpolated
linearls with rounding to nearest and halfway values rounded up.
points which lie on 2 diagonals at the same time should only use the one
diagonal not containing the fullpel point
@@ -519,8 +519,8 @@ width,height here are the width and height of the LL0 subband not of the final
video
Dequantization:
===============
Dequantizaton:
==============
FIXME
Wavelet Transform:

View File

@@ -126,15 +126,6 @@ The following examples are all valid time duration:
@item 55
55 seconds
@item 0.2
0.2 seconds
@item 200ms
200 milliseconds, that's 0.2s
@item 200000us
200000 microseconds, that's 0.2s
@item 12:03:45
12 hours, 03 minutes and 45 seconds
@@ -713,8 +704,6 @@ FL+FR+FC+LFE+BL+BR+FLC+FRC
FL+FR+FC+LFE+FLC+FRC+SL+SR
@item octagonal
FL+FR+FC+BL+BR+BC+SL+SR
@item hexadecagonal
FL+FR+FC+BL+BR+BC+SL+SR+WL+WR+TBL+TBR+TBC+TFC+TFL+TFR
@item downmix
DL+DR
@end table
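As a small sketch (the output name is illustrative), a standard layout name can be
used wherever a channel layout is expected, for example to generate one second of
silence with an octagonal layout:
@example
ffmpeg -f lavfi -i "anullsrc=channel_layout=octagonal:sample_rate=48000" -t 1 out.wav
@end example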
@@ -931,9 +920,6 @@ corresponding input value will be returned.
@item round(expr)
Round the value of expression @var{expr} to the nearest integer. For example, "round(1.5)" is "2.0".
@item sgn(x)
Compute sign of @var{x}.
@item sin(x)
Compute sine of @var{x}.
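These functions can be used anywhere expressions are accepted; as a small sketch
(the values are illustrative), generating a 440 Hz tone with @code{sin()} via the
@code{aevalsrc} source:
@example
ffmpeg -f lavfi -i "aevalsrc='0.1*sin(2*PI*440*t)':s=48000:d=5" tone.wav
@end example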

View File

@@ -389,7 +389,7 @@ distributor with something like this:
td.in = in;
td.out = out;
ctx->internal->execute(ctx, filter_slice, &td, NULL, FFMIN(outlink->h, ff_filter_get_nb_threads(ctx)));
ctx->internal->execute(ctx, filter_slice, &td, NULL, FFMIN(outlink->h, ctx->graph->nb_threads));
// ...

View File

@@ -38,6 +38,7 @@ OBJCCFLAGS = $(CPPFLAGS) $(CFLAGS) $(OBJCFLAGS)
ASFLAGS := $(CPPFLAGS) $(ASFLAGS)
CXXFLAGS := $(CPPFLAGS) $(CFLAGS) $(CXXFLAGS)
X86ASMFLAGS += $(IFLAGS:%=%/) -I$(<D)/ -Pconfig.asm
NVCCFLAGS += -ptx
HOSTCCFLAGS = $(IFLAGS) $(HOSTCPPFLAGS) $(HOSTCFLAGS)
LDFLAGS := $(ALLFFLIBS:%=$(LD_PATH)lib%) $(LDFLAGS)
@@ -90,7 +91,7 @@ COMPILE_NVCC = $(call COMPILE,NVCC)
%.h.c:
$(Q)echo '#include "$*.h"' >$@
%.ptx: %.cu $(SRC_PATH)/compat/cuda/cuda_runtime.h
%.ptx: %.cu
$(COMPILE_NVCC)
%.ptx.c: %.ptx
@@ -160,9 +161,9 @@ $(SLIBOBJS): | $(sort $(dir $(SLIBOBJS)))
$(TESTOBJS): | $(sort $(dir $(TESTOBJS)))
$(TOOLOBJS): | tools
OUTDIRS := $(OUTDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
CLEANSUFFIXES = *.d *.gcda *.gcno *.h.c *.ho *.map *.o *.pc *.ptx *.ptx.c *.ver *.version *$(DEFAULT_X86ASMD).asm *~ *.ilk *.pdb
CLEANSUFFIXES = *.d *.gcda *.gcno *.h.c *.ho *.map *.o *.pc *.ptx *.ptx.c *.ver *.version *$(DEFAULT_X86ASMD).asm *~
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
define RULES

View File

@@ -10,6 +10,7 @@ ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
OBJS-ffmpeg += fftools/ffmpeg_opt.o fftools/ffmpeg_filter.o fftools/ffmpeg_hw.o
OBJS-ffmpeg-$(CONFIG_CUVID) += fftools/ffmpeg_cuvid.o
OBJS-ffmpeg-$(CONFIG_LIBMFX) += fftools/ffmpeg_qsv.o
ifndef CONFIG_VIDEOTOOLBOX
OBJS-ffmpeg-$(CONFIG_VDA) += fftools/ffmpeg_videotoolbox.o
@@ -31,7 +32,7 @@ $(foreach P,$(AVPROGS-yes),$(eval $(call DOFFTOOL,$(P))))
all: $(AVPROGS)
fftools/ffprobe.o fftools/cmdutils.o: libavutil/ffversion.h | fftools
OUTDIRS += fftools
OBJDIRS += fftools
ifdef AVPROGS
install: install-progs install-data

View File

@@ -55,6 +55,9 @@
#include "libavutil/ffversion.h"
#include "libavutil/version.h"
#include "cmdutils.h"
#if CONFIG_NETWORK
#include "libavformat/network.h"
#endif
#if HAVE_SYS_RESOURCE_H
#include <sys/time.h>
#include <sys/resource.h>
@@ -116,7 +119,7 @@ static void log_callback_report(void *ptr, int level, const char *fmt, va_list v
void init_dynload(void)
{
#if HAVE_SETDLLDIRECTORY && defined(_WIN32)
#ifdef _WIN32
/* Calling SetDllDirectory with the empty string (but not NULL) removes the
* current working directory from the DLL search path as a security pre-caution. */
SetDllDirectory("");
@@ -179,7 +182,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
first = 1;
for (po = options; po->name; po++) {
char buf[128];
char buf[64];
if (((po->flags & req_flags) != req_flags) ||
(alt_flags && !(po->flags & alt_flags)) ||
@@ -845,8 +848,8 @@ do { \
}
if (octx->cur_group.nb_opts || codec_opts || format_opts || resample_opts)
av_log(NULL, AV_LOG_WARNING, "Trailing option(s) found in the "
"command: may be ignored.\n");
av_log(NULL, AV_LOG_WARNING, "Trailing options were found on the "
"commandline.\n");
av_log(NULL, AV_LOG_DEBUG, "Finished splitting the commandline.\n");
@@ -977,7 +980,6 @@ static int init_report(const char *env)
char *filename_template = NULL;
char *key, *val;
int ret, count = 0;
int prog_loglevel, envlevel = 0;
time_t now;
struct tm *tm;
AVBPrint filename;
@@ -1009,7 +1011,6 @@ static int init_report(const char *env)
av_log(NULL, AV_LOG_FATAL, "Invalid report file level\n");
exit_program(1);
}
envlevel = 1;
} else {
av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key);
}
@@ -1026,10 +1027,6 @@ static int init_report(const char *env)
return AVERROR(ENOMEM);
}
prog_loglevel = av_log_get_level();
if (!envlevel)
report_file_level = FFMAX(report_file_level, prog_loglevel);
report_file = fopen(filename.str, "w");
if (!report_file) {
int ret = AVERROR(errno);
@@ -1040,17 +1037,16 @@ static int init_report(const char *env)
av_log_set_callback(log_callback_report);
av_log(NULL, AV_LOG_INFO,
"%s started on %04d-%02d-%02d at %02d:%02d:%02d\n"
"Report written to \"%s\"\n"
"Log level: %d\n",
"Report written to \"%s\"\n",
program_name,
tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec,
filename.str, report_file_level);
filename.str);
av_bprint_finalize(&filename, NULL);
return 0;
}
int opt_report(void *optctx, const char *opt, const char *arg)
int opt_report(const char *opt)
{
return init_report(NULL);
}
@@ -1420,6 +1416,10 @@ static void print_codec(const AVCodec *c)
printf("threads ");
if (c->capabilities & AV_CODEC_CAP_AVOID_PROBING)
printf("avoidprobe ");
if (c->capabilities & AV_CODEC_CAP_INTRA_ONLY)
printf("intraonly ");
if (c->capabilities & AV_CODEC_CAP_LOSSLESS)
printf("lossless ");
if (c->capabilities & AV_CODEC_CAP_HARDWARE)
printf("hardware ");
if (c->capabilities & AV_CODEC_CAP_HYBRID)
@@ -1493,14 +1493,13 @@ static char get_media_type_char(enum AVMediaType type)
}
}
static const AVCodec *next_codec_for_id(enum AVCodecID id, void **iter,
static const AVCodec *next_codec_for_id(enum AVCodecID id, const AVCodec *prev,
int encoder)
{
const AVCodec *c;
while ((c = av_codec_iterate(iter))) {
if (c->id == id &&
(encoder ? av_codec_is_encoder(c) : av_codec_is_decoder(c)))
return c;
while ((prev = av_codec_next(prev))) {
if (prev->id == id &&
(encoder ? av_codec_is_encoder(prev) : av_codec_is_decoder(prev)))
return prev;
}
return NULL;
}
@@ -1537,12 +1536,11 @@ static unsigned get_codecs_sorted(const AVCodecDescriptor ***rcodecs)
static void print_codecs_for_id(enum AVCodecID id, int encoder)
{
void *iter = NULL;
const AVCodec *codec;
const AVCodec *codec = NULL;
printf(" (%s: ", encoder ? "encoders" : "decoders");
while ((codec = next_codec_for_id(id, &iter, encoder)))
while ((codec = next_codec_for_id(id, codec, encoder)))
printf("%s ", codec->name);
printf(")");
@@ -1565,8 +1563,7 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
" -------\n");
for (i = 0; i < nb_codecs; i++) {
const AVCodecDescriptor *desc = codecs[i];
const AVCodec *codec;
void *iter = NULL;
const AVCodec *codec = NULL;
if (strstr(desc->name, "_deprecated"))
continue;
@@ -1584,14 +1581,14 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
/* print decoders/encoders when there's more than one or their
* names are different from codec name */
while ((codec = next_codec_for_id(desc->id, &iter, 0))) {
while ((codec = next_codec_for_id(desc->id, codec, 0))) {
if (strcmp(codec->name, desc->name)) {
print_codecs_for_id(desc->id, 0);
break;
}
}
iter = NULL;
while ((codec = next_codec_for_id(desc->id, &iter, 1))) {
codec = NULL;
while ((codec = next_codec_for_id(desc->id, codec, 1))) {
if (strcmp(codec->name, desc->name)) {
print_codecs_for_id(desc->id, 1);
break;
@@ -1622,10 +1619,9 @@ static void print_codecs(int encoder)
encoder ? "Encoders" : "Decoders");
for (i = 0; i < nb_codecs; i++) {
const AVCodecDescriptor *desc = codecs[i];
const AVCodec *codec;
void *iter = NULL;
const AVCodec *codec = NULL;
while ((codec = next_codec_for_id(desc->id, &iter, encoder))) {
while ((codec = next_codec_for_id(desc->id, codec, encoder))) {
printf(" %c", get_media_type_char(desc->type));
printf((codec->capabilities & AV_CODEC_CAP_FRAME_THREADS) ? "F" : ".");
printf((codec->capabilities & AV_CODEC_CAP_SLICE_THREADS) ? "S" : ".");
@@ -1830,10 +1826,9 @@ static void show_help_codec(const char *name, int encoder)
if (codec)
print_codec(codec);
else if ((desc = avcodec_descriptor_get_by_name(name))) {
void *iter = NULL;
int printed = 0;
while ((codec = next_codec_for_id(desc->id, &iter, encoder))) {
while ((codec = next_codec_for_id(desc->id, codec, encoder))) {
printed = 1;
print_codec(codec);
}
@@ -1868,24 +1863,6 @@ static void show_help_demuxer(const char *name)
show_help_children(fmt->priv_class, AV_OPT_FLAG_DECODING_PARAM);
}
static void show_help_protocol(const char *name)
{
const AVClass *proto_class;
if (!name) {
av_log(NULL, AV_LOG_ERROR, "No protocol name specified.\n");
return;
}
proto_class = avio_protocol_get_class(name);
if (!proto_class) {
av_log(NULL, AV_LOG_ERROR, "Unknown protocol '%s'.\n", name);
return;
}
show_help_children(proto_class, AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_ENCODING_PARAM);
}
static void show_help_muxer(const char *name)
{
const AVCodecDescriptor *desc;
@@ -2016,8 +1993,6 @@ int show_help(void *optctx, const char *opt, const char *arg)
show_help_demuxer(par);
} else if (!strcmp(topic, "muxer")) {
show_help_muxer(par);
} else if (!strcmp(topic, "protocol")) {
show_help_protocol(par);
#if CONFIG_AVFILTER
} else if (!strcmp(topic, "filter")) {
show_help_filter(par);
@@ -2057,7 +2032,7 @@ FILE *get_preset_file(char *filename, size_t filename_size,
av_strlcpy(filename, preset_name, filename_size);
f = fopen(filename, "r");
} else {
#if HAVE_GETMODULEHANDLE && defined(_WIN32)
#ifdef _WIN32
char datadir[MAX_PATH], *ls;
base[2] = NULL;
@@ -2210,7 +2185,7 @@ double get_rotation(AVStream *st)
if (fabs(theta - 90*round(theta/90)) > 2)
av_log(NULL, AV_LOG_WARNING, "Odd rotation angle.\n"
"If you want to help, upload a sample "
"of this file to https://streams.videolan.org/upload/ "
"of this file to ftp://upload.ffmpeg.org/incoming/ "
"and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)");
return theta;

View File

@@ -99,7 +99,7 @@ int opt_default(void *optctx, const char *opt, const char *arg);
*/
int opt_loglevel(void *optctx, const char *opt, const char *arg);
int opt_report(void *optctx, const char *opt, const char *arg);
int opt_report(const char *opt);
int opt_max_alloc(void *optctx, const char *opt, const char *arg);
@@ -236,7 +236,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
{ "colors", OPT_EXIT, { .func_arg = show_colors }, "show available color names" }, \
{ "loglevel", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
{ "v", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
{ "report", 0, { .func_arg = opt_report }, "generate a report" }, \
{ "report", 0, { (void*)opt_report }, "generate a report" }, \
{ "max_alloc", HAS_ARG, { .func_arg = opt_max_alloc }, "set maximum size of a single allocated block", "bytes" }, \
{ "cpuflags", HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" }, \
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" }, \

View File

@@ -182,7 +182,7 @@ static int sub2video_get_blank_frame(InputStream *ist)
ist->sub2video.frame->width = ist->dec_ctx->width ? ist->dec_ctx->width : ist->sub2video.w;
ist->sub2video.frame->height = ist->dec_ctx->height ? ist->dec_ctx->height : ist->sub2video.h;
ist->sub2video.frame->format = AV_PIX_FMT_RGB32;
if ((ret = av_frame_get_buffer(frame, 0)) < 0)
if ((ret = av_frame_get_buffer(frame, 32)) < 0)
return ret;
memset(frame->data[0], 0, frame->height * frame->linesize[0]);
return 0;
@@ -237,7 +237,7 @@ static void sub2video_push_ref(InputStream *ist, int64_t pts)
}
}
void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub)
void sub2video_update(InputStream *ist, AVSubtitle *sub)
{
AVFrame *frame = ist->sub2video.frame;
int8_t *dst;
@@ -254,12 +254,7 @@ void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub)
AV_TIME_BASE_Q, ist->st->time_base);
num_rects = sub->num_rects;
} else {
/* If we are initializing the system, utilize current heartbeat
PTS as the start time, and show until the following subpicture
is received. Otherwise, utilize the previous subpicture's end time
as the fall-back value. */
pts = ist->sub2video.initialize ?
heartbeat_pts : ist->sub2video.end_pts;
pts = ist->sub2video.end_pts;
end_pts = INT64_MAX;
num_rects = 0;
}
@@ -274,7 +269,6 @@ void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub)
sub2video_copy_rect(dst, dst_linesize, frame->width, frame->height, sub->rects[i]);
sub2video_push_ref(ist, pts);
ist->sub2video.end_pts = end_pts;
ist->sub2video.initialize = 0;
}
static void sub2video_heartbeat(InputStream *ist, int64_t pts)
@@ -297,11 +291,9 @@ static void sub2video_heartbeat(InputStream *ist, int64_t pts)
/* do not send the heartbeat frame if the subtitle is already ahead */
if (pts2 <= ist2->sub2video.last_pts)
continue;
if (pts2 >= ist2->sub2video.end_pts || ist2->sub2video.initialize)
/* if we have hit the end of the current displayed subpicture,
or if we need to initialize the system, update the
overlayed subpicture and its start/end times */
sub2video_update(ist2, pts2 + 1, NULL);
if (pts2 >= ist2->sub2video.end_pts ||
(!ist2->sub2video.frame->data[0] && ist2->sub2video.end_pts < INT64_MAX))
sub2video_update(ist2, NULL);
for (j = 0, nb_reqs = 0; j < ist2->nb_filters; j++)
nb_reqs += av_buffersrc_get_nb_failed_requests(ist2->filters[j]->filter);
if (nb_reqs)
@@ -315,7 +307,7 @@ static void sub2video_flush(InputStream *ist)
int ret;
if (ist->sub2video.end_pts < INT64_MAX)
sub2video_update(ist, INT64_MAX, NULL);
sub2video_update(ist, NULL);
for (i = 0; i < ist->nb_filters; i++) {
ret = av_buffersrc_add_frame(ist->filters[i]->filter, NULL);
if (ret != AVERROR_EOF && ret < 0)
@@ -501,37 +493,32 @@ static void ffmpeg_cleanup(int ret)
FilterGraph *fg = filtergraphs[i];
avfilter_graph_free(&fg->graph);
for (j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
struct InputStream *ist = ifilter->ist;
while (av_fifo_size(ifilter->frame_queue)) {
while (av_fifo_size(fg->inputs[j]->frame_queue)) {
AVFrame *frame;
av_fifo_generic_read(ifilter->frame_queue, &frame,
av_fifo_generic_read(fg->inputs[j]->frame_queue, &frame,
sizeof(frame), NULL);
av_frame_free(&frame);
}
av_fifo_freep(&ifilter->frame_queue);
if (ist->sub2video.sub_queue) {
while (av_fifo_size(ist->sub2video.sub_queue)) {
av_fifo_freep(&fg->inputs[j]->frame_queue);
if (fg->inputs[j]->ist->sub2video.sub_queue) {
while (av_fifo_size(fg->inputs[j]->ist->sub2video.sub_queue)) {
AVSubtitle sub;
av_fifo_generic_read(ist->sub2video.sub_queue,
av_fifo_generic_read(fg->inputs[j]->ist->sub2video.sub_queue,
&sub, sizeof(sub), NULL);
avsubtitle_free(&sub);
}
av_fifo_freep(&ist->sub2video.sub_queue);
av_fifo_freep(&fg->inputs[j]->ist->sub2video.sub_queue);
}
av_buffer_unref(&ifilter->hw_frames_ctx);
av_freep(&ifilter->name);
av_buffer_unref(&fg->inputs[j]->hw_frames_ctx);
av_freep(&fg->inputs[j]->name);
av_freep(&fg->inputs[j]);
}
av_freep(&fg->inputs);
for (j = 0; j < fg->nb_outputs; j++) {
OutputFilter *ofilter = fg->outputs[j];
av_freep(&ofilter->name);
av_freep(&ofilter->formats);
av_freep(&ofilter->channel_layouts);
av_freep(&ofilter->sample_rates);
av_freep(&fg->outputs[j]->name);
av_freep(&fg->outputs[j]->formats);
av_freep(&fg->outputs[j]->channel_layouts);
av_freep(&fg->outputs[j]->sample_rates);
av_freep(&fg->outputs[j]);
}
av_freep(&fg->outputs);
@@ -563,7 +550,9 @@ static void ffmpeg_cleanup(int ret)
if (!ost)
continue;
av_bsf_free(&ost->bsf_ctx);
for (j = 0; j < ost->nb_bitstream_filters; j++)
av_bsf_free(&ost->bsf_ctx[j]);
av_freep(&ost->bsf_ctx);
av_frame_free(&ost->filtered_frame);
av_frame_free(&ost->last_frame);
@@ -578,7 +567,6 @@ static void ffmpeg_cleanup(int ret)
ost->audio_channels_mapped = 0;
av_dict_free(&ost->sws_dict);
av_dict_free(&ost->swr_opts);
avcodec_free_context(&ost->enc_ctx);
avcodec_parameters_free(&ost->ref_par);
@@ -791,8 +779,6 @@ static void write_packet(OutputFile *of, AVPacket *pkt, OutputStream *ost, int u
int64_t max = ost->last_mux_dts + !(s->oformat->flags & AVFMT_TS_NONSTRICT);
if (pkt->dts < max) {
int loglevel = max - pkt->dts > 2 || st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO ? AV_LOG_WARNING : AV_LOG_DEBUG;
if (exit_on_error)
loglevel = AV_LOG_ERROR;
av_log(s, loglevel, "Non-monotonous DTS in output stream "
"%d:%d; previous: %"PRId64", current: %"PRId64"; ",
ost->file_index, ost->st->index, ost->last_mux_dts, pkt->dts);
@@ -862,15 +848,40 @@ static void output_packet(OutputFile *of, AVPacket *pkt,
{
int ret = 0;
/* apply the output bitstream filters */
if (ost->bsf_ctx) {
ret = av_bsf_send_packet(ost->bsf_ctx, eof ? NULL : pkt);
/* apply the output bitstream filters, if any */
if (ost->nb_bitstream_filters) {
int idx;
ret = av_bsf_send_packet(ost->bsf_ctx[0], eof ? NULL : pkt);
if (ret < 0)
goto finish;
while ((ret = av_bsf_receive_packet(ost->bsf_ctx, pkt)) >= 0)
write_packet(of, pkt, ost, 0);
if (ret == AVERROR(EAGAIN))
ret = 0;
eof = 0;
idx = 1;
while (idx) {
/* get a packet from the previous filter up the chain */
ret = av_bsf_receive_packet(ost->bsf_ctx[idx - 1], pkt);
if (ret == AVERROR(EAGAIN)) {
ret = 0;
idx--;
continue;
} else if (ret == AVERROR_EOF) {
eof = 1;
} else if (ret < 0)
goto finish;
/* send it to the next filter down the chain or to the muxer */
if (idx < ost->nb_bitstream_filters) {
ret = av_bsf_send_packet(ost->bsf_ctx[idx], eof ? NULL : pkt);
if (ret < 0)
goto finish;
idx++;
eof = 0;
} else if (eof)
goto finish;
else
write_packet(of, pkt, ost, 0);
}
} else if (!eof)
write_packet(of, pkt, ost, 0);
@@ -1068,7 +1079,6 @@ static void do_video_out(OutputFile *of,
if (!ost->filters_script &&
!ost->filters &&
(nb_filtergraphs == 0 || !filtergraphs[0]->graph_desc) &&
next_picture &&
ist &&
lrintf(next_picture->pkt_duration * av_q2d(ist->st->time_base) / av_q2d(enc->time_base)) > 0) {
@@ -1125,7 +1135,7 @@ static void do_video_out(OutputFile *of,
av_log(NULL, AV_LOG_DEBUG, "Not duplicating %d initial frames\n", (int)lrintf(delta0));
delta = duration;
delta0 = 0;
ost->sync_opts = llrint(sync_ipts);
ost->sync_opts = lrint(sync_ipts);
}
case VSYNC_CFR:
// FIXME set to 0.5 after we fix some dts/pts bugs like in avidec.c
@@ -1136,18 +1146,18 @@ static void do_video_out(OutputFile *of,
else if (delta > 1.1) {
nb_frames = lrintf(delta);
if (delta0 > 1.1)
nb0_frames = llrintf(delta0 - 0.6);
nb0_frames = lrintf(delta0 - 0.6);
}
break;
case VSYNC_VFR:
if (delta <= -0.6)
nb_frames = 0;
else if (delta > 0.6)
ost->sync_opts = llrint(sync_ipts);
ost->sync_opts = lrint(sync_ipts);
break;
case VSYNC_DROP:
case VSYNC_PASSTHROUGH:
ost->sync_opts = llrint(sync_ipts);
ost->sync_opts = lrint(sync_ipts);
break;
default:
av_assert0(0);
@@ -1183,27 +1193,33 @@ static void do_video_out(OutputFile *of,
}
ost->last_dropped = nb_frames == nb0_frames && next_picture;
/* duplicates frame if needed */
for (i = 0; i < nb_frames; i++) {
AVFrame *in_picture;
/* duplicates frame if needed */
for (i = 0; i < nb_frames; i++) {
AVFrame *in_picture;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
if (i < nb0_frames && ost->last_frame) {
in_picture = ost->last_frame;
} else
in_picture = next_picture;
if (!in_picture)
return;
in_picture->pts = ost->sync_opts;
#if 1
if (!check_recording_time(ost))
#else
if (ost->frame_number >= ost->max_frames)
#endif
return;
{
int forced_keyframe = 0;
double pts_time;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
if (i < nb0_frames && ost->last_frame) {
in_picture = ost->last_frame;
} else
in_picture = next_picture;
if (!in_picture)
return;
in_picture->pts = ost->sync_opts;
if (!check_recording_time(ost))
return;
if (enc->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME) &&
ost->top_field_first >= 0)
@@ -1254,8 +1270,7 @@ static void do_video_out(OutputFile *of,
ost->forced_keyframes_expr_const_values[FKF_N] += 1;
} else if ( ost->forced_keyframes
&& !strncmp(ost->forced_keyframes, "source", 6)
&& in_picture->key_frame==1
&& !i) {
&& in_picture->key_frame==1) {
forced_keyframe = 1;
}
@@ -1277,8 +1292,6 @@ static void do_video_out(OutputFile *of,
ret = avcodec_send_frame(enc, in_picture);
if (ret < 0)
goto error;
// Make sure Closed Captions will not be duplicated
av_frame_remove_side_data(in_picture, AV_FRAME_DATA_A53_CC);
while (1) {
ret = avcodec_receive_packet(enc, &pkt);
@@ -1315,17 +1328,18 @@ static void do_video_out(OutputFile *of,
fprintf(ost->logfile, "%s", enc->stats_out);
}
}
ost->sync_opts++;
/*
* For video, number of frames in == number of packets out.
* But there may be reordering, so we can't throw away frames on encoder
* flush, we need to limit them here, before they go into encoder.
*/
ost->frame_number++;
if (vstats_filename && frame_size)
do_video_stats(ost, frame_size);
}
ost->sync_opts++;
/*
* For video, number of frames in == number of packets out.
* But there may be reordering, so we can't throw away frames on encoder
* flush, we need to limit them here, before they go into encoder.
*/
ost->frame_number++;
if (vstats_filename && frame_size)
do_video_stats(ost, frame_size);
}
if (!ost->last_frame)
ost->last_frame = av_frame_alloc();
@@ -1478,6 +1492,8 @@ static int reap_filters(int flush)
av_rescale_q(filtered_frame->pts, filter_tb, enc->time_base) -
av_rescale_q(start_time, AV_TIME_BASE_Q, enc->time_base);
}
//if (ost->source_index >= 0)
// *filtered_frame= *input_streams[ost->source_index]->decoded_frame; //for me_threshold
switch (av_buffersink_get_type(filter)) {
case AVMEDIA_TYPE_VIDEO:
@@ -1808,7 +1824,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
} else
av_log(NULL, AV_LOG_INFO, "%s %c", buf.str, end);
fflush(stderr);
fflush(stderr);
}
av_bprint_finalize(&buf, NULL);
@@ -1893,6 +1909,9 @@ static void flush_encoders(void)
}
}
if (enc->codec_type == AVMEDIA_TYPE_AUDIO && enc->frame_size <= 1)
continue;
if (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO)
continue;
@@ -1912,46 +1931,46 @@ static void flush_encoders(void)
av_assert0(0);
}
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
update_benchmark(NULL);
update_benchmark(NULL);
while ((ret = avcodec_receive_packet(enc, &pkt)) == AVERROR(EAGAIN)) {
ret = avcodec_send_frame(enc, NULL);
if (ret < 0) {
while ((ret = avcodec_receive_packet(enc, &pkt)) == AVERROR(EAGAIN)) {
ret = avcodec_send_frame(enc, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n",
desc,
av_err2str(ret));
exit_program(1);
}
}
update_benchmark("flush_%s %d.%d", desc, ost->file_index, ost->index);
if (ret < 0 && ret != AVERROR_EOF) {
av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n",
desc,
av_err2str(ret));
exit_program(1);
}
}
update_benchmark("flush_%s %d.%d", desc, ost->file_index, ost->index);
if (ret < 0 && ret != AVERROR_EOF) {
av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n",
desc,
av_err2str(ret));
exit_program(1);
}
if (ost->logfile && enc->stats_out) {
fprintf(ost->logfile, "%s", enc->stats_out);
}
if (ret == AVERROR_EOF) {
output_packet(of, &pkt, ost, 1);
break;
}
if (ost->finished & MUXER_FINISHED) {
av_packet_unref(&pkt);
continue;
}
av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);
pkt_size = pkt.size;
output_packet(of, &pkt, ost, 0);
if (ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO && vstats_filename) {
do_video_stats(ost, pkt_size);
}
if (ost->logfile && enc->stats_out) {
fprintf(ost->logfile, "%s", enc->stats_out);
}
if (ret == AVERROR_EOF) {
output_packet(of, &pkt, ost, 1);
break;
}
if (ost->finished & MUXER_FINISHED) {
av_packet_unref(&pkt);
continue;
}
av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);
pkt_size = pkt.size;
output_packet(of, &pkt, ost, 0);
if (ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO && vstats_filename) {
do_video_stats(ost, pkt_size);
}
}
}
}
@@ -1982,13 +2001,12 @@ static void do_streamcopy(InputStream *ist, OutputStream *ost, const AVPacket *p
InputFile *f = input_files [ist->file_index];
int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
int64_t ost_tb_start_time = av_rescale_q(start_time, AV_TIME_BASE_Q, ost->mux_timebase);
AVPacket opkt;
AVPacket opkt = { 0 };
av_init_packet(&opkt);
// EOF: flush output bitstream filters.
if (!pkt) {
av_init_packet(&opkt);
opkt.data = NULL;
opkt.size = 0;
output_packet(of, &opkt, ost, 1);
return;
}
@@ -2027,29 +2045,40 @@ static void do_streamcopy(InputStream *ist, OutputStream *ost, const AVPacket *p
if (ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
ost->sync_opts++;
if (av_packet_ref(&opkt, pkt) < 0)
exit_program(1);
if (pkt->pts != AV_NOPTS_VALUE)
opkt.pts = av_rescale_q(pkt->pts, ist->st->time_base, ost->mux_timebase) - ost_tb_start_time;
else
opkt.pts = AV_NOPTS_VALUE;
if (pkt->dts == AV_NOPTS_VALUE) {
if (pkt->dts == AV_NOPTS_VALUE)
opkt.dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ost->mux_timebase);
} else if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
int duration = av_get_audio_frame_duration(ist->dec_ctx, pkt->size);
if(!duration)
duration = ist->dec_ctx->frame_size;
opkt.dts = av_rescale_delta(ist->st->time_base, pkt->dts,
(AVRational){1, ist->dec_ctx->sample_rate}, duration,
&ist->filter_in_rescale_delta_last, ost->mux_timebase);
/* dts will be set immediately afterwards to what pts is now */
opkt.pts = opkt.dts - ost_tb_start_time;
} else
else
opkt.dts = av_rescale_q(pkt->dts, ist->st->time_base, ost->mux_timebase);
opkt.dts -= ost_tb_start_time;
if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO && pkt->dts != AV_NOPTS_VALUE) {
int duration = av_get_audio_frame_duration(ist->dec_ctx, pkt->size);
if(!duration)
duration = ist->dec_ctx->frame_size;
opkt.dts = opkt.pts = av_rescale_delta(ist->st->time_base, pkt->dts,
(AVRational){1, ist->dec_ctx->sample_rate}, duration, &ist->filter_in_rescale_delta_last,
ost->mux_timebase) - ost_tb_start_time;
}
opkt.duration = av_rescale_q(pkt->duration, ist->st->time_base, ost->mux_timebase);
opkt.flags = pkt->flags;
if (pkt->buf) {
opkt.buf = av_buffer_ref(pkt->buf);
if (!opkt.buf)
exit_program(1);
}
opkt.data = pkt->data;
opkt.size = pkt->size;
av_copy_packet_side_data(&opkt, pkt);
output_packet(of, &opkt, ost, 0);
}
@@ -2290,12 +2319,14 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output,
ist->samples_decoded += decoded_frame->nb_samples;
ist->frames_decoded++;
#if 1
/* increment next_dts to use for the case where the input stream does not
have timestamps or there are multiple frames in the packet */
ist->next_pts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
avctx->sample_rate;
ist->next_dts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
avctx->sample_rate;
#endif
if (decoded_frame->pts != AV_NOPTS_VALUE) {
decoded_frame_tb = ist->st->time_base;
@@ -2370,7 +2401,7 @@ static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_
av_log(ist->dec_ctx, AV_LOG_WARNING,
"video_delay is larger in decoder than demuxer %d > %d.\n"
"If you want to help, upload a sample "
"of this file to https://streams.videolan.org/upload/ "
"of this file to ftp://upload.ffmpeg.org/incoming/ "
"and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)\n",
ist->dec_ctx->has_b_frames,
ist->st->codecpar->video_delay);
@@ -2492,7 +2523,7 @@ static int transcode_subtitles(InputStream *ist, AVPacket *pkt, int *got_output,
return ret;
if (ist->sub2video.frame) {
sub2video_update(ist, INT64_MIN, &subtitle);
sub2video_update(ist, &subtitle);
} else if (ist->nb_filters) {
if (!ist->sub2video.sub_queue)
ist->sub2video.sub_queue = av_fifo_alloc(8 * sizeof(AVSubtitle));
@@ -2761,7 +2792,7 @@ static void print_sdp(void)
if (avio_open2(&sdp_pb, sdp_filename, AVIO_FLAG_WRITE, &int_cb, NULL) < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open sdp file '%s'\n", sdp_filename);
} else {
avio_print(sdp_pb, sdp);
avio_printf(sdp_pb, "SDP:\n%s", sdp);
avio_closep(&sdp_pb);
av_freep(&sdp_filename);
}
@@ -2993,28 +3024,35 @@ static int check_init_output_file(OutputFile *of, int file_index)
static int init_output_bsfs(OutputStream *ost)
{
AVBSFContext *ctx = ost->bsf_ctx;
int ret;
AVBSFContext *ctx;
int i, ret;
if (!ctx)
if (!ost->nb_bitstream_filters)
return 0;
ret = avcodec_parameters_copy(ctx->par_in, ost->st->codecpar);
if (ret < 0)
return ret;
for (i = 0; i < ost->nb_bitstream_filters; i++) {
ctx = ost->bsf_ctx[i];
ctx->time_base_in = ost->st->time_base;
ret = avcodec_parameters_copy(ctx->par_in,
i ? ost->bsf_ctx[i - 1]->par_out : ost->st->codecpar);
if (ret < 0)
return ret;
ret = av_bsf_init(ctx);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error initializing bitstream filter: %s\n",
ctx->filter->name);
return ret;
ctx->time_base_in = i ? ost->bsf_ctx[i - 1]->time_base_out : ost->st->time_base;
ret = av_bsf_init(ctx);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error initializing bitstream filter: %s\n",
ost->bsf_ctx[i]->filter->name);
return ret;
}
}
ctx = ost->bsf_ctx[ost->nb_bitstream_filters - 1];
ret = avcodec_parameters_copy(ost->st->codecpar, ctx->par_out);
if (ret < 0)
return ret;
ost->st->time_base = ctx->time_base_out;
return 0;
@@ -3311,7 +3349,7 @@ static int init_output_stream_encode(OutputStream *ost)
"if you want a different framerate.\n",
ost->file_index, ost->index);
}
// ost->frame_rate = ist->st->avg_frame_rate.num ? ist->st->avg_frame_rate : (AVRational){25, 1};
if (ost->enc->supported_framerates && !ost->force_fps) {
int idx = av_find_nearest_q_idx(ost->frame_rate, ost->enc->supported_framerates);
ost->frame_rate = ost->enc->supported_framerates[idx];
@@ -3346,6 +3384,10 @@ static int init_output_stream_encode(OutputStream *ost)
av_log(oc, AV_LOG_WARNING, "Frame rate very high for a muxer not efficiently supporting it.\n"
"Please consider specifying a lower framerate, a different muxer or -vsync 2\n");
}
for (j = 0; j < ost->forced_kf_count; j++)
ost->forced_kf_pts[j] = av_rescale_q(ost->forced_kf_pts[j],
AV_TIME_BASE_Q,
enc_ctx->time_base);
enc_ctx->width = av_buffersink_get_w(ost->filter->filter);
enc_ctx->height = av_buffersink_get_h(ost->filter->filter);
@@ -3390,8 +3432,8 @@ static int init_output_stream_encode(OutputStream *ost)
ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N] = NAN;
ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T] = NAN;
// Don't parse the 'forced_keyframes' in case of 'keep-source-keyframes',
// parse it only for static kf timings
// Don't parse the 'forced_keyframes' in case of 'keep-source-keyframes',
// parse it only for static kf timings
} else if(strncmp(ost->forced_keyframes, "source", 6)) {
parse_forced_key_frames(ost->forced_keyframes, ost, ost->enc_ctx);
}
@@ -3447,14 +3489,21 @@ static int init_output_stream(OutputStream *ost, char *error, int error_len)
!av_dict_get(ost->encoder_opts, "ab", NULL, 0))
av_dict_set(&ost->encoder_opts, "b", "128000", 0);
ret = hw_device_setup_for_encode(ost);
if (ret < 0) {
snprintf(error, error_len, "Device setup failed for "
"encoder on output stream #%d:%d : %s",
if (ost->filter && av_buffersink_get_hw_frames_ctx(ost->filter->filter) &&
((AVHWFramesContext*)av_buffersink_get_hw_frames_ctx(ost->filter->filter)->data)->format ==
av_buffersink_get_format(ost->filter->filter)) {
ost->enc_ctx->hw_frames_ctx = av_buffer_ref(av_buffersink_get_hw_frames_ctx(ost->filter->filter));
if (!ost->enc_ctx->hw_frames_ctx)
return AVERROR(ENOMEM);
} else {
ret = hw_device_setup_for_encode(ost);
if (ret < 0) {
snprintf(error, error_len, "Device setup failed for "
"encoder on output stream #%d:%d : %s",
ost->file_index, ost->index, av_err2str(ret));
return ret;
return ret;
}
}
if (ist && ist->dec->type == AVMEDIA_TYPE_SUBTITLE && ost->enc->type == AVMEDIA_TYPE_SUBTITLE) {
int input_props = 0, output_props = 0;
AVCodecDescriptor const *input_descriptor =
@@ -3530,14 +3579,12 @@ static int init_output_stream(OutputStream *ost, char *error, int error_len)
int i;
for (i = 0; i < ist->st->nb_side_data; i++) {
AVPacketSideData *sd = &ist->st->side_data[i];
if (sd->type != AV_PKT_DATA_CPB_PROPERTIES) {
uint8_t *dst = av_stream_new_side_data(ost->st, sd->type, sd->size);
if (!dst)
return AVERROR(ENOMEM);
memcpy(dst, sd->data, sd->size);
if (ist->autorotate && sd->type == AV_PKT_DATA_DISPLAYMATRIX)
av_display_rotation_set((uint32_t *)dst, 0);
}
uint8_t *dst = av_stream_new_side_data(ost->st, sd->type, sd->size);
if (!dst)
return AVERROR(ENOMEM);
memcpy(dst, sd->data, sd->size);
if (ist->autorotate && sd->type == AV_PKT_DATA_DISPLAYMATRIX)
av_display_rotation_set((uint32_t *)dst, 0);
}
}
@@ -3836,9 +3883,7 @@ static OutputStream *choose_output(void)
av_rescale_q(ost->st->cur_dts, ost->st->time_base,
AV_TIME_BASE_Q);
if (ost->st->cur_dts == AV_NOPTS_VALUE)
av_log(NULL, AV_LOG_DEBUG,
"cur_dts is invalid st:%d (%d) [init:%d i_done:%d finish:%d] (this is harmless if it occurs once at the start per stream)\n",
ost->st->index, ost->st->id, ost->initialized, ost->inputs_done, ost->finished);
av_log(NULL, AV_LOG_DEBUG, "cur_dts is invalid (this is harmless if it occurs once at the start per stream)\n");
if (!ost->initialized && !ost->inputs_done)
return ost;
@@ -4131,7 +4176,7 @@ static void reset_eagain(void)
// set duration to max(tmp, duration) in a proper time base and return duration's time_base
static AVRational duration_max(int64_t tmp, int64_t *duration, AVRational tmp_time_base,
AVRational time_base)
AVRational time_base)
{
int ret;
@@ -4156,7 +4201,7 @@ static int seek_to_start(InputFile *ifile, AVFormatContext *is)
int i, ret, has_audio = 0;
int64_t duration = 0;
ret = avformat_seek_file(is, -1, INT64_MIN, is->start_time, is->start_time, 0);
ret = av_seek_frame(is, -1, is->start_time, 0);
if (ret < 0)
return ret;
@@ -4196,8 +4241,7 @@ static int seek_to_start(InputFile *ifile, AVFormatContext *is)
ifile->time_base = ist->st->time_base;
/* the total duration of the stream, max_pts - min_pts is
* the duration of the stream without the last frame */
if (ist->max_pts > ist->min_pts && ist->max_pts - (uint64_t)ist->min_pts < INT64_MAX - duration)
duration += ist->max_pts - ist->min_pts;
duration += ist->max_pts - ist->min_pts;
ifile->time_base = duration_max(duration, &ifile->duration, ist->st->time_base,
ifile->time_base);
}
@@ -4224,7 +4268,6 @@ static int process_input(int file_index)
int ret, thread_ret, i, j;
int64_t duration;
int64_t pkt_dts;
int disable_discontinuity_correction = copy_ts;
is = ifile->ctx;
ret = get_input_packet(ifile, &pkt);
@@ -4426,20 +4469,10 @@ static int process_input(int file_index)
pkt.dts += duration;
pkt_dts = av_rescale_q_rnd(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
if (copy_ts && pkt_dts != AV_NOPTS_VALUE && ist->next_dts != AV_NOPTS_VALUE &&
(is->iformat->flags & AVFMT_TS_DISCONT) && ist->st->pts_wrap_bits < 60) {
int64_t wrap_dts = av_rescale_q_rnd(pkt.dts + (1LL<<ist->st->pts_wrap_bits),
ist->st->time_base, AV_TIME_BASE_Q,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
if (FFABS(wrap_dts - ist->next_dts) < FFABS(pkt_dts - ist->next_dts)/10)
disable_discontinuity_correction = 0;
}
if ((ist->dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO ||
ist->dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) &&
pkt_dts != AV_NOPTS_VALUE && ist->next_dts != AV_NOPTS_VALUE &&
!disable_discontinuity_correction) {
!copy_ts) {
int64_t delta = pkt_dts - ist->next_dts;
if (is->iformat->flags & AVFMT_TS_DISCONT) {
if (delta < -1LL*dts_delta_threshold*AV_TIME_BASE ||
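For reference, a minimal standalone sketch (not part of the diff) of the wrap-around heuristic on the 4.3 side of the hunk above; next_dts, time_base and pts_wrap_bits stand in for the corresponding InputStream/AVStream fields:
#include "libavutil/avutil.h"
#include "libavutil/mathematics.h"

/* Returns 1 when dts looks like a pts_wrap_bits wrap-around relative to the
 * predicted next DTS, i.e. the wrapped value lands an order of magnitude
 * closer to next_dts than the raw value does. */
static int looks_like_ts_wrap(int64_t dts, int64_t next_dts,
                              AVRational time_base, int pts_wrap_bits)
{
    int64_t pkt_dts  = av_rescale_q_rnd(dts, time_base, AV_TIME_BASE_Q,
                                        AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    int64_t wrap_dts = av_rescale_q_rnd(dts + (1LL << pts_wrap_bits),
                                        time_base, AV_TIME_BASE_Q,
                                        AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    return FFABS(wrap_dts - next_dts) < FFABS(pkt_dts - next_dts) / 10;
}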
@@ -4447,10 +4480,7 @@ static int process_input(int file_index)
pkt_dts + AV_TIME_BASE/10 < FFMAX(ist->pts, ist->dts)) {
ifile->ts_offset -= delta;
av_log(NULL, AV_LOG_DEBUG,
"timestamp discontinuity for stream #%d:%d "
"(id=%d, type=%s): %"PRId64", new offset= %"PRId64"\n",
ist->file_index, ist->st->index, ist->st->id,
av_get_media_type_string(ist->dec_ctx->codec_type),
"timestamp discontinuity %"PRId64", new offset= %"PRId64"\n",
delta, ifile->ts_offset);
pkt.dts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
if (pkt.pts != AV_NOPTS_VALUE)
@@ -4713,10 +4743,6 @@ static int transcode(void)
av_freep(&ost->enc_ctx->stats_in);
}
total_packets_written += ost->packets_written;
if (!ost->packets_written && (abort_on_flags & ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM)) {
av_log(NULL, AV_LOG_FATAL, "Empty output on stream %d.\n", i);
exit_program(1);
}
}
if (!total_packets_written && (abort_on_flags & ABORT_ON_FLAG_EMPTY_OUTPUT)) {
@@ -4734,6 +4760,7 @@ static int transcode(void)
}
}
av_buffer_unref(&hw_device_ctx);
hw_device_free_all();
/* finished ! */
@@ -4861,6 +4888,11 @@ int main(int argc, char **argv)
exit_program(1);
}
// if (nb_input_files == 0) {
// av_log(NULL, AV_LOG_FATAL, "At least one input file must be specified\n");
// exit_program(1);
// }
for (i = 0; i < nb_output_files; i++) {
if (strcmp(output_files[i]->ctx->oformat->name, "rtp"))
want_sdp = 0;

fftools/ffmpeg.h
View File

@@ -61,6 +61,7 @@ enum HWAccelID {
HWACCEL_GENERIC,
HWACCEL_VIDEOTOOLBOX,
HWACCEL_QSV,
HWACCEL_CUVID,
};
typedef struct HWAccel {
@@ -71,7 +72,7 @@ typedef struct HWAccel {
} HWAccel;
typedef struct HWDevice {
const char *name;
char *name;
enum AVHWDeviceType type;
AVBufferRef *device_ref;
} HWDevice;
@@ -348,7 +349,6 @@ typedef struct InputStream {
AVFifoBuffer *sub_queue; ///< queue of AVSubtitle* before filter init
AVFrame *frame;
int w, h;
unsigned int initialize; ///< marks if sub2video_update should force an initialization
} sub2video;
int dr1;
@@ -430,8 +430,7 @@ enum forced_keyframes_const {
FKF_NB
};
#define ABORT_ON_FLAG_EMPTY_OUTPUT (1 << 0)
#define ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM (1 << 1)
#define ABORT_ON_FLAG_EMPTY_OUTPUT (1 << 0)
extern const char *const forced_keyframes_const_names[];
@@ -460,7 +459,8 @@ typedef struct OutputStream {
AVRational mux_timebase;
AVRational enc_timebase;
AVBSFContext *bsf_ctx;
int nb_bitstream_filters;
AVBSFContext **bsf_ctx;
AVCodecContext *enc_ctx;
AVCodecParameters *ref_par; /* associated input codec parameters with encoders options applied */
@@ -615,6 +615,7 @@ extern const AVIOInterruptCB int_cb;
extern const OptionDef options[];
extern const HWAccel hwaccels[];
extern AVBufferRef *hw_device_ctx;
#if CONFIG_QSV
extern char *qsv_device;
#endif
@@ -645,7 +646,7 @@ int filtergraph_is_simple(FilterGraph *fg);
int init_simple_filtergraph(InputStream *ist, OutputStream *ost);
int init_complex_filtergraph(FilterGraph *fg);
void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub);
void sub2video_update(InputStream *ist, AVSubtitle *sub);
int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame);
@@ -653,6 +654,7 @@ int ffmpeg_parse_options(int argc, char **argv);
int videotoolbox_init(AVCodecContext *s);
int qsv_init(AVCodecContext *s);
int cuvid_init(AVCodecContext *s);
HWDevice *hw_device_get_by_name(const char *name);
int hw_device_init_from_string(const char *arg, HWDevice **dev);
@@ -660,7 +662,6 @@ void hw_device_free_all(void);
int hw_device_setup_for_decode(InputStream *ist);
int hw_device_setup_for_encode(OutputStream *ost);
int hw_device_setup_for_filter(FilterGraph *fg);
int hwaccel_decode_init(AVCodecContext *avctx);

73
fftools/ffmpeg_cuvid.c Normal file
View File

@@ -0,0 +1,73 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/hwcontext.h"
#include "libavutil/pixdesc.h"
#include "ffmpeg.h"
static void cuvid_uninit(AVCodecContext *avctx)
{
InputStream *ist = avctx->opaque;
av_buffer_unref(&ist->hw_frames_ctx);
}
int cuvid_init(AVCodecContext *avctx)
{
InputStream *ist = avctx->opaque;
AVHWFramesContext *frames_ctx;
int ret;
av_log(avctx, AV_LOG_VERBOSE, "Initializing cuvid hwaccel\n");
if (!hw_device_ctx) {
ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_CUDA,
ist->hwaccel_device, NULL, 0);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Error creating a CUDA device\n");
return ret;
}
}
av_buffer_unref(&ist->hw_frames_ctx);
ist->hw_frames_ctx = av_hwframe_ctx_alloc(hw_device_ctx);
if (!ist->hw_frames_ctx) {
av_log(avctx, AV_LOG_ERROR, "Error creating a CUDA frames context\n");
return AVERROR(ENOMEM);
}
frames_ctx = (AVHWFramesContext*)ist->hw_frames_ctx->data;
frames_ctx->format = AV_PIX_FMT_CUDA;
frames_ctx->sw_format = avctx->sw_pix_fmt;
frames_ctx->width = avctx->width;
frames_ctx->height = avctx->height;
av_log(avctx, AV_LOG_DEBUG, "Initializing CUDA frames context: sw_format = %s, width = %d, height = %d\n",
av_get_pix_fmt_name(frames_ctx->sw_format), frames_ctx->width, frames_ctx->height);
ret = av_hwframe_ctx_init(ist->hw_frames_ctx);
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "Error initializing a CUDA frame pool\n");
return ret;
}
ist->hwaccel_uninit = cuvid_uninit;
return 0;
}
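As a usage note, frames decoded through the frames context set up above come back with format AV_PIX_FMT_CUDA; a minimal sketch (hw_frame is an assumed, already-decoded frame) of downloading such a frame to system memory with the public hwcontext API:
#include "libavutil/frame.h"
#include "libavutil/hwcontext.h"

static int download_cuda_frame(const AVFrame *hw_frame, AVFrame **out)
{
    AVFrame *sw_frame = av_frame_alloc();
    int ret;

    if (!sw_frame)
        return AVERROR(ENOMEM);
    /* Copies the data out of GPU memory; the destination format defaults to
     * the first supported transfer format when sw_frame->format is left unset. */
    ret = av_hwframe_transfer_data(sw_frame, hw_frame, 0);
    if (ret < 0) {
        av_frame_free(&sw_frame);
        return ret;
    }
    *out = sw_frame;
    return 0;
}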

fftools/ffmpeg_filter.c
View File

@@ -99,8 +99,7 @@ void choose_sample_fmt(AVStream *st, AVCodec *codec)
break;
}
if (*p == -1) {
const AVCodecDescriptor *desc = avcodec_descriptor_get(codec->id);
if(desc && (desc->props & AV_CODEC_PROP_LOSSLESS) && av_get_sample_fmt_name(st->codecpar->format) > av_get_sample_fmt_name(codec->sample_fmts[0]))
if((codec->capabilities & AV_CODEC_CAP_LOSSLESS) && av_get_sample_fmt_name(st->codecpar->format) > av_get_sample_fmt_name(codec->sample_fmts[0]))
av_log(NULL, AV_LOG_ERROR, "Conversion will not be lossless.\n");
if(av_get_sample_fmt_name(st->codecpar->format))
av_log(NULL, AV_LOG_WARNING,
@@ -294,17 +293,10 @@ static void init_input_filter(FilterGraph *fg, AVFilterInOut *in)
exit_program(1);
}
ist = input_streams[input_files[file_idx]->ist_index + st->index];
if (ist->user_set_discard == AVDISCARD_ALL) {
av_log(NULL, AV_LOG_FATAL, "Stream specifier '%s' in filtergraph description %s "
"matches a disabled input stream.\n", p, fg->graph_desc);
exit_program(1);
}
} else {
/* find the first unused stream of corresponding type */
for (i = 0; i < nb_input_streams; i++) {
ist = input_streams[i];
if (ist->user_set_discard == AVDISCARD_ALL)
continue;
if (ist->dec_ctx->codec_type == type && ist->discard)
break;
}
@@ -740,13 +732,6 @@ static int sub2video_prepare(InputStream *ist, InputFilter *ifilter)
if (!ist->sub2video.frame)
return AVERROR(ENOMEM);
ist->sub2video.last_pts = INT64_MIN;
ist->sub2video.end_pts = INT64_MIN;
/* sub2video structure has been (re-)initialized.
Mark it as such so that the system will be
initialized with the first received heartbeat. */
ist->sub2video.initialize = 1;
return 0;
}
@@ -793,9 +778,10 @@ static int configure_input_video_filter(FilterGraph *fg, InputFilter *ifilter,
av_bprint_init(&args, 0, AV_BPRINT_SIZE_AUTOMATIC);
av_bprintf(&args,
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:"
"pixel_aspect=%d/%d",
"pixel_aspect=%d/%d:sws_param=flags=%d",
ifilter->width, ifilter->height, ifilter->format,
tb.num, tb.den, sar.num, sar.den);
tb.num, tb.den, sar.num, sar.den,
SWS_BILINEAR + ((ist->dec_ctx->flags&AV_CODEC_FLAG_BITEXACT) ? SWS_BITEXACT:0));
if (fr.num && fr.den)
av_bprintf(&args, ":frame_rate=%d/%d", fr.num, fr.den);
snprintf(name, sizeof(name), "graph %d input from stream %d:%d", fg->index,
@@ -1062,9 +1048,17 @@ int configure_filtergraph(FilterGraph *fg)
if ((ret = avfilter_graph_parse2(fg->graph, graph_desc, &inputs, &outputs)) < 0)
goto fail;
ret = hw_device_setup_for_filter(fg);
if (ret < 0)
goto fail;
if (filter_hw_device || hw_device_ctx) {
AVBufferRef *device = filter_hw_device ? filter_hw_device->device_ref
: hw_device_ctx;
for (i = 0; i < fg->graph->nb_filters; i++) {
fg->graph->filters[i]->hw_device_ctx = av_buffer_ref(device);
if (!fg->graph->filters[i]->hw_device_ctx) {
ret = AVERROR(ENOMEM);
goto fail;
}
}
}
if (simple && (!inputs || inputs->next || !outputs || outputs->next)) {
const char *num_inputs;
@@ -1167,7 +1161,7 @@ int configure_filtergraph(FilterGraph *fg)
while (av_fifo_size(ist->sub2video.sub_queue)) {
AVSubtitle tmp;
av_fifo_generic_read(ist->sub2video.sub_queue, &tmp, sizeof(tmp), NULL);
sub2video_update(ist, INT64_MIN, &tmp);
sub2video_update(ist, &tmp);
avsubtitle_free(&tmp);
}
}
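For context, a hedged sketch of how an args string like the one assembled in configure_input_video_filter() above is typically handed to a "buffer" source when building a graph; graph and the geometry parameters are assumed inputs:
#include <stdio.h>
#include "libavfilter/avfilter.h"

static int add_buffer_source(AVFilterGraph *graph, AVFilterContext **src_ctx,
                             int width, int height, int pix_fmt,
                             AVRational tb, AVRational sar)
{
    const AVFilter *buffersrc = avfilter_get_by_name("buffer");
    char args[512];

    /* Mirrors the fields written by the diff above (sws_param omitted). */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             width, height, pix_fmt, tb.num, tb.den, sar.num, sar.den);
    return avfilter_graph_create_filter(src_ctx, buffersrc, "in", args, NULL, graph);
}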

fftools/ffmpeg_hw.c
View File

@@ -19,8 +19,6 @@
#include <string.h>
#include "libavutil/avstring.h"
#include "libavutil/pixdesc.h"
#include "libavfilter/buffersink.h"
#include "ffmpeg.h"
@@ -101,7 +99,7 @@ int hw_device_init_from_string(const char *arg, HWDevice **dev_out)
// -> av_hwdevice_ctx_create_derived()
AVDictionary *options = NULL;
const char *type_name = NULL, *name = NULL, *device = NULL;
char *type_name = NULL, *name = NULL, *device = NULL;
enum AVHWDeviceType type;
HWDevice *dev, *src;
AVBufferRef *device_ref = NULL;
@@ -157,12 +155,10 @@ int hw_device_init_from_string(const char *arg, HWDevice **dev_out)
++p;
q = strchr(p, ',');
if (q) {
if (q - p > 0) {
device = av_strndup(p, q - p);
if (!device) {
err = AVERROR(ENOMEM);
goto fail;
}
device = av_strndup(p, q - p);
if (!device) {
err = AVERROR(ENOMEM);
goto fail;
}
err = av_dict_parse_string(&options, q + 1, "=", ",", 0);
if (err < 0) {
@@ -172,8 +168,7 @@ int hw_device_init_from_string(const char *arg, HWDevice **dev_out)
}
err = av_hwdevice_ctx_create(&device_ref, type,
q ? device : p[0] ? p : NULL,
options, 0);
device ? device : p, options, 0);
if (err < 0)
goto fail;
@@ -418,57 +413,18 @@ int hw_device_setup_for_decode(InputStream *ist)
int hw_device_setup_for_encode(OutputStream *ost)
{
const AVCodecHWConfig *config;
HWDevice *dev = NULL;
AVBufferRef *frames_ref = NULL;
int i;
if (ost->filter) {
frames_ref = av_buffersink_get_hw_frames_ctx(ost->filter->filter);
if (frames_ref &&
((AVHWFramesContext*)frames_ref->data)->format ==
ost->enc_ctx->pix_fmt) {
// Matching format, will try to use hw_frames_ctx.
} else {
frames_ref = NULL;
}
}
for (i = 0;; i++) {
config = avcodec_get_hw_config(ost->enc, i);
if (!config)
break;
if (frames_ref &&
config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX &&
(config->pix_fmt == AV_PIX_FMT_NONE ||
config->pix_fmt == ost->enc_ctx->pix_fmt)) {
av_log(ost->enc_ctx, AV_LOG_VERBOSE, "Using input "
"frames context (format %s) with %s encoder.\n",
av_get_pix_fmt_name(ost->enc_ctx->pix_fmt),
ost->enc->name);
ost->enc_ctx->hw_frames_ctx = av_buffer_ref(frames_ref);
if (!ost->enc_ctx->hw_frames_ctx)
return AVERROR(ENOMEM);
return 0;
}
if (!dev &&
config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
dev = hw_device_get_by_type(config->device_type);
}
HWDevice *dev;
dev = hw_device_match_by_codec(ost->enc);
if (dev) {
av_log(ost->enc_ctx, AV_LOG_VERBOSE, "Using device %s "
"(type %s) with %s encoder.\n", dev->name,
av_hwdevice_get_type_name(dev->type), ost->enc->name);
ost->enc_ctx->hw_device_ctx = av_buffer_ref(dev->device_ref);
if (!ost->enc_ctx->hw_device_ctx)
return AVERROR(ENOMEM);
return 0;
} else {
// No device required, or no device available.
return 0;
}
return 0;
}
static int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input)
@@ -521,31 +477,3 @@ int hwaccel_decode_init(AVCodecContext *avctx)
return 0;
}
int hw_device_setup_for_filter(FilterGraph *fg)
{
HWDevice *dev;
int i;
// If the user has supplied exactly one hardware device then just
// give it straight to every filter for convenience. If more than
// one device is available then the user needs to pick one explcitly
// with the filter_hw_device option.
if (filter_hw_device)
dev = filter_hw_device;
else if (nb_hw_devices == 1)
dev = hw_devices[0];
else
dev = NULL;
if (dev) {
for (i = 0; i < fg->graph->nb_filters; i++) {
fg->graph->filters[i]->hw_device_ctx =
av_buffer_ref(dev->device_ref);
if (!fg->graph->filters[i]->hw_device_ctx)
return AVERROR(ENOMEM);
}
}
return 0;
}
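For reference, a minimal sketch of the public probing API that the 4.3 side of hw_device_setup_for_encode() above is built on; codec is an assumed const AVCodec *:
#include "libavcodec/avcodec.h"
#include "libavutil/hwcontext.h"

/* Returns the first device type the encoder can accept via hw_device_ctx,
 * or AV_HWDEVICE_TYPE_NONE when no such configuration is advertised. */
static enum AVHWDeviceType first_device_type(const AVCodec *codec)
{
    for (int i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(codec, i);
        if (!config)
            return AV_HWDEVICE_TYPE_NONE;
        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
            return config->device_type;
    }
}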

fftools/ffmpeg_opt.c
View File

@@ -1,4 +1,3 @@
/*
* ffmpeg option parsing
*
@@ -44,80 +43,16 @@
#define DEFAULT_PASS_LOGFILENAME_PREFIX "ffmpeg2pass"
#define SPECIFIER_OPT_FMT_str "%s"
#define SPECIFIER_OPT_FMT_i "%i"
#define SPECIFIER_OPT_FMT_i64 "%"PRId64
#define SPECIFIER_OPT_FMT_ui64 "%"PRIu64
#define SPECIFIER_OPT_FMT_f "%f"
#define SPECIFIER_OPT_FMT_dbl "%lf"
static const char *opt_name_codec_names[] = {"c", "codec", "acodec", "vcodec", "scodec", "dcodec", NULL};
static const char *opt_name_audio_channels[] = {"ac", NULL};
static const char *opt_name_audio_sample_rate[] = {"ar", NULL};
static const char *opt_name_frame_rates[] = {"r", NULL};
static const char *opt_name_frame_sizes[] = {"s", NULL};
static const char *opt_name_frame_pix_fmts[] = {"pix_fmt", NULL};
static const char *opt_name_ts_scale[] = {"itsscale", NULL};
static const char *opt_name_hwaccels[] = {"hwaccel", NULL};
static const char *opt_name_hwaccel_devices[] = {"hwaccel_device", NULL};
static const char *opt_name_hwaccel_output_formats[] = {"hwaccel_output_format", NULL};
static const char *opt_name_autorotate[] = {"autorotate", NULL};
static const char *opt_name_max_frames[] = {"frames", "aframes", "vframes", "dframes", NULL};
static const char *opt_name_bitstream_filters[] = {"bsf", "absf", "vbsf", NULL};
static const char *opt_name_codec_tags[] = {"tag", "atag", "vtag", "stag", NULL};
static const char *opt_name_sample_fmts[] = {"sample_fmt", NULL};
static const char *opt_name_qscale[] = {"q", "qscale", NULL};
static const char *opt_name_forced_key_frames[] = {"forced_key_frames", NULL};
static const char *opt_name_force_fps[] = {"force_fps", NULL};
static const char *opt_name_frame_aspect_ratios[] = {"aspect", NULL};
static const char *opt_name_rc_overrides[] = {"rc_override", NULL};
static const char *opt_name_intra_matrices[] = {"intra_matrix", NULL};
static const char *opt_name_inter_matrices[] = {"inter_matrix", NULL};
static const char *opt_name_chroma_intra_matrices[] = {"chroma_intra_matrix", NULL};
static const char *opt_name_top_field_first[] = {"top", NULL};
static const char *opt_name_presets[] = {"pre", "apre", "vpre", "spre", NULL};
static const char *opt_name_copy_initial_nonkeyframes[] = {"copyinkfr", NULL};
static const char *opt_name_copy_prior_start[] = {"copypriorss", NULL};
static const char *opt_name_filters[] = {"filter", "af", "vf", NULL};
static const char *opt_name_filter_scripts[] = {"filter_script", NULL};
static const char *opt_name_reinit_filters[] = {"reinit_filter", NULL};
static const char *opt_name_fix_sub_duration[] = {"fix_sub_duration", NULL};
static const char *opt_name_canvas_sizes[] = {"canvas_size", NULL};
static const char *opt_name_pass[] = {"pass", NULL};
static const char *opt_name_passlogfiles[] = {"passlogfile", NULL};
static const char *opt_name_max_muxing_queue_size[] = {"max_muxing_queue_size", NULL};
static const char *opt_name_guess_layout_max[] = {"guess_layout_max", NULL};
static const char *opt_name_apad[] = {"apad", NULL};
static const char *opt_name_discard[] = {"discard", NULL};
static const char *opt_name_disposition[] = {"disposition", NULL};
static const char *opt_name_time_bases[] = {"time_base", NULL};
static const char *opt_name_enc_time_bases[] = {"enc_time_base", NULL};
#define WARN_MULTIPLE_OPT_USAGE(name, type, so, st)\
{\
char namestr[128] = "";\
const char *spec = so->specifier && so->specifier[0] ? so->specifier : "";\
for (i = 0; opt_name_##name[i]; i++)\
av_strlcatf(namestr, sizeof(namestr), "-%s%s", opt_name_##name[i], opt_name_##name[i+1] ? (opt_name_##name[i+2] ? ", " : " or ") : "");\
av_log(NULL, AV_LOG_WARNING, "Multiple %s options specified for stream %d, only the last option '-%s%s%s "SPECIFIER_OPT_FMT_##type"' will be used.\n",\
namestr, st->index, opt_name_##name[0], spec[0] ? ":" : "", spec, so->u.type);\
}
#define MATCH_PER_STREAM_OPT(name, type, outvar, fmtctx, st)\
{\
int i, ret, matches = 0;\
SpecifierOpt *so;\
int i, ret;\
for (i = 0; i < o->nb_ ## name; i++) {\
char *spec = o->name[i].specifier;\
if ((ret = check_stream_specifier(fmtctx, st, spec)) > 0) {\
if ((ret = check_stream_specifier(fmtctx, st, spec)) > 0)\
outvar = o->name[i].u.type;\
so = &o->name[i];\
matches++;\
} else if (ret < 0)\
else if (ret < 0)\
exit_program(1);\
}\
if (matches > 1)\
WARN_MULTIPLE_OPT_USAGE(name, type, so, st);\
}
#define MATCH_PER_TYPE_OPT(name, type, outvar, fmtctx, mediatype)\
@@ -136,9 +71,13 @@ const HWAccel hwaccels[] = {
#endif
#if CONFIG_LIBMFX
{ "qsv", qsv_init, HWACCEL_QSV, AV_PIX_FMT_QSV },
#endif
#if CONFIG_CUVID
{ "cuvid", cuvid_init, HWACCEL_CUVID, AV_PIX_FMT_CUDA },
#endif
{ 0 },
};
AVBufferRef *hw_device_ctx;
HWDevice *filter_hw_device;
char *vstats_filename;
@@ -232,11 +171,14 @@ static void init_options(OptionsContext *o)
static int show_hwaccels(void *optctx, const char *opt, const char *arg)
{
enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
int i;
printf("Hardware acceleration methods:\n");
while ((type = av_hwdevice_iterate_types(type)) !=
AV_HWDEVICE_TYPE_NONE)
printf("%s\n", av_hwdevice_get_type_name(type));
for (i = 0; hwaccels[i].name; i++)
printf("%s\n", hwaccels[i].name);
printf("\n");
return 0;
}
@@ -262,9 +204,8 @@ static AVDictionary *strip_specifiers(AVDictionary *dict)
static int opt_abort_on(void *optctx, const char *opt, const char *arg)
{
static const AVOption opts[] = {
{ "abort_on" , NULL, 0, AV_OPT_TYPE_FLAGS, { .i64 = 0 }, INT64_MIN, INT64_MAX, .unit = "flags" },
{ "empty_output" , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = ABORT_ON_FLAG_EMPTY_OUTPUT }, .unit = "flags" },
{ "empty_output_stream", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM }, .unit = "flags" },
{ "abort_on" , NULL, 0, AV_OPT_TYPE_FLAGS, { .i64 = 0 }, INT64_MIN, INT64_MAX, .unit = "flags" },
{ "empty_output" , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = ABORT_ON_FLAG_EMPTY_OUTPUT }, .unit = "flags" },
{ NULL },
};
static const AVClass class = {
@@ -327,7 +268,7 @@ static int opt_map(void *optctx, const char *opt, const char *arg)
{
OptionsContext *o = optctx;
StreamMap *m = NULL;
int i, negative = 0, file_idx, disabled = 0;
int i, negative = 0, file_idx;
int sync_file_idx = -1, sync_stream_idx = 0;
char *p, *sync;
char *map;
@@ -362,11 +303,6 @@ static int opt_map(void *optctx, const char *opt, const char *arg)
"match any streams.\n", arg);
exit_program(1);
}
if (input_streams[input_files[sync_file_idx]->ist_index + sync_stream_idx]->user_set_discard == AVDISCARD_ALL) {
av_log(NULL, AV_LOG_FATAL, "Sync stream specification in map %s matches a disabled input "
"stream.\n", arg);
exit_program(1);
}
}
@@ -403,10 +339,6 @@ static int opt_map(void *optctx, const char *opt, const char *arg)
if (check_stream_specifier(input_files[file_idx]->ctx, input_files[file_idx]->ctx->streams[i],
*p == ':' ? p + 1 : p) <= 0)
continue;
if (input_streams[input_files[file_idx]->ist_index + i]->user_set_discard == AVDISCARD_ALL) {
disabled = 1;
continue;
}
GROW_ARRAY(o->stream_maps, o->nb_stream_maps);
m = &o->stream_maps[o->nb_stream_maps - 1];
@@ -426,10 +358,6 @@ static int opt_map(void *optctx, const char *opt, const char *arg)
if (!m) {
if (allow_unused) {
av_log(NULL, AV_LOG_VERBOSE, "Stream map '%s' matches no streams; ignoring.\n", arg);
} else if (disabled) {
av_log(NULL, AV_LOG_FATAL, "Stream map '%s' matches disabled streams.\n"
"To ignore this, add a trailing '?' to the map.\n", arg);
exit_program(1);
} else {
av_log(NULL, AV_LOG_FATAL, "Stream map '%s' matches no streams.\n"
"To ignore this, add a trailing '?' to the map.\n", arg);
@@ -509,8 +437,7 @@ static int opt_map_channel(void *optctx, const char *opt, const char *arg)
/* allow trailing ? to map_channel */
if (allow_unused = strchr(mapchan, '?'))
*allow_unused = 0;
if (m->channel_idx < 0 || m->channel_idx >= st->codecpar->channels ||
input_streams[input_files[m->file_idx]->ist_index + m->stream_idx]->user_set_discard == AVDISCARD_ALL) {
if (m->channel_idx < 0 || m->channel_idx >= st->codecpar->channels) {
if (allow_unused) {
av_log(NULL, AV_LOG_VERBOSE, "mapchan: invalid audio channel #%d.%d.%d\n",
m->file_idx, m->stream_idx, m->channel_idx);
@@ -536,15 +463,21 @@ static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
#if CONFIG_VAAPI
static int opt_vaapi_device(void *optctx, const char *opt, const char *arg)
{
HWDevice *dev;
const char *prefix = "vaapi:";
char *tmp;
int err;
tmp = av_asprintf("%s%s", prefix, arg);
if (!tmp)
return AVERROR(ENOMEM);
err = hw_device_init_from_string(tmp, NULL);
err = hw_device_init_from_string(tmp, &dev);
av_free(tmp);
return err;
if (err < 0)
return err;
hw_device_ctx = av_buffer_ref(dev->device_ref);
if (!hw_device_ctx)
return AVERROR(ENOMEM);
return 0;
}
#endif
@@ -813,13 +746,6 @@ static void add_input_streams(OptionsContext *o, AVFormatContext *ic)
MATCH_PER_STREAM_OPT(discard, str, discard_str, ic, st);
ist->user_set_discard = AVDISCARD_NONE;
if ((o->video_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) ||
(o->audio_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) ||
(o->subtitle_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) ||
(o->data_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_DATA))
ist->user_set_discard = AVDISCARD_ALL;
if (discard_str && av_opt_eval_int(&cc, discard_opt, discard_str, &ist->user_set_discard) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error parsing discard %s.\n",
discard_str);
@@ -872,28 +798,9 @@ static void add_input_streams(OptionsContext *o, AVFormatContext *ic)
MATCH_PER_STREAM_OPT(top_field_first, i, ist->top_field_first, ic, st);
MATCH_PER_STREAM_OPT(hwaccels, str, hwaccel, ic, st);
MATCH_PER_STREAM_OPT(hwaccel_output_formats, str,
hwaccel_output_format, ic, st);
if (!hwaccel_output_format && hwaccel && !strcmp(hwaccel, "cuvid")) {
av_log(NULL, AV_LOG_WARNING,
"WARNING: defaulting hwaccel_output_format to cuda for compatibility "
"with old commandlines. This behaviour is DEPRECATED and will be removed "
"in the future. Please explicitly set \"-hwaccel_output_format cuda\".\n");
ist->hwaccel_output_format = AV_PIX_FMT_CUDA;
} else if (hwaccel_output_format) {
ist->hwaccel_output_format = av_get_pix_fmt(hwaccel_output_format);
if (ist->hwaccel_output_format == AV_PIX_FMT_NONE) {
av_log(NULL, AV_LOG_FATAL, "Unrecognised hwaccel output "
"format: %s", hwaccel_output_format);
}
} else {
ist->hwaccel_output_format = AV_PIX_FMT_NONE;
}
if (hwaccel) {
// The NVDEC hwaccels use a CUDA device, so remap the name here.
if (!strcmp(hwaccel, "nvdec") || !strcmp(hwaccel, "cuvid"))
if (!strcmp(hwaccel, "nvdec"))
hwaccel = "cuda";
if (!strcmp(hwaccel, "none"))
@@ -927,6 +834,8 @@ static void add_input_streams(OptionsContext *o, AVFormatContext *ic)
AV_HWDEVICE_TYPE_NONE)
av_log(NULL, AV_LOG_FATAL, "%s ",
av_hwdevice_get_type_name(type));
for (i = 0; hwaccels[i].name; i++)
av_log(NULL, AV_LOG_FATAL, "%s ", hwaccels[i].name);
av_log(NULL, AV_LOG_FATAL, "\n");
exit_program(1);
}
@@ -940,6 +849,18 @@ static void add_input_streams(OptionsContext *o, AVFormatContext *ic)
exit_program(1);
}
MATCH_PER_STREAM_OPT(hwaccel_output_formats, str,
hwaccel_output_format, ic, st);
if (hwaccel_output_format) {
ist->hwaccel_output_format = av_get_pix_fmt(hwaccel_output_format);
if (ist->hwaccel_output_format == AV_PIX_FMT_NONE) {
av_log(NULL, AV_LOG_FATAL, "Unrecognised hwaccel output "
"format: %s", hwaccel_output_format);
}
} else {
ist->hwaccel_output_format = AV_PIX_FMT_NONE;
}
ist->hwaccel_pix_fmt = AV_PIX_FMT_NONE;
break;
@@ -989,7 +910,7 @@ static void assert_file_overwrite(const char *filename)
if (!file_overwrite) {
if (proto_name && !strcmp(proto_name, "file") && avio_check(filename, 0) == 0) {
if (stdin_interaction && !no_file_overwrite) {
fprintf(stderr,"File '%s' already exists. Overwrite? [y/N] ", filename);
fprintf(stderr,"File '%s' already exists. Overwrite ? [y/N] ", filename);
fflush(stderr);
term_exit();
signal(SIGINT, SIG_DFL);
@@ -1529,12 +1450,54 @@ static OutputStream *new_output_stream(OptionsContext *o, AVFormatContext *oc, e
MATCH_PER_STREAM_OPT(copy_prior_start, i, ost->copy_prior_start, oc ,st);
MATCH_PER_STREAM_OPT(bitstream_filters, str, bsfs, oc, st);
if (bsfs && *bsfs) {
ret = av_bsf_list_parse_str(bsfs, &ost->bsf_ctx);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error parsing bitstream filter sequence '%s': %s\n", bsfs, av_err2str(ret));
while (bsfs && *bsfs) {
const AVBitStreamFilter *filter;
char *bsf, *bsf_options_str, *bsf_name;
bsf = av_get_token(&bsfs, ",");
if (!bsf)
exit_program(1);
bsf_name = av_strtok(bsf, "=", &bsf_options_str);
if (!bsf_name)
exit_program(1);
filter = av_bsf_get_by_name(bsf_name);
if (!filter) {
av_log(NULL, AV_LOG_FATAL, "Unknown bitstream filter %s\n", bsf_name);
exit_program(1);
}
ost->bsf_ctx = av_realloc_array(ost->bsf_ctx,
ost->nb_bitstream_filters + 1,
sizeof(*ost->bsf_ctx));
if (!ost->bsf_ctx)
exit_program(1);
ret = av_bsf_alloc(filter, &ost->bsf_ctx[ost->nb_bitstream_filters]);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error allocating a bitstream filter context\n");
exit_program(1);
}
ost->nb_bitstream_filters++;
if (bsf_options_str && filter->priv_class) {
const AVOption *opt = av_opt_next(ost->bsf_ctx[ost->nb_bitstream_filters-1]->priv_data, NULL);
const char * shorthand[2] = {NULL};
if (opt)
shorthand[0] = opt->name;
ret = av_opt_set_from_string(ost->bsf_ctx[ost->nb_bitstream_filters-1]->priv_data, bsf_options_str, shorthand, "=", ":");
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error parsing options for bitstream filter %s\n", bsf_name);
exit_program(1);
}
}
av_freep(&bsf);
if (*bsfs)
bsfs++;
}
MATCH_PER_STREAM_OPT(codec_tags, str, codec_tag, oc, st);
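For reference, a hedged sketch of the list-parsing call used on the 4.3 side above; it accepts the same comma-separated filter list, with per-filter options after '=', that the manual loop tokenizes. st is an assumed AVStream *:
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"

static int open_bsf_chain(const char *bsfs, AVStream *st, AVBSFContext **out)
{
    AVBSFContext *bsf_ctx = NULL;
    int ret;

    /* e.g. bsfs = "h264_mp4toannexb,dump_extra=freq=keyframe" */
    ret = av_bsf_list_parse_str(bsfs, &bsf_ctx);
    if (ret < 0)
        return ret;
    ret = avcodec_parameters_copy(bsf_ctx->par_in, st->codecpar);
    if (ret < 0)
        goto fail;
    ret = av_bsf_init(bsf_ctx);
    if (ret < 0)
        goto fail;
    *out = bsf_ctx;
    return 0;
fail:
    av_bsf_free(&bsf_ctx);
    return ret;
}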
@@ -2210,10 +2173,7 @@ static int open_output_file(OptionsContext *o, const char *filename)
for (i = 0; i < nb_input_streams; i++) {
int new_area;
ist = input_streams[i];
new_area = ist->st->codecpar->width * ist->st->codecpar->height + 100000000*!!ist->st->codec_info_nb_frames
+ 5000000*!!(ist->st->disposition & AV_DISPOSITION_DEFAULT);
if (ist->user_set_discard == AVDISCARD_ALL)
continue;
new_area = ist->st->codecpar->width * ist->st->codecpar->height + 100000000*!!ist->st->codec_info_nb_frames;
if((qcr!=MKTAG('A', 'P', 'I', 'C')) && (ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
new_area = 1;
if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO &&
@@ -2234,10 +2194,7 @@ static int open_output_file(OptionsContext *o, const char *filename)
for (i = 0; i < nb_input_streams; i++) {
int score;
ist = input_streams[i];
score = ist->st->codecpar->channels + 100000000*!!ist->st->codec_info_nb_frames
+ 5000000*!!(ist->st->disposition & AV_DISPOSITION_DEFAULT);
if (ist->user_set_discard == AVDISCARD_ALL)
continue;
score = ist->st->codecpar->channels + 100000000*!!ist->st->codec_info_nb_frames;
if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&
score > best_score) {
best_score = score;
@@ -2259,8 +2216,6 @@ static int open_output_file(OptionsContext *o, const char *filename)
AVCodec const *output_codec =
avcodec_find_encoder(oc->oformat->subtitle_codec);
int input_props = 0, output_props = 0;
if (input_streams[i]->user_set_discard == AVDISCARD_ALL)
continue;
if (output_codec)
output_descriptor = avcodec_descriptor_get(output_codec->id);
if (input_descriptor)
@@ -2282,8 +2237,6 @@ static int open_output_file(OptionsContext *o, const char *filename)
if (!o->data_disable ) {
enum AVCodecID codec_id = av_guess_codec(oc->oformat, NULL, filename, NULL, AVMEDIA_TYPE_DATA);
for (i = 0; codec_id != AV_CODEC_ID_NONE && i < nb_input_streams; i++) {
if (input_streams[i]->user_set_discard == AVDISCARD_ALL)
continue;
if (input_streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_DATA
&& input_streams[i]->st->codecpar->codec_id == codec_id )
new_data_stream(o, oc, i);
@@ -2322,11 +2275,6 @@ loop_end:
int src_idx = input_files[map->file_index]->ist_index + map->stream_index;
ist = input_streams[input_files[map->file_index]->ist_index + map->stream_index];
if (ist->user_set_discard == AVDISCARD_ALL) {
av_log(NULL, AV_LOG_FATAL, "Stream #%d:%d is disabled and cannot be mapped.\n",
map->file_index, map->stream_index);
exit_program(1);
}
if(o->subtitle_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE)
continue;
if(o-> audio_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
@@ -2384,14 +2332,12 @@ loop_end:
o->attachments[i]);
exit_program(1);
}
if (len > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE ||
!(attachment = av_malloc(len + AV_INPUT_BUFFER_PADDING_SIZE))) {
av_log(NULL, AV_LOG_FATAL, "Attachment %s too large.\n",
if (!(attachment = av_malloc(len))) {
av_log(NULL, AV_LOG_FATAL, "Attachment %s too large to fit into memory.\n",
o->attachments[i]);
exit_program(1);
}
avio_read(pb, attachment, len);
memset(attachment + len, 0, AV_INPUT_BUFFER_PADDING_SIZE);
ost = new_attachment_stream(o, oc, -1);
ost->stream_copy = 0;
@@ -2783,14 +2729,13 @@ static int opt_target(void *optctx, const char *opt, const char *arg)
} else {
/* Try to determine PAL/NTSC by peeking in the input files */
if (nb_input_files) {
int i, j;
int i, j, fr;
for (j = 0; j < nb_input_files; j++) {
for (i = 0; i < input_files[j]->nb_streams; i++) {
AVStream *st = input_files[j]->ctx->streams[i];
int64_t fr;
if (st->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
continue;
fr = st->time_base.den * 1000LL / st->time_base.num;
fr = st->time_base.den * 1000 / st->time_base.num;
if (fr == 25000) {
norm = PAL;
break;
@@ -3020,11 +2965,8 @@ static int opt_preset(void *optctx, const char *opt, const char *arg)
static int opt_old2new(void *optctx, const char *opt, const char *arg)
{
OptionsContext *o = optctx;
int ret;
char *s = av_asprintf("%s:%c", opt + 1, *opt);
if (!s)
return AVERROR(ENOMEM);
ret = parse_option(o, s, arg, options);
int ret = parse_option(o, s, arg, options);
av_free(s);
return ret;
}
@@ -3055,8 +2997,6 @@ static int opt_qscale(void *optctx, const char *opt, const char *arg)
return parse_option(o, "q:v", arg, options);
}
s = av_asprintf("q%s", opt + 6);
if (!s)
return AVERROR(ENOMEM);
ret = parse_option(o, s, arg, options);
av_free(s);
return ret;
@@ -3101,11 +3041,8 @@ static int opt_vsync(void *optctx, const char *opt, const char *arg)
static int opt_timecode(void *optctx, const char *opt, const char *arg)
{
OptionsContext *o = optctx;
int ret;
char *tcr = av_asprintf("timecode=%s", arg);
if (!tcr)
return AVERROR(ENOMEM);
ret = parse_option(o, "metadata:g", tcr, options);
int ret = parse_option(o, "metadata:g", tcr, options);
if (ret >= 0)
ret = av_dict_set(&o->g->codec_opts, "gop_timecode", arg, 0);
av_free(tcr);
@@ -3207,7 +3144,7 @@ void show_help_default(const char *opt, const char *arg)
" -h -- print basic options\n"
" -h long -- print more options\n"
" -h full -- print all options (including all format and codec specific options, very long)\n"
" -h type=name -- print all options for the named decoder/encoder/demuxer/muxer/filter/bsf/protocol\n"
" -h type=name -- print all options for the named decoder/encoder/demuxer/muxer/filter/bsf\n"
" See man %s for detailed description of the options.\n"
"\n", program_name);
@@ -3215,7 +3152,7 @@ void show_help_default(const char *opt, const char *arg)
OPT_EXIT, 0, 0);
show_help_options(options, "Global options (affect whole program "
"instead of just one file):",
"instead of just one file:",
0, per_file | OPT_EXIT | OPT_EXPERT, 0);
if (show_advanced)
show_help_options(options, "Advanced global options:", OPT_EXPERT,
@@ -3291,7 +3228,6 @@ static int open_files(OptionGroupList *l, const char *inout,
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error parsing options for %s file "
"%s.\n", inout, g->arg);
uninit_options(&o);
return ret;
}
@@ -3465,7 +3401,7 @@ const OptionDef options[] = {
{ "stdin", OPT_BOOL | OPT_EXPERT, { &stdin_interaction },
"enable or disable interaction on standard input" },
{ "timelimit", HAS_ARG | OPT_EXPERT, { .func_arg = opt_timelimit },
"set max runtime in seconds in CPU user time", "limit" },
"set max runtime in seconds", "limit" },
{ "dump", OPT_BOOL | OPT_EXPERT, { &do_pkt_dump },
"dump each input packet" },
{ "hex", OPT_BOOL | OPT_EXPERT, { &do_hex_dump },

fftools/ffmpeg_qsv.c
View File

@@ -28,7 +28,6 @@
#include "ffmpeg.h"
static AVBufferRef *hw_device_ctx;
char *qsv_device = NULL;
static int qsv_get_buffer(AVCodecContext *s, AVFrame *frame, int flags)

fftools/ffmpeg_videotoolbox.c
View File

@@ -51,12 +51,7 @@ static int videotoolbox_retrieve_data(AVCodecContext *s, AVFrame *frame)
case kCVPixelFormatType_422YpCbCr8: vt->tmp_frame->format = AV_PIX_FMT_UYVY422; break;
case kCVPixelFormatType_32BGRA: vt->tmp_frame->format = AV_PIX_FMT_BGRA; break;
#ifdef kCFCoreFoundationVersionNumber10_7
case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
case kCVPixelFormatType_420YpCbCr8BiPlanarFullRange: vt->tmp_frame->format = AV_PIX_FMT_NV12; break;
#endif
#if HAVE_KCVPIXELFORMATTYPE_420YPCBCR10BIPLANARVIDEORANGE
case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange:
case kCVPixelFormatType_420YpCbCr10BiPlanarFullRange: vt->tmp_frame->format = AV_PIX_FMT_P010; break;
case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: vt->tmp_frame->format = AV_PIX_FMT_NV12; break;
#endif
default:
av_log(NULL, AV_LOG_ERROR,
@@ -67,7 +62,7 @@ static int videotoolbox_retrieve_data(AVCodecContext *s, AVFrame *frame)
vt->tmp_frame->width = frame->width;
vt->tmp_frame->height = frame->height;
ret = av_frame_get_buffer(vt->tmp_frame, 0);
ret = av_frame_get_buffer(vt->tmp_frame, 32);
if (ret < 0)
return ret;

fftools/ffplay.c
View File

@@ -40,7 +40,6 @@
#include "libavutil/samplefmt.h"
#include "libavutil/avassert.h"
#include "libavutil/time.h"
#include "libavutil/bprint.h"
#include "libavformat/avformat.h"
#include "libavdevice/avdevice.h"
#include "libswscale/swscale.h"
@@ -325,9 +324,8 @@ static int seek_by_bytes = -1;
static float seek_interval = 10;
static int display_disable;
static int borderless;
static int alwaysontop;
static int startup_volume = 100;
static int show_status = -1;
static int show_status = 1;
static int av_sync_type = AV_SYNC_AUDIO_MASTER;
static int64_t start_time = AV_NOPTS_VALUE;
static int64_t duration = AV_NOPTS_VALUE;
@@ -355,7 +353,6 @@ static char *afilters = NULL;
#endif
static int autorotate = 1;
static int find_stream_info = 1;
static int filter_nbthreads = 0;
/* current context */
static int is_full_screen;
@@ -645,10 +642,7 @@ static int decoder_decode_frame(Decoder *d, AVFrame *frame, AVSubtitle *sub) {
if (packet_queue_get(d->queue, &pkt, 1, &d->pkt_serial) < 0)
return -1;
}
if (d->queue->serial == d->pkt_serial)
break;
av_packet_unref(&pkt);
} while (1);
} while (d->queue->serial != d->pkt_serial);
if (pkt.data == flush_pkt.data) {
avcodec_flush_buffers(d->avctx);
@@ -867,27 +861,31 @@ static void calculate_display_rect(SDL_Rect *rect,
int scr_xleft, int scr_ytop, int scr_width, int scr_height,
int pic_width, int pic_height, AVRational pic_sar)
{
AVRational aspect_ratio = pic_sar;
int64_t width, height, x, y;
float aspect_ratio;
int width, height, x, y;
if (av_cmp_q(aspect_ratio, av_make_q(0, 1)) <= 0)
aspect_ratio = av_make_q(1, 1);
if (pic_sar.num == 0)
aspect_ratio = 0;
else
aspect_ratio = av_q2d(pic_sar);
aspect_ratio = av_mul_q(aspect_ratio, av_make_q(pic_width, pic_height));
if (aspect_ratio <= 0.0)
aspect_ratio = 1.0;
aspect_ratio *= (float)pic_width / (float)pic_height;
/* XXX: we suppose the screen has a 1.0 pixel ratio */
height = scr_height;
width = av_rescale(height, aspect_ratio.num, aspect_ratio.den) & ~1;
width = lrint(height * aspect_ratio) & ~1;
if (width > scr_width) {
width = scr_width;
height = av_rescale(width, aspect_ratio.den, aspect_ratio.num) & ~1;
height = lrint(width / aspect_ratio) & ~1;
}
x = (scr_width - width) / 2;
y = (scr_height - height) / 2;
rect->x = scr_xleft + x;
rect->y = scr_ytop + y;
rect->w = FFMAX((int)width, 1);
rect->h = FFMAX((int)height, 1);
rect->w = FFMAX(width, 1);
rect->h = FFMAX(height, 1);
}
static void get_sdl_pix_fmt_and_blendmode(int format, Uint32 *sdl_pix_fmt, SDL_BlendMode *sdl_blendmode)
@@ -1328,11 +1326,7 @@ static void sigterm_handler(int sig)
static void set_default_window_size(int width, int height, AVRational sar)
{
SDL_Rect rect;
int max_width = screen_width ? screen_width : INT_MAX;
int max_height = screen_height ? screen_height : INT_MAX;
if (max_width == INT_MAX && max_height == INT_MAX)
max_height = height;
calculate_display_rect(&rect, 0, 0, max_width, max_height, width, height, sar);
calculate_display_rect(&rect, 0, 0, INT_MAX, height, width, height, sar);
default_width = rect.w;
default_height = rect.h;
}
@@ -1341,8 +1335,13 @@ static int video_open(VideoState *is)
{
int w,h;
w = screen_width ? screen_width : default_width;
h = screen_height ? screen_height : default_height;
if (screen_width) {
w = screen_width;
h = screen_height;
} else {
w = default_width;
h = default_height;
}
if (!window_title)
window_title = input_filename;
@@ -1693,7 +1692,6 @@ display:
}
is->force_refresh = 0;
if (show_status) {
AVBPrint buf;
static int64_t last_time;
int64_t cur_time;
int aqsize, vqsize, sqsize;
@@ -1717,28 +1715,18 @@ display:
av_diff = get_master_clock(is) - get_clock(&is->vidclk);
else if (is->audio_st)
av_diff = get_master_clock(is) - get_clock(&is->audclk);
av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
av_bprintf(&buf,
"%7.2f %s:%7.3f fd=%4d aq=%5dKB vq=%5dKB sq=%5dB f=%"PRId64"/%"PRId64" \r",
get_master_clock(is),
(is->audio_st && is->video_st) ? "A-V" : (is->video_st ? "M-V" : (is->audio_st ? "M-A" : " ")),
av_diff,
is->frame_drops_early + is->frame_drops_late,
aqsize / 1024,
vqsize / 1024,
sqsize,
is->video_st ? is->viddec.avctx->pts_correction_num_faulty_dts : 0,
is->video_st ? is->viddec.avctx->pts_correction_num_faulty_pts : 0);
if (show_status == 1 && AV_LOG_INFO > av_log_get_level())
fprintf(stderr, "%s", buf.str);
else
av_log(NULL, AV_LOG_INFO, "%s", buf.str);
fflush(stderr);
av_bprint_finalize(&buf, NULL);
av_log(NULL, AV_LOG_INFO,
"%7.2f %s:%7.3f fd=%4d aq=%5dKB vq=%5dKB sq=%5dB f=%"PRId64"/%"PRId64" \r",
get_master_clock(is),
(is->audio_st && is->video_st) ? "A-V" : (is->video_st ? "M-V" : (is->audio_st ? "M-A" : " ")),
av_diff,
is->frame_drops_early + is->frame_drops_late,
aqsize / 1024,
vqsize / 1024,
sqsize,
is->video_st ? is->viddec.avctx->pts_correction_num_faulty_dts : 0,
is->video_st ? is->viddec.avctx->pts_correction_num_faulty_pts : 0);
fflush(stdout);
last_time = cur_time;
}
}
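For reference, a minimal sketch of the AVBPrint pattern used by the 4.3 side of the status print above; the clock values are assumed doubles:
#include <stdio.h>
#include "libavutil/bprint.h"
#include "libavutil/log.h"

static void print_status_line(double master_clock, double av_diff, int show_status)
{
    AVBPrint buf;

    av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
    av_bprintf(&buf, "%7.2f A-V:%7.3f   \r", master_clock, av_diff);
    /* Write straight to stderr when the status line is wanted but the log
     * level would otherwise swallow it; else go through av_log(). */
    if (show_status == 1 && AV_LOG_INFO > av_log_get_level())
        fprintf(stderr, "%s", buf.str);
    else
        av_log(NULL, AV_LOG_INFO, "%s", buf.str);
    fflush(stderr);
    av_bprint_finalize(&buf, NULL);
}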
@@ -1971,7 +1959,6 @@ static int configure_audio_filters(VideoState *is, const char *afilters, int for
avfilter_graph_free(&is->agraph);
if (!(is->agraph = avfilter_graph_alloc()))
return AVERROR(ENOMEM);
is->agraph->nb_threads = filter_nbthreads;
while ((e = av_dict_get(swr_opts, "", e, AV_DICT_IGNORE_SUFFIX)))
av_strlcatf(aresample_swr_opts, sizeof(aresample_swr_opts), "%s=%s:", e->key, e->value);
@@ -2121,10 +2108,10 @@ static int audio_thread(void *arg)
return ret;
}
static int decoder_start(Decoder *d, int (*fn)(void *), const char *thread_name, void* arg)
static int decoder_start(Decoder *d, int (*fn)(void *), void *arg)
{
packet_queue_start(d->queue);
d->decoder_tid = SDL_CreateThread(fn, thread_name, arg);
d->decoder_tid = SDL_CreateThread(fn, "decoder", arg);
if (!d->decoder_tid) {
av_log(NULL, AV_LOG_ERROR, "SDL_CreateThread(): %s\n", SDL_GetError());
return AVERROR(ENOMEM);
@@ -2143,17 +2130,26 @@ static int video_thread(void *arg)
AVRational frame_rate = av_guess_frame_rate(is->ic, is->video_st, NULL);
#if CONFIG_AVFILTER
AVFilterGraph *graph = NULL;
AVFilterGraph *graph = avfilter_graph_alloc();
AVFilterContext *filt_out = NULL, *filt_in = NULL;
int last_w = 0;
int last_h = 0;
enum AVPixelFormat last_format = -2;
int last_serial = -1;
int last_vfilter_idx = 0;
if (!graph) {
av_frame_free(&frame);
return AVERROR(ENOMEM);
}
#endif
if (!frame)
if (!frame) {
#if CONFIG_AVFILTER
avfilter_graph_free(&graph);
#endif
return AVERROR(ENOMEM);
}
for (;;) {
ret = get_video_frame(is, frame);
@@ -2176,11 +2172,6 @@ static int video_thread(void *arg)
(const char *)av_x_if_null(av_get_pix_fmt_name(frame->format), "none"), is->viddec.pkt_serial);
avfilter_graph_free(&graph);
graph = avfilter_graph_alloc();
if (!graph) {
ret = AVERROR(ENOMEM);
goto the_end;
}
graph->nb_threads = filter_nbthreads;
if ((ret = configure_video_filters(graph, is, vfilters_list ? vfilters_list[is->vfilter_idx] : NULL, frame)) < 0) {
SDL_Event event;
event.type = FF_QUIT_EVENT;
@@ -2690,7 +2681,7 @@ static int stream_component_open(VideoState *is, int stream_index)
is->auddec.start_pts = is->audio_st->start_time;
is->auddec.start_pts_tb = is->audio_st->time_base;
}
if ((ret = decoder_start(&is->auddec, audio_thread, "audio_decoder", is)) < 0)
if ((ret = decoder_start(&is->auddec, audio_thread, is)) < 0)
goto out;
SDL_PauseAudioDevice(audio_dev, 0);
break;
@@ -2699,7 +2690,7 @@ static int stream_component_open(VideoState *is, int stream_index)
is->video_st = ic->streams[stream_index];
decoder_init(&is->viddec, avctx, &is->videoq, is->continue_read_thread);
if ((ret = decoder_start(&is->viddec, video_thread, "video_decoder", is)) < 0)
if ((ret = decoder_start(&is->viddec, video_thread, is)) < 0)
goto out;
is->queue_attachments_req = 1;
break;
@@ -2708,7 +2699,7 @@ static int stream_component_open(VideoState *is, int stream_index)
is->subtitle_st = ic->streams[stream_index];
decoder_init(&is->subdec, avctx, &is->subtitleq, is->continue_read_thread);
if ((ret = decoder_start(&is->subdec, subtitle_thread, "subtitle_decoder", is)) < 0)
if ((ret = decoder_start(&is->subdec, subtitle_thread, is)) < 0)
goto out;
break;
default:
@@ -2775,6 +2766,9 @@ static int read_thread(void *arg)
}
memset(st_index, -1, sizeof(st_index));
is->last_video_stream = is->video_stream = -1;
is->last_audio_stream = is->audio_stream = -1;
is->last_subtitle_stream = is->subtitle_stream = -1;
is->eof = 0;
ic = avformat_alloc_context();
@@ -2986,7 +2980,7 @@ static int read_thread(void *arg)
}
if (is->queue_attachments_req) {
if (is->video_st && is->video_st->disposition & AV_DISPOSITION_ATTACHED_PIC) {
AVPacket copy;
AVPacket copy = { 0 };
if ((ret = av_packet_ref(&copy, &is->video_st->attached_pic)) < 0)
goto fail;
packet_queue_put(&is->videoq, &copy);
@@ -3080,9 +3074,6 @@ static VideoState *stream_open(const char *filename, AVInputFormat *iformat)
is = av_mallocz(sizeof(VideoState));
if (!is)
return NULL;
is->last_video_stream = is->video_stream = -1;
is->last_audio_stream = is->audio_stream = -1;
is->last_subtitle_stream = is->subtitle_stream = -1;
is->filename = av_strdup(filename);
if (!is->filename)
goto fail;
@@ -3451,7 +3442,7 @@ static void event_loop(VideoState *cur_stream)
break;
case SDL_WINDOWEVENT:
switch (event.window.event) {
case SDL_WINDOWEVENT_SIZE_CHANGED:
case SDL_WINDOWEVENT_RESIZED:
screen_width = cur_stream->width = event.window.data1;
screen_height = cur_stream->height = event.window.data2;
if (cur_stream->vis_texture) {
@@ -3597,7 +3588,6 @@ static const OptionDef options[] = {
{ "seek_interval", OPT_FLOAT | HAS_ARG, { &seek_interval }, "set seek interval for left/right keys, in seconds", "seconds" },
{ "nodisp", OPT_BOOL, { &display_disable }, "disable graphical display" },
{ "noborder", OPT_BOOL, { &borderless }, "borderless window" },
{ "alwaysontop", OPT_BOOL, { &alwaysontop }, "window always on top" },
{ "volume", OPT_INT | HAS_ARG, { &startup_volume}, "set startup volume 0=min 100=max", "volume" },
{ "f", HAS_ARG, { .func_arg = opt_format }, "force format", "fmt" },
{ "pix_fmt", HAS_ARG | OPT_EXPERT | OPT_VIDEO, { .func_arg = opt_frame_pix_fmt }, "set pixel format", "format" },
@@ -3631,7 +3621,6 @@ static const OptionDef options[] = {
{ "autorotate", OPT_BOOL, { &autorotate }, "automatically rotate video", "" },
{ "find_stream_info", OPT_BOOL | OPT_INPUT | OPT_EXPERT, { &find_stream_info },
"read and decode the streams to fill missing information with heuristics" },
{ "filter_threads", HAS_ARG | OPT_INT | OPT_EXPERT, { &filter_nbthreads }, "number of filter threads per graph" },
{ NULL, },
};
@@ -3739,12 +3728,6 @@ int main(int argc, char **argv)
if (!display_disable) {
int flags = SDL_WINDOW_HIDDEN;
if (alwaysontop)
#if SDL_VERSION_ATLEAST(2,0,5)
flags |= SDL_WINDOW_ALWAYS_ON_TOP;
#else
av_log(NULL, AV_LOG_WARNING, "Your SDL version doesn't support SDL_WINDOW_ALWAYS_ON_TOP. Feature will be inactive.\n");
#endif
if (borderless)
flags |= SDL_WINDOW_BORDERLESS;
else

fftools/ffprobe.c
View File

@@ -36,7 +36,6 @@
#include "libavutil/display.h"
#include "libavutil/hash.h"
#include "libavutil/mastering_display_metadata.h"
#include "libavutil/dovi_meta.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "libavutil/spherical.h"
@@ -166,8 +165,6 @@ typedef enum {
SECTION_ID_FRAME_TAGS,
SECTION_ID_FRAME_SIDE_DATA_LIST,
SECTION_ID_FRAME_SIDE_DATA,
SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST,
SECTION_ID_FRAME_SIDE_DATA_TIMECODE,
SECTION_ID_FRAME_LOG,
SECTION_ID_FRAME_LOGS,
SECTION_ID_LIBRARY_VERSION,
@@ -212,9 +209,7 @@ static struct section sections[] = {
[SECTION_ID_FRAME] = { SECTION_ID_FRAME, "frame", 0, { SECTION_ID_FRAME_TAGS, SECTION_ID_FRAME_SIDE_DATA_LIST, SECTION_ID_FRAME_LOGS, -1 } },
[SECTION_ID_FRAME_TAGS] = { SECTION_ID_FRAME_TAGS, "tags", SECTION_FLAG_HAS_VARIABLE_FIELDS, { -1 }, .element_name = "tag", .unique_name = "frame_tags" },
[SECTION_ID_FRAME_SIDE_DATA_LIST] ={ SECTION_ID_FRAME_SIDE_DATA_LIST, "side_data_list", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA, -1 }, .element_name = "side_data", .unique_name = "frame_side_data_list" },
[SECTION_ID_FRAME_SIDE_DATA] = { SECTION_ID_FRAME_SIDE_DATA, "side_data", 0, { SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST, -1 } },
[SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST] = { SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST, "timecodes", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_SIDE_DATA_TIMECODE, -1 } },
[SECTION_ID_FRAME_SIDE_DATA_TIMECODE] = { SECTION_ID_FRAME_SIDE_DATA_TIMECODE, "timecode", 0, { -1 } },
[SECTION_ID_FRAME_SIDE_DATA] = { SECTION_ID_FRAME_SIDE_DATA, "side_data", 0, { -1 } },
[SECTION_ID_FRAME_LOGS] = { SECTION_ID_FRAME_LOGS, "logs", SECTION_FLAG_IS_ARRAY, { SECTION_ID_FRAME_LOG, -1 } },
[SECTION_ID_FRAME_LOG] = { SECTION_ID_FRAME_LOG, "log", 0, { -1 }, },
[SECTION_ID_LIBRARY_VERSIONS] = { SECTION_ID_LIBRARY_VERSIONS, "library_versions", SECTION_FLAG_IS_ARRAY, { SECTION_ID_LIBRARY_VERSION, -1 } },
@@ -255,7 +250,6 @@ static const OptionDef *options;
/* FFprobe context */
static const char *input_filename;
static const char *print_input_filename;
static AVInputFormat *iformat = NULL;
static struct AVHashContext *hash;
@@ -1085,12 +1079,12 @@ typedef struct CompactContext {
#define OFFSET(x) offsetof(CompactContext, x)
static const AVOption compact_options[]= {
{"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, 0, 0 },
{"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, 0, 0 },
{"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, CHAR_MIN, CHAR_MAX },
{"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str="|"}, CHAR_MIN, CHAR_MAX },
{"nokey", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 },
{"nk", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1 },
{"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, 0, 0 },
{"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, 0, 0 },
{"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, CHAR_MIN, CHAR_MAX },
{"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="c"}, CHAR_MIN, CHAR_MAX },
{"print_section", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{"p", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{NULL},
@@ -1201,12 +1195,12 @@ static const Writer compact_writer = {
#define OFFSET(x) offsetof(CompactContext, x)
static const AVOption csv_options[] = {
{"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, 0, 0 },
{"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, 0, 0 },
{"item_sep", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, CHAR_MIN, CHAR_MAX },
{"s", "set item separator", OFFSET(item_sep_str), AV_OPT_TYPE_STRING, {.str=","}, CHAR_MIN, CHAR_MAX },
{"nokey", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{"nk", "force no key printing", OFFSET(nokey), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, 0, 0 },
{"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, 0, 0 },
{"escape", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, CHAR_MIN, CHAR_MAX },
{"e", "set escape mode", OFFSET(escape_mode_str), AV_OPT_TYPE_STRING, {.str="csv"}, CHAR_MIN, CHAR_MAX },
{"print_section", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{"p", "print section name", OFFSET(print_section), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{NULL},
@@ -1239,8 +1233,8 @@ typedef struct FlatContext {
#define OFFSET(x) offsetof(FlatContext, x)
static const AVOption flat_options[]= {
{"sep_char", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, 0, 0 },
{"s", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, 0, 0 },
{"sep_char", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, CHAR_MIN, CHAR_MAX },
{"s", "set separator", OFFSET(sep_str), AV_OPT_TYPE_STRING, {.str="."}, CHAR_MIN, CHAR_MAX },
{"hierarchical", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{"h", "specify if the section specification should be hierarchical", OFFSET(hierarchical), AV_OPT_TYPE_BOOL, {.i64=1}, 0, 1 },
{NULL},
@@ -1537,7 +1531,7 @@ static void json_print_section_header(WriterContext *wctx)
if (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES) {
if (!json->compact)
JSON_INDENT();
printf("\"type\": \"%s\"", section->name);
printf("\"type\": \"%s\"%s", section->name, json->item_sep);
}
}
av_bprint_finalize(&buf, NULL);
@@ -1581,10 +1575,8 @@ static inline void json_print_item_str(WriterContext *wctx,
static void json_print_str(WriterContext *wctx, const char *key, const char *value)
{
JSONContext *json = wctx->priv;
const struct section *parent_section = wctx->level ?
wctx->section[wctx->level-1] : NULL;
if (wctx->nb_item[wctx->level] || (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES))
if (wctx->nb_item[wctx->level])
printf("%s", json->item_sep);
if (!json->compact)
JSON_INDENT();
@@ -1594,11 +1586,9 @@ static void json_print_str(WriterContext *wctx, const char *key, const char *val
static void json_print_int(WriterContext *wctx, const char *key, long long int value)
{
JSONContext *json = wctx->priv;
const struct section *parent_section = wctx->level ?
wctx->section[wctx->level-1] : NULL;
AVBPrint buf;
if (wctx->nb_item[wctx->level] || (parent_section && parent_section->id == SECTION_ID_PACKETS_AND_FRAMES))
if (wctx->nb_item[wctx->level])
printf("%s", json->item_sep);
if (!json->compact)
JSON_INDENT();
@@ -1929,16 +1919,6 @@ static void print_pkt_side_data(WriterContext *w,
AVContentLightMetadata *metadata = (AVContentLightMetadata *)sd->data;
print_int("max_content", metadata->MaxCLL);
print_int("max_average", metadata->MaxFALL);
} else if (sd->type == AV_PKT_DATA_DOVI_CONF) {
AVDOVIDecoderConfigurationRecord *dovi = (AVDOVIDecoderConfigurationRecord *)sd->data;
print_int("dv_version_major", dovi->dv_version_major);
print_int("dv_version_minor", dovi->dv_version_minor);
print_int("dv_profile", dovi->dv_profile);
print_int("dv_level", dovi->dv_level);
print_int("rpu_present_flag", dovi->rpu_present_flag);
print_int("el_present_flag", dovi->el_present_flag);
print_int("bl_present_flag", dovi->bl_present_flag);
print_int("dv_bl_signal_compatibility_id", dovi->dv_bl_signal_compatibility_id);
}
writer_print_section_footer(w);
}
@@ -2219,18 +2199,6 @@ static void show_frame(WriterContext *w, AVFrame *frame, AVStream *stream,
char tcbuf[AV_TIMECODE_STR_SIZE];
av_timecode_make_mpeg_tc_string(tcbuf, *(int64_t *)(sd->data));
print_str("timecode", tcbuf);
} else if (sd->type == AV_FRAME_DATA_S12M_TIMECODE && sd->size == 16) {
uint32_t *tc = (uint32_t*)sd->data;
int m = FFMIN(tc[0],3);
writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_TIMECODE_LIST);
for (int j = 1; j <= m ; j++) {
char tcbuf[AV_TIMECODE_STR_SIZE];
av_timecode_make_smpte_tc_string(tcbuf, tc[j], 0);
writer_print_section_header(w, SECTION_ID_FRAME_SIDE_DATA_TIMECODE);
print_str("value", tcbuf);
writer_print_section_footer(w);
}
writer_print_section_footer(w);
} else if (sd->type == AV_FRAME_DATA_MASTERING_DISPLAY_METADATA) {
AVMasteringDisplayMetadata *metadata = (AVMasteringDisplayMetadata *)sd->data;
@@ -2445,7 +2413,9 @@ static int read_interval_packets(WriterContext *w, InputFile *ifile,
}
av_packet_unref(&pkt);
}
av_packet_unref(&pkt);
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
//Flush remaining frames that are cached in the decoder
for (i = 0; i < fmt_ctx->nb_streams; i++) {
pkt.stream_index = i;
@@ -2547,7 +2517,6 @@ static int show_stream(WriterContext *w, AVFormatContext *fmt_ctx, int stream_id
if (dec_ctx) {
print_int("coded_width", dec_ctx->coded_width);
print_int("coded_height", dec_ctx->coded_height);
print_int("closed_captions", !!(dec_ctx->properties & FF_CODEC_PROPERTY_CLOSED_CAPTIONS));
}
#endif
print_int("has_b_frames", par->video_delay);
@@ -2677,20 +2646,20 @@ static int show_stream(WriterContext *w, AVFormatContext *fmt_ctx, int stream_id
} while (0)
if (do_show_stream_disposition) {
writer_print_section_header(w, in_program ? SECTION_ID_PROGRAM_STREAM_DISPOSITION : SECTION_ID_STREAM_DISPOSITION);
PRINT_DISPOSITION(DEFAULT, "default");
PRINT_DISPOSITION(DUB, "dub");
PRINT_DISPOSITION(ORIGINAL, "original");
PRINT_DISPOSITION(COMMENT, "comment");
PRINT_DISPOSITION(LYRICS, "lyrics");
PRINT_DISPOSITION(KARAOKE, "karaoke");
PRINT_DISPOSITION(FORCED, "forced");
PRINT_DISPOSITION(HEARING_IMPAIRED, "hearing_impaired");
PRINT_DISPOSITION(VISUAL_IMPAIRED, "visual_impaired");
PRINT_DISPOSITION(CLEAN_EFFECTS, "clean_effects");
PRINT_DISPOSITION(ATTACHED_PIC, "attached_pic");
PRINT_DISPOSITION(TIMED_THUMBNAILS, "timed_thumbnails");
writer_print_section_footer(w);
writer_print_section_header(w, in_program ? SECTION_ID_PROGRAM_STREAM_DISPOSITION : SECTION_ID_STREAM_DISPOSITION);
PRINT_DISPOSITION(DEFAULT, "default");
PRINT_DISPOSITION(DUB, "dub");
PRINT_DISPOSITION(ORIGINAL, "original");
PRINT_DISPOSITION(COMMENT, "comment");
PRINT_DISPOSITION(LYRICS, "lyrics");
PRINT_DISPOSITION(KARAOKE, "karaoke");
PRINT_DISPOSITION(FORCED, "forced");
PRINT_DISPOSITION(HEARING_IMPAIRED, "hearing_impaired");
PRINT_DISPOSITION(VISUAL_IMPAIRED, "visual_impaired");
PRINT_DISPOSITION(CLEAN_EFFECTS, "clean_effects");
PRINT_DISPOSITION(ATTACHED_PIC, "attached_pic");
PRINT_DISPOSITION(TIMED_THUMBNAILS, "timed_thumbnails");
writer_print_section_footer(w);
}
if (do_show_stream_tags)
@@ -2849,8 +2818,7 @@ static void show_error(WriterContext *w, int err)
writer_print_section_footer(w);
}
static int open_input_file(InputFile *ifile, const char *filename,
const char *print_filename)
static int open_input_file(InputFile *ifile, const char *filename)
{
int err, i;
AVFormatContext *fmt_ctx = NULL;
@@ -2872,10 +2840,6 @@ static int open_input_file(InputFile *ifile, const char *filename,
print_error(filename, err);
return err;
}
if (print_filename) {
av_freep(&fmt_ctx->url);
fmt_ctx->url = av_strdup(print_filename);
}
ifile->fmt_ctx = fmt_ctx;
if (scan_all_pmts_set)
av_dict_set(&format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE);
@@ -2952,7 +2916,6 @@ static int open_input_file(InputFile *ifile, const char *filename,
ist->dec_ctx->pkt_timebase = stream->time_base;
ist->dec_ctx->framerate = stream->avg_frame_rate;
#if FF_API_LAVF_AVCTX
ist->dec_ctx->properties = stream->codec->properties;
ist->dec_ctx->coded_width = stream->codec->coded_width;
ist->dec_ctx->coded_height = stream->codec->coded_height;
#endif
@@ -2990,8 +2953,7 @@ static void close_input_file(InputFile *ifile)
avformat_close_input(&ifile->fmt_ctx);
}
static int probe_file(WriterContext *wctx, const char *filename,
const char *print_filename)
static int probe_file(WriterContext *wctx, const char *filename)
{
InputFile ifile = { 0 };
int ret, i;
@@ -3000,7 +2962,7 @@ static int probe_file(WriterContext *wctx, const char *filename,
do_read_frames = do_show_frames || do_count_frames;
do_read_packets = do_show_packets || do_count_packets;
ret = open_input_file(&ifile, filename, print_filename);
ret = open_input_file(&ifile, filename);
if (ret < 0)
goto end;
@@ -3306,12 +3268,6 @@ static int opt_input_file_i(void *optctx, const char *opt, const char *arg)
return 0;
}
static int opt_print_filename(void *optctx, const char *opt, const char *arg)
{
print_input_filename = arg;
return 0;
}
void show_help_default(const char *opt, const char *arg)
{
av_log_set_callback(log_callback_help);
@@ -3501,7 +3457,7 @@ static int opt_sections(void *optctx, const char *opt, const char *arg)
return 0;
}
static int opt_show_versions(void *optctx, const char *opt, const char *arg)
static int opt_show_versions(const char *opt, const char *arg)
{
mark_section_show_entries(SECTION_ID_PROGRAM_VERSION, 1, NULL);
mark_section_show_entries(SECTION_ID_LIBRARY_VERSION, 1, NULL);
@@ -3509,7 +3465,7 @@ static int opt_show_versions(void *optctx, const char *opt, const char *arg)
}
#define DEFINE_OPT_SHOW_SECTION(section, target_section_id) \
static int opt_show_##section(void *optctx, const char *opt, const char *arg) \
static int opt_show_##section(const char *opt, const char *arg) \
{ \
mark_section_show_entries(SECTION_ID_##target_section_id, 1, NULL); \
return 0; \
@@ -3537,40 +3493,39 @@ static const OptionDef real_options[] = {
"use sexagesimal format HOURS:MM:SS.MICROSECONDS for time units" },
{ "pretty", 0, {.func_arg = opt_pretty},
"prettify the format of displayed values, make it more human readable" },
{ "print_format", OPT_STRING | HAS_ARG, { &print_format },
{ "print_format", OPT_STRING | HAS_ARG, {(void*)&print_format},
"set the output printing format (available formats are: default, compact, csv, flat, ini, json, xml)", "format" },
{ "of", OPT_STRING | HAS_ARG, { &print_format }, "alias for -print_format", "format" },
{ "select_streams", OPT_STRING | HAS_ARG, { &stream_specifier }, "select the specified streams", "stream_specifier" },
{ "of", OPT_STRING | HAS_ARG, {(void*)&print_format}, "alias for -print_format", "format" },
{ "select_streams", OPT_STRING | HAS_ARG, {(void*)&stream_specifier}, "select the specified streams", "stream_specifier" },
{ "sections", OPT_EXIT, {.func_arg = opt_sections}, "print sections structure and section information, and exit" },
{ "show_data", OPT_BOOL, { &do_show_data }, "show packets data" },
{ "show_data_hash", OPT_STRING | HAS_ARG, { &show_data_hash }, "show packets data hash" },
{ "show_error", 0, { .func_arg = &opt_show_error }, "show probing error" },
{ "show_format", 0, { .func_arg = &opt_show_format }, "show format/container info" },
{ "show_frames", 0, { .func_arg = &opt_show_frames }, "show frames info" },
{ "show_data", OPT_BOOL, {(void*)&do_show_data}, "show packets data" },
{ "show_data_hash", OPT_STRING | HAS_ARG, {(void*)&show_data_hash}, "show packets data hash" },
{ "show_error", 0, {(void*)&opt_show_error}, "show probing error" },
{ "show_format", 0, {(void*)&opt_show_format}, "show format/container info" },
{ "show_frames", 0, {(void*)&opt_show_frames}, "show frames info" },
{ "show_format_entry", HAS_ARG, {.func_arg = opt_show_format_entry},
"show a particular entry from the format/container info", "entry" },
{ "show_entries", HAS_ARG, {.func_arg = opt_show_entries},
"show a set of specified entries", "entry_list" },
#if HAVE_THREADS
{ "show_log", OPT_INT|HAS_ARG, { &do_show_log }, "show log" },
{ "show_log", OPT_INT|HAS_ARG, {(void*)&do_show_log}, "show log" },
#endif
{ "show_packets", 0, { .func_arg = &opt_show_packets }, "show packets info" },
{ "show_programs", 0, { .func_arg = &opt_show_programs }, "show programs info" },
{ "show_streams", 0, { .func_arg = &opt_show_streams }, "show streams info" },
{ "show_chapters", 0, { .func_arg = &opt_show_chapters }, "show chapters info" },
{ "count_frames", OPT_BOOL, { &do_count_frames }, "count the number of frames per stream" },
{ "count_packets", OPT_BOOL, { &do_count_packets }, "count the number of packets per stream" },
{ "show_program_version", 0, { .func_arg = &opt_show_program_version }, "show ffprobe version" },
{ "show_library_versions", 0, { .func_arg = &opt_show_library_versions }, "show library versions" },
{ "show_versions", 0, { .func_arg = &opt_show_versions }, "show program and library versions" },
{ "show_pixel_formats", 0, { .func_arg = &opt_show_pixel_formats }, "show pixel format descriptions" },
{ "show_private_data", OPT_BOOL, { &show_private_data }, "show private data" },
{ "private", OPT_BOOL, { &show_private_data }, "same as show_private_data" },
{ "show_packets", 0, {(void*)&opt_show_packets}, "show packets info" },
{ "show_programs", 0, {(void*)&opt_show_programs}, "show programs info" },
{ "show_streams", 0, {(void*)&opt_show_streams}, "show streams info" },
{ "show_chapters", 0, {(void*)&opt_show_chapters}, "show chapters info" },
{ "count_frames", OPT_BOOL, {(void*)&do_count_frames}, "count the number of frames per stream" },
{ "count_packets", OPT_BOOL, {(void*)&do_count_packets}, "count the number of packets per stream" },
{ "show_program_version", 0, {(void*)&opt_show_program_version}, "show ffprobe version" },
{ "show_library_versions", 0, {(void*)&opt_show_library_versions}, "show library versions" },
{ "show_versions", 0, {(void*)&opt_show_versions}, "show program and library versions" },
{ "show_pixel_formats", 0, {(void*)&opt_show_pixel_formats}, "show pixel format descriptions" },
{ "show_private_data", OPT_BOOL, {(void*)&show_private_data}, "show private data" },
{ "private", OPT_BOOL, {(void*)&show_private_data}, "same as show_private_data" },
{ "bitexact", OPT_BOOL, {&do_bitexact}, "force bitexact output" },
{ "read_intervals", HAS_ARG, {.func_arg = opt_read_intervals}, "set read intervals", "read_intervals" },
{ "default", HAS_ARG | OPT_AUDIO | OPT_VIDEO | OPT_EXPERT, {.func_arg = opt_default}, "generic catch all option", "" },
{ "i", HAS_ARG, {.func_arg = opt_input_file_i}, "read specified file", "input_file"},
{ "print_filename", HAS_ARG, {.func_arg = opt_print_filename}, "override the printed input filename", "print_file"},
{ "find_stream_info", OPT_BOOL | OPT_INPUT | OPT_EXPERT, { &find_stream_info },
"read and decode the streams to fill missing information with heuristics" },
{ NULL, },
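Note: much of the option-table churn above is only about how the argument union is initialized. A designated initializer names the union member being set, while the (void*) cast form relies on the destination pointer being the first member. The hypothetical, self-contained table below (OptDef, do_foo and show_bar are illustrative names, not the real cmdutils types) shows the difference.

#include <stddef.h>

/* Hypothetical option entry, for illustration only. */
typedef struct OptDef {
    const char *name;
    int flags;
    union {
        void *dst_ptr;                                        /* option writes a variable */
        int (*func_arg)(void *, const char *, const char *);  /* option runs a callback   */
    } u;
    const char *help;
} OptDef;

static int do_foo;
static int show_bar(void *optctx, const char *opt, const char *arg) { return 0; }

static const OptDef opts[] = {
    { "foo", 0, { &do_foo },              "first union member is a pointer, so this works" },
    { "bar", 0, { .func_arg = show_bar }, "designated form states which member is meant"   },
    { NULL },
};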
@@ -3719,7 +3674,7 @@ int main(int argc, char **argv)
av_log(NULL, AV_LOG_ERROR, "Use -h to get full help or, even better, run 'man %s'.\n", program_name);
ret = AVERROR(EINVAL);
} else if (input_filename) {
ret = probe_file(wctx, input_filename, print_input_filename);
ret = probe_file(wctx, input_filename);
if (ret < 0 && do_show_error)
show_error(wctx, ret);
}


@@ -145,7 +145,7 @@ typedef struct FourXContext {
int mv[256];
VLC pre_vlc;
int last_dc;
DECLARE_ALIGNED(32, int16_t, block)[6][64];
DECLARE_ALIGNED(16, int16_t, block)[6][64];
void *bitstream_buffer;
unsigned int bitstream_buffer_size;
int version;
@@ -158,7 +158,7 @@ typedef struct FourXContext {
#define FIX_1_847759065 121095
#define FIX_2_613125930 171254
#define MULTIPLY(var, const) ((int)((var) * (unsigned)(const)) >> 16)
#define MULTIPLY(var, const) (((var) * (const)) >> 16)
static void idct(int16_t block[64])
{
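Note: the two MULTIPLY definitions above differ only in the (unsigned) cast. With plain int operands a product such as coefficient * FIX_1_847759065 can exceed INT_MAX, which is undefined behaviour in C; the unsigned multiply is well defined (it wraps), and only the final narrowing back to int is implementation-defined. A standalone sketch with made-up inputs:

#include <stdio.h>

#define MULT_PLAIN(var, c)    (((var) * (c)) >> 16)                 /* int multiply: may overflow (UB) */
#define MULT_UNSIGNED(var, c) ((int)((var) * (unsigned)(c)) >> 16)  /* unsigned multiply: wraps        */

int main(void)
{
    /* FIX_1_847759065 == 121095 in the table above; 10000 is a made-up input
     * small enough that both variants agree. */
    printf("%d\n", MULT_UNSIGNED(10000, 121095));   /* prints 18477, i.e. ~10000 * 1.847759 */
    /* With an input of 70000 the plain variant would multiply past INT_MAX,
     * which is undefined; the unsigned variant stays defined at the C level. */
    return 0;
}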
@@ -351,8 +351,6 @@ static int decode_p_block(FourXContext *f, uint16_t *dst, const uint16_t *src,
index = size2index[log2h][log2w];
av_assert0(index >= 0);
if (get_bits_left(&f->gb) < 1)
return AVERROR_INVALIDDATA;
h = 1 << log2h;
code = get_vlc2(&f->gb, block_type_vlc[1 - (f->version > 1)][index].table,
BLOCK_TYPE_VLC_BITS, 1);
@@ -525,10 +523,6 @@ static int decode_i_block(FourXContext *f, int16_t *block)
break;
if (code == 0xf0) {
i += 16;
if (i >= 64) {
av_log(f->avctx, AV_LOG_ERROR, "run %d overflow\n", i);
return 0;
}
} else {
if (code & 0xf) {
level = get_xbits(&f->gb, code & 0xf);
@@ -703,7 +697,6 @@ static const uint8_t *read_huffman_tables(FourXContext *f,
len_tab[j] = len;
}
ff_free_vlc(&f->pre_vlc);
if (init_vlc(&f->pre_vlc, ACDC_VLC_BITS, 257, len_tab, 1, 1,
bits_tab, 4, 4, 0))
return NULL;


@@ -164,7 +164,8 @@ static av_cold int eightsvx_decode_init(AVCodecContext *avctx)
case AV_CODEC_ID_8SVX_FIB: esc->table = fibonacci; break;
case AV_CODEC_ID_8SVX_EXP: esc->table = exponential; break;
default:
av_assert1(0);
av_log(avctx, AV_LOG_ERROR, "Invalid codec id %d.\n", avctx->codec->id);
return AVERROR_INVALIDDATA;
}
avctx->sample_fmt = AV_SAMPLE_FMT_U8P;


@@ -6,18 +6,12 @@ HEADERS = ac3_parser.h \
avcodec.h \
avdct.h \
avfft.h \
bsf.h \
codec.h \
codec_desc.h \
codec_id.h \
codec_par.h \
d3d11va.h \
dirac.h \
dv_profile.h \
dxva2.h \
jni.h \
mediacodec.h \
packet.h \
qsv.h \
vaapi.h \
vdpau.h \
@@ -173,16 +167,12 @@ OBJS-$(CONFIG_AAC_ENCODER) += aacenc.o aaccoder.o aacenctab.o \
aacenc_ltp.o \
aacenc_pred.o \
psymodel.o mpeg4audio.o kbdwin.o cbrt_data.o
OBJS-$(CONFIG_AAC_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_AASC_DECODER) += aasc.o msrledec.o
OBJS-$(CONFIG_AC3_DECODER) += ac3dec_float.o ac3dec_data.o ac3.o kbdwin.o ac3tab.o
OBJS-$(CONFIG_AC3_FIXED_DECODER) += ac3dec_fixed.o ac3dec_data.o ac3.o kbdwin.o ac3tab.o
OBJS-$(CONFIG_AC3_ENCODER) += ac3enc_float.o ac3enc.o ac3tab.o \
ac3.o kbdwin.o
OBJS-$(CONFIG_AC3_FIXED_ENCODER) += ac3enc_fixed.o ac3enc.o ac3tab.o ac3.o
OBJS-$(CONFIG_AC3_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_ACELP_KELVIN_DECODER) += g729dec.o lsp.o celp_math.o celp_filters.o acelp_filters.o acelp_pitch_delay.o acelp_vectors.o g729postfilter.o
OBJS-$(CONFIG_AGM_DECODER) += agm.o
OBJS-$(CONFIG_AIC_DECODER) += aic.o
OBJS-$(CONFIG_ALAC_DECODER) += alac.o alac_data.o alacdsp.o
OBJS-$(CONFIG_ALAC_ENCODER) += alacenc.o alac_data.o
@@ -202,13 +192,12 @@ OBJS-$(CONFIG_AMV_ENCODER) += mjpegenc.o mjpegenc_common.o \
OBJS-$(CONFIG_ANM_DECODER) += anm.o
OBJS-$(CONFIG_ANSI_DECODER) += ansi.o cga_data.o
OBJS-$(CONFIG_APE_DECODER) += apedec.o
OBJS-$(CONFIG_APTX_DECODER) += aptxdec.o aptx.o
OBJS-$(CONFIG_APTX_ENCODER) += aptxenc.o aptx.o
OBJS-$(CONFIG_APTX_HD_DECODER) += aptxdec.o aptx.o
OBJS-$(CONFIG_APTX_HD_ENCODER) += aptxenc.o aptx.o
OBJS-$(CONFIG_APTX_DECODER) += aptx.o
OBJS-$(CONFIG_APTX_ENCODER) += aptx.o
OBJS-$(CONFIG_APTX_HD_DECODER) += aptx.o
OBJS-$(CONFIG_APTX_HD_ENCODER) += aptx.o
OBJS-$(CONFIG_APNG_DECODER) += png.o pngdec.o pngdsp.o
OBJS-$(CONFIG_APNG_ENCODER) += png.o pngenc.o
OBJS-$(CONFIG_ARBC_DECODER) += arbc.o
OBJS-$(CONFIG_SSA_DECODER) += assdec.o ass.o
OBJS-$(CONFIG_SSA_ENCODER) += assenc.o ass.o
OBJS-$(CONFIG_ASS_DECODER) += assdec.o ass.o
@@ -250,9 +239,8 @@ OBJS-$(CONFIG_BRENDER_PIX_DECODER) += brenderpix.o
OBJS-$(CONFIG_C93_DECODER) += c93.o
OBJS-$(CONFIG_CAVS_DECODER) += cavs.o cavsdec.o cavsdsp.o \
cavsdata.o
OBJS-$(CONFIG_CCAPTION_DECODER) += ccaption_dec.o ass.o
OBJS-$(CONFIG_CCAPTION_DECODER) += ccaption_dec.o
OBJS-$(CONFIG_CDGRAPHICS_DECODER) += cdgraphics.o
OBJS-$(CONFIG_CDTOONS_DECODER) += cdtoons.o
OBJS-$(CONFIG_CDXL_DECODER) += cdxl.o
OBJS-$(CONFIG_CFHD_DECODER) += cfhd.o cfhddata.o
OBJS-$(CONFIG_CINEPAK_DECODER) += cinepak.o
@@ -273,7 +261,6 @@ OBJS-$(CONFIG_DCA_DECODER) += dcadec.o dca.o dcadata.o dcahuff.o \
OBJS-$(CONFIG_DCA_ENCODER) += dcaenc.o dca.o dcadata.o dcahuff.o \
dcaadpcm.o
OBJS-$(CONFIG_DDS_DECODER) += dds.o
OBJS-$(CONFIG_DERF_DPCM_DECODER) += dpcm.o
OBJS-$(CONFIG_DIRAC_DECODER) += diracdec.o dirac.o diracdsp.o diractab.o \
dirac_arith.o dirac_dwt.o dirac_vlc.o
OBJS-$(CONFIG_DFA_DECODER) += dfa.o
@@ -292,8 +279,8 @@ OBJS-$(CONFIG_DSS_SP_DECODER) += dss_sp.o
OBJS-$(CONFIG_DST_DECODER) += dstdec.o dsd.o
OBJS-$(CONFIG_DVBSUB_DECODER) += dvbsubdec.o
OBJS-$(CONFIG_DVBSUB_ENCODER) += dvbsub.o
OBJS-$(CONFIG_DVDSUB_DECODER) += dvdsubdec.o dvdsub.o
OBJS-$(CONFIG_DVDSUB_ENCODER) += dvdsubenc.o dvdsub.o
OBJS-$(CONFIG_DVDSUB_DECODER) += dvdsubdec.o
OBJS-$(CONFIG_DVDSUB_ENCODER) += dvdsubenc.o
OBJS-$(CONFIG_DVAUDIO_DECODER) += dvaudiodec.o
OBJS-$(CONFIG_DVVIDEO_DECODER) += dvdec.o dv.o dvdata.o
OBJS-$(CONFIG_DVVIDEO_ENCODER) += dvenc.o dv.o dvdata.o
@@ -361,7 +348,6 @@ OBJS-$(CONFIG_H264_DECODER) += h264dec.o h264_cabac.o h264_cavlc.o \
OBJS-$(CONFIG_H264_AMF_ENCODER) += amfenc_h264.o
OBJS-$(CONFIG_H264_CUVID_DECODER) += cuviddec.o
OBJS-$(CONFIG_H264_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_H264_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_H264_MMAL_DECODER) += mmaldec.o
OBJS-$(CONFIG_H264_NVENC_ENCODER) += nvenc_h264.o
OBJS-$(CONFIG_NVENC_ENCODER) += nvenc_h264.o
@@ -376,15 +362,12 @@ OBJS-$(CONFIG_H264_V4L2M2M_DECODER) += v4l2_m2m_dec.o
OBJS-$(CONFIG_H264_V4L2M2M_ENCODER) += v4l2_m2m_enc.o
OBJS-$(CONFIG_HAP_DECODER) += hapdec.o hap.o
OBJS-$(CONFIG_HAP_ENCODER) += hapenc.o hap.o
OBJS-$(CONFIG_HCA_DECODER) += hcadec.o
OBJS-$(CONFIG_HCOM_DECODER) += hcom.o
OBJS-$(CONFIG_HEVC_DECODER) += hevcdec.o hevc_mvs.o \
hevc_cabac.o hevc_refs.o hevcpred.o \
hevcdsp.o hevc_filter.o hevc_data.o
OBJS-$(CONFIG_HEVC_AMF_ENCODER) += amfenc_hevc.o
OBJS-$(CONFIG_HEVC_CUVID_DECODER) += cuviddec.o
OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_HEVC_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_HEVC_NVENC_ENCODER) += nvenc_hevc.o
OBJS-$(CONFIG_NVENC_HEVC_ENCODER) += nvenc_hevc.o
OBJS-$(CONFIG_HEVC_QSV_DECODER) += qsvdec_h2645.o
@@ -400,14 +383,12 @@ OBJS-$(CONFIG_HQ_HQA_DECODER) += hq_hqa.o hq_hqadata.o hq_hqadsp.o \
OBJS-$(CONFIG_HQX_DECODER) += hqx.o hqxvlc.o hqxdsp.o canopus.o
OBJS-$(CONFIG_HUFFYUV_DECODER) += huffyuv.o huffyuvdec.o
OBJS-$(CONFIG_HUFFYUV_ENCODER) += huffyuv.o huffyuvenc.o
OBJS-$(CONFIG_HYMT_DECODER) += huffyuv.o huffyuvdec.o
OBJS-$(CONFIG_IDCIN_DECODER) += idcinvideo.o
OBJS-$(CONFIG_IDF_DECODER) += bintext.o cga_data.o
OBJS-$(CONFIG_IFF_ILBM_DECODER) += iff.o
OBJS-$(CONFIG_ILBC_DECODER) += ilbcdec.o
OBJS-$(CONFIG_IMC_DECODER) += imc.o
OBJS-$(CONFIG_IMM4_DECODER) += imm4.o
OBJS-$(CONFIG_IMM5_DECODER) += imm5.o
OBJS-$(CONFIG_INDEO2_DECODER) += indeo2.o
OBJS-$(CONFIG_INDEO3_DECODER) += indeo3.o
OBJS-$(CONFIG_INDEO4_DECODER) += indeo4.o ivi.o
@@ -428,7 +409,6 @@ OBJS-$(CONFIG_KMVC_DECODER) += kmvc.o
OBJS-$(CONFIG_LAGARITH_DECODER) += lagarith.o lagarithrac.o
OBJS-$(CONFIG_LJPEG_ENCODER) += ljpegenc.o mjpegenc_common.o
OBJS-$(CONFIG_LOCO_DECODER) += loco.o
OBJS-$(CONFIG_LSCR_DECODER) += png.o pngdec.o pngdsp.o
OBJS-$(CONFIG_M101_DECODER) += m101.o
OBJS-$(CONFIG_MACE3_DECODER) += mace.o
OBJS-$(CONFIG_MACE6_DECODER) += mace.o
@@ -440,7 +420,6 @@ OBJS-$(CONFIG_METASOUND_DECODER) += metasound.o metasound_data.o \
OBJS-$(CONFIG_MICRODVD_DECODER) += microdvddec.o ass.o
OBJS-$(CONFIG_MIMIC_DECODER) += mimic.o
OBJS-$(CONFIG_MJPEG_DECODER) += mjpegdec.o
OBJS-$(CONFIG_MJPEG_QSV_DECODER) += qsvdec_other.o
OBJS-$(CONFIG_MJPEG_ENCODER) += mjpegenc.o mjpegenc_common.o \
mjpegenc_huffman.o
OBJS-$(CONFIG_MJPEGB_DECODER) += mjpegbdec.o
@@ -462,7 +441,6 @@ OBJS-$(CONFIG_MP2FIXED_ENCODER) += mpegaudioenc_fixed.o mpegaudio.o \
mpegaudiodata.o mpegaudiodsp_data.o
OBJS-$(CONFIG_MP2FLOAT_DECODER) += mpegaudiodec_float.o
OBJS-$(CONFIG_MP3_DECODER) += mpegaudiodec_fixed.o
OBJS-$(CONFIG_MP3_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_MP3ADU_DECODER) += mpegaudiodec_fixed.o
OBJS-$(CONFIG_MP3ADUFLOAT_DECODER) += mpegaudiodec_float.o
OBJS-$(CONFIG_MP3FLOAT_DECODER) += mpegaudiodec_float.o
@@ -505,23 +483,18 @@ OBJS-$(CONFIG_MSVIDEO1_DECODER) += msvideo1.o
OBJS-$(CONFIG_MSVIDEO1_ENCODER) += msvideo1enc.o elbg.o
OBJS-$(CONFIG_MSZH_DECODER) += lcldec.o
OBJS-$(CONFIG_MTS2_DECODER) += mss4.o
OBJS-$(CONFIG_MV30_DECODER) += mv30.o
OBJS-$(CONFIG_MVC1_DECODER) += mvcdec.o
OBJS-$(CONFIG_MVC2_DECODER) += mvcdec.o
OBJS-$(CONFIG_MVDV_DECODER) += midivid.o
OBJS-$(CONFIG_MVHA_DECODER) += mvha.o
OBJS-$(CONFIG_MWSC_DECODER) += mwsc.o
OBJS-$(CONFIG_MXPEG_DECODER) += mxpegdec.o
OBJS-$(CONFIG_NELLYMOSER_DECODER) += nellymoserdec.o nellymoser.o
OBJS-$(CONFIG_NELLYMOSER_ENCODER) += nellymoserenc.o nellymoser.o
OBJS-$(CONFIG_NOTCHLC_DECODER) += notchlc.o
OBJS-$(CONFIG_NUV_DECODER) += nuv.o rtjpeg.o
OBJS-$(CONFIG_ON2AVC_DECODER) += on2avc.o on2avcdata.o
OBJS-$(CONFIG_OPUS_DECODER) += opusdec.o opus.o opus_celt.o opus_rc.o \
opus_pvq.o opus_silk.o opustab.o vorbis_data.o \
opusdsp.o
opus_pvq.o opus_silk.o opustab.o vorbis_data.o
OBJS-$(CONFIG_OPUS_ENCODER) += opusenc.o opus.o opus_rc.o opustab.o opus_pvq.o \
opusenc_psy.o vorbis_data.o
opusenc_psy.o
OBJS-$(CONFIG_PAF_AUDIO_DECODER) += pafaudio.o
OBJS-$(CONFIG_PAF_VIDEO_DECODER) += pafvideo.o
OBJS-$(CONFIG_PAM_DECODER) += pnmdec.o pnm.o
@@ -530,7 +503,6 @@ OBJS-$(CONFIG_PBM_DECODER) += pnmdec.o pnm.o
OBJS-$(CONFIG_PBM_ENCODER) += pnmenc.o
OBJS-$(CONFIG_PCX_DECODER) += pcx.o
OBJS-$(CONFIG_PCX_ENCODER) += pcxenc.o
OBJS-$(CONFIG_PFM_DECODER) += pnmdec.o pnm.o
OBJS-$(CONFIG_PGM_DECODER) += pnmdec.o pnm.o
OBJS-$(CONFIG_PGM_ENCODER) += pnmenc.o
OBJS-$(CONFIG_PGMYUV_DECODER) += pnmdec.o pnm.o
@@ -600,14 +572,13 @@ OBJS-$(CONFIG_SIPR_DECODER) += sipr.o acelp_pitch_delay.o \
celp_math.o acelp_vectors.o \
acelp_filters.o celp_filters.o \
sipr16k.o
OBJS-$(CONFIG_SIREN_DECODER) += siren.o
OBJS-$(CONFIG_SMACKAUD_DECODER) += smacker.o
OBJS-$(CONFIG_SMACKER_DECODER) += smacker.o
OBJS-$(CONFIG_SMC_DECODER) += smc.o
OBJS-$(CONFIG_SMVJPEG_DECODER) += smvjpegdec.o
OBJS-$(CONFIG_SNOW_DECODER) += snowdec.o snow.o snow_dwt.o
OBJS-$(CONFIG_SNOW_ENCODER) += snowenc.o snow.o snow_dwt.o \
h263.o h263data.o ituh263enc.o
h263.o ituh263enc.o
OBJS-$(CONFIG_SOL_DPCM_DECODER) += dpcm.o
OBJS-$(CONFIG_SONIC_DECODER) += sonic.o
OBJS-$(CONFIG_SONIC_ENCODER) += sonic.o
@@ -627,10 +598,10 @@ OBJS-$(CONFIG_SUNRAST_ENCODER) += sunrastenc.o
OBJS-$(CONFIG_LIBRSVG_DECODER) += librsvgdec.o
OBJS-$(CONFIG_SBC_DECODER) += sbcdec.o sbcdec_data.o sbc.o
OBJS-$(CONFIG_SBC_ENCODER) += sbcenc.o sbc.o sbcdsp.o sbcdsp_data.o
OBJS-$(CONFIG_SVQ1_DECODER) += svq1dec.o svq1.o h263data.o
OBJS-$(CONFIG_SVQ1_DECODER) += svq1dec.o svq1.o svq13.o h263data.o
OBJS-$(CONFIG_SVQ1_ENCODER) += svq1enc.o svq1.o h263data.o \
h263.o ituh263enc.o
OBJS-$(CONFIG_SVQ3_DECODER) += svq3.o mpegutils.o h264data.o
OBJS-$(CONFIG_SVQ3_DECODER) += svq3.o svq13.o mpegutils.o h264data.o
OBJS-$(CONFIG_TEXT_DECODER) += textdec.o ass.o
OBJS-$(CONFIG_TEXT_ENCODER) += srtenc.o ass_split.o
OBJS-$(CONFIG_TAK_DECODER) += takdec.o tak.o takdsp.o
@@ -639,7 +610,7 @@ OBJS-$(CONFIG_TARGA_ENCODER) += targaenc.o rle.o
OBJS-$(CONFIG_TARGA_Y216_DECODER) += targa_y216dec.o
OBJS-$(CONFIG_TDSC_DECODER) += tdsc.o
OBJS-$(CONFIG_TIERTEXSEQVIDEO_DECODER) += tiertexseqv.o
OBJS-$(CONFIG_TIFF_DECODER) += tiff.o lzw.o faxcompr.o tiff_data.o tiff_common.o mjpegdec.o
OBJS-$(CONFIG_TIFF_DECODER) += tiff.o lzw.o faxcompr.o tiff_data.o tiff_common.o
OBJS-$(CONFIG_TIFF_ENCODER) += tiffenc.o rle.o lzwenc.o tiff_data.o
OBJS-$(CONFIG_TMV_DECODER) += tmv.o cga_data.o
OBJS-$(CONFIG_TRUEHD_DECODER) += mlpdec.o mlpdsp.o
@@ -705,11 +676,10 @@ OBJS-$(CONFIG_VP9_CUVID_DECODER) += cuviddec.o
OBJS-$(CONFIG_VP9_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_VP9_RKMPP_DECODER) += rkmppdec.o
OBJS-$(CONFIG_VP9_VAAPI_ENCODER) += vaapi_encode_vp9.o
OBJS-$(CONFIG_VP9_QSV_ENCODER) += qsvenc_vp9.o
OBJS-$(CONFIG_VPLAYER_DECODER) += textdec.o ass.o
OBJS-$(CONFIG_VP9_V4L2M2M_DECODER) += v4l2_m2m_dec.o
OBJS-$(CONFIG_VQA_DECODER) += vqavideo.o
OBJS-$(CONFIG_WAVPACK_DECODER) += wavpack.o dsd.o
OBJS-$(CONFIG_WAVPACK_DECODER) += wavpack.o
OBJS-$(CONFIG_WAVPACK_ENCODER) += wavpackenc.o
OBJS-$(CONFIG_WCMV_DECODER) += wcmv.o
OBJS-$(CONFIG_WEBP_DECODER) += webp.o
@@ -725,7 +695,7 @@ OBJS-$(CONFIG_WMAVOICE_DECODER) += wmavoice.o \
celp_filters.o \
acelp_vectors.o acelp_filters.o
OBJS-$(CONFIG_WMV1_DECODER) += msmpeg4dec.o msmpeg4.o msmpeg4data.o
OBJS-$(CONFIG_WMV1_ENCODER) += msmpeg4enc.o msmpeg4.o msmpeg4data.o
OBJS-$(CONFIG_WMV1_ENCODER) += msmpeg4enc.o
OBJS-$(CONFIG_WMV2_DECODER) += wmv2dec.o wmv2.o wmv2data.o \
msmpeg4dec.o msmpeg4.o msmpeg4data.o
OBJS-$(CONFIG_WMV2_ENCODER) += wmv2enc.o wmv2.o wmv2data.o \
@@ -767,7 +737,6 @@ OBJS-$(CONFIG_PCM_ALAW_DECODER) += pcm.o
OBJS-$(CONFIG_PCM_ALAW_ENCODER) += pcm.o
OBJS-$(CONFIG_PCM_BLURAY_DECODER) += pcm-bluray.o
OBJS-$(CONFIG_PCM_DVD_DECODER) += pcm-dvd.o
OBJS-$(CONFIG_PCM_DVD_ENCODER) += pcm-dvdenc.o
OBJS-$(CONFIG_PCM_F16LE_DECODER) += pcm.o
OBJS-$(CONFIG_PCM_F24LE_DECODER) += pcm.o
OBJS-$(CONFIG_PCM_F32BE_DECODER) += pcm.o
@@ -827,14 +796,13 @@ OBJS-$(CONFIG_PCM_U32LE_DECODER) += pcm.o
OBJS-$(CONFIG_PCM_U32LE_ENCODER) += pcm.o
OBJS-$(CONFIG_PCM_VIDC_DECODER) += pcm.o
OBJS-$(CONFIG_PCM_VIDC_ENCODER) += pcm.o
OBJS-$(CONFIG_PCM_ZORK_DECODER) += pcm.o
OBJS-$(CONFIG_ADPCM_4XM_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_ADX_DECODER) += adxdec.o adx.o
OBJS-$(CONFIG_ADPCM_ADX_ENCODER) += adxenc.o adx.o
OBJS-$(CONFIG_ADPCM_AFC_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_AGM_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_AICA_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_ARGO_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_CT_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_DTK_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_EA_DECODER) += adpcm.o adpcm_data.o
@@ -850,23 +818,17 @@ OBJS-$(CONFIG_ADPCM_G726_ENCODER) += g726.o
OBJS-$(CONFIG_ADPCM_G726LE_DECODER) += g726.o
OBJS-$(CONFIG_ADPCM_G726LE_ENCODER) += g726.o
OBJS-$(CONFIG_ADPCM_IMA_AMV_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_ALP_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_APC_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_APM_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_CUNNING_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_DAT4_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_DK3_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_DK4_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_EA_EACS_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_EA_SEAD_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_ISS_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_MTF_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_OKI_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_QT_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_QT_ENCODER) += adpcmenc.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_RAD_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_SSI_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_SSI_ENCODER) += adpcmenc.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_SMJPEG_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_WAV_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_IMA_WAV_ENCODER) += adpcmenc.o adpcm_data.o
@@ -886,7 +848,6 @@ OBJS-$(CONFIG_ADPCM_VIMA_DECODER) += vima.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_XA_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_YAMAHA_DECODER) += adpcm.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_YAMAHA_ENCODER) += adpcmenc.o adpcm_data.o
OBJS-$(CONFIG_ADPCM_ZORK_DECODER) += adpcm.o adpcm_data.o
# hardware accelerators
OBJS-$(CONFIG_D3D11VA) += dxva2.o
@@ -909,7 +870,7 @@ OBJS-$(CONFIG_HEVC_D3D11VA_HWACCEL) += dxva2_hevc.o
OBJS-$(CONFIG_HEVC_DXVA2_HWACCEL) += dxva2_hevc.o
OBJS-$(CONFIG_HEVC_NVDEC_HWACCEL) += nvdec_hevc.o
OBJS-$(CONFIG_HEVC_QSV_HWACCEL) += qsvdec_h2645.o
OBJS-$(CONFIG_HEVC_VAAPI_HWACCEL) += vaapi_hevc.o h265_profile_level.o
OBJS-$(CONFIG_HEVC_VAAPI_HWACCEL) += vaapi_hevc.o
OBJS-$(CONFIG_HEVC_VDPAU_HWACCEL) += vdpau_hevc.o
OBJS-$(CONFIG_MJPEG_NVDEC_HWACCEL) += nvdec_mjpeg.o
OBJS-$(CONFIG_MJPEG_VAAPI_HWACCEL) += vaapi_mjpeg.o
@@ -941,7 +902,6 @@ OBJS-$(CONFIG_VP9_D3D11VA_HWACCEL) += dxva2_vp9.o
OBJS-$(CONFIG_VP9_DXVA2_HWACCEL) += dxva2_vp9.o
OBJS-$(CONFIG_VP9_NVDEC_HWACCEL) += nvdec_vp9.o
OBJS-$(CONFIG_VP9_VAAPI_HWACCEL) += vaapi_vp9.o
OBJS-$(CONFIG_VP9_VDPAU_HWACCEL) += vdpau_vp9.o
OBJS-$(CONFIG_VP8_QSV_HWACCEL) += qsvdec_other.o
# libavformat dependencies
@@ -993,11 +953,9 @@ OBJS-$(CONFIG_PCM_ALAW_AT_ENCODER) += audiotoolboxenc.o
OBJS-$(CONFIG_PCM_MULAW_AT_ENCODER) += audiotoolboxenc.o
OBJS-$(CONFIG_LIBAOM_AV1_DECODER) += libaomdec.o
OBJS-$(CONFIG_LIBAOM_AV1_ENCODER) += libaomenc.o
OBJS-$(CONFIG_LIBARIBB24_DECODER) += libaribb24.o ass.o
OBJS-$(CONFIG_LIBCELT_DECODER) += libcelt_dec.o
OBJS-$(CONFIG_LIBCODEC2_DECODER) += libcodec2.o codec2utils.o
OBJS-$(CONFIG_LIBCODEC2_ENCODER) += libcodec2.o codec2utils.o
OBJS-$(CONFIG_LIBDAV1D_DECODER) += libdav1d.o
OBJS-$(CONFIG_LIBDAVS2_DECODER) += libdavs2.o
OBJS-$(CONFIG_LIBFDK_AAC_DECODER) += libfdk-aacdec.o
OBJS-$(CONFIG_LIBFDK_AAC_ENCODER) += libfdk-aacenc.o
@@ -1020,7 +978,6 @@ OBJS-$(CONFIG_LIBOPUS_DECODER) += libopusdec.o libopus.o \
vorbis_data.o
OBJS-$(CONFIG_LIBOPUS_ENCODER) += libopusenc.o libopus.o \
vorbis_data.o
OBJS-$(CONFIG_LIBRAV1E_ENCODER) += librav1e.o
OBJS-$(CONFIG_LIBSHINE_ENCODER) += libshine.o
OBJS-$(CONFIG_LIBSPEEX_DECODER) += libspeexdec.o
OBJS-$(CONFIG_LIBSPEEX_ENCODER) += libspeexenc.o
@@ -1066,20 +1023,18 @@ OBJS-$(CONFIG_DVD_NAV_PARSER) += dvd_nav_parser.o
OBJS-$(CONFIG_DVDSUB_PARSER) += dvdsub_parser.o
OBJS-$(CONFIG_FLAC_PARSER) += flac_parser.o flacdata.o flac.o \
vorbis_data.o
OBJS-$(CONFIG_G723_1_PARSER) += g723_1_parser.o
OBJS-$(CONFIG_G729_PARSER) += g729_parser.o
OBJS-$(CONFIG_GIF_PARSER) += gif_parser.o
OBJS-$(CONFIG_GSM_PARSER) += gsm_parser.o
OBJS-$(CONFIG_H261_PARSER) += h261_parser.o
OBJS-$(CONFIG_H263_PARSER) += h263_parser.o
OBJS-$(CONFIG_H264_PARSER) += h264_parser.o h264_sei.o h264data.o
OBJS-$(CONFIG_HEVC_PARSER) += hevc_parser.o hevc_data.o
OBJS-$(CONFIG_JPEG2000_PARSER) += jpeg2000_parser.o
OBJS-$(CONFIG_MJPEG_PARSER) += mjpeg_parser.o
OBJS-$(CONFIG_MLP_PARSER) += mlp_parse.o mlp_parser.o mlp.o
OBJS-$(CONFIG_MLP_PARSER) += mlp_parser.o mlp.o
OBJS-$(CONFIG_MPEG4VIDEO_PARSER) += mpeg4video_parser.o h263.o \
mpeg4videodec.o mpeg4video.o \
ituh263dec.o h263dec.o h263data.o
OBJS-$(CONFIG_PNG_PARSER) += png_parser.o
OBJS-$(CONFIG_MPEGAUDIO_PARSER) += mpegaudio_parser.o
OBJS-$(CONFIG_MPEGVIDEO_PARSER) += mpegvideo_parser.o \
mpeg12.o mpeg12data.o
@@ -1097,14 +1052,11 @@ OBJS-$(CONFIG_VC1_PARSER) += vc1_parser.o vc1.o vc1data.o \
OBJS-$(CONFIG_VP3_PARSER) += vp3_parser.o
OBJS-$(CONFIG_VP8_PARSER) += vp8_parser.o
OBJS-$(CONFIG_VP9_PARSER) += vp9_parser.o
OBJS-$(CONFIG_WEBP_PARSER) += webp_parser.o
OBJS-$(CONFIG_XMA_PARSER) += xma_parser.o
# bitstream filters
OBJS-$(CONFIG_AAC_ADTSTOASC_BSF) += aac_adtstoasc_bsf.o mpeg4audio.o
OBJS-$(CONFIG_AV1_METADATA_BSF) += av1_metadata_bsf.o
OBJS-$(CONFIG_AV1_FRAME_MERGE_BSF) += av1_frame_merge_bsf.o
OBJS-$(CONFIG_AV1_FRAME_SPLIT_BSF) += av1_frame_split_bsf.o
OBJS-$(CONFIG_CHOMP_BSF) += chomp_bsf.o
OBJS-$(CONFIG_DUMP_EXTRADATA_BSF) += dump_extradata_bsf.o
OBJS-$(CONFIG_DCA_CORE_BSF) += dca_core_bsf.o
@@ -1116,7 +1068,7 @@ OBJS-$(CONFIG_H264_METADATA_BSF) += h264_metadata_bsf.o h264_levels.o
OBJS-$(CONFIG_H264_MP4TOANNEXB_BSF) += h264_mp4toannexb_bsf.o
OBJS-$(CONFIG_H264_REDUNDANT_PPS_BSF) += h264_redundant_pps_bsf.o
OBJS-$(CONFIG_HAPQA_EXTRACT_BSF) += hapqa_extract_bsf.o hap.o
OBJS-$(CONFIG_HEVC_METADATA_BSF) += h265_metadata_bsf.o h265_profile_level.o
OBJS-$(CONFIG_HEVC_METADATA_BSF) += h265_metadata_bsf.o
OBJS-$(CONFIG_HEVC_MP4TOANNEXB_BSF) += hevc_mp4toannexb_bsf.o
OBJS-$(CONFIG_IMX_DUMP_HEADER_BSF) += imx_dump_header_bsf.o
OBJS-$(CONFIG_MJPEG2JPEG_BSF) += mjpeg2jpeg_bsf.o
@@ -1128,13 +1080,9 @@ OBJS-$(CONFIG_MP3_HEADER_DECOMPRESS_BSF) += mp3_header_decompress_bsf.o \
OBJS-$(CONFIG_MPEG2_METADATA_BSF) += mpeg2_metadata_bsf.o
OBJS-$(CONFIG_NOISE_BSF) += noise_bsf.o
OBJS-$(CONFIG_NULL_BSF) += null_bsf.o
OBJS-$(CONFIG_OPUS_METADATA_BSF) += opus_metadata_bsf.o
OBJS-$(CONFIG_PCM_RECHUNK_BSF) += pcm_rechunk_bsf.o
OBJS-$(CONFIG_PRORES_METADATA_BSF) += prores_metadata_bsf.o
OBJS-$(CONFIG_REMOVE_EXTRADATA_BSF) += remove_extradata_bsf.o
OBJS-$(CONFIG_TEXT2MOVSUB_BSF) += movsub_bsf.o
OBJS-$(CONFIG_TRACE_HEADERS_BSF) += trace_headers_bsf.o
OBJS-$(CONFIG_TRUEHD_CORE_BSF) += truehd_core_bsf.o mlp_parse.o mlp.o
OBJS-$(CONFIG_VP9_METADATA_BSF) += vp9_metadata_bsf.o
OBJS-$(CONFIG_VP9_RAW_REORDER_BSF) += vp9_raw_reorder_bsf.o
OBJS-$(CONFIG_VP9_SUPERFRAME_BSF) += vp9_superframe_bsf.o
@@ -1167,7 +1115,6 @@ SKIPHEADERS-$(CONFIG_JNI) += ffjni.h
SKIPHEADERS-$(CONFIG_LIBVPX) += libvpx.h
SKIPHEADERS-$(CONFIG_LIBWEBP_ENCODER) += libwebpenc_common.h
SKIPHEADERS-$(CONFIG_MEDIACODEC) += mediacodecdec_common.h mediacodec_surface.h mediacodec_wrapper.h mediacodec_sw_buffer.h
SKIPHEADERS-$(CONFIG_MEDIAFOUNDATION) += mf_utils.h
SKIPHEADERS-$(CONFIG_NVDEC) += nvdec.h
SKIPHEADERS-$(CONFIG_NVENC) += nvenc.h
SKIPHEADERS-$(CONFIG_QSV) += qsv.h qsv_internal.h
@@ -1199,7 +1146,6 @@ TESTPROGS-$(CONFIG_IIRFILTER) += iirfilter
TESTPROGS-$(HAVE_MMX) += motion
TESTPROGS-$(CONFIG_MPEGVIDEO) += mpeg12framerate
TESTPROGS-$(CONFIG_H264_METADATA_BSF) += h264_levels
TESTPROGS-$(CONFIG_HEVC_METADATA_BSF) += h265_levels
TESTPROGS-$(CONFIG_RANGECODER) += rangecoder
TESTPROGS-$(CONFIG_SNOW_ENCODER) += snowenc


@@ -60,11 +60,11 @@ typedef struct A64Context {
} A64Context;
/* gray gradient */
static const uint8_t mc_colors[5]={0x0,0xb,0xc,0xf,0x1};
static const int mc_colors[5]={0x0,0xb,0xc,0xf,0x1};
/* other possible gradients - to be tested */
//static const uint8_t mc_colors[5]={0x0,0x8,0xa,0xf,0x7};
//static const uint8_t mc_colors[5]={0x0,0x9,0x8,0xa,0x3};
//static const int mc_colors[5]={0x0,0x8,0xa,0xf,0x7};
//static const int mc_colors[5]={0x0,0x9,0x8,0xa,0x3};
static void to_meta_with_crop(AVCodecContext *avctx,
const AVFrame *p, int *dest)


@@ -356,7 +356,7 @@ struct AACContext {
OutputConfiguration oc[2];
int warned_num_aac_frames;
int warned_960_sbr;
unsigned warned_71_wide;
int warned_gain_control;
/* aacdec functions pointers */
@@ -368,7 +368,7 @@ struct AACContext {
INTFLOAT *in, IndividualChannelStream *ics);
void (*update_ltp)(AACContext *ac, SingleChannelElement *sce);
void (*vector_pow43)(int *coefs, int len);
void (*subband_scale)(int *dst, int *src, int scale, int offset, int len, void *log_context);
void (*subband_scale)(int *dst, int *src, int scale, int offset, int len);
};


@@ -21,8 +21,8 @@
#include "adts_header.h"
#include "adts_parser.h"
#include "avcodec.h"
#include "bsf.h"
#include "bsf_internal.h"
#include "put_bits.h"
#include "get_bits.h"
#include "mpeg4audio.h"
@@ -134,8 +134,8 @@ static int aac_adtstoasc_init(AVBSFContext *ctx)
/* Validate the extradata if the stream is already MPEG-4 AudioSpecificConfig */
if (ctx->par_in->extradata) {
MPEG4AudioConfig mp4ac;
int ret = avpriv_mpeg4audio_get_config2(&mp4ac, ctx->par_in->extradata,
ctx->par_in->extradata_size, 1, ctx);
int ret = avpriv_mpeg4audio_get_config(&mp4ac, ctx->par_in->extradata,
ctx->par_in->extradata_size * 8, 1);
if (ret < 0) {
av_log(ctx, AV_LOG_ERROR, "Error parsing AudioSpecificConfig extradata!\n");
return ret;


@@ -247,12 +247,14 @@ static void apply_independent_coupling(AACContext *ac,
SingleChannelElement *target,
ChannelElement *cce, int index)
{
int i;
const float gain = cce->coup.gain[index][0];
const float *src = cce->ch[0].ret;
float *dest = target->ret;
const int len = 1024 << (ac->oc[1].m4ac.sbr == 1);
ac->fdsp->vector_fmac_scalar(dest, src, gain, len);
for (i = 0; i < len; i++)
dest[i] += gain * src[i];
}
#include "aacdec_template.c"
@@ -409,8 +411,6 @@ static int read_stream_mux_config(struct LATMContext *latmctx,
} else {
int esc;
do {
if (get_bits_left(gb) < 9)
return AVERROR_INVALIDDATA;
esc = get_bits(gb, 1);
skip_bits(gb, 8);
} while (esc);
@@ -561,7 +561,7 @@ AVCodec ff_aac_decoder = {
AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_NONE
},
.capabilities = AV_CODEC_CAP_CHANNEL_CONF | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_INIT_CLEANUP,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
.channel_layouts = aac_channel_layout,
.flush = flush,
.priv_class = &aac_decoder_class,
@@ -586,7 +586,7 @@ AVCodec ff_aac_latm_decoder = {
AV_SAMPLE_FMT_FLTP, AV_SAMPLE_FMT_NONE
},
.capabilities = AV_CODEC_CAP_CHANNEL_CONF | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_INIT_CLEANUP,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
.channel_layouts = aac_channel_layout,
.flush = flush,
.profiles = NULL_IF_CONFIG_SMALL(ff_aac_profiles),


@@ -162,7 +162,7 @@ static void vector_pow43(int *coefs, int len)
}
}
static void subband_scale(int *dst, int *src, int scale, int offset, int len, void *log_context)
static void subband_scale(int *dst, int *src, int scale, int offset, int len)
{
int ssign = scale < 0 ? -1 : 1;
int s = FFABS(scale);
@@ -189,18 +189,18 @@ static void subband_scale(int *dst, int *src, int scale, int offset, int len, vo
dst[i] = out * (unsigned)ssign;
}
} else {
av_log(log_context, AV_LOG_ERROR, "Overflow in subband_scale()\n");
av_log(NULL, AV_LOG_ERROR, "Overflow in subband_scale()\n");
}
}
static void noise_scale(int *coefs, int scale, int band_energy, int len)
{
int s = -scale;
int ssign = scale < 0 ? -1 : 1;
int s = FFABS(scale);
unsigned int round;
int i, out, c = exp2tab[s & 3];
int nlz = 0;
av_assert0(s >= 0);
while (band_energy > 0x7fff) {
band_energy >>= 1;
nlz++;
@@ -216,20 +216,15 @@ static void noise_scale(int *coefs, int scale, int band_energy, int len)
round = s ? 1 << (s-1) : 0;
for (i=0; i<len; i++) {
out = (int)(((int64_t)coefs[i] * c) >> 32);
coefs[i] = -((int)(out+round) >> s);
coefs[i] = ((int)(out+round) >> s) * ssign;
}
}
else {
s = s + 32;
if (s > 0) {
round = 1 << (s-1);
for (i=0; i<len; i++) {
out = (int)((int64_t)((int64_t)coefs[i] * c + round) >> s);
coefs[i] = -out;
}
} else {
for (i=0; i<len; i++)
coefs[i] = -(int64_t)coefs[i] * c * (1 << -s);
round = 1 << (s-1);
for (i=0; i<len; i++) {
out = (int)((int64_t)((int64_t)coefs[i] * c + round) >> s);
coefs[i] = out * ssign;
}
}
}
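Note: both sides of the noise_scale() hunk above rely on the same rounding idiom: add half of 2^s before shifting right by s, which turns the truncating shift into round-to-nearest. In isolation (rounded_rshift is a hypothetical helper name):

#include <stdint.h>

/* Round-to-nearest downscale by 2^s, for s > 0 (the real code special-cases
 * s == 0 with round = 0). */
static inline int rounded_rshift(int64_t v, int s)
{
    int64_t half = (int64_t)1 << (s - 1);
    return (int)((v + half) >> s);
}
/* rounded_rshift(5, 1) == 3 and rounded_rshift(4, 1) == 2,
 * while a plain v >> 1 would give 2 for both. */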
@@ -461,7 +456,7 @@ AVCodec ff_aac_fixed_decoder = {
AV_SAMPLE_FMT_S32P, AV_SAMPLE_FMT_NONE
},
.capabilities = AV_CODEC_CAP_CHANNEL_CONF | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_INIT_CLEANUP,
.caps_internal = FF_CODEC_CAP_INIT_THREADSAFE,
.channel_layouts = aac_channel_layout,
.profiles = NULL_IF_CONFIG_SMALL(ff_aac_profiles),
.flush = flush,


@@ -520,7 +520,7 @@ static void flush(AVCodecContext *avctx)
*
* @return Returns error status. 0 - OK, !0 - error
*/
static int set_default_channel_config(AACContext *ac, AVCodecContext *avctx,
static int set_default_channel_config(AVCodecContext *avctx,
uint8_t (*layout_map)[3],
int *tags,
int channel_config)
@@ -547,7 +547,7 @@ static int set_default_channel_config(AACContext *ac, AVCodecContext *avctx,
* As actual intended 7.1(wide) streams are very rare, default to assuming a
* 7.1 layout was intended.
*/
if (channel_config == 7 && avctx->strict_std_compliance < FF_COMPLIANCE_STRICT && (!ac || !ac->warned_71_wide++)) {
if (channel_config == 7 && avctx->strict_std_compliance < FF_COMPLIANCE_STRICT) {
av_log(avctx, AV_LOG_INFO, "Assuming an incorrectly encoded 7.1 channel layout"
" instead of a spec-compliant 7.1(wide) layout, use -strict %d to decode"
" according to the specification instead.\n", FF_COMPLIANCE_STRICT);
@@ -573,7 +573,7 @@ static ChannelElement *get_che(AACContext *ac, int type, int elem_id)
av_log(ac->avctx, AV_LOG_DEBUG, "mono with CPE\n");
if (set_default_channel_config(ac, ac->avctx, layout_map,
if (set_default_channel_config(ac->avctx, layout_map,
&layout_map_tags, 2) < 0)
return NULL;
if (output_configure(ac, layout_map, layout_map_tags,
@@ -592,7 +592,7 @@ static ChannelElement *get_che(AACContext *ac, int type, int elem_id)
av_log(ac->avctx, AV_LOG_DEBUG, "stereo with SCE\n");
if (set_default_channel_config(ac, ac->avctx, layout_map,
if (set_default_channel_config(ac->avctx, layout_map,
&layout_map_tags, 1) < 0)
return NULL;
if (output_configure(ac, layout_map, layout_map_tags,
@@ -841,7 +841,7 @@ static int decode_ga_specific_config(AACContext *ac, AVCodecContext *avctx,
if (tags < 0)
return tags;
} else {
if ((ret = set_default_channel_config(ac, avctx, layout_map,
if ((ret = set_default_channel_config(avctx, layout_map,
&tags, channel_config)))
return ret;
}
@@ -937,7 +937,7 @@ static int decode_eld_specific_config(AACContext *ac, AVCodecContext *avctx,
skip_bits_long(gb, 8 * len);
}
if ((ret = set_default_channel_config(ac, avctx, layout_map,
if ((ret = set_default_channel_config(avctx, layout_map,
&tags, channel_config)))
return ret;
@@ -975,7 +975,7 @@ static int decode_audio_specific_config_gb(AACContext *ac,
int i, ret;
GetBitContext gbc = *gb;
if ((i = ff_mpeg4audio_get_config_gb(m4ac, &gbc, sync_extension, avctx)) < 0)
if ((i = ff_mpeg4audio_get_config_gb(m4ac, &gbc, sync_extension)) < 0)
return AVERROR_INVALIDDATA;
if (m4ac->sampling_index > 12) {
@@ -1157,9 +1157,6 @@ static av_cold int aac_decode_init(AVCodecContext *avctx)
AACContext *ac = avctx->priv_data;
int ret;
if (avctx->sample_rate > 96000)
return AVERROR_INVALIDDATA;
ret = ff_thread_once(&aac_table_init, &aac_static_table_init);
if (ret != 0)
return AVERROR_UNKNOWN;
@@ -1200,7 +1197,7 @@ static av_cold int aac_decode_init(AVCodecContext *avctx)
ac->oc[1].m4ac.chan_config = i;
if (ac->oc[1].m4ac.chan_config) {
int ret = set_default_channel_config(ac, avctx, layout_map,
int ret = set_default_channel_config(avctx, layout_map,
&layout_map_tags, ac->oc[1].m4ac.chan_config);
if (!ret)
output_configure(ac, layout_map, layout_map_tags,
@@ -1676,24 +1673,25 @@ static int decode_spectrum_and_dequant(AACContext *ac, INTFLOAT coef[1024],
}
} else if (cbt_m1 == NOISE_BT - 1) {
for (group = 0; group < (AAC_SIGNE)g_len; group++, cfo+=128) {
#if !USE_FIXED
float scale;
#endif /* !USE_FIXED */
INTFLOAT band_energy;
#if USE_FIXED
for (k = 0; k < off_len; k++) {
ac->random_state = lcg_random(ac->random_state);
#if USE_FIXED
cfo[k] = ac->random_state >> 3;
#else
cfo[k] = ac->random_state;
#endif /* USE_FIXED */
}
#if USE_FIXED
band_energy = ac->fdsp->scalarproduct_fixed(cfo, cfo, off_len);
band_energy = fixed_sqrt(band_energy, 31);
noise_scale(cfo, sf[idx], band_energy, off_len);
#else
float scale;
for (k = 0; k < off_len; k++) {
ac->random_state = lcg_random(ac->random_state);
cfo[k] = ac->random_state;
}
band_energy = ac->fdsp->scalarproduct_float(cfo, cfo, off_len);
scale = sf[idx] / sqrtf(band_energy);
ac->fdsp->vector_fmul_scalar(cfo, cfo, scale, off_len);
@@ -1929,7 +1927,7 @@ static int decode_spectrum_and_dequant(AACContext *ac, INTFLOAT coef[1024],
if (cbt_m1 < NOISE_BT - 1) {
for (group = 0; group < (int)g_len; group++, cfo+=128) {
ac->vector_pow43(cfo, off_len);
ac->subband_scale(cfo, cfo, sf[idx], 34, off_len, ac->avctx);
ac->subband_scale(cfo, cfo, sf[idx], 34, off_len);
}
}
}
@@ -2160,7 +2158,7 @@ static void apply_intensity_stereo(AACContext *ac,
coef0 + group * 128 + offsets[i],
scale,
23,
offsets[i + 1] - offsets[i] ,ac->avctx);
offsets[i + 1] - offsets[i]);
#else
ac->fdsp->vector_fmul_scalar(coef1 + group * 128 + offsets[i],
coef0 + group * 128 + offsets[i],
@@ -2495,9 +2493,6 @@ static void apply_tns(INTFLOAT coef_param[1024], TemporalNoiseShaping *tns,
INTFLOAT tmp[TNS_MAX_ORDER+1];
UINTFLOAT *coef = coef_param;
if(!mmm)
return;
for (w = 0; w < ics->num_windows; w++) {
bottom = ics->num_swb;
for (filt = 0; filt < tns->n_filt[w]; filt++) {
@@ -2662,7 +2657,7 @@ static void imdct_and_windowing(AACContext *ac, SingleChannelElement *sce)
ac->mdct.imdct_half(&ac->mdct, buf, in);
#if USE_FIXED
for (i=0; i<1024; i++)
buf[i] = (buf[i] + 4LL) >> 3;
buf[i] = (buf[i] + 4) >> 3;
#endif /* USE_FIXED */
}
@@ -3002,7 +2997,7 @@ static int parse_adts_frame_header(AACContext *ac, GetBitContext *gb)
push_output_configuration(ac);
if (hdr_info.chan_config) {
ac->oc[1].m4ac.chan_config = hdr_info.chan_config;
if ((ret = set_default_channel_config(ac, ac->avctx,
if ((ret = set_default_channel_config(ac->avctx,
layout_map,
&layout_map_tags,
hdr_info.chan_config)) < 0)
@@ -3249,15 +3244,9 @@ static int aac_decode_frame_int(AVCodecContext *avctx, void *data,
err = AVERROR_INVALIDDATA;
goto fail;
}
err = 0;
while (elem_id > 0) {
int ret = decode_extension_payload(ac, gb, elem_id, che_prev, che_prev_type);
if (ret < 0) {
err = ret;
break;
}
elem_id -= ret;
}
while (elem_id > 0)
elem_id -= decode_extension_payload(ac, gb, elem_id, che_prev, che_prev_type);
err = 0; /* FIXME */
break;
default:
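Note: the two sides of the extension-payload hunk above differ in error handling: one subtracts the return value of decode_extension_payload() unconditionally, the other stops as soon as it is negative. The general shape of the safer loop is sketched below; consume() and remaining are placeholders, not FFmpeg identifiers.

/* Consume-or-bail: never subtract a negative error code from the remaining
 * element count, otherwise the count can grow and the loop may not end. */
while (remaining > 0) {
    int used = consume(remaining);
    if (used < 0)
        return used;            /* propagate the error */
    remaining -= used;
}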


@@ -39,7 +39,6 @@
#include "mpeg4audio.h"
#include "kbdwin.h"
#include "sinewin.h"
#include "profiles.h"
#include "aac.h"
#include "aactab.h"
@@ -1132,7 +1131,6 @@ static const AVOption aacenc_options[] = {
{"aac_ltp", "Long term prediction", offsetof(AACEncContext, options.ltp), AV_OPT_TYPE_BOOL, {.i64 = 0}, -1, 1, AACENC_FLAGS},
{"aac_pred", "AAC-Main prediction", offsetof(AACEncContext, options.pred), AV_OPT_TYPE_BOOL, {.i64 = 0}, -1, 1, AACENC_FLAGS},
{"aac_pce", "Forces the use of PCEs", offsetof(AACEncContext, options.pce), AV_OPT_TYPE_BOOL, {.i64 = 0}, -1, 1, AACENC_FLAGS},
FF_AAC_PROFILE_OPTS
{NULL}
};

View File

@@ -144,7 +144,7 @@ void ff_aac_adjust_common_ltp(AACEncContext *s, ChannelElement *cpe)
int sum = sce0->ics.ltp.used[sfb] + sce1->ics.ltp.used[sfb];
if (sum != 2) {
sce0->ics.ltp.used[sfb] = 0;
} else {
} else if (sum == 2) {
count++;
}
}


@@ -118,7 +118,7 @@ static int read_ ## PAR ## _data(AVCodecContext *avctx, GetBitContext *gb, PSCon
return 0; \
err: \
av_log(avctx, AV_LOG_ERROR, "illegal "#PAR"\n"); \
return AVERROR_INVALIDDATA; \
return -1; \
}
READ_PAR_DATA(iid, huff_offset[table_idx], 0, FFABS(ps->iid_par[e][b]) > 7 + 8 * ps->iid_quant)
@@ -414,33 +414,33 @@ static void hybrid_synthesis(PSDSPContext *dsp, INTFLOAT out[2][38][64],
memset(out[0][n], 0, 5*sizeof(out[0][n][0]));
memset(out[1][n], 0, 5*sizeof(out[1][n][0]));
for (i = 0; i < 12; i++) {
out[0][n][0] += (UINTFLOAT)in[ i][n][0];
out[1][n][0] += (UINTFLOAT)in[ i][n][1];
out[0][n][0] += in[ i][n][0];
out[1][n][0] += in[ i][n][1];
}
for (i = 0; i < 8; i++) {
out[0][n][1] += (UINTFLOAT)in[12+i][n][0];
out[1][n][1] += (UINTFLOAT)in[12+i][n][1];
out[0][n][1] += in[12+i][n][0];
out[1][n][1] += in[12+i][n][1];
}
for (i = 0; i < 4; i++) {
out[0][n][2] += (UINTFLOAT)in[20+i][n][0];
out[1][n][2] += (UINTFLOAT)in[20+i][n][1];
out[0][n][3] += (UINTFLOAT)in[24+i][n][0];
out[1][n][3] += (UINTFLOAT)in[24+i][n][1];
out[0][n][4] += (UINTFLOAT)in[28+i][n][0];
out[1][n][4] += (UINTFLOAT)in[28+i][n][1];
out[0][n][2] += in[20+i][n][0];
out[1][n][2] += in[20+i][n][1];
out[0][n][3] += in[24+i][n][0];
out[1][n][3] += in[24+i][n][1];
out[0][n][4] += in[28+i][n][0];
out[1][n][4] += in[28+i][n][1];
}
}
dsp->hybrid_synthesis_deint(out, in + 27, 5, len);
} else {
for (n = 0; n < len; n++) {
out[0][n][0] = (UINTFLOAT)in[0][n][0] + in[1][n][0] + in[2][n][0] +
(UINTFLOAT)in[3][n][0] + in[4][n][0] + in[5][n][0];
out[1][n][0] = (UINTFLOAT)in[0][n][1] + in[1][n][1] + in[2][n][1] +
(UINTFLOAT)in[3][n][1] + in[4][n][1] + in[5][n][1];
out[0][n][1] = (UINTFLOAT)in[6][n][0] + in[7][n][0];
out[1][n][1] = (UINTFLOAT)in[6][n][1] + in[7][n][1];
out[0][n][2] = (UINTFLOAT)in[8][n][0] + in[9][n][0];
out[1][n][2] = (UINTFLOAT)in[8][n][1] + in[9][n][1];
out[0][n][0] = in[0][n][0] + in[1][n][0] + in[2][n][0] +
in[3][n][0] + in[4][n][0] + in[5][n][0];
out[1][n][0] = in[0][n][1] + in[1][n][1] + in[2][n][1] +
in[3][n][1] + in[4][n][1] + in[5][n][1];
out[0][n][1] = in[6][n][0] + in[7][n][0];
out[1][n][1] = in[6][n][1] + in[7][n][1];
out[0][n][2] = in[8][n][0] + in[9][n][0];
out[1][n][2] = in[8][n][1] + in[9][n][1];
}
dsp->hybrid_synthesis_deint(out, in + 7, 3, len);
}


@@ -54,10 +54,10 @@ static void ps_hybrid_analysis_c(INTFLOAT (*out)[2], INTFLOAT (*in)[2],
INT64FLOAT sum_im = (INT64FLOAT)filter[i][6][0] * in[6][1];
for (j = 0; j < 6; j++) {
INT64FLOAT in0_re = in[j][0];
INT64FLOAT in0_im = in[j][1];
INT64FLOAT in1_re = in[12-j][0];
INT64FLOAT in1_im = in[12-j][1];
INTFLOAT in0_re = in[j][0];
INTFLOAT in0_im = in[j][1];
INTFLOAT in1_re = in[12-j][0];
INTFLOAT in1_im = in[12-j][1];
sum_re += (INT64FLOAT)filter[i][j][0] * (in0_re + in1_re) -
(INT64FLOAT)filter[i][j][1] * (in0_im - in1_im);
sum_im += (INT64FLOAT)filter[i][j][0] * (in0_im + in1_im) +


@@ -6,24 +6,19 @@ OBJS-$(CONFIG_H264DSP) += aarch64/h264dsp_init_aarch64.o
OBJS-$(CONFIG_H264PRED) += aarch64/h264pred_init.o
OBJS-$(CONFIG_H264QPEL) += aarch64/h264qpel_init_aarch64.o
OBJS-$(CONFIG_HPELDSP) += aarch64/hpeldsp_init_aarch64.o
OBJS-$(CONFIG_IDCTDSP) += aarch64/idctdsp_init_aarch64.o
OBJS-$(CONFIG_MPEGAUDIODSP) += aarch64/mpegaudiodsp_init.o
OBJS-$(CONFIG_NEON_CLOBBER_TEST) += aarch64/neontest.o
OBJS-$(CONFIG_PIXBLOCKDSP) += aarch64/pixblockdsp_init_aarch64.o
OBJS-$(CONFIG_VIDEODSP) += aarch64/videodsp_init.o
OBJS-$(CONFIG_VP8DSP) += aarch64/vp8dsp_init_aarch64.o
# decoders/encoders
OBJS-$(CONFIG_AAC_DECODER) += aarch64/aacpsdsp_init_aarch64.o \
aarch64/sbrdsp_init_aarch64.o
OBJS-$(CONFIG_DCA_DECODER) += aarch64/synth_filter_init.o
OBJS-$(CONFIG_OPUS_DECODER) += aarch64/opusdsp_init.o
OBJS-$(CONFIG_RV40_DECODER) += aarch64/rv40dsp_init_aarch64.o
OBJS-$(CONFIG_VC1DSP) += aarch64/vc1dsp_init_aarch64.o
OBJS-$(CONFIG_VORBIS_DECODER) += aarch64/vorbisdsp_init.o
OBJS-$(CONFIG_VP9_DECODER) += aarch64/vp9dsp_init_10bpp_aarch64.o \
aarch64/vp9dsp_init_12bpp_aarch64.o \
aarch64/vp9mc_aarch64.o \
aarch64/vp9dsp_init_aarch64.o
# ARMv8 optimizations
@@ -44,16 +39,14 @@ NEON-OBJS-$(CONFIG_H264PRED) += aarch64/h264pred_neon.o
NEON-OBJS-$(CONFIG_H264QPEL) += aarch64/h264qpel_neon.o \
aarch64/hpeldsp_neon.o
NEON-OBJS-$(CONFIG_HPELDSP) += aarch64/hpeldsp_neon.o
NEON-OBJS-$(CONFIG_IDCTDSP) += aarch64/simple_idct_neon.o
NEON-OBJS-$(CONFIG_IDCTDSP) += aarch64/idctdsp_init_aarch64.o \
aarch64/simple_idct_neon.o
NEON-OBJS-$(CONFIG_MDCT) += aarch64/mdct_neon.o
NEON-OBJS-$(CONFIG_MPEGAUDIODSP) += aarch64/mpegaudiodsp_neon.o
NEON-OBJS-$(CONFIG_PIXBLOCKDSP) += aarch64/pixblockdsp_neon.o
NEON-OBJS-$(CONFIG_VP8DSP) += aarch64/vp8dsp_neon.o
# decoders/encoders
NEON-OBJS-$(CONFIG_AAC_DECODER) += aarch64/aacpsdsp_neon.o
NEON-OBJS-$(CONFIG_DCA_DECODER) += aarch64/synth_filter_neon.o
NEON-OBJS-$(CONFIG_OPUS_DECODER) += aarch64/opusdsp_neon.o
NEON-OBJS-$(CONFIG_VORBIS_DECODER) += aarch64/vorbisdsp_neon.o
NEON-OBJS-$(CONFIG_VP9_DECODER) += aarch64/vp9itxfm_16bpp_neon.o \
aarch64/vp9itxfm_neon.o \


@@ -19,6 +19,14 @@
#ifndef AVCODEC_AARCH64_ASM_OFFSETS_H
#define AVCODEC_AARCH64_ASM_OFFSETS_H
/* CeltIMDCTContext */
#define CELT_EXPTAB 0x20
#define CELT_FFT_N 0x00
#define CELT_LEN2 0x04
#define CELT_LEN4 (CELT_LEN2 + 0x4) // loaded as pair
#define CELT_TMP 0x10
#define CELT_TWIDDLE (CELT_TMP + 0x8) // loaded as pair
/* FFTContext */
#define IMDCT_HALF 0x48


@@ -25,28 +25,14 @@
#include "libavutil/aarch64/cpu.h"
#include "libavcodec/h264dsp.h"
void ff_h264_v_loop_filter_luma_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
void ff_h264_v_loop_filter_luma_neon(uint8_t *pix, int stride, int alpha,
int beta, int8_t *tc0);
void ff_h264_h_loop_filter_luma_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
void ff_h264_h_loop_filter_luma_neon(uint8_t *pix, int stride, int alpha,
int beta, int8_t *tc0);
void ff_h264_v_loop_filter_luma_intra_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
int beta);
void ff_h264_h_loop_filter_luma_intra_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
int beta);
void ff_h264_v_loop_filter_chroma_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
void ff_h264_v_loop_filter_chroma_neon(uint8_t *pix, int stride, int alpha,
int beta, int8_t *tc0);
void ff_h264_h_loop_filter_chroma_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
void ff_h264_h_loop_filter_chroma_neon(uint8_t *pix, int stride, int alpha,
int beta, int8_t *tc0);
void ff_h264_h_loop_filter_chroma422_neon(uint8_t *pix, ptrdiff_t stride, int alpha,
int beta, int8_t *tc0);
void ff_h264_v_loop_filter_chroma_intra_neon(uint8_t *pix, ptrdiff_t stride,
int alpha, int beta);
void ff_h264_h_loop_filter_chroma_intra_neon(uint8_t *pix, ptrdiff_t stride,
int alpha, int beta);
void ff_h264_h_loop_filter_chroma422_intra_neon(uint8_t *pix, ptrdiff_t stride,
int alpha, int beta);
void ff_h264_h_loop_filter_chroma_mbaff_intra_neon(uint8_t *pix, ptrdiff_t stride,
int alpha, int beta);
void ff_weight_h264_pixels_16_neon(uint8_t *dst, ptrdiff_t stride, int height,
int log2_den, int weight, int offset);
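Note: most of the prototype churn above is int versus ptrdiff_t for the stride argument. A pointer-sized stride matches how the address arithmetic is actually done on 64-bit targets and avoids depending on implicit sign extension at the assembly boundary; that rationale is an assumption here, and the sketch below only illustrates the arithmetic.

#include <stddef.h>
#include <stdint.h>

/* Row addressing with a pointer-sized stride: negative strides (bottom-up
 * layouts) and offsets past 2 GiB both stay well defined. */
static inline uint8_t *row_ptr(uint8_t *base, ptrdiff_t stride, int row)
{
    return base + stride * (ptrdiff_t)row;
}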
@@ -91,22 +77,9 @@ av_cold void ff_h264dsp_init_aarch64(H264DSPContext *c, const int bit_depth,
if (have_neon(cpu_flags) && bit_depth == 8) {
c->h264_v_loop_filter_luma = ff_h264_v_loop_filter_luma_neon;
c->h264_h_loop_filter_luma = ff_h264_h_loop_filter_luma_neon;
c->h264_v_loop_filter_luma_intra= ff_h264_v_loop_filter_luma_intra_neon;
c->h264_h_loop_filter_luma_intra= ff_h264_h_loop_filter_luma_intra_neon;
c->h264_v_loop_filter_chroma = ff_h264_v_loop_filter_chroma_neon;
c->h264_v_loop_filter_chroma_intra = ff_h264_v_loop_filter_chroma_intra_neon;
if (chroma_format_idc <= 1) {
c->h264_h_loop_filter_chroma = ff_h264_h_loop_filter_chroma_neon;
c->h264_h_loop_filter_chroma_intra = ff_h264_h_loop_filter_chroma_intra_neon;
c->h264_h_loop_filter_chroma_mbaff_intra = ff_h264_h_loop_filter_chroma_mbaff_intra_neon;
} else {
c->h264_h_loop_filter_chroma = ff_h264_h_loop_filter_chroma422_neon;
c->h264_h_loop_filter_chroma_mbaff = ff_h264_h_loop_filter_chroma_neon;
c->h264_h_loop_filter_chroma_intra = ff_h264_h_loop_filter_chroma422_intra_neon;
c->h264_h_loop_filter_chroma_mbaff_intra = ff_h264_h_loop_filter_chroma_intra_neon;
}
if (chroma_format_idc <= 1)
c->h264_h_loop_filter_chroma = ff_h264_h_loop_filter_chroma_neon;
c->weight_h264_pixels_tab[0] = ff_weight_h264_pixels_16_neon;
c->weight_h264_pixels_tab[1] = ff_weight_h264_pixels_8_neon;

Some files were not shown because too many files have changed in this diff Show More