Compare commits

..

135 Commits

Author SHA1 Message Date
Michael Niedermayer
a7cb7a2e43 Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:02:44 +01:00
Michael Niedermayer
b429df281d avcodec/dfa: Check the chunk header is not truncated
Fixes: Timeout (11sec -> 3sec)
Fixes: 13218/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DFA_fuzzer-5661074316066816

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f20760fadb)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:01:42 +01:00
Michael Niedermayer
7ce56329e7 avcodec/clearvideo: Check remaining data in P frames
Fixes: Timeout (19sec -> 419msec)
Fixes: 13411/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CLEARVIDEO_fuzzer-5733153811988480

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 41f93f9411)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-21 09:01:42 +01:00
James Almer
dbef08b60f avcodec/hevcdec: decode at most one slice reporting being the first in the picture
Fixes deadlocks when decoding packets containing more than one of the aforementioned
slices when using frame threads.

Tested-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 70c8c8a818)
2019-03-20 20:28:04 -03:00
Michael Niedermayer
77d244e7a9 Update for 4.1.2
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 17:31:54 +01:00
Michael Niedermayer
8cee4190f3 avcodec/dvbsubdec: Check object position
Reference: ETSI EN 300 743 V1.2.1  7.2.2 Region composition segment

Fixes: Timeout
Fixes: 13325/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DVBSUB_fuzzer-5143979392237568

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit a8c5ae4511)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:54:31 +01:00
Michael Niedermayer
04ce4cc072 avcodec/cdgraphics: Use ff_set_dimensions()
Fixes: Timeout (17 sec -> 65 milli sec)
Fixes: 13264/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CDGRAPHICS_fuzzer-5711167941509120

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9a9f0e239c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:54:10 +01:00
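
As context for the ff_set_dimensions() commit above (and the imm4 one further down): a minimal sketch, using only the public av_image_check_size() from libavutil/imgutils.h, of the kind of dimension validation such a helper performs. The wrapper name set_dimensions_checked() is hypothetical; ff_set_dimensions() itself is an internal libavcodec helper whose real implementation is not shown here.

    #include <stdint.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>

    /* Hypothetical helper: validate width/height before storing them, so absurd
     * sizes from a crafted bitstream fail cleanly instead of driving huge
     * allocations or per-pixel loops (the source of the timeout fixed above). */
    static int set_dimensions_checked(AVCodecContext *avctx, int w, int h)
    {
        int ret = av_image_check_size(w, h, 0, avctx);
        if (ret < 0)
            return ret;              /* invalid or overflowing dimensions */
        avctx->width  = w;
        avctx->height = h;
        return 0;
    }
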
Michael Niedermayer
5d208aac52 avformat/gdv: Check fps
Fixes: Division by 0
Fixes: ffmpeg_zero_division.bin

Found-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 38381400fc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 16:53:57 +01:00
Guo, Yejun
83bfd4f3b5 configure: use vpx_codec_vp8_dx/cx for libvpx-vp8 checking
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit d9b2668766)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 11:51:09 +01:00
Guo, Yejun
9bf40978c6 configure: add missing pthreads extralibs dependency for libvpx-vp9
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 402bf26237)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 11:49:55 +01:00
Michael Niedermayer
1e50a327c6 avcodec/mpeg4videodec: Check idx in mpeg4_decode_studio_block()
Fixes: Out of array access
Fixes: 13500/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5769760178962432

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Kieran Kunhya <kierank@obe.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d227ed5d59)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
ad12d9df1e avcodec/dxv: Correct integer overflow in get_opcodes()
Fixes: 13099/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXV_fuzzer-5665598896340992
Fixes: signed integer overflow: 2147483647 + 7 cannot be represented in type 'int'

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6e0b5d3a20)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
67d030787e avcodec/scpr: Fix use of uninitialized variable
Fixes: Undefined shift
Fixes: 12911/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SCPR_fuzzer-5677102915911680

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 53248acfb3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
c90836cc3d avcodec/qpeg: Limit copy in qpeg_decode_intra() to the available bytes
Fixes: Timeout (27 sec -> 39 milli sec)
Fixes: 13151/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_QPEG_fuzzer-5717536023248896

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b819472995)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
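
Several of the timeout fixes in this range (qpeg above; aic, rpza and dxv below) share the same basic idea: bound loop or copy lengths by the input that is actually present. A minimal sketch, assuming the bytestream2 reader from libavcodec/bytestream.h; copy_run() is a hypothetical helper, not the actual qpeg code.

    #include <libavcodec/bytestream.h>

    /* Hypothetical sketch: clamp a length taken from the bitstream to the bytes
     * really left in the input, so a crafted run length cannot make the decoder
     * iterate over data that does not exist. */
    static void copy_run(GetByteContext *gb, uint8_t *dst, int run)
    {
        if (run > bytestream2_get_bytes_left(gb))
            run = bytestream2_get_bytes_left(gb);
        if (run > 0)
            bytestream2_get_buffer(gb, dst, run);
    }
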
Michael Niedermayer
6c0124d392 avcodec/aic: Check remaining bits in aic_decode_coeffs()
Fixes: Timeout (78 seconds -> 2 seconds)
Fixes: 13186/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_AIC_fuzzer-5639516533030912

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 951bb7632f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
29619a8ac2 avcodec/gdv: Check for truncated tags in decompress_5()
Testcase: 13169/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5666354038833152

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 5cf42f65b6)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
09683e1f4e avcodec/bethsoftvideo: Check block_type
Fixes: Timeout (17 seconds -> 1 second)
Fixes: 13184/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_BETHSOFTVID_fuzzer-5711446296494080

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b8ecadec05)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
662b6351c8 avcodec/jpeg2000dwt: Fix integer overflow in dwt_decode97_int()
Fixes: runtime error: signed integer overflow: 2147483598 + 128 cannot be represented in type 'int'
Fixes: 12926/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JPEG2000_fuzzer-5705100733972480

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4801eea0d4)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
b8dd1d2d4b avcodec/error_resilience: Use a symmetric check for skipping MV estimation
This speeds up the testcase by a factor of 4

Fixes: Timeout
Fixes: 13100/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV2_fuzzer-5767533905313792

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e4289cb253)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
92335fc02b avcodec/mlpdec: Insuffient typo
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit fc32e08941)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
ff491b1544 avcodec/zmbv: obtain frame later
The frame is not needed that early so obtaining it later avoids
the costly operation in case other checks fail.

Fixes: Timeout (14sec -> 4sec)
Fixes: 13140/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ZMBV_fuzzer-5738330308739072

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 177b40890c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
4e624c89fd avcodec/jvdec: Check available input space before decode8x8()
Fixes: Timeout (78 sec -> 15 millisec)
Fixes: 13147/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_JV_fuzzer-5727107827630080

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 61523683c5)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
9495228df0 avcodec/h264_direct: Fix overflow in POC comparison
Fixes: runtime error: signed integer overflow: 2147421862 - -33624063 cannot be represented in type 'int'
Fixes: 12885/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-5733516975800320

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 5ccf296e74)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
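
For the overflow in the commit above (and the similar implicit_weight_table fix elsewhere in this range): a minimal generic sketch, not the actual h264_direct change, of making a difference-based comparison of two int values safe by widening to 64 bits first.

    #include <stdint.h>

    /* Hypothetical helper: subtracting two arbitrary ints can overflow, but the
     * same subtraction in int64_t is always representable, so the sign of the
     * difference can be used safely for ordering. */
    static int cmp_poc(int a, int b)
    {
        int64_t d = (int64_t)a - b;
        return (d > 0) - (d < 0);
    }
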
Michael Niedermayer
339f40f618 avformat/webmdashenc: Check id in adaption_sets
Fixes: out of array access

Found-by: Wenxiang Qian
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b687b549aa)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Wenxiang Qian
ec22b46a4d avformat/http: Fix Out-of-Bounds access in process_line()
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 85f91ed760)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Wenxiang Qian
11375cd101 avformat/ftp: Fix Out-of-Bounds Access and Information Leak in ftp.c:393
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit a142ffdcae)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Kevin Backhouse via RT
f7f3937494 avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for handling braces
Fixes: [Semmle Security Reports #19439]
Fixes: dos_sscanf2.mkv

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 894995c41e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Kevin Backhouse via RT
cc5361ed18 avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for tag scanning
Fixes: [Semmle Security Reports #19438]
Fixes: dos_sscanf1.mkv

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f00c97bc3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
4d1fcd734e avformat/matroskadec: Do not leak queued packets on sync errors
Fixes: memleak
Fixes: clusterfuzz-testcase-minimized-audio_decoder_fuzzer-5649187601121280

Reported-by: Chris Cunningham <chcunningham@google.com>
Tested-by: Chris Cunningham <chcunningham@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d1afa7284c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
8066cb3556 avcodec/mpeg4videodec: Clear interlaced_dct for studio profile
Fixes: Out of array access
Fixes: 13090/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5408668986638336

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Kieran Kunhya <kierank@obe.tv>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f686d023b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
d25f388584 avformat/mov: Do not use reference stream in mov_read_sidx() if there is no reference stream
Fixes: NULL pointer dereference
Fixes: clusterfuzz-testcase-minimized-audio_decoder_fuzzer-5634316373721088

Reported-by: Chris Cunningham <chcunningham@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b0d8b7cb8e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Michael Niedermayer
1a82246cae avcodec/sbrdsp_fixed.c: remove input value limit for sbr_sum_square_c()
Fixes: 1377/clusterfuzz-testcase-minimized-5487049807233024
Fixes: assertion failure in sbr_sum_square_c()

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4cde7e62db)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
Alex Mogurenko
7e204f7260 avcodec/prores_ks: Fix luma quantization if q >= MAX_STORED_Q
The problem occurs in slice quant estimation and slice encoding:

If the slice quant is larger than MAX_STORED_Q we don't use the pre-calculated
quant matrices but generate a new one; however, qmat and qmat_chroma both
point to the same table, so the luma table ends up holding chroma table
values.

Add custom_chroma_q the same way as custom_q.

Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
(cherry picked from commit e4788ae31b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-03-14 00:24:44 +01:00
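
A minimal sketch of the aliasing bug described above; apart from custom_q/custom_chroma_q (which the commit message itself names), all identifiers and values are hypothetical, and this is not the actual prores_ks code.

    #include <stdint.h>

    #define MAX_STORED_Q 16   /* hypothetical limit for pre-calculated matrices */

    typedef struct SliceCtx {
        int16_t custom_q[64];        /* scratch luma matrix for large quants      */
        int16_t custom_chroma_q[64]; /* separate chroma scratch matrix -- the fix */
    } SliceCtx;

    static void select_qmats(SliceCtx *s, int q,
                             const int16_t luma_base[64],
                             const int16_t chroma_base[64],
                             const int16_t **qmat, const int16_t **qmat_chroma)
    {
        /* q < MAX_STORED_Q would select pre-calculated tables; omitted here. */
        if (q < MAX_STORED_Q)
            return;
        for (int i = 0; i < 64; i++) {
            s->custom_q[i]        = (int16_t)(luma_base[i]   * q);
            s->custom_chroma_q[i] = (int16_t)(chroma_base[i] * q);
        }
        *qmat        = s->custom_q;
        /* Before the fix both pointers referred to one shared scratch table,
         * so filling the chroma matrix overwrote the luma values. */
        *qmat_chroma = s->custom_chroma_q;
    }
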
Charles Liu
53f3f5233f avformat/mov: fix hang while seeking on a kind of fragmented mp4
Binary searching would hang if the fragment items do NOT have a timestamp for
the specified stream.

For example, an fmp4 consists of separate 'moof' boxes for each track, and
separate 'sidx' boxes for each segment, but no 'mfra' box.  Then every fragment
item only has the timestamp for one of its tracks.

Example:
ffmpeg -f lavfi -i testsrc -f lavfi -i sine -movflags dash+frag_keyframe+skip_trailer+separate_moof -t 1 out.mp4
ffmpeg -ss 0.5 -i out.mp4 -f null none

Also fixes the hang in ticket #7572, but not the reason for having
AV_NOPTS_VALUE timestamps there.

Signed-off-by: Charles Liu <liuchh83@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit aa25198f1b)
2019-02-11 22:07:54 +01:00
Marton Balint
110eff79ca avformat/async: fix assertion condition when draining buffer
Fixes some random assertion failures with

ffprobe -show_packets async:samples/ffmpeg-bugs/trac/ticket6132/Samsung_HDR_-_Chasing_the_Light.ts > /dev/null

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 4b46d1ee46)
2019-02-11 22:07:06 +01:00
James Almer
33c8009773 avcodec/cbs_av1: don't call cbs_av1_read_trailing_bits() when no bits remain in the OBU
Reviewed-by: jkqxz
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 3e8b8b6b50)
2019-02-10 21:02:06 -03:00
Michael Niedermayer
74700e50bf Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-09 18:33:21 +01:00
chcunningham
00cdf4e4e5 avformat/mov: validate chunk_count vs stsc_data
Bad content may contain stsc boxes with a first_chunk index that
exceeds stco.entries (chunk_count). This amends the existing check to
include cases where chunk_count == 0. It also patches up the case
when stsc refers to unknown chunks, but stts has no samples (so we
can simply ignore stsc).

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1c15449ca9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-08 12:22:37 +01:00
chcunningham
bcc71f30ad avformat/mov.c: require tfhd to begin parsing trun
Detecting missing tfhd avoids re-using tfhd track info from the previous
moof. For files with multiple tracks, this may make a mess of the
avindex and fragindex, which can later trigger av_assert0 in
mov_read_trun().

Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3ea87e5d9e)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-08 12:22:13 +01:00
Michael Niedermayer
31a1d2aa83 Changelog: update
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-04 00:51:42 +01:00
Michael Niedermayer
7816497ba0 avcodec/pgssubdec: Check for duplicate display segments
In such a duplication the previous gets overwritten and leaks

Fixes: memleak
Fixes: 12510/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PGSSUB_fuzzer-5694439226343424

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e35c3d887b)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-02-04 00:32:09 +01:00
Michael Niedermayer
953f97979f avformat/rtsp: Check number of streams in sdp_parse_line()
Fixes: OOM

Found-by: Michael Hanselmann <public@hansmi.ch>
Reviewed-by: Michael Hanselmann <public@hansmi.ch>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 497c9b0cce)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 18:03:35 +01:00
Michael Niedermayer
e75a73d629 avformat/rtsp: Clear reply in every iteration in ff_rtsp_connect()
Fixes: Infinite loop

Found-by: Michael Hanselmann <public@hansmi.ch>
Reviewed-by: Michael Hanselmann <public@hansmi.ch>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 0b50f27635)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:29:41 +01:00
Michael Niedermayer
b482e94e59 avcodec/rasc: Move ff_get_buffer() after frame checks
If the frame1/2 checks fail this avoids doing the allocation of a new frame

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9f4af97aff)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:29:05 +01:00
Michael Niedermayer
0f1332309a avcodec/rasc: Check uncompressed dlta size
We assume the data is invalid if the compressed size is bigger than it would be
if each byte were encoded in a single raw packet.

Fixes: Out of memory
Fixes: 12208/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5648916473708544

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f4079d5174)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:28:23 +01:00
Michael Niedermayer
f5c9753bfd avcodec/fic: Check that there is input left in fic_decode_block()
Fixes: Timeout
Fixes: 12450/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FIC_fuzzer-5661984622641152

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit db1c4acd02)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:23:01 +01:00
Michael Niedermayer
d8b8b27dc3 avcodec/ilbcdec: Fix undefined integer overflow lsf2poly()
The addition is moved up into the context where the variable is unsigned avoiding
the undefined behavior

Fixes: runtime error: signed integer overflow: 2147481972 + 4096 cannot be represented in type 'int'
Fixes: 12444/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5755706244857856

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4523cc5e75)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:20:38 +01:00
Michael Niedermayer
62f5325ca3 avcodec/ilbcdec: Fix integer overflow in construct_vector()
webrtc contains explicit code to ignore the undefined behavior (RTC_NO_SANITIZE / OverflowingAddS32S32ToS32())

Probably fixes: Integer overflow (unreproducible here)
Probably fixes: 12215/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5767142427852800

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c95d0fb239)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-31 17:20:24 +01:00
Michael Niedermayer
bcfd82b0be Update for 4.1.1
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 08:34:57 +01:00
Michael Niedermayer
31fa50f3d9 avcodec/prosumer: Error out if decompress() stops reading data
If 0 is encountered in the LUT then decompress() will continue to output 0 bytes but never read more data.
Without a specification it is impossible to say if this is invalid or a feature.
None of the valid prosumer files tested cause a 0 to be read, so it is likely
not an intended feature.

Fixes: Timeout
Fixes: 11266/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PROSUMER_fuzzer-5681827423977472

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 62f8d27ef1)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
552733d48b avcodec/tiff: Check for 12bit gray fax
Fixes: Assertion failure
Fixes: 11898/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-5759794191794176

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ec28a85107)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
a8b5990f45 avutil/imgutils: Optimize memset_bytes() by using av_memcpy_backptr()
This is strongly based on code by Marton Balint, and depends on the previous commit

Fixes: Timeout
Fixes: 11502/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920
Before: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920 in 11209 ms
After:  Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WCMV_fuzzer-5664893810769920 in  4104 ms

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f64c0dffa1)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
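
A minimal sketch of the technique named above, using the public av_memcpy_backptr() from libavutil/mem.h; fill_pattern() is a hypothetical wrapper, not the actual memset_bytes() implementation.

    #include <stdint.h>
    #include <string.h>
    #include <libavutil/mem.h>

    /* Hypothetical helper: write one copy of an n-byte pattern, then let
     * av_memcpy_backptr() replicate it across the rest of the buffer.  The
     * back-pointer copy is much faster than a per-pixel loop, which is where
     * the speedup quoted above comes from. */
    static void fill_pattern(uint8_t *dst, int dst_size,
                             const uint8_t *pattern, int n)
    {
        if (n <= 0 || dst_size < n)
            return;
        memcpy(dst, pattern, n);
        av_memcpy_backptr(dst + n, n, dst_size - n);
    }
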
Michael Niedermayer
cb6af7dfa1 avutil/mem: Optimize fill32() by unrolling and using 64bit
Reviewed-by: Marton Balint <cus@passwd.hu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 12b1338be3)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
James Almer
29d978c91e configure: bump year
Happy new year!

(cherry picked from commit 3209d7b393)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
3a52cae2c7 avcodec/tests/rangecoder: initialize array to avoid valgrind warning
Found-by: jamrial
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c15972f0af)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
792df36f42 avcodec/gdv: Optimize and factorize scaling loops
Fixes: Timeout
Fixes: 11067/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768

Before change: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768 in 34386 ms
After  change: Executed clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_GDV_fuzzer-5686623711264768 in 24327 ms

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6e23736aef)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
c694273feb avcodec/h264_slice: Fix integer overflow in implicit_weight_table()
Fixes: signed integer overflow: 2 * 2132811760 cannot be represented in type 'int'
Fixes: 11156/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_H264_fuzzer-6237685933408256

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 77e56d74f9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
9239d58b36 avcodec/exr: set layer_match in all branches
Otherwise it is left to the value from the previous iteration

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 433d2ae435)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
1623f42d99 avcodec/exr: Check for duplicate channel index
Fixes: Out of memory
Fixes: 11582/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_EXR_fuzzer-5730204559867904

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f9728feaf9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
99576bf034 avfilter/vf_tonemap_opencl: Make static tables const
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 47c3a10b16)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e385fc45dd doc/indevs: fix upto typo
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b33de55747)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
15857674c5 avcodec/4xm: Fix returned error codes
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 07607a1db8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
6b6c854658 avformat/libopenmpt: Fix successfull typo
Reviewed-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 571af98a59)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
41ee513c81 avcodec/v4l2_m2m: fix cant typo
Reviewed-by: Lou Logan <lou@lrcd.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 062bf56393)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
33b4aba5bd avcodec/mjpegbdec: Fix some misplaced {} and spaces
Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 11a8d2ccab)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
David Bryant
ea279bd160 avformat/wvdec: detect and error out on WavPack DSD files
Not currently supported.

(cherry picked from commit db109373d8)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
gxw
929b5519d8 avcodec/mips: Fix failed case: hevc-conformance-AMP_A_Samsung_* when enable msa
The AV_INPUT_BUFFER_PADDING_SIZE has been increased to 64, but the value is still 32
in function ff_hevc_sao_edge_filter_8_msa. So, use AV_INPUT_BUFFER_PADDING_SIZE directly.
Also, use MAX_PB_SIZE directly instead of 64. Fate tests passed.

Reviewed-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f652c7a45c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
5ed024e40b avcodec/fic: Fail on invalid slice size/off
Fixes: Timeout
Fixes: 11486/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_FIC_fuzzer-5677133863583744

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 30a7a81cdc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
5550946ff4 avcodec/ilbcdec: fix integer overflow in energy
webrtc uses a int32_t like the existing code in ilbcdec

Fixes: signed integer overflow: 2080245063 + 257939661 cannot be represented in type 'int'
Fixes: 11037/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_ILBC_fuzzer-5682976612941824

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit fbf409cd91)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
daef9d4382 postproc/postprocess_template: remove FF_REG_sp from clobber list
Future gcc may no longer support this

Tested-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1cbeb87db)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
69f50eb915 postproc/postprocess_template: Avoid using %4 for the threshold compare
This avoids problems if %4 is the stack pointer. The constraints do not allow
%4 to be the stack pointer, but gcc 9 may no longer support specifying such
constraints.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 4325527e1c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Jacob Trimble
73c90818b1 libavformat/mov: Fix NULL-dereference read for some encrypted content.
When reading frames, we need to use the fragment for the correct
stream.  Sometimes the "current" fragment is not the same as the one
the frame is for.

Found by Chromium's ClusterFuzz:
https://crbug.com/906392 and https://crbug.com/915524

Signed-off-by: Jacob Trimble <modmaker@google.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 555f332e7a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
c22b67feaa avcodec/rpza: Check that there is enough data for all the blocks
Fixes: Timeout
Fixes: 11547/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RPZA_fuzzer-5678435842654208

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit e63517e00a)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
4c0be3a60c avcodec/rpza: Move frame allocation to a later point
This will allow performing some fast checks before the slow allocation

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 8a708aa99c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
42357b37cb avcodec/avcodec: Document the data type for AV_PKT_DATA_MPEGTS_STREAM_ID
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 68e011e410)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e3fbbb7d18 avformat/mpegts: Fix side data type for stream id
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ab1319d82f)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
2f75965c47 tests/fate/filter-video: increase fuzz for fate-filter-refcmp-psnr-rgb
Fixes: test failure on powerpc

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit f8f762c300)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
e1f40f0dae avcodec/mjpegdec: Fix indention of ljpeg_decode_yuv_scan()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ea30ac1e40)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
chcunningham
45f5f2086e lavf/id3v2: fail read_apic on EOF reading mimetype
avio_read may return EOF, leaving the mimetype array uninitialized. Fail
early when this occurs to avoid using the array in an uninitialized state.

Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit ee1e39a576)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
321c418b87 avcodec/rasc: Check that the number of moves is less than or equal the number of pixels
Fixes: OOM
Fixes: 10307/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5393974559244288

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 092cb17983)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
f5859d4a8e avformat/nutenc: Document trailer index assert better
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3a95b73abc)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
chcunningham
54fbdacc37 lavf/mov: ensure only one tkhd per trak
Chromium fuzzing produced a whacky file with extra tkhds. This caused
an AVStream that was already in use to be corrupted by assigning it a
new id, which blows up later in mov_read_trun because the
MOVFragmentStreamInfo.index_entry now points OOB.

Reviewed-by: Baptiste Coudurier <baptiste.coudurier@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c9f7b6f7a9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
228f17ced3 avcodec/clearvideo: Check remaining input bits in P macro block loop
Fixes: Timeout
Fixes: 11083/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CLEARVIDEO_fuzzer-5657180351496192

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7aaab127be)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
9b5a6bb67b avcodec/rasc: Check input space before reading chunk
Fixes: Timeout
Fixes: 11118/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_RASC_fuzzer-5652564066959360

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 52ba824c65)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
219cbc5527 avcodec/dxv: Check that there is enough data to decompress
Fixes: Timeout
Fixes: 10979/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DXV_fuzzer-6178582203203584

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2bc3811c0d)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
55c36d2498 avcodec/ppc/hevcdsp: Fix build failures with powerpc-linux-gnu-gcc-4.8 with --disable-optimizations
The affected functions could also be changed into macros, but this is the
smaller change to fix it and avoids (probably) less readable macros.
The extra code should be optimized out when optimizations are enabled, as all
values are known at build time after inlining.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2c64a6bcd2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
558ba71de5 avcodec/msvideo1: Check for too small dimensions
Such a low resolution would result in empty output, as a minimum of 4x4 is needed.
We could also check for multiple-of-4 dimensions, but that is not needed.

Fixes: Timeout
Fixes: 11191/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MSVIDEO1_fuzzer-5739529588178944

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 953bd58861)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
1a5db666ac avcodec/wmv2dec: Skip I frame if it's smaller than 1/8 of the minimal size
Frames that small are not valid and of limited use for error concealment, while
being very computationally intensive to process.

Fixes: Timeout
Fixes: 11168/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_WMV2_fuzzer-5733782032744448

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit d6f4341522)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
eee0cf487a avcodec/msmpeg4dec: Skip frame if it's smaller than 1/8 of the minimal size
Frames that small are not valid and of limited use for error concealment, while
being very computationally intensive to process.

Fixes: Timeout
Fixes: 11318/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MSMPEG4V1_fuzzer-5710884555456512

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 09ec182864)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
90db1e441f avcodec/truemotion2rt: Fix rounding in input size check
Fixes: Timeout
Fixes: 11332/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TRUEMOTION2RT_fuzzer-5678456612847616

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 7f22a4ebc9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
4fe90900d8 avcodec/diracdec: Check component quant
Fixes: Timeout
Fixes: 10708/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DIRAC_fuzzer-5730140957442048

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 28c96c2ce2)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:26 +01:00
Michael Niedermayer
ee349bd0fd avcodec/tiff: Limit filtering to decoded data
Fixes: Timeout
Fixes: 11068/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TIFF_fuzzer-5698456681709568

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Tomas Härdin <tjoppen@acc.umu.se>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 90ac0e5f29)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
ab744447e1 avcodec/truemotion2: fix integer overflows in tm2_low_chroma()
Fixes: 11295/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TRUEMOTION2_fuzzer-4888953459572736

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2ae39d7956)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
89d65915cf avcodec/pngdec: Check compression method
method 0 (inflate/deflate) is the only one specified in the specification and the only one supported

Fixes: Timeout
Fixes: 10976/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PNG_fuzzer-5729372588736512

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1f99674ddd)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
e69bb0fb05 fftools/ffmpeg: Repair reinit_filter feature
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 3504004879)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
98a9d868d1 avcodec/shorten: Fix integer overflow with offset
Fixes: signed integer overflow: -1625810908 - 582229060 cannot be represented in type 'int'
Fixes: 10977/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SHORTEN_fuzzer-5732602018267136

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 2f888771cd)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Michael Niedermayer
b66152a4e5 avcodec/imm4: Use ff_set_dimensions()
Fixes: Out of memory
Fixes: 10970/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_IMM4_fuzzer-5698750043914240

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c305e134ce)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Andreas Rheinhardt
ac50246cc4 h264_redundant_pps: Fix logging context
The first element of H264RedundantPPSContext is not a pointer to an
AVClass as required.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 6dafcb6fdb)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-01-21 07:53:25 +01:00
Marton Balint
ddc284300e avfilter/af_asetnsamples: fix last frame props
Frame properties were not copied, so e.g. PTS was not set for the last frame.

Regression since ef3babb2c7.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit f9e947845f)
2019-01-01 20:39:44 +01:00
Mark Thompson
b420f23566 cbs_av1: Fix reading of overlong uvlc codes
The specification allows 2^32-1 to be encoded as any number of zeroes
greater than 31, followed by a one.  This previously failed because the
trace code would overflow the array containing the string representation
of the bits if there were more than 63 zeroes.  Fix that by splitting the
trace output into batches, and at the same time move it out of the default
path.

(While this seems likely to be a specification error, libaom does support
it so we probably should as well.)

From a test case by keval shah <skeval65@gmail.com>.

Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit b97a4b6588)
2018-12-22 18:28:41 +00:00
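
A minimal self-contained sketch of the uvlc behaviour described above; the BitReader type and read_bit()/read_bits() helpers are hypothetical stand-ins, not the cbs framework.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct BitReader { const uint8_t *buf; size_t size_bits, pos; } BitReader;

    static int read_bit(BitReader *br)
    {
        if (br->pos >= br->size_bits)
            return 1;                         /* treat end of input as a stop bit */
        int bit = (br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1;
        br->pos++;
        return bit;
    }

    static uint32_t read_bits(BitReader *br, int n)
    {
        uint32_t v = 0;
        while (n--)
            v = (v << 1) | (uint32_t)read_bit(br);
        return v;
    }

    /* uvlc as described above: count leading zeroes; 32 or more of them
     * (followed by a one) encode the maximum value 2^32 - 1, so arbitrarily
     * long zero runs must stay valid and must not overflow any fixed-size
     * trace buffer. */
    static uint32_t read_uvlc(BitReader *br)
    {
        int leading_zeros = 0;
        while (!read_bit(br))
            leading_zeros++;
        if (leading_zeros >= 32)
            return UINT32_MAX;
        return read_bits(br, leading_zeros) + ((1u << leading_zeros) - 1);
    }
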
James Almer
5356e61001 avcodec/cbs_av1: fix parsing delta_frame_id_minus1
delta_frame_id_minus1 is not a single value in the bitstream, and can
store values up to 17 bits wide.

Fixes parsing files with frame ids.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 064f9505f4)
2018-12-20 18:29:42 -03:00
Paul B Mahol
a4ddc3c9fc avfilter/vf_overlay: fix filtering with negative y
(cherry picked from commit 8440835dbe)
2018-12-14 23:56:21 +01:00
Paul B Mahol
59e30c05d7 avformat/movenc: get number of written bytes from bitstream writer
Update fate test.

(cherry picked from commit 97d1ee437b)
2018-11-26 15:36:12 +01:00
Paul B Mahol
fcffed470a avformat/movenc: fix size calculation in mov_write_eac3_tag()
Otherwise it would assert when flushing bits.

(cherry picked from commit 027f032bbc)
2018-11-26 15:36:05 +01:00
Paul B Mahol
9efc591cb7 avfilter/vf_overlay: fix crash with negative y
(cherry picked from commit 57815cfad5)
2018-11-25 12:46:56 +01:00
Marton Balint
d4c5f515f0 avcodec/mpeg_er: fix clearing chroma blocks for 422 and 444
Fixes ticket #7494.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit e3a9630982)
2018-11-19 23:29:30 +01:00
Marton Balint
bb01cd3cc0 avfilter/af_afade: fix duration maximum
Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit aecd63b926)
2018-11-15 22:34:53 +01:00
Mark Harris
fed94c2f22 avfilter/vf_fade: fix start/duration max value
A fade out (usually at the end of a video) can easily start beyond
INT32_MAX (about 36 minutes).  Regression since d40dc64173.

(cherry picked from commit ae4323548a)
2018-11-15 22:34:34 +01:00
James Almer
a9e9303f26 avcodec/cbs_av1: fix parsing signed integer values
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit f0f2832a5c)
2018-11-14 20:53:44 -03:00
James Almer
49bc641e89 avcodec/cbs_av1: fix storage size for segmentation_params feature_value fields
The valid range is -255 to 255.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 79831f4531)
2018-11-14 20:53:40 -03:00
Mark Thompson
4f1e07090a configure: Add missing xlib dependency for VAAPI X11 code
Fixes #7538.

(cherry picked from commit 2ce3a48f30)
2018-11-14 23:24:51 +00:00
Mark Wu
11dff170ef avcodec/hevcdec: fix non-ref frame judgement
After inspecting the source code of x265, mpv and ffmpeg, I've found that
ffmpeg mistakenly regards HEVC_NAL_BLA_N_LP and HEVC_NAL_IDR_N_LP as
non-reference frames, which are actually reference frames according to the
specification in x265, and drops them.

This patch should address the problem. I have tested it with mpv.

Signed-off-by: Mark Wu <wfwf1997@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 10bc4c3a7d)
2018-11-10 14:38:25 -03:00
Mark Thompson
10506de9ad cbs_av1: Support redundant frame headers
(cherry picked from commit f5894178fb)
2018-11-05 23:11:03 +00:00
Mark Thompson
af3fccfeff cbs_av1: Fix header writing when already aligned
(cherry picked from commit 6bdb7712ae)
2018-11-05 23:10:57 +00:00
Mark Thompson
ec1b5216fc configure: Add missing V4L2 M2M decoder BSF dependencies
(cherry picked from commit e9d2e3fdaa)
2018-11-05 23:10:49 +00:00
Mark Thompson
066ff02621 configure: Add missing IVF muxer BSF dependency
(cherry picked from commit a4fb2b1150)
2018-11-05 23:10:41 +00:00
James Almer
398a70309e avcodec/cbs_av1: fix decoder/encoder_buffer_delay variable types
buffer_delay_length_minus_1 is five bits long, meaning decoder_buffer_delay and
encoder_buffer_delay can have values up to 32 bits long.

Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 89a0d33e3a)
2018-11-04 22:06:20 -03:00
Mark Thompson
acd13f1255 configure: Fix av1_metadata BSF dependency
(cherry picked from commit 34429182b9)
2018-11-04 22:06:11 -03:00
James Almer
1c98cf4ddd avformat/ivfenc: use the av1_metadata bsf to insert Temporal Delimiter OBUs if needed
Reviewed-by: Mark Thompson <sw@jkqxz.net>
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 2d2af23349)
2018-11-04 22:06:08 -03:00
Marton Balint
63c1e291ef avformat/ftp: allow nonstandard 202 reply to OPTS UTF8
Fixes ticket #7481.

Signed-off-by: Marton Balint <cus@passwd.hu>
(cherry picked from commit 8e5a2495a8)
2018-11-04 22:55:09 +01:00
Michael Niedermayer
7ebc27e1fa avcodec/cavsdec: Propagate error codes inside decode_mb_i()
Fixes: Timeout
Fixes: 10702/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_CAVS_fuzzer-5669940938407936

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit c1cee05656)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
bc5777bdab avcodec/mpeg4videodec: Clear partitioned frame in decode_studio_vop_header()
partitioned_frame is also set/cleared in decode_vop_header()

Fixes: out of array read
Fixes: 9789/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEG4_fuzzer-5638681627983872

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 074187d599)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
7d23ccac8d avcodec/mpegaudio_parser: Consume more than 0 bytes in case of the unsupported mp3adu case
Fixes: Timeout
Fixes: 10966/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MP3ADU_fuzzer-5348695024336896
Fixes: 10969/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MP3ADUFLOAT_fuzzer-5691669402877952

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit df91af140c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
2f04b78b95 avcodec/prosumer: Simplify bit juggling of the c variable in decompress()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 66425add27)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
fd05e20650 avcodec/prosumer: Remove always true check in decompress()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 1dfa0b6f36)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
a163384467 avcodec/prosumer: Remove unneeded ()
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 506839a3e9)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Michael Niedermayer
b9875b7583 avcodec/prosumer: Check for bytestream eof in decompress()
Fixes: Infinite loop
Fixes: 10685/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_PROSUMER_fuzzer-5652236881887232

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
(cherry picked from commit 9acdf17b2c)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-04 20:26:49 +01:00
Philip Langdale
ebc1c49e41 avfilter/vf_cuda_yadif: Avoid new syntax for vector initialisation
This requires a newer version of CUDA than we want to require.

(cherry picked from commit 8e50215b5e)
2018-11-03 15:50:31 -07:00
Philip Langdale
6feec11e48 avcodec/nvdec: Increase frame pool size to help deinterlacing
With the cuda yadif filter in use, the number of mapped decoder
frames could increase by two, as the filter holds on to additional
frames.

(cherry picked from commit 1b41115ef7)
2018-11-03 15:50:25 -07:00
Philip Langdale
67126555fc avfilter/vf_yadif_cuda: CUDA accelerated yadif deinterlacer
This is a cuda implementation of yadif, which gives us a way to
do deinterlacing when using the nvdec hwaccel. In that scenario
we don't have access to the nvidia deinterlacer.

(cherry picked from commit d5272e94ab)
2018-11-03 15:50:12 -07:00
Philip Langdale
041231fcd6 libavfilter/vf_yadif: Make frame management logic and options shareable
I'm writing a cuda implementation of yadif, and while this
obviously has a very different implementation of the actual
filtering, all the frame management is unchanged. To avoid
duplicating that logic, let's make it shareable.

From the perspective of the existing filter, the only real change
is introducing a function pointer for the filter() function so it
can be specified for the specific filter.

(cherry picked from commit 598f0f3927)
2018-11-03 15:45:55 -07:00
Josh de Kock
765fb1f224 fate/api-h264-slice-test: use cleaner error handling
Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 1052578dad)
2018-11-03 12:57:51 -03:00
Josh de Kock
5060a615c7 fate/api-h264-slice-test: don't use ssize_t
Fixes ticket #7521

Signed-off-by: James Almer <jamrial@gmail.com>
(cherry picked from commit 8096f52049)
2018-11-03 12:57:37 -03:00
Michael Niedermayer
1665ac6a44 RELEASE_NOTES: Based on the version from 4.0
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-02 01:36:21 +01:00
Michael Niedermayer
3c7e973430 Update for 4.1
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2018-11-02 01:33:08 +01:00
7983 changed files with 409196 additions and 972207 deletions


@@ -1,223 +0,0 @@
# This file describes the expected reviewers for a PR based on the changed
# files. Unlike what the name of the file suggests they don't own the code, but
# merely have a good understanding of that area of the codebase and therefore
# are usually suited as a reviewer.
# Lines in this file match changed paths via Go-Style regular expressions:
# https://pkg.go.dev/regexp/syntax
# Mind the alphabetical order
# avcodec
# =======
libavcodec/.*aac.* @lynne
libavcodec/.*ac3.* @lynne
libavcodec/.*adpcm.* @zane @pross
libavcodec/anm.* @pross
libavcodec/amf.* @OvchinnikovDmitrii @ArazIusubov
libavcodec/ansi.* @pross
libavcodec/aom_film_grain.* @haasn
libavcodec/.*atrac9.* @lynne
libavcodec/bink.* @pross
libavcodec/bintext.* @pross
libavcodec/.*bitpacked.* @lynne
libavcodec/.*d3d12va.* @jianhuaw @tong1wu
libavcodec/.*dirac.* @lynne
libavcodec/.*dovi_rpu.* @haasn
libavcodec/dpx.* @pross
libavcodec/dsd.* @pross
libavcodec/eacmv.* @pross
libavcodec/eaidct.* @pross
libavcodec/eamad.* @pross
libavcodec/eat.* @pross
libavcodec/.*exif.* @Traneptora
libavcodec/.*ffv1.* @lynne @michaelni
libavcodec/g728.* @pross
libavcodec/gem.* @pross
libavcodec/golomb.* @michaelni
libavcodec/.*h266.* @frankplow @NuoMi @jianhuaw
libavcodec/h26x/.* @frankplow @NuoMi @jianhuaw
libavcodec/.*h274.* @haasn
libavcodec/iff.* @pross
libavcodec/.*jpegxl.* @lynne @Traneptora
libavcodec/jv.* @pross
libavcodec/.*jxl.* @lynne @Traneptora
libavcodec/.*lcms2.* @haasn
libavcodec/lead.* @pross
libavcodec/mediacodec* @quink
libavcodec/mmvideo.* @pross
libavcodec/msp2.* @pross
libavcodec/mvc.* @pross
libavcodec/oh* @quink
libavcodec/.*opus.* @lynne
libavcodec/pictor.* @pross
libavcodec/.*png.* @Traneptora
libavcodec/.*prores.* @lynne
libavcodec/rangecoder.* @michaelni
libavcodec/ratecontrol.* @michaelni
libavcodec/rv60.* @pross
libavcodec/sgirle.* @pross
libavcodec/.*siren.* @lynne
libavcodec/smpte_436m.* @programmerjake
libavcodec/svq1.* @pross
libavcodec/svq3.* @pross
libavcodec/.*vc2.* @lynne
libavcodec/videotoolbox.* @ePirat
libavcodec/vp3.* @pross
libavcodec/vp4.* @pross
libavcodec/vp5.* @pross
libavcodec/vp6.* @pross
libavcodec/vp8.* @rbultje @pross
libavcodec/vp9.* @rbultje
libavcodec/vpx.* @rbultje @pross
libavcodec/vqc.* @pross
libavcodec/.*vvc.* @frankplow @NuoMi @jianhuaw
libavcodec/wmavoice.* @rbultje
libavcodec/wbmp.* @pross
# bitstream filters
libavcodec/bsf/eia608_to_smpte436m.* @programmerjake
libavcodec/bsf/smpte436m_to_eia608.* @programmerjake
# architecture-specific
libavcodec/aarch64/.* @lynne @mstorsjo
libavcodec/arm/.* @mstorsjo
libavcodec/ppc/.* @sean_mcg
libavcodec/riscv/.* @Courmisch
libavcodec/wasm/hevc/.* @quink
libavcodec/x86/.* @lynne
libavcodec/x86/vp8.* @rbultje
libavcodec/x86/vp9.* @rbultje
libavcodec/x86/vpx.* @rbultje
# avfilter
# =======
libavfilter/af_whisper.* @vpalmisano
libavfilter/.*_amf* @OvchinnikovDmitrii @ArazIusubov
libavfilter/avfiltergraph.* @haasn
libavfilter/colorspace.* @rbultje
libavfilter/formats.* @haasn
libavfilter/.*f_ebur128.* @haasn
libavfilter/vf_blackdetect.* @haasn
libavfilter/vf_colordetect.* @haasn
libavfilter/vf_colorspace.* @rbultje
libavfilter/.*drawvg.* @ayosec
libavfilter/vf_icc.* @haasn
libavfilter/vf_libplacebo.* @haasn
libavfilter/vf_premultiply.* @haasn
libavfilter/vf_scale.* @haasn
libavfilter/vf_scale_vt.* @quink
libavfilter/vf_thumbnail.* @haasn
libavfilter/vf_transpose_vt.* @quink
libavfilter/vf_yadif.* @michaelni
libavfilter/vsrc_mandelbrot.* @michaelni
libavfilter/aarch64/.* @mstorsjo
libavfilter/riscv/.* @Courmisch
libavfilter/x86/colorspace.* @rbultje
libavfilter/x86/scene_sad.* @haasn
# avformat
# =======
libavformat/alp.* @zane
libavformat/amv.* @zane
libavformat/anm.* @pross
libavformat/apm.* @zane
libavformat/argo_.* @zane
libavformat/bink.* @pross
libavformat/bintext.* @pross
libavformat/caf.* @pross
libavformat/cine.* @pross
libavformat/dsf.* @pross
libavformat/eacdata.* @pross
libavformat/electronicarts.* @pross
libavformat/.*exif.* @Traneptora
libavformat/filmstrip.* @pross
libavformat/frm.* @pross
libavformat/iamf.* @jamrial
libavformat/icecast.c @ePirat
libavformat/ico.* @pross
libavformat/iff.* @pross
libavformat/.*jpegxl.* @Traneptora
libavformat/jv.* @pross
libavformat/.*jxl.* @Traneptora
libavformat/kvag.* @zane
libavformat/mccdec.* @programmerjake
libavformat/mccenc.* @programmerjake
libavformat/mlv.* @pross
libavformat/mm.* @pross
libavformat/msp.* @pross
libavformat/mv.* @pross
libavformat/pp_bnk.* @zane
libavformat/rm.* @pross
libavformat/sauce.* @pross
libavformat/scd.* @zane
libavformat/tty.* @pross
libavformat/wsd.* @pross
libavformat/wtv.* @pross
# avutil
# ======
libavutil/.*_amf* @OvchinnikovDmitrii @ArazIusubov
libavutil/.*crc.* @lynne @michaelni
libavutil/.*d3d12va.* @jianhuaw @tong1wu
libavutil/csp.* @rbultje @haasn
libavutil/eval.* @michaelni
libavutil/film_grain.* @haasn
libavutil/dovi_meta.* @haasn
libavutil/hwcontext_oh.* @quink
libavutil/hwcontext_mediacodec.* @quink
libavutil/hwcontext_videotoolbox.* @ePirat
libavutil/iamf.* @jamrial
libavutil/integer.* @michaelni
libavutil/lfg.* @michaelni
libavutil/lls.* @michaelni
libavutil/md5.* @michaelni
libavutil/mathematics.* @michaelni
libavutil/mem.* @michaelni
libavutil/qsort.* @michaelni
libavutil/random_seed.* @michaelni
libavutil/rational.* @michaelni
libavutil/sfc.* @michaelni
libavutil/softfloat.* @michaelni
libavutil/tree.* @michaelni
libavutil/tx.* @lynne
libavutil/aarch64/.* @lynne @mstorsjo
libavutil/arm/.* @mstorsjo
libavutil/ppc/.* @sean_mcg
libavutil/riscv/.* @Courmisch
libavutil/x86/.* @lynne
# swresample
# =======
libswresample/aarch64/.* @mstorsjo
libswresample/arm/.* @mstorsjo
libswresample/.* @michaelni
# swscale
# =======
libswscale/aarch64/.* @mstorsjo
libswscale/arm/.* @mstorsjo
libswscale/ppc/.* @sean_mcg
libswscale/riscv/.* @Courmisch
libswscale/.* @haasn
# tools
# =====
fftools/ffplay_renderer.* @haasn
# doc
# ===
doc/.* @GyanD
# Frameworks
# ==========
.*d3d12va.* @jianhuaw @tong1wu
.*vulkan.* @lynne @haasn
# tests
# =====
tests/checkasm/riscv/.* @Courmisch
tests/ref/.*drawvg.* @ayosec
tests/ref/fate/sub-mcc.* @programmerjake


@@ -1,9 +0,0 @@
# Summary of the bug
Briefly describe the issue you're experiencing. Include any error messages, unexpected behavior, or relevant observations.
# Steps to reproduce
List the steps required to trigger the bug.
Include the exact CLI command used, if any.
Provide sample input files, logs, or scripts if available.


@@ -1,73 +0,0 @@
module.exports = async ({github, context}) => {
    const title = (context.payload.pull_request?.title || context.payload.issue?.title || '').toLowerCase();
    const labels = [];
    const issueNumber = context.payload.pull_request?.number || context.payload.issue?.number;
    const kwmap = {
        'avcodec': 'avcodec',
        'avdevice': 'avdevice',
        'avfilter': 'avfilter',
        'avformat': 'avformat',
        'avutil': 'avutil',
        'swresample': 'swresample',
        'swscale': 'swscale',
        'fftools': 'CLI'
    };
    async function isOrgMember(username) {
        try {
            const response = await github.rest.orgs.checkMembershipForUser({
                org: context.repo.owner,
                username: username
            });
            return response.status === 204;
        } catch (error) {
            return false;
        }
    }
    if (context.payload.action === 'closed' ||
        (context.payload.action !== 'opened' && (
            context.payload.action === 'assigned' ||
            context.payload.action === 'label_updated' ||
            context.payload.action === 'labeled' ||
            context.payload.comment) &&
        await isOrgMember(context.payload.sender.login))
    ) {
        try {
            await github.rest.issues.removeLabel({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: issueNumber,
                // this should say 'new', but forgejo deviates from GitHub API here and expects the ID
                name: '41'
            });
            console.log('Removed "new" label');
        } catch (error) {
            if (error.status !== 404 && error.status !== 410) {
                console.log('Could not remove "new" label');
            }
        }
    } else if (context.payload.action === 'opened') {
        labels.push('new');
        console.log('Detected label: new');
    }
    if ((context.payload.action === 'opened' || context.payload.action === 'edited') && context.eventName !== 'issue_comment') {
        for (const [kw, label] of Object.entries(kwmap)) {
            if (title.includes(kw)) {
                labels.push(label);
                console.log('Detected label: ' + label);
            }
        }
    }
    if (labels.length > 0) {
        await github.rest.issues.addLabels({
            owner: context.repo.owner,
            repo: context.repo.repo,
            issue_number: issueNumber,
            labels: labels,
        });
    }
}


@@ -1,31 +0,0 @@
avcodec:
  - changed-files:
      - any-glob-to-any-file: libavcodec/**
avdevice:
  - changed-files:
      - any-glob-to-any-file: libavdevice/**
avfilter:
  - changed-files:
      - any-glob-to-any-file: libavfilter/**
avformat:
  - changed-files:
      - any-glob-to-any-file: libavformat/**
avutil:
  - changed-files:
      - any-glob-to-any-file: libavutil/**
swresample:
  - changed-files:
      - any-glob-to-any-file: libswresample/**
swscale:
  - changed-files:
      - any-glob-to-any-file: libswscale/**
CLI:
  - changed-files:
      - any-glob-to-any-file: fftools/**


@@ -1,36 +0,0 @@
exclude: ^tests/ref/
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-illegal-windows-names
      - id: check-shebang-scripts-are-executable
      - id: check-yaml
      - id: end-of-file-fixer
      - id: file-contents-sorter
        files:
          .forgejo/pre-commit/ignored-words.txt
        args:
          - --ignore-case
      - id: fix-byte-order-marker
      - id: mixed-line-ending
      - id: trailing-whitespace
  - repo: local
    hooks:
      - id: aarch64-asm-indent
        name: fix aarch64 assembly indentation
        files: ^.*/aarch64/.*\.S$
        language: script
        entry: ./tools/check_arm_indent.sh --apply
        pass_filenames: false
  - repo: https://github.com/codespell-project/codespell
    rev: v2.4.1
    hooks:
      - id: codespell
        args:
          - --ignore-words=.forgejo/pre-commit/ignored-words.txt
          - --ignore-multiline-regex=codespell:off.*?(codespell:on|\Z)
        exclude: ^tools/(patcheck|clean-diff)$


@@ -1,119 +0,0 @@
abl
ACN
acount
addin
alis
alls
ALOG
ALS
als
ANC
anc
ANS
ans
anull
basf
bloc
brane
BREIF
BU
bu
bufer
CAF
caf
clen
clens
Collet
compre
dum
endin
erro
FIEL
fiel
filp
fils
FILTERD
filterd
fle
fo
FPR
fro
Hald
indx
ine
inh
inout
inouts
inport
ist
LAF
laf
lastr
LinS
mapp
mis
mot
nd
nIn
offsetp
orderd
ot
outout
padd
PAETH
paeth
PARM
parm
parms
pEvents
PixelX
Psot
quater
readd
recuse
redY
Reencode
reencode
remaind
renderD
rin
SAV
SEH
SER
ser
setts
shft
SIZ
siz
skipd
sme
som
sover
STAP
startd
statics
struc
suble
TE
tE
te
tha
tne
tolen
tpye
tre
TRUN
trun
truns
Tung
TYE
ue
UES
ues
vai
vas
vie
VILL
vor
wel
wih


@@ -1,28 +0,0 @@
on:
  pull_request_target:
    types: [opened, edited, synchronize, closed, assigned, labeled, unlabeled]
  issues:
    types: [opened, edited, closed, assigned, labeled, unlabeled]
  issue_comment:
    types: [created]
jobs:
  pr_labeler:
    runs-on: utilities
    if: ${{ github.event.sender.login != 'ffmpeg-devel' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Label by file-changes
        uses: https://github.com/actions/labeler@v5
        if: ${{ forge.event_name == 'pull_request_target' }}
        with:
          configuration-path: .forgejo/labeler/labeler.yml
          repo-token: ${{ secrets.AUTOLABELER_TOKEN }}
      - name: Label by title-match
        uses: https://github.com/actions/github-script@v7
        with:
          script: |
            const script = require('.forgejo/labeler/labeler.js')
            await script({github, context})
          github-token: ${{ secrets.AUTOLABELER_TOKEN }}


@@ -1,26 +0,0 @@
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  lint:
    runs-on: utilities
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install pre-commit CI
        id: install
        run: |
          python3 -m venv ~/pre-commit
          ~/pre-commit/bin/pip install --upgrade pip setuptools
          ~/pre-commit/bin/pip install pre-commit
          echo "envhash=$({ python3 --version && cat .forgejo/pre-commit/config.yaml; } | sha256sum | cut -d' ' -f1)" >> $FORGEJO_OUTPUT
      - name: Cache
        uses: actions/cache@v4
        with:
          path: ~/.cache/pre-commit
          key: pre-commit-${{ steps.install.outputs.envhash }}
      - name: Run pre-commit CI
        run: ~/pre-commit/bin/pre-commit run -c .forgejo/pre-commit/config.yaml --show-diff-on-failure --color=always --all-files


@@ -1,76 +0,0 @@
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  run_fate:
    strategy:
      fail-fast: false
      matrix:
        runner: [linux-aarch64]
        shared: ['static']
        bits: ['64']
        include:
          - runner: linux-amd64
            shared: 'static'
            bits: '32'
          - runner: linux-amd64
            shared: 'shared'
            bits: '64'
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Configure
        run: |
          ./configure --enable-gpl --enable-nonfree --enable-memory-poisoning --assert-level=2 \
            $([ "${{ matrix.bits }}" != "32" ] || echo --arch=x86_32 --extra-cflags=-m32 --extra-cxxflags=-m32 --extra-ldflags=-m32) \
            $([ "${{ matrix.shared }}" != "shared" ] || echo --enable-shared --disable-static) \
            || CFGRES=$? && CFGRES=$?
          cat ffbuild/config.log
          exit $CFGRES
      - name: Build
        run: make -j$(nproc)
      - name: Restore Cached Fate-Suite
        id: cache
        uses: actions/cache/restore@v4
        with:
          path: fate-suite
          key: fate-suite
          restore-keys: |
            fate-suite-
      - name: Sync Fate-Suite
        id: fate
        run: |
          make fate-rsync SAMPLES=$PWD/fate-suite
          echo "hash=$(find fate-suite -type f -printf "%P %s %T@\n" | sort | sha256sum | cut -d' ' -f1)" >> $FORGEJO_OUTPUT
      - name: Cache Fate-Suite
        uses: actions/cache/save@v4
        if: ${{ format('fate-suite-{0}', steps.fate.outputs.hash) != steps.cache.outputs.cache-matched-key }}
        with:
          path: fate-suite
          key: fate-suite-${{ steps.fate.outputs.hash }}
      - name: Run Fate
        run: LD_LIBRARY_PATH="$(printf "%s:" "$PWD"/lib*)$PWD" make fate fate-build SAMPLES=$PWD/fate-suite -j$(nproc)
  compile_only:
    strategy:
      fail-fast: false
      matrix:
        image: ["ghcr.io/btbn/ffmpeg-builds/win64-gpl:latest"]
    runs-on: linux-amd64
    container: ${{ matrix.image }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Configure
        run: |
          ./configure --pkg-config-flags="--static" $FFBUILD_TARGET_FLAGS $FF_CONFIGURE \
            --cc="$CC" --cxx="$CXX" --ar="$AR" --ranlib="$RANLIB" --nm="$NM" \
            --extra-cflags="$FF_CFLAGS" --extra-cxxflags="$FF_CXXFLAGS" \
            --extra-libs="$FF_LIBS" --extra-ldflags="$FF_LDFLAGS" --extra-ldexeflags="$FF_LDEXEFLAGS"
      - name: Build
        run: make -j$(nproc)
      - name: Run Fate
        run: make -j$(nproc) fate-build

2
.gitattributes vendored

@@ -1,2 +1,2 @@
*.pnm -diff -text
Changelog merge=union
tests/ref/fate/sub-scc eol=crlf

13
.gitignore vendored

@@ -1,6 +1,5 @@
*.a
*.o
*.objs
*.o.*
*.d
*.def
@@ -20,12 +19,8 @@
*.swp
*.ver
*.version
*.metal.air
*.metallib
*.metallib.c
*.ptx
*.ptx.c
*.ptx.gz
*_g
\#*
.\#*
@@ -36,14 +31,8 @@
/ffprobe
/config.asm
/config.h
/config_components.asm
/config_components.h
/coverage.info
/avversion.h
/lcov/
/src
/mapfile
/tools/python/__pycache__/
/libavcodec/vulkan/*.c
/libavfilter/vulkan/*.c
/.*/
!/.forgejo/


@@ -1,30 +0,0 @@
<jeebjp@gmail.com> <jan.ekstrom@aminocom.com>
<sw@jkqxz.net> <mrt@jkqxz.net>
<u@pkh.me> <cboesch@gopro.com>
<quinkblack@foxmail.com> <wantlamy@gmail.com>
<quinkblack@foxmail.com> <zhilizhao@tencent.com>
<modmaker@google.com> <modmaker-at-google.com@ffmpeg.org>
<stebbins@jetheaddev.com> <jstebbins@jetheaddev.com>
<barryjzhao@tencent.com> <mypopydev@gmail.com>
<barryjzhao@tencent.com> <jun.zhao@intel.com>
<josh@itanimul.li> <joshdk@obe.tv>
<michael@niedermayer.cc> <michaelni@gmx.at>
<linjie.justin.fu@gmail.com> <linjie.fu@intel.com>
<linjie.justin.fu@gmail.com> <fulinjie@zju.edu.cn>
<ceffmpeg@gmail.com> <cehoyos@ag.or.at>
<ceffmpeg@gmail.com> <cehoyos@rainbow.studorg.tuwien.ac.at>
<ffmpeg@gyani.pro> <gyandoshi@gmail.com>
<atomnuker@gmail.com> <rpehlivanov@obe.tv>
<lizhong1008@gmail.com> <zhong.li@intel.com>
<lizhong1008@gmail.com> <zhongli_dev@126.com>
<andreas.rheinhardt@outlook.com> <andreas.rheinhardt@gmail.com>
<andreas.rheinhardt@outlook.com> <andreas.rheinhardt@googlemail.com>
rcombs <rcombs@rcombs.me> <rodger.combs@gmail.com>
<thilo.borgmann@mail.de> <thilo.borgmann@googlemail.com>
<lq@chinaffmpeg.org> <liuqi05@kuaishou.com>
<ruiling.song83@gmail.com> <ruiling.song@intel.com>
Cosmin Stejerean <cosmin@cosmin.at> Cosmin Stejerean via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
<wutong1208@outlook.com> <tong1.wu-at-intel.com@ffmpeg.org>
<wutong1208@outlook.com> <tong1.wu@intel.com>
<toqsxw@outlook.com> <jianhua.wu-at-intel.com@ffmpeg.org>
<toqsxw@outlook.com> <jianhua.wu@intel.com>

30
.travis.yml Normal file

@@ -0,0 +1,30 @@
language: c
sudo: false
os:
  - linux
  - osx
addons:
  apt:
    packages:
      - nasm
      - diffutils
compiler:
  - clang
  - gcc
matrix:
  exclude:
    - os: osx
      compiler: gcc
cache:
  directories:
    - ffmpeg-samples
before_install:
  - if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update --all; fi
install:
  - if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install nasm; fi
script:
  - mkdir -p ffmpeg-samples
  - ./configure --samples=ffmpeg-samples --cc=$CC
  - make -j 8
  - make fate-rsync
  - make check -j 8


@@ -55,7 +55,7 @@ modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
@@ -111,7 +111,7 @@ modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
@@ -158,7 +158,7 @@ Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
@@ -216,7 +216,7 @@ instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
@@ -267,7 +267,7 @@ Library will still fall under Section 6.)
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
@@ -329,7 +329,7 @@ restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
@@ -370,7 +370,7 @@ subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
@@ -422,7 +422,7 @@ conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
@@ -456,7 +456,7 @@ SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest


@@ -1,6 +1,6 @@
See the Git history of the project (https://git.ffmpeg.org/ffmpeg) to
See the Git history of the project (git://source.ffmpeg.org/ffmpeg) to
get the names of people who have contributed to FFmpeg.
To check the log, you can type the command "git log" in the FFmpeg
source directory, or browse the online repository at
https://git.ffmpeg.org/ffmpeg
http://source.ffmpeg.org.

605
Changelog

@@ -1,503 +1,116 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version <next>:
- ffprobe -codec option
- EXIF Metadata Parsing
- gfxcapture: Windows.Graphics.Capture based window/monitor capture
- hxvs demuxer for HXVS/HXVT IP camera format
- MPEG-H 3D Audio decoding via mpeghdec
- D3D12 H.264 encoder
- drawvg filter via libcairo
- ffmpeg CLI tiled HEIF support
- D3D12 AV1 encoder
- ProRes Vulkan hwaccel
- DPX Vulkan hwaccel
- Rockchip H.264/HEVC hardware encoder
version 4.1.2:
- avcodec/dfa: Check the chunk header is not truncated
- avcodec/clearvideo: Check remaining data in P frames
- avcodec/hevcdec: decode at most one slice reporting being the first in the picture
- avcodec/dvbsubdec: Check object position
- avcodec/cdgraphics: Use ff_set_dimensions()
- avformat/gdv: Check fps
- configure: use vpx_codec_vp8_dx/cx for libvpx-vp8 checking
- configure: add missing pthreads extralibs dependency for libvpx-vp9
- avcodec/mpeg4videodec: Check idx in mpeg4_decode_studio_block()
- avcodec/dxv: Correct integer overflow in get_opcodes()
- avcodec/scpr: Fix use of uninitialized variable
- avcodec/qpeg: Limit copy in qpeg_decode_intra() to the available bytes
- avcodec/aic: Check remaining bits in aic_decode_coeffs()
- avcodec/gdv: Check for truncated tags in decompress_5()
- avcodec/bethsoftvideo: Check block_type
- avcodec/jpeg2000dwt: Fix integer overflow in dwt_decode97_int()
- avcodec/error_resilience: Use a symmetric check for skipping MV estimation
- avcodec/mlpdec: Insuffient typo
- avcodec/zmbv: obtain frame later
- avcodec/jvdec: Check available input space before decode8x8()
- avcodec/h264_direct: Fix overflow in POC comparission
- avformat/webmdashenc: Check id in adaption_sets
- avformat/http: Fix Out-of-Bounds access in process_line()
- avformat/ftp: Fix Out-of-Bounds Access and Information Leak in ftp.c:393
- avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for handling braces
- avcodec/htmlsubtitles: Fixes denial of service due to use of sscanf in inner loop for tag scaning
- avformat/matroskadec: Do not leak queued packets on sync errors
- avcodec/mpeg4videodec: Clear interlaced_dct for studio profile
- avformat/mov: Do not use reference stream in mov_read_sidx() if there is no reference stream
- avcodec/sbrdsp_fixed.c: remove input value limit for sbr_sum_square_c()
- avcodec/prores_ks: Fix luma quantization if q >= MAX_STORED_Q
- avformat/mov: fix hang while seek on a kind of fragmented mp4
- avformat/async: fix assertion condition when draining buffer
- avcodec/cbs_av1: don't call cbs_av1_read_trailing_bits() when no bits remain in the OBU
version 8.0:
- Whisper filter
- Drop support for OpenSSL < 1.1.0
- Enable TLS peer certificate verification by default (on next major version bump)
- Drop support for OpenSSL < 1.1.1
- yasm support dropped, users need to use nasm
- VVC VAAPI decoder
- RealVideo 6.0 decoder
- OpenMAX encoders deprecated
- libx265 alpha layer encoding
- ADPCM IMA Xbox decoder
- Enhanced FLV v2: Multitrack audio/video, modern codec support
- Animated JPEG XL encoding (via libjxl)
- VVC in Matroska
- CENC AV1 support in MP4 muxer
- pngenc: set default prediction method to PAETH
- APV decoder and APV raw bitstream muxing and demuxing
- APV parser
- APV encoding support through a libopenapv wrapper
- VVC decoder supports all content of SCC (Screen Content Coding):
IBC (Inter Block Copy), Palette Mode and ACT (Adaptive Color Transform
- G.728 decoder
- pad_cuda filter
- Sanyo LD-ADPCM decoder
- APV in MP4/ISOBMFF muxing and demuxing
- OpenHarmony hardware decoder/encoder
- Colordetect filter
- Add vf_scale_d3d11 filter
- No longer disabling GCC autovectorization, on X86, ARM and AArch64
- VP9 Vulkan hwaccel
- AV1 Vulkan encoder
- ProRes RAW decoder
- ProRes RAW Vulkan hwaccel
- ffprobe -codec option
- HDR10+ metadata passthrough when decoding/encoding with libaom-av1
version 7.1:
- Raw Captions with Time (RCWT) closed caption demuxer
- LC3/LC3plus decoding/encoding using external library liblc3
- ffmpeg CLI filtergraph chaining
- LC3/LC3plus demuxer and muxer
- pad_vaapi, drawbox_vaapi filters
- vf_scale supports secondary ref input and framesync options
- vf_scale2ref deprecated
- qsv_params option added for QSV encoders
- VVC decoder compatible with DVB test content
- xHE-AAC decoder
- removed DEC Alpha DSP and support code
- VVC encoding support via libvvenc
- perlin video source
- D3D12VA HEVC encoder
- Cropping metadata parsing and writing in Matroska and MP4/MOV de/muxers
- Intel QSV-accelerated VVC decoding
- MediaCodec AAC/AMR-NB/AMR-WB/MP3 decoding
- YUV colorspace negotiation for codecs and filters, obsoleting the
YUVJ pixel format
- Vulkan H.264 encoder
- Vulkan H.265 encoder
- stream specifiers in fftools can now match by stream disposition
- LCEVC enhancement data exporting in H.26x and MP4/ISOBMFF
- LCEVC filter
- MV-HEVC decoding
- minor stream specifier syntax changes:
- when matching by metadata (:m:<key>:<val>), the colon character
in keys or values now has to be backslash-escaped
- in optional maps (-map ....?) with a metadata-matching stream specifier,
the value has to be separated from the question mark by a colon, i.e.
-map ....:m:<key>:<val>:? (otherwise it would be ambiguous whether the
question mark is a part of <val> or not)
- multiple stream types in a single specifier (e.g. :s:s:0) now cause an
error, as such a specifier makes no sense
- Mastering Display and Content Light Level metadata support in hevc_nvenc
and av1_nvenc encoders
- libswresample now accepts custom order channel layouts as input, with some
constrains
- FFV1 parser
version 7.0:
- DXV DXT1 encoder
- LEAD MCMP decoder
- EVC decoding using external library libxevd
- EVC encoding using external library libxeve
- QOA decoder and demuxer
- aap filter
- demuxing, decoding, filtering, encoding, and muxing in the
ffmpeg CLI now all run in parallel
- enable gdigrab device to grab a window using the hwnd=HANDLER syntax
- IAMF raw demuxer and muxer
- D3D12VA hardware accelerated H264, HEVC, VP9, AV1, MPEG-2 and VC1 decoding
- tiltandshift filter
- qrencode filter and qrencodesrc source
- quirc filter
- lavu/eval: introduce randomi() function in expressions
- VVC decoder (experimental)
- fsync filter
- Raw Captions with Time (RCWT) closed caption muxer
- ffmpeg CLI -bsf option may now be used for input as well as output
- ffmpeg CLI options may now be used as -/opt <path>, which is equivalent
to -opt <contents of file <path>>
- showinfo bitstream filter
- a C11-compliant compiler is now required; note that this requirement
will be bumped to C17 in the near future, so consider updating your
build environment if it lacks C17 support
- Change the default bitrate control method from VBR to CQP for QSV encoders.
- removed deprecated ffmpeg CLI options -psnr and -map_channel
- DVD-Video demuxer, powered by libdvdnav and libdvdread
- ffprobe -show_stream_groups option
- ffprobe (with -export_side_data film_grain) now prints film grain metadata
- AEA muxer
- ffmpeg CLI loopback decoders
- Support PacketTypeMetadata of PacketType in enhanced flv format
- ffplay with hwaccel decoding support (depends on vulkan renderer via libplacebo)
- dnn filter libtorch backend
- Android content URIs protocol
- AOMedia Film Grain Synthesis 1 (AFGS1)
- RISC-V optimizations for AAC, FLAC, JPEG-2000, LPC, RV4.0, SVQ, VC1, VP8, and more
- Loongarch optimizations for HEVC decoding
- Important AArch64 optimizations for HEVC
- IAMF support inside MP4/ISOBMFF
- Support for HEIF/AVIF still images and tiled still images
- Dolby Vision profile 10 support in AV1
- Support for Ambient Viewing Environment metadata in MP4/ISOBMFF
- HDR10 metadata passthrough when encoding with libx264, libx265, and libsvtav1
version 6.1:
- libaribcaption decoder
- Playdate video decoder and demuxer
- Extend VAAPI support for libva-win32 on Windows
- afireqsrc audio source filter
- arls filter
- ffmpeg CLI new option: -readrate_initial_burst
- zoneplate video source filter
- command support in the setpts and asetpts filters
- Vulkan decode hwaccel, supporting H264, HEVC and AV1
- color_vulkan filter
- bwdif_vulkan filter
- nlmeans_vulkan filter
- RivaTuner video decoder
- xfade_vulkan filter
- vMix video decoder
- Essential Video Coding parser, muxer and demuxer
- Essential Video Coding frame merge bsf
- bwdif_cuda filter
- Microsoft RLE video encoder
- Raw AC-4 muxer and demuxer
- Raw VVC bitstream parser, muxer and demuxer
- Bitstream filter for editing metadata in VVC streams
- Bitstream filter for converting VVC from MP4 to Annex B
- scale_vt filter for videotoolbox
- transpose_vt filter for videotoolbox
- support for the P_SKIP hinting to speed up libx264 encoding
- Support HEVC,VP9,AV1 codec in enhanced flv format
- apsnr and asisdr audio filters
- OSQ demuxer and decoder
- Support HEVC,VP9,AV1 codec fourcclist in enhanced rtmp protocol
- CRI USM demuxer
- ffmpeg CLI '-top' option deprecated in favor of the setfield filter
- VAAPI AV1 encoder
- ffprobe XML output schema changed to account for multiple
variable-fields elements within the same parent element
- ffprobe -output_format option added as an alias of -of
# codespell:off
version 6.0:
- Radiance HDR image support
- ddagrab (Desktop Duplication) video capture filter
- ffmpeg -shortest_buf_duration option
- ffmpeg now requires threading to be built
- ffmpeg now runs every muxer in a separate thread
- Add new mode to cropdetect filter to detect crop-area based on motion vectors and edges
- VAAPI decoding and encoding for 10/12bit 422, 10/12bit 444 HEVC and VP9
- WBMP (Wireless Application Protocol Bitmap) image format
- a3dscope filter
- bonk decoder and demuxer
- Micronas SC-4 audio decoder
- LAF demuxer
- APAC decoder and demuxer
- Media 100i decoders
- DTS to PTS reorder bsf
- ViewQuest VQC decoder
- backgroundkey filter
- nvenc AV1 encoding support
- MediaCodec decoder via NDKMediaCodec
- MediaCodec encoder
- oneVPL support for QSV
- QSV AV1 encoder
- QSV decoding and encoding for 10/12bit 422, 10/12bit 444 HEVC and VP9
- showcwt multimedia filter
- corr video filter
- adrc audio filter
- afdelaysrc audio filter
- WADY DPCM decoder and demuxer
- CBD2 DPCM decoder
- ssim360 video filter
- ffmpeg CLI new options: -stats_enc_pre[_fmt], -stats_enc_post[_fmt],
-stats_mux_pre[_fmt]
- hstack_vaapi, vstack_vaapi and xstack_vaapi filters
- XMD ADPCM decoder and demuxer
- media100 to mjpegb bsf
- ffmpeg CLI new option: -fix_sub_duration_heartbeat
- WavArc decoder and demuxer
- CrystalHD decoders deprecated
- SDNS demuxer
- RKA decoder and demuxer
- filtergraph syntax in ffmpeg CLI now supports passing file contents
as option values, by prefixing option name with '/'
- hstack_qsv, vstack_qsv and xstack_qsv filters
version 5.1:
- add ipfs/ipns gateway support
- dialogue enhance audio filter
- dropped obsolete XvMC hwaccel
- pcm-bluray encoder
- DFPWM audio encoder/decoder and raw muxer/demuxer
- SITI filter
- Vizrt Binary Image encoder/decoder
- avsynctest source filter
- feedback video filter
- pixelize video filter
- colormap video filter
- colorchart video source filter
- multiply video filter
- PGS subtitle frame merge bitstream filter
- blurdetect filter
- tiltshelf audio filter
- QOI image format support
- ffprobe -o option
- virtualbass audio filter
- VDPAU AV1 hwaccel
- PHM image format support
- remap_opencl filter
- added chromakey_cuda filter
- added bilateral_cuda filter
version 5.0:
- ADPCM IMA Westwood encoder
- Westwood AUD muxer
- ADPCM IMA Acorn Replay decoder
- Argonaut Games CVG demuxer
- Argonaut Games CVG muxer
- Concatf protocol
- afwtdn audio filter
- audio and video segment filters
- Apple Graphics (SMC) encoder
- hsvkey and hsvhold video filters
- adecorrelate audio filter
- atilt audio filter
- grayworld video filter
- AV1 Low overhead bitstream format muxer
- swscale slice threading
- MSN Siren decoder
- scharr video filter
- apsyclip audio filter
- morpho video filter
- amr parser
- (a)latency filters
- GEM Raster image decoder
- asdr audio filter
- speex decoder
- limitdiff video filter
- xcorrelate video filter
- varblur video filter
- huesaturation video filter
- colorspectrum source video filter
- RTP packetizer for uncompressed video (RFC 4175)
- bitpacked encoder
- VideoToolbox VP9 hwaccel
- VideoToolbox ProRes hwaccel
- support loongarch.
- aspectralstats audio filter
- adynamicsmooth audio filter
- libplacebo filter
- vflip_vulkan, hflip_vulkan and flip_vulkan filters
- adynamicequalizer audio filter
- yadif_videotoolbox filter
- VideoToolbox ProRes encoder
- anlmf audio filter
- IMF demuxer (experimental)
version 4.4:
- AudioToolbox output device
- MacCaption demuxer
- PGX decoder
- chromanr video filter
- VDPAU accelerated HEVC 10/12bit decoding
- ADPCM IMA Ubisoft APM encoder
- Rayman 2 APM muxer
- AV1 encoding support SVT-AV1
- Cineform HD encoder
- ADPCM Argonaut Games encoder
- Argonaut Games ASF muxer
- AV1 Low overhead bitstream format demuxer
- RPZA video encoder
- ADPCM IMA MOFLEX decoder
- MobiClip FastAudio decoder
- MobiClip video decoder
- MOFLEX demuxer
- MODS demuxer
- PhotoCD decoder
- MCA demuxer
- AV1 decoder (Hardware acceleration used only)
- SVS demuxer
- Argonaut Games BRP demuxer
- DAT demuxer
- aax demuxer
- IPU decoder, parser and demuxer
- Intel QSV-accelerated AV1 decoding
- Argonaut Games Video decoder
- libwavpack encoder removed
- ACE demuxer
- AVS3 demuxer
- AVS3 video decoder via libuavs3d
- Cintel RAW decoder
- VDPAU accelerated VP9 10/12bit decoding
- afreqshift and aphaseshift filters
- High Voltage Software ADPCM encoder
- LEGO Racers ALP (.tun & .pcm) muxer
- AV1 VAAPI decoder
- adenorm filter
- ADPCM IMA AMV encoder
- AMV muxer
- NVDEC AV1 hwaccel
- DXVA2/D3D11VA hardware accelerated AV1 decoding
- speechnorm filter
- SpeedHQ encoder
- asupercut filter
- asubcut filter
- Microsoft Paint (MSP) version 2 decoder
- Microsoft Paint (MSP) demuxer
- AV1 monochrome encoding support via libaom >= 2.0.1
- asuperpass and asuperstop filter
- shufflepixels filter
- tmidequalizer filter
- estdif filter
- epx filter
- Dolby E parser
- shear filter
- kirsch filter
- colortemperature filter
- colorcontrast filter
- PFM encoder
- colorcorrect filter
- binka demuxer
- XBM parser
- xbm_pipe demuxer
- colorize filter
- CRI parser
- aexciter audio filter
- exposure video filter
- monochrome video filter
- setts bitstream filter
- vif video filter
- OpenEXR image encoder
- Simbiosis IMX decoder
- Simbiosis IMX demuxer
- Digital Pictures SGA demuxer and decoders
- TTML subtitle encoder and muxer
- identity video filter
- msad video filter
- gophers protocol
- RIST protocol via librist
version 4.3:
- v360 filter
- Intel QSV-accelerated MJPEG decoding
- Intel QSV-accelerated VP9 decoding
- Support for TrueHD in mp4
- Support AMD AMF encoder on Linux (via Vulkan)
- IMM5 video decoder
- ZeroMQ protocol
- support Sipro ACELP.KELVIN decoding
- streamhash muxer
- sierpinski video source
- scroll video filter
- photosensitivity filter
- anlms filter
- arnndn filter
- bilateral filter
- maskedmin and maskedmax filters
- VDPAU VP9 hwaccel
- median filter
- QSV-accelerated VP9 encoding
- AV1 encoding support via librav1e
- AV1 frame merge bitstream filter
- AV1 Annex B demuxer
- axcorrelate filter
- mvdv decoder
- mvha decoder
- MPEG-H 3D Audio support in mp4
- thistogram filter
- freezeframes filter
- Argonaut Games ADPCM decoder
- Argonaut Games ASF demuxer
- xfade video filter
- xfade_opencl filter
- afirsrc audio filter source
- pad_opencl filter
- Simon & Schuster Interactive ADPCM decoder
- Real War KVAG demuxer
- CDToons video decoder
- siren audio decoder
- Rayman 2 ADPCM decoder
- Rayman 2 APM demuxer
- cas video filter
- High Voltage Software ADPCM decoder
- LEGO Racers ALP (.tun & .pcm) demuxer
- AMQP 0-9-1 protocol (RabbitMQ)
- Vulkan support
- avgblur_vulkan, overlay_vulkan, scale_vulkan and chromaber_vulkan filters
- ADPCM IMA MTF decoder
- FWSE demuxer
- DERF DPCM decoder
- DERF demuxer
- CRI HCA decoder
- CRI HCA demuxer
- overlay_cuda filter
- switch from AvxSynth to AviSynth+ on Linux
- mv30 decoder
- Expanded styling support for 3GPP Timed Text Subtitles (movtext)
- WebP parser
- tmedian filter
- maskedthreshold filter
- Support for muxing pcm and pgs in m2ts
- Cunning Developments ADPCM decoder
- asubboost filter
- Pro Pinball Series Soundbank demuxer
- pcm_rechunk bitstream filter
- scdet filter
- NotchLC decoder
- gradients source video filter
- MediaFoundation encoder wrapper
- untile filter
- Simon & Schuster Interactive ADPCM encoder
- PFM decoder
- dblur video filter
- Real War KVAG muxer
version 4.2:
- tpad filter
- AV1 decoding support through libdav1d
- dedot filter
- chromashift and rgbashift filters
- freezedetect filter
- truehd_core bitstream filter
- dhav demuxer
- PCM-DVD encoder
- GIF parser
- vividas demuxer
- hymt decoder
- anlmdn filter
- maskfun filter
- hcom demuxer and decoder
- ARBC decoder
- libaribb24 based ARIB STD-B24 caption support (profiles A and C)
- Support decoding of HEVC 4:4:4 content in nvdec and cuviddec
- removed libndi-newtek
- agm decoder
- KUX demuxer
- AV1 frame split bitstream filter
- lscr decoder
- lagfun filter
- asoftclip filter
- Support decoding of HEVC 4:4:4 content in vdpau
- colorhold filter
- xmedian filter
- asr filter
- showspatial multimedia filter
- VP4 video decoder
- IFV demuxer
- derain filter
- deesser filter
- mov muxer writes tracks with unspecified language instead of English by default
- add support for using clang to compile CUDA kernels
version 4.1.1:
- avformat/mov: validate chunk_count vs stsc_data
- avformat/mov: require tfhd to begin parsing trun
- avcodec/pgssubdec: Check for duplicate display segments
- avformat/rtsp: Check number of streams in sdp_parse_line()
- avformat/rtsp: Clear reply in every iteration in ff_rtsp_connect()
- avcodec/rasc: Move ff_get_buffer() after frame checks
- avcodec/rasc: Check uncompressed dlta size
- avcodec/fic: Check that there is input left in fic_decode_block()
- avcodec/ilbcdec: Fix undefined integer overflow lsf2poly()
- avcodec/ilbcdec: Fix integer overflow in construct_vector()
- avcodec/prosumer: Error out if decompress() stops reading data
- avcodec/tiff: Check for 12bit gray fax
- avutil/imgutils: Optimize memset_bytes() by using av_memcpy_backptr()
- avutil/mem: Optimize fill32() by unrolling and using 64bit
- configure: bump year
- avcodec/tests/rangecoder: initialize array to avoid valgrind warning
- avcodec/gdv: Optimize and factorize scaling loops
- avcodec/h264_slice: Fix integer overflow in implicit_weight_table()
- avcodec/exr: set layer_match in all branches
- avcodec/exr: Check for duplicate channel index
- avfilter/vf_tonemap_opencl: Make static tables const
- doc/indevs: fix upto typo
- avcodec/4xm: Fix returned error codes
- avformat/libopenmpt: Fix successfull typo
- avcodec/v4l2_m2m: fix cant typo
- avcodec/mjpegbdec: Fix some misplaced {} and spaces
- avformat/wvdec: detect and error out on WavPack DSD files
- avcodec/mips: Fix failed case: hevc-conformance-AMP_A_Samsung_* when enable msa
- avcodec/fic: Fail on invalid slice size/off
- avcodec/ilbcdec: fix integer overflow in energy
- postproc/postprocess_template: remove FF_REG_sp from clobber list
- postproc/postprocess_template: Avoid using %4 for the threshold compare
- libavformat/mov: Fix NULL-dereference read for some encrypted content.
- avcodec/rpza: Check that there is enough data for all the blocks
- avcodec/rpza: Move frame allocation to a later point
- avcodec/avcodec: Document the data type for AV_PKT_DATA_MPEGTS_STREAM_ID
- avformat/mpegts: Fix side data type for stream id
- tests/fate/filter-video: increase fuzz for fate-filter-refcmp-psnr-rgb
- avcodec/mjpegdec: Fix indention of ljpeg_decode_yuv_scan()
- lavf/id3v2: fail read_apic on EOF reading mimetype
- avcodec/rasc: Check that the number of moves is less than or equal the number of pixels
- avformat/nutenc: Document trailer index assert better
- lavf/mov: ensure only one tkhd per trak
- avcodec/clearvideo: Check remaining input bits in P macro block loop
- avcodec/rasc: Check input space before reading chunk
- avcodec/dxv: Check that there is enough data to decompress
- avcodec/ppc/hevcdsp: Fix build failures with powerpc-linux-gnu-gcc-4.8 with --disable-optimizations
- avcodec/msvideo1: Check for too small dimensions
- avcodec/wmv2dec: Skip I frame if its smaller than 1/8 of the minimal size
- avcodec/msmpeg4dec: Skip frame if its smaller than 1/8 of the minimal size
- avcodec/truemotion2rt: Fix rounding in input size check
- avcodec/diracdec: Check component quant
- avcodec/tiff: Limit filtering to decoded data
- avcodec/truemotion2: fix integer overflows in tm2_low_chroma()
- avcodec/pngdec: Check compression method
- fftools/ffmpeg: Repair reinit_filter feature
- avcodec/shorten: Fix integer overflow with offset
- avcodec/imm4: Use ff_set_dimensions()
- h264_redundant_pps: Fix logging context
- avfilter/af_asetnsamples: fix last frame props
- cbs_av1: Fix reading of overlong uvlc codes
- avcodec/cbs_av1: fix parsing delta_frame_id_minus1
- avfilter/vf_overlay: fix filtering with negative y
- avformat/movenc: get number of written bytes from bitstream writer
- avformat/movenc: fix size calculation in mov_write_eac3_tag()
- avfilter/vf_overlay: fix crash with negative y
- avcodec/mpeg_er: fix clearing chroma blocks for 422 and 444
- avfilter/af_afade: fix duration maximum
- avfilter/vf_fade: fix start/duration max value
- avcodec/cbs_av1: fix parsing signed integer values
- avcodec/cbs_av1: fix storage size for segmentation_params feature_value fields
- configure: Add missing xlib dependency for VAAPI X11 code
- avcodec/hevcdec: fix non-ref frame judgement
version 4.1:


@@ -1,7 +0,0 @@
{
    "drips": {
        "ethereum": {
            "ownedBy": "0x2f3900e7064eE63D30d749971265858612AA7139"
        }
    }
}


@@ -1,7 +1,4 @@
## Installing FFmpeg
0. If you like to include source plugins, merge them before configure
for example run tools/merge-all-source-plugins
#Installing FFmpeg:
1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.
@@ -18,11 +15,3 @@ NOTICE
------
- Non system dependencies (e.g. libx264, libvpx) are disabled by default.
NOTICE for Package Maintainers
------------------------------
- It is recommended to build FFmpeg twice, first with minimal external dependencies so
that 3rd party packages, which depend on FFmpegs libavutil/libavfilter/libavcodec/libavformat
can then be built. And last build FFmpeg with full dependencies (which may in turn depend on
some of these 3rd party packages). This avoids circular dependencies during build.


@@ -12,6 +12,7 @@ configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are:
- libpostproc
- optional x86 optimization in the files
- `libavcodec/x86/flac_dsp_gpl.asm`
- `libavcodec/x86/idct_mmx.c`
@@ -20,11 +21,10 @@ Specifically, the GPL parts of FFmpeg are:
- `compat/solaris/make_sunver.pl`
- `doc/t2h.pm`
- `doc/texi2pod.pl`
- `libswresample/tests/swresample.c`
- `libswresample/swresample-test.c`
- `tests/checkasm/*`
- `tests/tiny_ssim.c`
- the following filters in libavfilter:
- `signature_lookup.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
@@ -34,28 +34,27 @@ Specifically, the GPL parts of FFmpeg are:
- `vf_eq.c`
- `vf_find_rect.c`
- `vf_fspp.c`
- `vf_geq.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_interlace.c`
- `vf_kerndeint.c`
- `vf_lensfun.c` (GPL version 3 or later)
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_nnedi.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
- `vf_pp.c`
- `vf_pp7.c`
- `vf_pullup.c`
- `vf_repeatfields.c`
- `vf_sab.c`
- `vf_signature.c`
- `vf_smartblur.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vf_vaguedenoiser.c`
- `vsrc_mptestsrc.c`
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
@@ -81,47 +80,41 @@ affect the licensing of binaries resulting from the combination.
### Compatible libraries
The following libraries are under GPL version 2:
- avisynth
The following libraries are under GPL:
- frei0r
- libcdio
- libdavs2
- librubberband
- libvidstab
- libx264
- libx265
- libxavs
- libxavs2
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
The following libraries are under LGPL version 3:
- gmp
- libaribb24
- liblensfun
When combining them with FFmpeg, use the configure option `--enable-version3` to
upgrade FFmpeg to the LGPL v3.
The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License
2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with
The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
The smbclient library is under the GPL v3, to combine it with FFmpeg,
the options `--enable-gpl` and `--enable-version3` have to be passed to
configure to upgrade FFmpeg to the GPL v3.
### Incompatible libraries
There are certain libraries you can combine with FFmpeg whose licenses are not
compatible with the GPL and/or the LGPL. If you wish to enable these
libraries, even in circumstances that their license may be incompatible, pass
`--enable-nonfree` to configure. This will cause the resulting binary to be
`--enable-nonfree` to configure. But note that if you enable any of these
libraries the resulting binary will be under a complex license mix that is
more restrictive than the LGPL and that may result in additional obligations.
It is possible that these restrictions cause the resulting binary to be
unredistributable.
The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are
incompatible with the GPLv2 and v3. To the best of our knowledge, they are
compatible with the LGPL.
The NVENC library, while its header file is licensed under the compatible MIT
license, requires a proprietary binary blob at run time, and is deemed to be
incompatible with the GPL. We are not certain if it is compatible with the
LGPL, but we require `--enable-nonfree` even with LGPL configurations in case
it is not.


@@ -6,38 +6,28 @@ FFmpeg code.
Please try to keep entries where you are the maintainer up to date!
*Status*, one of the following:
[X] Old code. Something tagged obsolete generally means it has been replaced by a better system and you should be using that.
[0] No current maintainer [but maybe you could take the role as you write your new code].
[1] It has a maintainer but they don't have time to do much other than throw the odd patch in.
[2] Someone actually looks after it.
Names in () mean that the maintainer currently has no time to maintain the code.
A (CC <address>) after the name means that the maintainer prefers to be CC-ed on
patches and related discussions.
(L <address>) *Mailing list* that is relevant to this area
(W <address>) *Web-page* with status/info
(B <address>) URI for where to file *bugs*. A web-page with detailed bug
filing info, a direct bug tracker link, or a mailto: URI.
(P <address>) *Subsystem Profile* document for more details submitting
patches to the given subsystem. This is either an in-tree file,
or a URI. See Documentation/maintainer/maintainer-entry-profile.rst
for details.
(T <address>) *SCM* tree type and location.
Type is one of: git, hg, quilt, stgit, topgit
Project Leader
==============
final design decisions
Applications
============
ffmpeg:
ffmpeg.c Michael Niedermayer, Anton Khirnov
ffmpeg.c Michael Niedermayer
ffplay:
ffplay.c [2] Marton Balint
ffplay.c Marton Balint
ffprobe:
ffprobe.c [2] Stefano Sabatini
ffprobe.c Stefano Sabatini
Commandline utility code:
cmdutils.c, cmdutils.h Michael Niedermayer
@@ -45,32 +35,29 @@ Commandline utility code:
QuickTime faststart:
tools/qt-faststart.c Baptiste Coudurier
Execution Graph Printing
fftools/graph, fftools/resources [2] softworkz
Miscellaneous Areas
===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Gyan Doshi
project server day to day operations (L: root@ffmpeg.org) Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov, Timo Rothenpieler
project server emergencies (L: root@ffmpeg.org) Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov, Timo Rothenpieler
presets [0]
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan, Gyan Doshi
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov
presets Robert Swain
metadata subsystem Aurelien Jacobs
release management Michael Niedermayer
API tests [0]
samples-request [2] Thilo Borgmann, James Almer, Ben Littler
API tests Ludmila Glinskih
Communication
=============
website (T: https://git.ffmpeg.org/ffmpeg-web) Deby Barbara Lepage
fate.ffmpeg.org (L: fate-admin@ffmpeg.org) (W: https://fate.ffmpeg.org) (P: https://ffmpeg.org/fate.html) (S: https://git.ffmpeg.org/fateserver) Timo Rothenpieler
Trac bug tracker (W: https://trac.ffmpeg.org) Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos
Patchwork [2] (W: https://patchwork.ffmpeg.org) Andriy Gelman
mailing lists (W: https://ffmpeg.org/contact.html#MailingLists) Baptiste Coudurier
Twitter Reynaldo H. Verdejo Pinochet
website Deby Barbara Lepage
fate.ffmpeg.org Timothy Gu
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan, Reynaldo H. Verdejo Pinochet
Launchpad Timothy Gu
ffmpeg-security [2] (L: ffmpeg-security@ffmpeg.org) (W: https://ffmpeg.org/security.html) Michael Niedermayer, Reimar Doeffinger
ffmpeg-security Andreas Cadhalpun, Carl Eugen Hoyos, Clément Bœsch, Michael Niedermayer, Reimar Doeffinger, Rodger Combs, wm4
libavutil
@@ -85,26 +72,22 @@ Other:
aes_ctr.c, aes_ctr.h Eran Kornblau
bprint Nicolas George
bswap.h
csp.c, csp.h Leo Izen, Ronald S. Bultje
des Reimar Doeffinger
dynarray.h Nicolas George
eval.c, eval.h [2] Michael Niedermayer
eval.c, eval.h Michael Niedermayer
float_dsp Loren Merritt
hash Reimar Doeffinger
hwcontext_cuda* Timo Rothenpieler
hwcontext_d3d12va* Wu Jianhua
hwcontext_vulkan* [2] Lynne
intfloat* Michael Niedermayer
integer.c, integer.h Michael Niedermayer
lzo Reimar Doeffinger
mathematics.c, mathematics.h [2] Michael Niedermayer
mem.c, mem.h [2] Michael Niedermayer
mathematics.c, mathematics.h Michael Niedermayer
mem.c, mem.h Michael Niedermayer
opencl.c, opencl.h Wei Gao
opt.c, opt.h Michael Niedermayer
rational.c, rational.h [2] Michael Niedermayer
rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer
tx* [2] Lynne
libavcodec
@@ -126,18 +109,22 @@ Generic Parts:
DSP utilities:
dsputils.c, dsputils.h Michael Niedermayer
entropy coding:
rangecoder.c, rangecoder.h [2] Michael Niedermayer
rangecoder.c, rangecoder.h Michael Niedermayer
lzw.* Michael Niedermayer
floating point AAN DCT:
faandct.c, faandct.h [2] Michael Niedermayer
faandct.c, faandct.h Michael Niedermayer
Non-power-of-two MDCT:
mdct15.c, mdct15.h Rostislav Pehlivanov
Golomb coding:
golomb.c, golomb.h [2] Michael Niedermayer
golomb.c, golomb.h Michael Niedermayer
motion estimation:
motion* Michael Niedermayer
rate control:
ratecontrol.c [2] Michael Niedermayer
ratecontrol.c Michael Niedermayer
simple IDCT:
simple_idct.c, simple_idct.h [2] Michael Niedermayer
simple_idct.c, simple_idct.h Michael Niedermayer
postprocessing:
libpostproc/* Michael Niedermayer
table generation:
tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT:
@@ -145,33 +132,31 @@ Generic Parts:
Text Subtitles Clément Bœsch
Codecs:
4xm.c [2] Michael Niedermayer
4xm.c Michael Niedermayer
8bps.c Roberto Togni
8svx.c Jaikrishnan Menon
aacenc*, aaccoder.c Rostislav Pehlivanov
adpcm.c Zane van Iperen
alacenc.c Jaikrishnan Menon
alsdec.c Thilo Borgmann, Umair Khan
amfdec*,amfenc* [2] Dmitrii Ovchinnikov, Araz Iusubov
aptx.c Aurelien Jacobs
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3plus* Maxim Poliakovski
audiotoolbox* rcombs
avs2* Huiwen Ren
audiotoolbox* Rodger Combs
bgmc.c, bgmc.h Thilo Borgmann
binkaudio.c Peter Ross
cavs* Stefan Gehrer
cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak
cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB
ccaption_dec.c Anshul Maheshwari, Aman Gupta
cljr Alex Beregszaszi
cpia.c Stephan Hilb
crystalhd.c Philip Langdale
cscd.c Reimar Doeffinger
cuviddec.c Timo Rothenpieler
dca* foo86
dfpwm* Jack Bruienne
dirac* Rostislav Pehlivanov
dnxhd* Baptiste Coudurier
dolby_e* foo86
@@ -179,10 +164,11 @@ Codecs:
dss_sp.c Oleksij Rempel
dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari
dxv.*, dxvenc.* Emma Worley
eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann
ffv1* [2] Michael Niedermayer
exr.c Martin Vignali
ffv1* Michael Niedermayer
ffwavesynth.c Nicolas George
fifo.c Jan Sebechlebsky
flicvideo.c Mike Melanson
@@ -193,29 +179,24 @@ Codecs:
h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer
hap* Tom Butterworth
hevc/* Anton Khirnov
huffyuv* Michael Niedermayer
idcinvideo.c Mike Melanson
interplayvideo.c Mike Melanson
jni*, ffjni* Matthieu Bouron
jpeg2000* Nicolas Bertrand
jpegxl* Leo Izen
jvdec.c Peter Ross
lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George
libcodec2.c Tomas Härdin
libdirac* David Conrad
libdavs2.c Huiwen Ren
libjxl*.c, libjxl.h Leo Izen
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenh264enc.c Martin Storsjo, Linjie Fu
libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw
libtheoraenc.c [0]
libtheoraenc.c David Conrad
libvorbis.c David Conrad
libvpx* James Zern
libxavs.c Stefan Gehrer
libxavs2.c Huiwen Ren
libzvbi-teletextdec.c Marton Balint
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
@@ -228,18 +209,17 @@ Codecs:
mqc* Nicolas Bertrand
msmpeg4.c, msmpeg4data.h Michael Niedermayer
msrle.c Mike Melanson
msrleenc.c Tomas Härdin
msvideo1.c Mike Melanson
nuv.c Reimar Doeffinger
nvdec*, nvenc* Timo Rothenpieler
omx.c Martin Storsjo, Aman Gupta
opus* Rostislav Pehlivanov
paf.* Paul B Mahol
pcx.c Ivo van Poorten
pgssubdec.c Reimar Doeffinger
ptx.c Ivo van Poorten
qcelp* Reynaldo H. Verdejo Pinochet
qdm2.c, qdm2data.h Roberto Togni
qsv* Mark Thompson, Zhong Li, Haihao Xiang
qsv* Mark Thompson, Zhong Li
qtrle.c Mike Melanson
ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni
resample2.c Michael Niedermayer
@@ -247,21 +227,25 @@ Codecs:
rpza.c Roberto Togni
rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer
sanm.c Manuel Lauss
s3tc* Ivo van Poorten
smc.c Mike Melanson
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
speedhq.c Steinar H. Gunderson
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
svq3.c Michael Niedermayer
tak* Paul B Mahol
truemotion1* Mike Melanson
tta.c Alex Beregszaszi, Jaikrishnan Menon
ttaenc.c Paul B Mahol
txd.c Ivo van Poorten
v4l2_* Jorge Ramirez-Ortiz
vc2* Rostislav Pehlivanov
vcr1.c Michael Niedermayer
videotoolboxenc.c Rick Kern, Aman Gupta
vima.c Paul B Mahol
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
vp3* Mike Melanson
@@ -270,24 +254,24 @@ Codecs:
vp8 David Conrad, Ronald Bultje
vp9 Ronald Bultje
vqavideo.c Mike Melanson
vvc [2] Nuo Mi, Wu Jianhua, Frank Plowman
wmaprodec.c Sascha Sommer
wmavoice.c Ronald S. Bultje
wmv2.c Michael Niedermayer
xan.c Mike Melanson
xbm* Paul B Mahol
xface Stefano Sabatini
xvmc.c Ivan Kalvachev
xwd* Paul B Mahol
Hardware acceleration:
amf* [2] Dmitrii Ovchinnikov, Araz Iusubov
crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar, Steve Lhomme
d3d11va* Steve Lhomme
d3d12va* Wu Jianhua
d3d12va_encode* Tong Wu
mediacodec* Matthieu Bouron, Aman Gupta, Zhao Zhili
vaapi* Haihao Xiang
vaapi_encode* Mark Thompson, Haihao Xiang
mediacodec* Matthieu Bouron, Aman Gupta
vaapi* Gwenole Beauchesne
vaapi_encode* Mark Thompson
vdpau* Philip Langdale, Carl Eugen Hoyos
videotoolbox* Rick Kern, Aman Gupta, Zhao Zhili
videotoolbox* Rick Kern, Aman Gupta
libavdevice
@@ -322,40 +306,65 @@ Generic parts:
motion_estimation.c Davinder Singh
Filters:
f_drawgraph.c Paul B Mahol
af_adelay.c Paul B Mahol
af_aecho.c Paul B Mahol
af_afade.c Paul B Mahol
af_amerge.c Nicolas George
af_aphaser.c Paul B Mahol
af_aresample.c Michael Niedermayer
af_astats.c Paul B Mahol
af_atempo.c Pavel Koshevoy
af_biquads.c Paul B Mahol
af_chorus.c Paul B Mahol
af_compand.c Paul B Mahol
af_firequalizer.c Muhammad Faiz
af_hdcd.c Burt P.
af_ladspa.c Paul B Mahol
af_loudnorm.c Kyle Swanson
af_pan.c Nicolas George
af_sidechaincompress.c Paul B Mahol
af_silenceremove.c Paul B Mahol
avf_aphasemeter.c Paul B Mahol
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_bwdif Thomas Mundt (CC <thomas.mundt@hr.de>)
vf_chromakey.c Timo Rothenpieler
vf_colorchannelmixer.c Paul B Mahol
vf_colorconstancy.c Mina Sami (CC <minas.gorgy@gmail.com>)
vf_colorbalance.c Paul B Mahol
vf_colorkey.c Timo Rothenpieler
vf_colorlevels.c Paul B Mahol
vf_coreimage.m Thilo Borgmann
vf_deband.c Paul B Mahol
vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <jdelvare@suse.com>)
vf_drawbox.c/drawgrid Andrey Utkin
vf_fsync.c Thilo Borgmann
vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_(t)interlace Thomas Mundt (CC <thomas.mundt@hr.de>)
vf_lenscorrection.c Daniel Oberhoff
vf_libplacebo.c Niklas Haas
vf_mergeplanes.c Paul B Mahol
vf_mestimate.c Davinder Singh
vf_minterpolate.c Davinder Singh
vf_neighbor.c Paul B Mahol
vf_psnr.c Paul B Mahol
vf_random.c Paul B Mahol
vf_readvitc.c Tobias Rapp (CC t.rapp at noa-archive dot com)
vf_scale.c [2] Michael Niedermayer
vf_tonemap_opencl.c Ruiling Song
vf_yadif.c [2] Michael Niedermayer
vf_xfade_vulkan.c [2] Marvin Scholz (CC <epirat07@gmail.com>)
vf_scale.c Michael Niedermayer
vf_separatefields.c Paul B Mahol
vf_ssim.c Paul B Mahol
vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol
vf_yadif.c Michael Niedermayer
vf_zoompan.c Paul B Mahol
Sources:
vsrc_mandelbrot.c [2] Michael Niedermayer
dnn Yejun Guo
vsrc_mandelbrot.c Michael Niedermayer
libavformat
===========
@@ -371,35 +380,33 @@ Generic parts:
Muxers/Demuxers:
4xm.c Mike Melanson
aadec.c Vesselin Bontchev (vesselin.bontchev at yandex dot com)
adtsenc.c [0]
adtsenc.c Robert Swain
afc.c Paul B Mahol
aiffdec.c Baptiste Coudurier, Matthieu Bouron
aiffenc.c Baptiste Coudurier, Matthieu Bouron
alp.c Zane van Iperen
amvenc.c Zane van Iperen
apm.c Zane van Iperen
apngdec.c Benoit Fouet
argo_asf.c Zane van Iperen
argo_brp.c Zane van Iperen
argo_cvg.c Zane van Iperen
ass* Aurelien Jacobs
astdec.c Paul B Mahol
astenc.c James Almer
avi* Michael Niedermayer
avisynth.c Stephen Hutchinson
avr.c Paul B Mahol
bink.c Peter Ross
boadec.c Michael Niedermayer
brstm.c Paul B Mahol
caf* Peter Ross
cdxl.c Paul B Mahol
codec2.c Tomas Härdin
crc.c Michael Niedermayer
dashdec.c Steven Liu
dashenc.c Karthick Jeyapal
daud.c Reimar Doeffinger
dfpwmdec.c Jack Bruienne
dss.c Oleksij Rempel
dtsdec.c foo86
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
dvdvideodec.c [2] Marth64
electronicarts.c Peter Ross
evc* Samsung (Dawid Kozinski)
epafdec.c Paul B Mahol
ffm* Baptiste Coudurier
flic.c Mike Melanson
flvdec.c Michael Niedermayer
@@ -407,26 +414,25 @@ Muxers/Demuxers:
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hlsenc.c Christian Suloway, Steven Liu
iamf* [2] James Almer
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
imf* Pierre-Anthony Lemieux
img2*.c Michael Niedermayer
ipmovie.c Mike Melanson
ircam* Paul B Mahol
iss.c Stefan Gehrer
jpegxl* Leo Izen
jvdec.c Peter Ross
kvag.c Zane van Iperen
libmodplug.c Clément Bœsch
libopenmpt.c Josh de Kock
lmlm4.c Ivo van Poorten
lvfdec.c Paul B Mahol
lxfdec.c Tomas Härdin
matroska.c Andreas Rheinhardt
matroskadec.c Andreas Rheinhardt
matroskaenc.c Andreas Rheinhardt
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs
mm.c Peter Ross
mov.c Baptiste Coudurier
@@ -438,21 +444,22 @@ Muxers/Demuxers:
mpegtsenc.c Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier, Tomas Härdin
mxf* Baptiste Coudurier
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
oggparse*.c David Conrad
oggparsedaala* Rostislav Pehlivanov
oma.c Maxim Poliakovski
pp_bnk.c Zane van Iperen
paf.c Paul B Mahol
psxstr.c Mike Melanson
pva.c Ivo van Poorten
pvfdec.c Paul B Mahol
r3d.c Baptiste Coudurier
raw.c Michael Niedermayer
rcwtdec.c [2] Marth64
rcwtenc.c [2] Marth64
rdt.c Ronald S. Bultje
rl2.c Sascha Sommer
rmdec.c, rmenc.c Ronald S. Bultje
@@ -471,10 +478,11 @@ Muxers/Demuxers:
sdp.c Martin Storsjo
segafilm.c Mike Melanson
segment.c Stefano Sabatini
smush.c Manuel Lauss
smjpeg* Paul B Mahol
spdif* Anssi Hannula
srtdec.c Aurelien Jacobs
swf.c Baptiste Coudurier
takdec.c Paul B Mahol
tta.c Alex Beregszaszi
txd.c Ivo van Poorten
voc.c Aurelien Jacobs
@@ -484,49 +492,44 @@ Muxers/Demuxers:
webvtt* Matthew J Heaney
westwood.c Mike Melanson
wtv.c Peter Ross
wvenc.c Paul B Mahol
Protocols:
async.c Zhang Rui
bluray.c Petri Hintukainen
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libsrt.c Zhao Zhili
libssh.c Lukasz Marek
libzmq.c Andriy Gelman
mms*.c Ronald S. Bultje
udp.c Luca Abeni
icecast.c [2] Marvin Scholz (CC <epirat07@gmail.com>)
icecast.c Marvin Scholz
libswresample
=============
Generic parts:
audioconvert.c [2] Michael Niedermayer
dither.c [2] Michael Niedermayer
rematrix*.c [2] Michael Niedermayer
swresample*.c [2] Michael Niedermayer
audioconvert.c Michael Niedermayer
dither.c Michael Niedermayer
rematrix*.c Michael Niedermayer
swresample*.c Michael Niedermayer
Resamplers:
resample*.c [2] Michael Niedermayer
resample*.c Michael Niedermayer
soxr_resample.c Rob Sykes
Operating systems / CPU architectures
=====================================
*BSD [2] Brad Smith
Alpha [0]
Alpha Falk Hueffner
MIPS Manojkumar Bhosale, Shiyou Yin
LoongArch [2] Shiyou Yin
Darwin (macOS, iOS) [2] Marvin Scholz
Mac OS X / PowerPC [0]
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC [2] Sean McGovern (CC <gseanmcg@gmail.com>), Lauri Kasanen
RISC-V [2] Rémi Denis-Courmont
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Hendrik Leppkes
Windows MSVC Matthew Oliver, Hendrik Leppkes
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik
OS/2 KO Myung-Hun
@@ -542,7 +545,6 @@ Benjamin Larsson
Bobby Bingham
Daniel Verkamp
Derek Buitenhuis
Fei Wang
Ganesh Ajjanagadde
Henrik Gramner
Ivan Uskov
@@ -550,10 +552,8 @@ James Darnley
Jan Ekström
Joakim Plate
Jun Zhao
Kacper Michajłow
Kieran Kunhya
Kirill Gavrilov
Limin Wang
Martin Storsjö
Panagiotis Issaris
Pedro Arthur
@@ -566,12 +566,10 @@ wm4
Releases
========
7.0 Michael Niedermayer
6.1 Michael Niedermayer
5.1 Michael Niedermayer
4.4 Michael Niedermayer
3.4 Michael Niedermayer
2.8 Michael Niedermayer
2.7 Michael Niedermayer
2.6 Michael Niedermayer
2.5 Michael Niedermayer
If you want to maintain an older release, please contact us
@@ -592,39 +590,28 @@ Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Frank Plowman 34E2 48D6 B7DF 4769 70C7 3304 03A8 4C6A 098F 2C6B
Ganesh Ajjanagadde C96A 848E 97C3 CEA2 AB72 5CE4 45F9 6A2D 3C36 FB1B
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Haihao Xiang (haihao) 1F0C 31E8 B4FE F7A4 4DC1 DC99 E0F5 76D4 76FC 437F
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
James Almer 7751 2E8C FD94 A169 57E6 9A7A 1463 01AD 7376 59E0
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Leo Izen (Traneptora) B6FD 3CFC 7ACF 83FC 9137 6945 5A71 C331 FD2F A19A
Leo Izen (Traneptora) 1D83 0A0B CE46 709E 203B 26FC 764E 48EA 4822 1833
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lynne FE50 139C 6805 72CA FD52 1F8D A2FE A5F0 3F03 4464
Lou Logan (llogan) 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
DD1E C9E8 DE08 5C62 9B3E 1846 B18E 8928 B394 8D64
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Niklas Haas (haasn) 1DDB 8076 B14D 5B48 32FC 99D9 EB52 DA9C 02BA 6FB4
Nikolay Aleksandrov 8978 1D8C FB71 588E 4B27 EAA8 C4F0 B5FC E011 13B1
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E
Pierre-Anthony Lemieux (pal) F4B3 9492 E6F2 E4AF AEC8 46CB 698F A1F0 F8D4 EED4
Ramiro Polla 7859 C65B 751B 1179 792E DAE8 8E95 8B2F 9B6C 5700
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Sean McGovern (Sean_McG) 6D03 BC60 3A33 E615 6E2E 06AD 8C06 8175 6F59 8684
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Steinar H. Gunderson C2E9 004F F028 C18E 4EAD DB83 7F61 7561 7797 8F76
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Thilo Borgmann (thilo) CE1D B7F4 4D20 FC3A DD9F FE5A 257C 5B8F 1D20 B92F
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin (thardin) A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9
Zane van Iperen (zane) 61AE D40F 368B 6F26 9DAE 3892 6861 6B2D 8AC4 DCC5


@@ -13,26 +13,18 @@ vpath %.v $(SRC_PATH)
vpath %.texi $(SRC_PATH)
vpath %.cu $(SRC_PATH)
vpath %.ptx $(SRC_PATH)
vpath %.metal $(SRC_PATH)
vpath %/fate_config.sh.template $(SRC_PATH)
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64 audiomatch
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
ALLFFLIBS = \
avcodec \
avdevice \
avfilter \
avformat \
avutil \
swscale \
swresample \
# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
@@ -53,64 +45,34 @@ FF_DEP_LIBS := $(DEP_LIBS)
FF_STATIC_DEP_LIBS := $(STATIC_DEP_LIBS)
$(TOOLS): %$(EXESUF): %.o
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(filter-out $(FF_DEP_LIBS), $^) $(EXTRALIBS-$(*F)) $(EXTRALIBS) $(ELIBS))
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(EXTRALIBS-$(*F)) $(EXTRALIBS) $(ELIBS)
target_dec_%_fuzzer$(EXESUF): target_dec_%_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
target_enc_%_fuzzer$(EXESUF): target_enc_%_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/target_bsf_%_fuzzer$(EXESUF): tools/target_bsf_%_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
target_dem_%_fuzzer$(EXESUF): target_dem_%_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/target_dem_fuzzer$(EXESUF): tools/target_dem_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/target_io_dem_fuzzer$(EXESUF): tools/target_io_dem_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/target_sws_fuzzer$(EXESUF): tools/target_sws_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/target_swr_fuzzer$(EXESUF): tools/target_swr_fuzzer.o $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH))
tools/enum_options$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/enum_options$(EXESUF): $(FF_DEP_LIBS)
tools/enc_recon_frame_test$(EXESUF): $(FF_DEP_LIBS)
tools/enc_recon_frame_test$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/scale_slice_test$(EXESUF): $(FF_DEP_LIBS)
tools/scale_slice_test$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/sofa2wavs$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)
tools/target_dec_%_fuzzer$(EXESUF): $(FF_DEP_LIBS)
tools/target_dem_%_fuzzer$(EXESUF): $(FF_DEP_LIBS)
CONFIGURABLE_COMPONENTS = \
$(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c)) \
$(SRC_PATH)/libavcodec/bitstream_filters.c \
$(SRC_PATH)/libavcodec/hwaccels.h \
$(SRC_PATH)/libavcodec/parsers.c \
$(SRC_PATH)/libavformat/protocols.c \
config_components.h: ffbuild/.config
config.h: ffbuild/.config
ffbuild/.config: $(CONFIGURABLE_COMPONENTS)
@-tput bold 2>/dev/null
@-printf '\nWARNING: $(?) newer than config_components.h, rerun configure\n\n'
@-printf '\nWARNING: $(?) newer than config.h, rerun configure\n\n'
@-tput sgr0 2>/dev/null
SUBDIR_VARS := CLEANFILES FFLIBS HOSTPROGS TESTPROGS TOOLS \
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS VSX-OBJS X86ASM-OBJS \
ALTIVEC-OBJS VSX-OBJS MMX-OBJS X86ASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSP-OBJS MSA-OBJS \
MMI-OBJS LSX-OBJS LASX-OBJS RV-OBJS RVV-OBJS RVVB-OBJS \
OBJS SHLIBOBJS STLIBOBJS HOSTOBJS TESTOBJS SIMD128-OBJS
MMI-OBJS OBJS SLIBOBJS HOSTOBJS TESTOBJS
define RESET
$(1) :=
@@ -132,33 +94,27 @@ include $(SRC_PATH)/fftools/Makefile
include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/doc/examples/Makefile
$(ALLFFLIBS:%=lib%/version.o): libavutil/ffversion.h
libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h
$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
ifeq ($(STRIPTYPE),direct)
$(STRIP) -o $@ $<
else
$(RM) $@
$(CP) $< $@
$(STRIP) $@
endif
%$(PROGSSUF)_g$(EXESUF): $(FF_DEP_LIBS)
$(call LINK,$(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS))
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
VERSION_SH = $(SRC_PATH)/ffbuild/version.sh
ifeq ($(VERSION_TRACKING),yes)
GIT_LOG = $(SRC_PATH)/.git/logs/HEAD
endif
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) ffbuild/config.mak
.version: M=@
ifneq ($(VERSION_TRACKING),yes)
libavutil/ffversion.h .version: REVISION=unknown
endif
libavutil/ffversion.h .version:
$(M)revision=$(REVISION) $(VERSION_SH) $(SRC_PATH) libavutil/ffversion.h $(EXTRA_VERSION)
$(M)$(VERSION_SH) $(SRC_PATH) libavutil/ffversion.h $(EXTRA_VERSION)
$(Q)touch .version
# force version.sh to run whenever version might have changed
@@ -179,17 +135,16 @@ uninstall-data:
clean::
$(RM) $(CLEANSUFFIXES)
$(RM) $(addprefix compat/,$(CLEANSUFFIXES)) $(addprefix compat/*/,$(CLEANSUFFIXES)) $(addprefix compat/*/*/,$(CLEANSUFFIXES))
$(RM) $(addprefix compat/,$(CLEANSUFFIXES)) $(addprefix compat/*/,$(CLEANSUFFIXES))
$(RM) -r coverage-html
$(RM) -rf coverage.info coverage.info.in lcov
distclean:: clean
$(RM) .version config.asm config.h config_components.* mapfile \
$(RM) .version avversion.h config.asm config.h mapfile \
ffbuild/.config ffbuild/config.* libavutil/avconfig.h \
version.h libavutil/ffversion.h libavcodec/codec_names.h \
libavcodec/bsf_list.c libavformat/protocol_list.c \
libavcodec/codec_list.c libavcodec/parser_list.c \
libavfilter/filter_list.c libavdevice/indev_list.c libavdevice/outdev_list.c \
libavformat/muxer_list.c libavformat/demuxer_list.c
ifeq ($(SRC_LINK),src)
$(RM) src
@@ -204,7 +159,7 @@ check: all alltools examples testprogs fate
include $(SRC_PATH)/tests/Makefile
$(sort $(OUTDIRS)):
$(sort $(OBJDIRS)):
$(Q)mkdir -p $@
# Dummy rule to stop make trying to rebuild removed or renamed headers


@@ -9,7 +9,7 @@ such as audio, video, subtitles and related metadata.
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides means to alter decoded audio and video through a directed graph of connected filters.
* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
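To make the division of labour above concrete, here is a minimal demuxing sketch (an illustration added for this overview, not part of the README): it uses libavformat to open a container and pull packets; "input.mkv" is a placeholder path and error handling is reduced to the bare minimum.

#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt = NULL;
    AVPacket *pkt = av_packet_alloc();

    /* "input.mkv" is a placeholder path used only for illustration */
    if (!pkt || avformat_open_input(&fmt, "input.mkv", NULL, NULL) < 0) {
        av_packet_free(&pkt);
        return 1;
    }

    /* read demuxed packets; a real program would hand them to a
     * libavcodec decoder and possibly a libavfilter graph */
    while (av_read_frame(fmt, pkt) >= 0)
        av_packet_unref(pkt);

    av_packet_free(&pkt);
    avformat_close_input(&fmt);
    return 0;
}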


@@ -1 +1 @@
8.0.git
4.1.2

RELEASE_NOTES (new file, 15 lines)

@@ -0,0 +1,15 @@
┌─────────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 4.1 "al-Khwarizmi" │
└─────────────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 4.1 "al-Khwarizmi", about 6
months after the release of FFmpeg 4.0.
A complete Changelog is available at the root of the project, and the
complete Git history on https://git.ffmpeg.org/gitweb/ffmpeg.git
We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.


@@ -1,114 +0,0 @@
/*
* Android Binder handler
*
* Copyright (c) 2025 Dmitrii Okunev
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#if defined(__ANDROID__)
#include <dlfcn.h>
#include <stdint.h>
#include <stdlib.h>
#include "libavutil/log.h"
#include "binder.h"
#define THREAD_POOL_SIZE 1
static void *dlopen_libbinder_ndk(void)
{
/*
* libbinder_ndk.so often does not contain the functions we need, so we make
* this dependency optional and use dlopen/dlsym instead of linking against it.
*
* See also: https://source.android.com/docs/core/architecture/aidl/aidl-backends
*/
void *h = dlopen("libbinder_ndk.so", RTLD_NOW | RTLD_LOCAL);
if (h != NULL)
return h;
av_log(NULL, AV_LOG_WARNING,
"android/binder: unable to load libbinder_ndk.so: '%s'; skipping binder threadpool init (MediaCodec likely won't work)\n",
dlerror());
return NULL;
}
static void android_binder_threadpool_init(void)
{
typedef int (*set_thread_pool_max_fn)(uint32_t);
typedef void (*start_thread_pool_fn)(void);
set_thread_pool_max_fn set_thread_pool_max = NULL;
start_thread_pool_fn start_thread_pool = NULL;
void *h = dlopen_libbinder_ndk();
if (h == NULL)
return;
unsigned thread_pool_size = THREAD_POOL_SIZE;
set_thread_pool_max =
(set_thread_pool_max_fn) dlsym(h,
"ABinderProcess_setThreadPoolMaxThreadCount");
start_thread_pool =
(start_thread_pool_fn) dlsym(h, "ABinderProcess_startThreadPool");
if (start_thread_pool == NULL) {
av_log(NULL, AV_LOG_WARNING,
"android/binder: ABinderProcess_startThreadPool not found; skipping threadpool init (MediaCodec likely won't work)\n");
return;
}
if (set_thread_pool_max != NULL) {
int ok = set_thread_pool_max(thread_pool_size);
av_log(NULL, AV_LOG_DEBUG,
"android/binder: ABinderProcess_setThreadPoolMaxThreadCount(%u) => %s\n",
thread_pool_size, ok ? "ok" : "fail");
} else {
av_log(NULL, AV_LOG_DEBUG,
"android/binder: ABinderProcess_setThreadPoolMaxThreadCount is unavailable; using the library default\n");
}
start_thread_pool();
av_log(NULL, AV_LOG_DEBUG,
"android/binder: ABinderProcess_startThreadPool() called\n");
}
void android_binder_threadpool_init_if_required(void)
{
#if __ANDROID_API__ >= 24
if (android_get_device_api_level() < 35) {
// the issue with the thread pool was introduced in Android 15 (API 35)
av_log(NULL, AV_LOG_DEBUG,
"android/binder: API<35, thus no need to initialize a thread pool\n");
return;
}
android_binder_threadpool_init();
#else
// android_get_device_api_level() was introduced in API 24, so we cannot use it
// to detect the API level when building for API < 24. For simplicity, we just
// assume that the libbinder_ndk.so on the system running this code has API level < 35.
av_log(NULL, AV_LOG_DEBUG,
"android/binder: is built with API<24, assuming this is not Android 15+\n");
#endif
}
#endif /* __ANDROID__ */


@@ -1,31 +0,0 @@
/*
* Android Binder handler
*
* Copyright (c) 2025 Dmitrii Okunev
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_ANDROID_BINDER_H
#define COMPAT_ANDROID_BINDER_H
/**
* Initialize Android Binder thread pool.
*/
void android_binder_threadpool_init_if_required(void);
#endif // COMPAT_ANDROID_BINDER_H
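For orientation only (an illustration, not something the patch itself adds): a caller on Android would typically invoke the function above before doing any MediaCodec work. The decoder name and helper below are example assumptions, not required by this header.

#include "compat/android/binder.h"
#include <libavcodec/avcodec.h>

/* Illustrative call site: ensure the binder thread pool exists before any
 * MediaCodec use on Android 15+, then look up a MediaCodec decoder. */
static const AVCodec *prepare_mediacodec_h264(void)
{
    android_binder_threadpool_init_if_required();
    return avcodec_find_decoder_by_name("h264_mediacodec");
}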


@@ -0,0 +1,173 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* based on vlc_atomic.h from VLC
* Copyright (C) 2010 Rémi Denis-Courmont
*/
#ifndef COMPAT_ATOMICS_GCC_STDATOMIC_H
#define COMPAT_ATOMICS_GCC_STDATOMIC_H
#include <stddef.h>
#include <stdint.h>
#define ATOMIC_FLAG_INIT 0
#define ATOMIC_VAR_INIT(value) (value)
#define atomic_init(obj, value) \
do { \
*(obj) = (value); \
} while(0)
#define kill_dependency(y) ((void)0)
#define atomic_thread_fence(order) \
__sync_synchronize()
#define atomic_signal_fence(order) \
((void)0)
#define atomic_is_lock_free(obj) 0
typedef _Bool atomic_flag;
typedef _Bool atomic_bool;
typedef char atomic_char;
typedef signed char atomic_schar;
typedef unsigned char atomic_uchar;
typedef short atomic_short;
typedef unsigned short atomic_ushort;
typedef int atomic_int;
typedef unsigned int atomic_uint;
typedef long atomic_long;
typedef unsigned long atomic_ulong;
typedef long long atomic_llong;
typedef unsigned long long atomic_ullong;
typedef wchar_t atomic_wchar_t;
typedef int_least8_t atomic_int_least8_t;
typedef uint_least8_t atomic_uint_least8_t;
typedef int_least16_t atomic_int_least16_t;
typedef uint_least16_t atomic_uint_least16_t;
typedef int_least32_t atomic_int_least32_t;
typedef uint_least32_t atomic_uint_least32_t;
typedef int_least64_t atomic_int_least64_t;
typedef uint_least64_t atomic_uint_least64_t;
typedef int_fast8_t atomic_int_fast8_t;
typedef uint_fast8_t atomic_uint_fast8_t;
typedef int_fast16_t atomic_int_fast16_t;
typedef uint_fast16_t atomic_uint_fast16_t;
typedef int_fast32_t atomic_int_fast32_t;
typedef uint_fast32_t atomic_uint_fast32_t;
typedef int_fast64_t atomic_int_fast64_t;
typedef uint_fast64_t atomic_uint_fast64_t;
typedef intptr_t atomic_intptr_t;
typedef uintptr_t atomic_uintptr_t;
typedef size_t atomic_size_t;
typedef ptrdiff_t atomic_ptrdiff_t;
typedef intmax_t atomic_intmax_t;
typedef uintmax_t atomic_uintmax_t;
#define atomic_store(object, desired) \
do { \
*(object) = (desired); \
__sync_synchronize(); \
} while (0)
#define atomic_store_explicit(object, desired, order) \
atomic_store(object, desired)
#define atomic_load(object) \
(__sync_synchronize(), *(object))
#define atomic_load_explicit(object, order) \
atomic_load(object)
#define atomic_exchange(object, desired) \
({ \
__typeof__(object) _obj = (object); \
__typeof__(*object) _old; \
do \
_old = atomic_load(_obj); \
while (!__sync_bool_compare_and_swap(_obj, _old, (desired))); \
_old; \
})
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)
#define atomic_compare_exchange_strong(object, expected, desired) \
({ \
__typeof__(object) _exp = (expected); \
__typeof__(*object) _old = *_exp; \
*_exp = __sync_val_compare_and_swap((object), _old, (desired)); \
*_exp == _old; \
})
#define atomic_compare_exchange_strong_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak(object, expected, desired) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_weak(object, expected, desired)
#define atomic_fetch_add(object, operand) \
__sync_fetch_and_add(object, operand)
#define atomic_fetch_add_explicit(object, operand, order) \
atomic_fetch_add(object, operand)
#define atomic_fetch_sub(object, operand) \
__sync_fetch_and_sub(object, operand)
#define atomic_fetch_sub_explicit(object, operand, order) \
atomic_fetch_sub(object, operand)
#define atomic_fetch_or(object, operand) \
__sync_fetch_and_or(object, operand)
#define atomic_fetch_or_explicit(object, operand, order) \
atomic_fetch_or(object, operand)
#define atomic_fetch_xor(object, operand) \
__sync_fetch_and_xor(object, operand)
#define atomic_fetch_xor_explicit(object, operand, order) \
atomic_fetch_xor(object, operand)
#define atomic_fetch_and(object, operand) \
__sync_fetch_and_and(object, operand)
#define atomic_fetch_and_explicit(object, operand, order) \
atomic_fetch_and(object, operand)
#define atomic_flag_test_and_set(object) \
atomic_exchange(object, 1)
#define atomic_flag_test_and_set_explicit(object, order) \
atomic_flag_test_and_set(object)
#define atomic_flag_clear(object) \
atomic_store(object, 0)
#define atomic_flag_clear_explicit(object, order) \
atomic_flag_clear(object)
#endif /* COMPAT_ATOMICS_GCC_STDATOMIC_H */
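As a usage note (a sketch, not part of the header): when configure selects this fallback it puts the directory on the include path so that "stdatomic.h" resolves to this shim on compilers without C11 atomics, and code written against the standard interface then works unchanged. For example:

#include "stdatomic.h"   /* resolves to this shim when configure selects it */

static atomic_int frame_count = ATOMIC_VAR_INIT(0);

static void on_frame(void)
{
    /* expands to __sync_fetch_and_add(&frame_count, 1) under this shim */
    atomic_fetch_add(&frame_count, 1);
}

static int frames_so_far(void)
{
    /* full barrier via __sync_synchronize(), then a plain load */
    return atomic_load(&frame_count);
}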


@@ -0,0 +1,39 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* based on vlc_atomic.h from VLC
* Copyright (C) 2010 Rémi Denis-Courmont
*/
#include <pthread.h>
#include <stdint.h>
#include "stdatomic.h"
static pthread_mutex_t atomic_lock = PTHREAD_MUTEX_INITIALIZER;
void avpriv_atomic_lock(void)
{
pthread_mutex_lock(&atomic_lock);
}
void avpriv_atomic_unlock(void)
{
pthread_mutex_unlock(&atomic_lock);
}


@@ -0,0 +1,197 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* based on vlc_atomic.h from VLC
* Copyright (C) 2010 Rémi Denis-Courmont
*/
#ifndef COMPAT_ATOMICS_PTHREAD_STDATOMIC_H
#define COMPAT_ATOMICS_PTHREAD_STDATOMIC_H
#include <stdint.h>
#define ATOMIC_FLAG_INIT 0
#define ATOMIC_VAR_INIT(value) (value)
#define atomic_init(obj, value) \
do { \
*(obj) = (value); \
} while(0)
#define kill_dependency(y) ((void)0)
#define atomic_signal_fence(order) \
((void)0)
#define atomic_is_lock_free(obj) 0
typedef intptr_t atomic_flag;
typedef intptr_t atomic_bool;
typedef intptr_t atomic_char;
typedef intptr_t atomic_schar;
typedef intptr_t atomic_uchar;
typedef intptr_t atomic_short;
typedef intptr_t atomic_ushort;
typedef intptr_t atomic_int;
typedef intptr_t atomic_uint;
typedef intptr_t atomic_long;
typedef intptr_t atomic_ulong;
typedef intptr_t atomic_llong;
typedef intptr_t atomic_ullong;
typedef intptr_t atomic_wchar_t;
typedef intptr_t atomic_int_least8_t;
typedef intptr_t atomic_uint_least8_t;
typedef intptr_t atomic_int_least16_t;
typedef intptr_t atomic_uint_least16_t;
typedef intptr_t atomic_int_least32_t;
typedef intptr_t atomic_uint_least32_t;
typedef intptr_t atomic_int_least64_t;
typedef intptr_t atomic_uint_least64_t;
typedef intptr_t atomic_int_fast8_t;
typedef intptr_t atomic_uint_fast8_t;
typedef intptr_t atomic_int_fast16_t;
typedef intptr_t atomic_uint_fast16_t;
typedef intptr_t atomic_int_fast32_t;
typedef intptr_t atomic_uint_fast32_t;
typedef intptr_t atomic_int_fast64_t;
typedef intptr_t atomic_uint_fast64_t;
typedef intptr_t atomic_intptr_t;
typedef intptr_t atomic_uintptr_t;
typedef intptr_t atomic_size_t;
typedef intptr_t atomic_ptrdiff_t;
typedef intptr_t atomic_intmax_t;
typedef intptr_t atomic_uintmax_t;
void avpriv_atomic_lock(void);
void avpriv_atomic_unlock(void);
static inline void atomic_thread_fence(int order)
{
avpriv_atomic_lock();
avpriv_atomic_unlock();
}
static inline void atomic_store(intptr_t *object, intptr_t desired)
{
avpriv_atomic_lock();
*object = desired;
avpriv_atomic_unlock();
}
#define atomic_store_explicit(object, desired, order) \
atomic_store(object, desired)
static inline intptr_t atomic_load(intptr_t *object)
{
intptr_t ret;
avpriv_atomic_lock();
ret = *object;
avpriv_atomic_unlock();
return ret;
}
#define atomic_load_explicit(object, order) \
atomic_load(object)
static inline intptr_t atomic_exchange(intptr_t *object, intptr_t desired)
{
intptr_t ret;
avpriv_atomic_lock();
ret = *object;
*object = desired;
avpriv_atomic_unlock();
return ret;
}
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)
static inline int atomic_compare_exchange_strong(intptr_t *object, intptr_t *expected,
intptr_t desired)
{
int ret;
avpriv_atomic_lock();
if (*object == *expected) {
ret = 1;
*object = desired;
} else {
ret = 0;
*expected = *object;
}
avpriv_atomic_unlock();
return ret;
}
#define atomic_compare_exchange_strong_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak(object, expected, desired) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_weak(object, expected, desired)
#define FETCH_MODIFY(opname, op) \
static inline intptr_t atomic_fetch_ ## opname(intptr_t *object, intptr_t operand) \
{ \
intptr_t ret; \
avpriv_atomic_lock(); \
ret = *object; \
*object = *object op operand; \
avpriv_atomic_unlock(); \
return ret; \
}
FETCH_MODIFY(add, +)
FETCH_MODIFY(sub, -)
FETCH_MODIFY(or, |)
FETCH_MODIFY(xor, ^)
FETCH_MODIFY(and, &)
#undef FETCH_MODIFY
#define atomic_fetch_add_explicit(object, operand, order) \
atomic_fetch_add(object, operand)
#define atomic_fetch_sub_explicit(object, operand, order) \
atomic_fetch_sub(object, operand)
#define atomic_fetch_or_explicit(object, operand, order) \
atomic_fetch_or(object, operand)
#define atomic_fetch_xor_explicit(object, operand, order) \
atomic_fetch_xor(object, operand)
#define atomic_fetch_and_explicit(object, operand, order) \
atomic_fetch_and(object, operand)
#define atomic_flag_test_and_set(object) \
atomic_exchange(object, 1)
#define atomic_flag_test_and_set_explicit(object, order) \
atomic_flag_test_and_set(object)
#define atomic_flag_clear(object) \
atomic_store(object, 0)
#define atomic_flag_clear_explicit(object, order) \
atomic_flag_clear(object)
#endif /* COMPAT_ATOMICS_PTHREAD_STDATOMIC_H */
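For the cost model of this fallback (illustration only, not part of the header): every operation, including the FETCH_MODIFY family above, serializes on the one global mutex, so a call such as atomic_fetch_add(&x, 1) behaves like the hand-expanded sketch below.

#include "stdatomic.h"   /* the pthread-based shim above */

/* Hand expansion of atomic_fetch_add(object, 1) under this fallback:
 * take the single global lock, do the read-modify-write, release it. */
static intptr_t fetch_add_one_by_hand(intptr_t *object)
{
    intptr_t old;
    avpriv_atomic_lock();
    old = *object;
    *object = old + 1;
    avpriv_atomic_unlock();
    return old;
}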


@@ -0,0 +1,186 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_ATOMICS_SUNCC_STDATOMIC_H
#define COMPAT_ATOMICS_SUNCC_STDATOMIC_H
#include <atomic.h>
#include <mbarrier.h>
#include <stddef.h>
#include <stdint.h>
#define ATOMIC_FLAG_INIT 0
#define ATOMIC_VAR_INIT(value) (value)
#define atomic_init(obj, value) \
do { \
*(obj) = (value); \
} while(0)
#define kill_dependency(y) ((void)0)
#define atomic_thread_fence(order) \
__machine_rw_barrier();
#define atomic_signal_fence(order) \
((void)0)
#define atomic_is_lock_free(obj) 0
typedef intptr_t atomic_flag;
typedef intptr_t atomic_bool;
typedef intptr_t atomic_char;
typedef intptr_t atomic_schar;
typedef intptr_t atomic_uchar;
typedef intptr_t atomic_short;
typedef intptr_t atomic_ushort;
typedef intptr_t atomic_int;
typedef intptr_t atomic_uint;
typedef intptr_t atomic_long;
typedef intptr_t atomic_ulong;
typedef intptr_t atomic_llong;
typedef intptr_t atomic_ullong;
typedef intptr_t atomic_wchar_t;
typedef intptr_t atomic_int_least8_t;
typedef intptr_t atomic_uint_least8_t;
typedef intptr_t atomic_int_least16_t;
typedef intptr_t atomic_uint_least16_t;
typedef intptr_t atomic_int_least32_t;
typedef intptr_t atomic_uint_least32_t;
typedef intptr_t atomic_int_least64_t;
typedef intptr_t atomic_uint_least64_t;
typedef intptr_t atomic_int_fast8_t;
typedef intptr_t atomic_uint_fast8_t;
typedef intptr_t atomic_int_fast16_t;
typedef intptr_t atomic_uint_fast16_t;
typedef intptr_t atomic_int_fast32_t;
typedef intptr_t atomic_uint_fast32_t;
typedef intptr_t atomic_int_fast64_t;
typedef intptr_t atomic_uint_fast64_t;
typedef intptr_t atomic_intptr_t;
typedef intptr_t atomic_uintptr_t;
typedef intptr_t atomic_size_t;
typedef intptr_t atomic_ptrdiff_t;
typedef intptr_t atomic_intmax_t;
typedef intptr_t atomic_uintmax_t;
static inline void atomic_store(intptr_t *object, intptr_t desired)
{
*object = desired;
__machine_rw_barrier();
}
#define atomic_store_explicit(object, desired, order) \
atomic_store(object, desired)
static inline intptr_t atomic_load(intptr_t *object)
{
__machine_rw_barrier();
return *object;
}
#define atomic_load_explicit(object, order) \
atomic_load(object)
#define atomic_exchange(object, desired) \
atomic_swap_ptr(object, desired)
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)
static inline int atomic_compare_exchange_strong(intptr_t *object, intptr_t *expected,
intptr_t desired)
{
intptr_t old = *expected;
*expected = (intptr_t)atomic_cas_ptr(object, (void *)old, (void *)desired);
return *expected == old;
}
#define atomic_compare_exchange_strong_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak(object, expected, desired) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_weak(object, expected, desired)
static inline intptr_t atomic_fetch_add(intptr_t *object, intptr_t operand)
{
return atomic_add_ptr_nv(object, operand) - operand;
}
#define atomic_fetch_sub(object, operand) \
atomic_fetch_add(object, -(operand))
static inline intptr_t atomic_fetch_or(intptr_t *object, intptr_t operand)
{
intptr_t old;
do {
old = atomic_load(object);
} while (!atomic_compare_exchange_strong(object, &old, old | operand));
return old;
}
static inline intptr_t atomic_fetch_xor(intptr_t *object, intptr_t operand)
{
intptr_t old;
do {
old = atomic_load(object);
} while (!atomic_compare_exchange_strong(object, &old, old ^ operand));
return old;
}
static inline intptr_t atomic_fetch_and(intptr_t *object, intptr_t operand)
{
intptr_t old;
do {
old = atomic_load(object);
} while (!atomic_compare_exchange_strong(object, &old, old & operand));
return old;
}
#define atomic_fetch_add_explicit(object, operand, order) \
atomic_fetch_add(object, operand)
#define atomic_fetch_sub_explicit(object, operand, order) \
atomic_fetch_sub(object, operand)
#define atomic_fetch_or_explicit(object, operand, order) \
atomic_fetch_or(object, operand)
#define atomic_fetch_xor_explicit(object, operand, order) \
atomic_fetch_xor(object, operand)
#define atomic_fetch_and_explicit(object, operand, order) \
atomic_fetch_and(object, operand)
#define atomic_flag_test_and_set(object) \
atomic_exchange(object, 1)
#define atomic_flag_test_and_set_explicit(object, order) \
atomic_flag_test_and_set(object)
#define atomic_flag_clear(object) \
atomic_store(object, 0)
#define atomic_flag_clear_explicit(object, order) \
atomic_flag_clear(object)
#endif /* COMPAT_ATOMICS_SUNCC_STDATOMIC_H */
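One usage caveat worth illustrating (a sketch, not part of the header): atomic_compare_exchange_strong() takes a pointer to the expected value and rewrites it with the current value when the exchange fails, so retry loops must pass &old, as in this hypothetical helper.

#include "stdatomic.h"   /* the SunCC-based shim above */

/* CAS retry loop against this shim's API; &old is required because the
 * expected value is updated in place on failure, so no reload is needed. */
static intptr_t set_bit0(intptr_t *object)
{
    intptr_t old = atomic_load(object);
    while (!atomic_compare_exchange_strong(object, &old, old | 1))
        ; /* old now holds the current value; retry with it */
    return old;
}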


@@ -19,6 +19,7 @@
#ifndef COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define WIN32_LEAN_AND_MEAN
#include <stddef.h>
#include <stdint.h>
#include <windows.h>
@@ -95,7 +96,7 @@ do { \
atomic_load(object)
#define atomic_exchange(object, desired) \
InterlockedExchangePointer((PVOID volatile *)object, (PVOID)desired)
InterlockedExchangePointer(object, desired);
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)

compat/avisynth/avisynth_c.h (new file, 1064 lines)

File diff suppressed because it is too large


@@ -0,0 +1,62 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CAPI_H
#define AVS_CAPI_H
#ifdef __cplusplus
# define EXTERN_C extern "C"
#else
# define EXTERN_C
#endif
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef BUILDING_AVSCORE
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
#endif //AVS_CAPI_H


@@ -0,0 +1,55 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CONFIG_H
#define AVS_CONFIG_H
// Undefine this to get cdecl calling convention
#define AVSC_USE_STDCALL 1
// NOTE TO PLUGIN AUTHORS:
// Because FRAME_ALIGN can be substantially higher than the alignment
// a plugin actually needs, plugins should not use FRAME_ALIGN to check for
// alignment. They should always request the exact alignment value they need.
// This is to make sure that plugins work over the widest range of AviSynth
// builds possible.
#define FRAME_ALIGN 32
#if defined(_M_AMD64) || defined(__x86_64)
# define X86_64
#elif defined(_M_IX86) || defined(__i386__)
# define X86_32
#else
# error Unsupported CPU architecture.
#endif
#endif //AVS_CONFIG_H


@@ -0,0 +1,51 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_TYPES_H
#define AVS_TYPES_H
// Define all types necessary for interfacing with avisynth.dll
// Raster types used by VirtualDub & Avisynth
typedef unsigned int Pixel32;
typedef unsigned char BYTE;
// Audio Sample information
typedef float SFLOAT;
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
#endif //AVS_TYPES_H


@@ -0,0 +1,728 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
// MA 02110-1301 USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef __AVXSYNTH_C__
#define __AVXSYNTH_C__
#include "windowsPorts/windows2linux.h"
#include <stdarg.h>
#ifdef __cplusplus
# define EXTERN_C extern "C"
#else
# define EXTERN_C
#endif
#define AVSC_USE_STDCALL 1
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef AVISYNTH_C_EXPORTS
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
/////////////////////////////////////////////////////////////////////
//
// Constants
//
#ifndef __AVXSYNTH_H__
enum { AVISYNTH_INTERFACE_VERSION = 3 };
#endif
enum {AVS_SAMPLE_INT8 = 1<<0,
AVS_SAMPLE_INT16 = 1<<1,
AVS_SAMPLE_INT24 = 1<<2,
AVS_SAMPLE_INT32 = 1<<3,
AVS_SAMPLE_FLOAT = 1<<4};
enum {AVS_PLANAR_Y=1<<0,
AVS_PLANAR_U=1<<1,
AVS_PLANAR_V=1<<2,
AVS_PLANAR_ALIGNED=1<<3,
AVS_PLANAR_Y_ALIGNED=AVS_PLANAR_Y|AVS_PLANAR_ALIGNED,
AVS_PLANAR_U_ALIGNED=AVS_PLANAR_U|AVS_PLANAR_ALIGNED,
AVS_PLANAR_V_ALIGNED=AVS_PLANAR_V|AVS_PLANAR_ALIGNED};
// Colorspace properties.
enum {AVS_CS_BGR = 1<<28,
AVS_CS_YUV = 1<<29,
AVS_CS_INTERLEAVED = 1<<30,
AVS_CS_PLANAR = 1<<31};
// Specific colorformats
enum {
AVS_CS_UNKNOWN = 0,
AVS_CS_BGR24 = 1<<0 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
AVS_CS_BGR32 = 1<<1 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
AVS_CS_YUY2 = 1<<2 | AVS_CS_YUV | AVS_CS_INTERLEAVED,
AVS_CS_YV12 = 1<<3 | AVS_CS_YUV | AVS_CS_PLANAR, // y-v-u, planar
AVS_CS_I420 = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR, // y-u-v, planar
AVS_CS_IYUV = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR // same as above
};
enum {
AVS_IT_BFF = 1<<0,
AVS_IT_TFF = 1<<1,
AVS_IT_FIELDBASED = 1<<2};
enum {
AVS_FILTER_TYPE=1,
AVS_FILTER_INPUT_COLORSPACE=2,
AVS_FILTER_OUTPUT_TYPE=9,
AVS_FILTER_NAME=4,
AVS_FILTER_AUTHOR=5,
AVS_FILTER_VERSION=6,
AVS_FILTER_ARGS=7,
AVS_FILTER_ARGS_INFO=8,
AVS_FILTER_ARGS_DESCRIPTION=10,
AVS_FILTER_DESCRIPTION=11};
enum { //SUBTYPES
AVS_FILTER_TYPE_AUDIO=1,
AVS_FILTER_TYPE_VIDEO=2,
AVS_FILTER_OUTPUT_TYPE_SAME=3,
AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4};
enum {
AVS_CACHE_NOTHING=0,
AVS_CACHE_RANGE=1,
AVS_CACHE_ALL=2,
AVS_CACHE_AUDIO=3,
AVS_CACHE_AUDIO_NONE=4,
AVS_CACHE_AUDIO_AUTO=5
};
#define AVS_FRAME_ALIGN 16
typedef struct AVS_Clip AVS_Clip;
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
/////////////////////////////////////////////////////////////////////
//
// AVS_VideoInfo
//
// AVS_VideoInfo is laid out identically to VideoInfo
typedef struct AVS_VideoInfo {
int width, height; // width=0 means no video
unsigned fps_numerator, fps_denominator;
int num_frames;
int pixel_type;
int audio_samples_per_second; // 0 means no audio
int sample_type;
INT64 num_audio_samples;
int nchannels;
// Imagetype properties
int image_type;
} AVS_VideoInfo;
// useful functions of the above
AVSC_INLINE int avs_has_video(const AVS_VideoInfo * p)
{ return (p->width!=0); }
AVSC_INLINE int avs_has_audio(const AVS_VideoInfo * p)
{ return (p->audio_samples_per_second!=0); }
AVSC_INLINE int avs_is_rgb(const AVS_VideoInfo * p)
{ return !!(p->pixel_type&AVS_CS_BGR); }
AVSC_INLINE int avs_is_rgb24(const AVS_VideoInfo * p)
{ return (p->pixel_type&AVS_CS_BGR24)==AVS_CS_BGR24; } // Clear out additional properties
AVSC_INLINE int avs_is_rgb32(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_BGR32) == AVS_CS_BGR32 ; }
AVSC_INLINE int avs_is_yuv(const AVS_VideoInfo * p)
{ return !!(p->pixel_type&AVS_CS_YUV ); }
AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; }
AVSC_INLINE int avs_is_yv12(const AVS_VideoInfo * p)
{ return ((p->pixel_type & AVS_CS_YV12) == AVS_CS_YV12)||((p->pixel_type & AVS_CS_I420) == AVS_CS_I420); }
AVSC_INLINE int avs_is_color_space(const AVS_VideoInfo * p, int c_space)
{ return ((p->pixel_type & c_space) == c_space); }
AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property)
{ return ((p->pixel_type & property)==property ); }
AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p)
{ return !!(p->pixel_type & AVS_CS_PLANAR); }
AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_FIELDBASED); }
AVSC_INLINE int avs_is_parity_known(const AVS_VideoInfo * p)
{ return ((p->image_type & AVS_IT_FIELDBASED)&&(p->image_type & (AVS_IT_BFF | AVS_IT_TFF))); }
AVSC_INLINE int avs_is_bff(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_BFF); }
AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_TFF); }
AVSC_INLINE int avs_bits_per_pixel(const AVS_VideoInfo * p)
{
switch (p->pixel_type) {
case AVS_CS_BGR24: return 24;
case AVS_CS_BGR32: return 32;
case AVS_CS_YUY2: return 16;
case AVS_CS_YV12:
case AVS_CS_I420: return 12;
default: return 0;
}
}
AVSC_INLINE int avs_bytes_from_pixels(const AVS_VideoInfo * p, int pixels)
{ return pixels * (avs_bits_per_pixel(p)>>3); } // Will work on planar images, but will return only luma planes
AVSC_INLINE int avs_row_size(const AVS_VideoInfo * p)
{ return avs_bytes_from_pixels(p,p->width); } // Also only returns first plane on planar images
AVSC_INLINE int avs_bmp_size(const AVS_VideoInfo * vi)
{ if (avs_is_planar(vi)) {int p = vi->height * ((avs_row_size(vi)+3) & ~3); p+=p>>1; return p; } return vi->height * ((avs_row_size(vi)+3) & ~3); }
AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p)
{ return p->audio_samples_per_second; }
AVSC_INLINE int avs_bytes_per_channel_sample(const AVS_VideoInfo * p)
{
switch (p->sample_type) {
case AVS_SAMPLE_INT8: return sizeof(signed char);
case AVS_SAMPLE_INT16: return sizeof(signed short);
case AVS_SAMPLE_INT24: return 3;
case AVS_SAMPLE_INT32: return sizeof(signed int);
case AVS_SAMPLE_FLOAT: return sizeof(float);
default: return 0;
}
}
AVSC_INLINE int avs_bytes_per_audio_sample(const AVS_VideoInfo * p)
{ return p->nchannels*avs_bytes_per_channel_sample(p);}
AVSC_INLINE INT64 avs_audio_samples_from_frames(const AVS_VideoInfo * p, INT64 frames)
{ return ((INT64)(frames) * p->audio_samples_per_second * p->fps_denominator / p->fps_numerator); }
AVSC_INLINE int avs_frames_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
{ return (int)(samples * (INT64)p->fps_numerator / (INT64)p->fps_denominator / (INT64)p->audio_samples_per_second); }
AVSC_INLINE INT64 avs_audio_samples_from_bytes(const AVS_VideoInfo * p, INT64 bytes)
{ return bytes / avs_bytes_per_audio_sample(p); }
AVSC_INLINE INT64 avs_bytes_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
{ return samples * avs_bytes_per_audio_sample(p); }
AVSC_INLINE int avs_audio_channels(const AVS_VideoInfo * p)
{ return p->nchannels; }
AVSC_INLINE int avs_sample_type(const AVS_VideoInfo * p)
{ return p->sample_type;}
// useful mutator
AVSC_INLINE void avs_set_property(AVS_VideoInfo * p, int property)
{ p->image_type|=property; }
AVSC_INLINE void avs_clear_property(AVS_VideoInfo * p, int property)
{ p->image_type&=~property; }
AVSC_INLINE void avs_set_field_based(AVS_VideoInfo * p, int isfieldbased)
{ if (isfieldbased) p->image_type|=AVS_IT_FIELDBASED; else p->image_type&=~AVS_IT_FIELDBASED; }
AVSC_INLINE void avs_set_fps(AVS_VideoInfo * p, unsigned numerator, unsigned denominator)
{
unsigned x=numerator, y=denominator;
while (y) { // find gcd
unsigned t = x%y; x = y; y = t;
}
p->fps_numerator = numerator/x;
p->fps_denominator = denominator/x;
}
AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y)
{
return (x->pixel_type == y->pixel_type)
|| (avs_is_yv12(x) && avs_is_yv12(y));
}
/////////////////////////////////////////////////////////////////////
//
// AVS_VideoFrame
//
// VideoFrameBuffer holds information about a memory block which is used
// for video data. For efficiency, instances of this class are not deleted
// when the refcount reaches zero; instead they're stored in a linked list
// to be reused. The instances are deleted when the corresponding AVS
// file is closed.
// AVS_VideoFrameBuffer is laid out identically to VideoFrameBuffer
// DO NOT USE THIS STRUCTURE DIRECTLY
typedef struct AVS_VideoFrameBuffer {
unsigned char * data;
int data_size;
// sequence_number is incremented every time the buffer is changed, so
// that stale views can tell they're no longer valid.
long sequence_number;
long refcount;
} AVS_VideoFrameBuffer;
// VideoFrame holds a "window" into a VideoFrameBuffer.
// AVS_VideoFrame is laid out identically to IVideoFrame
// DO NOT USE THIS STRUCTURE DIRECTLY
typedef struct AVS_VideoFrame {
int refcount;
AVS_VideoFrameBuffer * vfb;
int offset, pitch, row_size, height, offsetU, offsetV, pitchUV; // U&V offsets are from top of picture.
} AVS_VideoFrame;
// Access functions for AVS_VideoFrame
AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) {
return p->pitch;}
AVSC_INLINE int avs_get_pitch_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V: return p->pitchUV;}
return p->pitch;}
AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) {
return p->row_size; }
AVSC_INLINE int avs_get_row_size_p(const AVS_VideoFrame * p, int plane) {
int r;
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->row_size>>1;
else return 0;
case AVS_PLANAR_U_ALIGNED: case AVS_PLANAR_V_ALIGNED:
if (p->pitchUV) {
r = ((p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)) )>>1; // Aligned rowsize
if (r < p->pitchUV)
return r;
return p->row_size>>1;
} else return 0;
case AVS_PLANAR_Y_ALIGNED:
r = (p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
if (r <= p->pitch)
return r;
return p->row_size;
}
return p->row_size;
}
AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) {
return p->height;}
AVSC_INLINE int avs_get_height_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->height>>1;
return 0;
}
return p->height;}
AVSC_INLINE const unsigned char* avs_get_read_ptr(const AVS_VideoFrame * p) {
return p->vfb->data + p->offset;}
AVSC_INLINE const unsigned char* avs_get_read_ptr_p(const AVS_VideoFrame * p, int plane)
{
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;}
}
AVSC_INLINE int avs_is_writable(const AVS_VideoFrame * p) {
return (p->refcount == 1 && p->vfb->refcount == 1);}
AVSC_INLINE unsigned char* avs_get_write_ptr(const AVS_VideoFrame * p)
{
if (avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else
return 0;
}
AVSC_INLINE unsigned char* avs_get_write_ptr_p(const AVS_VideoFrame * p, int plane)
{
if (plane==AVS_PLANAR_Y && avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else if (plane==AVS_PLANAR_Y) {
return 0;
} else {
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;
}
}
}
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *);
// makes a shallow copy of a video frame
AVSC_API(AVS_VideoFrame *, avs_copy_video_frame)(AVS_VideoFrame *);
#if defined __cplusplus
}
#endif // __cplusplus
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE void avs_release_frame(AVS_VideoFrame * f)
{avs_release_video_frame(f);}
AVSC_INLINE AVS_VideoFrame * avs_copy_frame(AVS_VideoFrame * f)
{return avs_copy_video_frame(f);}
#endif
/////////////////////////////////////////////////////////////////////
//
// AVS_Value
//
// Treat AVS_Value as a fat pointer. That is, use avs_copy_value
// and avs_release_value appropriately, just as you would if AVS_Value were
// a pointer.
// To maintain source code compatibility with future versions of the
// avisynth_c API, don't access the AVS_Value struct directly. Use the helper
// functions below.
// AVS_Value is laid out identically to AVSValue
typedef struct AVS_Value AVS_Value;
struct AVS_Value {
  short type;  // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid, or 'l'ong
               // some functions can also return an 'e'rror
short array_size;
union {
void * clip; // do not use directly, use avs_take_clip
char boolean;
int integer;
INT64 integer64; // match addition of __int64 to avxplugin.h
float floating_pt;
const char * string;
const AVS_Value * array;
} d;
};
// AVS_Value should be initialized with avs_void.
// It should also be set back to avs_void after the value is released
// with avs_release_value. Consider it the equivalent of setting
// a pointer to NULL.
static const AVS_Value avs_void = {'v'};
AVSC_API(void, avs_copy_value)(AVS_Value * dest, AVS_Value src);
AVSC_API(void, avs_release_value)(AVS_Value);
AVSC_INLINE int avs_defined(AVS_Value v) { return v.type != 'v'; }
AVSC_INLINE int avs_is_clip(AVS_Value v) { return v.type == 'c'; }
AVSC_INLINE int avs_is_bool(AVS_Value v) { return v.type == 'b'; }
AVSC_INLINE int avs_is_int(AVS_Value v) { return v.type == 'i'; }
AVSC_INLINE int avs_is_float(AVS_Value v) { return v.type == 'f' || v.type == 'i'; }
AVSC_INLINE int avs_is_string(AVS_Value v) { return v.type == 's'; }
AVSC_INLINE int avs_is_array(AVS_Value v) { return v.type == 'a'; }
AVSC_INLINE int avs_is_error(AVS_Value v) { return v.type == 'e'; }
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_take_clip)(AVS_Value, AVS_ScriptEnvironment *);
AVSC_API(void, avs_set_to_clip)(AVS_Value *, AVS_Clip *);
#if defined __cplusplus
}
#endif // __cplusplus
AVSC_INLINE int avs_as_bool(AVS_Value v)
{ return v.d.boolean; }
AVSC_INLINE int avs_as_int(AVS_Value v)
{ return v.d.integer; }
AVSC_INLINE const char * avs_as_string(AVS_Value v)
{ return avs_is_error(v) || avs_is_string(v) ? v.d.string : 0; }
AVSC_INLINE double avs_as_float(AVS_Value v)
{ return avs_is_int(v) ? v.d.integer : v.d.floating_pt; }
AVSC_INLINE const char * avs_as_error(AVS_Value v)
{ return avs_is_error(v) ? v.d.string : 0; }
AVSC_INLINE const AVS_Value * avs_as_array(AVS_Value v)
{ return v.d.array; }
AVSC_INLINE int avs_array_size(AVS_Value v)
{ return avs_is_array(v) ? v.array_size : 1; }
AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
{ return avs_is_array(v) ? v.d.array[index] : v; }
// only use these functions on an AVS_Value that does not already have
// an active value. Remember, treat AVS_Value as a fat pointer.
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
{ AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
{ AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
{ AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
{ AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v;}
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
{ AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
{ AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
#endif
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
{ AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
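// Hedged usage sketch (not part of the original header): the fat-pointer
// discipline described above, using only helpers declared in this file.
// Names prefixed example_ are hypothetical and purely illustrative.
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE void example_value_lifecycle(void)
{
  AVS_Value src = avs_new_value_string("hello");
  AVS_Value dst = avs_void;
  avs_copy_value(&dst, src);            // dst now holds its own reference
  /* ... use avs_as_string(dst) here ... */
  avs_release_value(dst);
  dst = avs_void;                       // the equivalent of NULLing a pointer
  avs_release_value(src);
}
#endif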
/////////////////////////////////////////////////////////////////////
//
// AVS_Clip
//
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_clip)(AVS_Clip *);
AVSC_API(AVS_Clip *, avs_copy_clip)(AVS_Clip *);
AVSC_API(const char *, avs_clip_get_error)(AVS_Clip *); // return 0 if no error
AVSC_API(const AVS_VideoInfo *, avs_get_video_info)(AVS_Clip *);
AVSC_API(int, avs_get_version)(AVS_Clip *);
AVSC_API(AVS_VideoFrame *, avs_get_frame)(AVS_Clip *, int n);
// The returned video frame must be released with avs_release_video_frame
AVSC_API(int, avs_get_parity)(AVS_Clip *, int n);
// return field parity if field_based, else parity of first field in frame
AVSC_API(int, avs_get_audio)(AVS_Clip *, void * buf,
INT64 start, INT64 count);
// start and count are in samples
AVSC_API(int, avs_set_cache_hints)(AVS_Clip *,
int cachehints, size_t frame_range);
#if defined __cplusplus
}
#endif // __cplusplus
// This is the callback type used by avs_add_function
typedef AVS_Value (AVSC_CC * AVS_ApplyFunc)
(AVS_ScriptEnvironment *, AVS_Value args, void * user_data);
typedef struct AVS_FilterInfo AVS_FilterInfo;
struct AVS_FilterInfo
{
// these members should not be modified outside of the AVS_ApplyFunc callback
AVS_Clip * child;
AVS_VideoInfo vi;
AVS_ScriptEnvironment * env;
AVS_VideoFrame * (AVSC_CC * get_frame)(AVS_FilterInfo *, int n);
int (AVSC_CC * get_parity)(AVS_FilterInfo *, int n);
int (AVSC_CC * get_audio)(AVS_FilterInfo *, void * buf,
INT64 start, INT64 count);
int (AVSC_CC * set_cache_hints)(AVS_FilterInfo *, int cachehints,
int frame_range);
void (AVSC_CC * free_filter)(AVS_FilterInfo *);
// Should be set whenever there is an error to report.
// It is cleared before any of the above methods are called.
const char * error;
// this is for storing arbitrary user data and may be modified at will
void * user_data;
};
// Create a new filter
// fi is set to point to the AVS_FilterInfo so that you can
// modify it once it is initialized.
// store_child should generally be set to true. If it is not
// set, then ALL methods (the function pointers) must be defined.
// If it is set, then you do not need to worry about freeing the child
// clip. (A usage sketch follows the declaration below.)
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_new_c_filter)(AVS_ScriptEnvironment * e,
AVS_FilterInfo * * fi,
AVS_Value child, int store_child);
#if defined __cplusplus
}
#endif // __cplusplus
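// Hedged sketch (not part of the original header) of the usual avs_new_c_filter
// pattern: an AVS_ApplyFunc that wraps its first argument (a clip) in a new C
// filter. All example_* names are hypothetical.
#ifndef AVSC_NO_DECLSPEC
static AVS_VideoFrame * AVSC_CC example_get_frame(AVS_FilterInfo * fi, int n)
{
  // fetch the child's frame; a real filter would process it here and report
  // problems through fi->error
  return avs_get_frame(fi->child, n);
}

static AVS_Value AVSC_CC example_create_filter(AVS_ScriptEnvironment * env,
                                               AVS_Value args, void * user_data)
{
  AVS_FilterInfo * fi;
  AVS_Clip * clip = avs_new_c_filter(env, &fi, avs_array_elt(args, 0), 1);
  AVS_Value ret = avs_void;
  (void)user_data;
  // a real implementation would check clip/fi before using them
  fi->get_frame = example_get_frame;   // store_child was 1, so the remaining
                                       // callbacks may be left unset
  avs_set_to_clip(&ret, clip);
  avs_release_clip(clip);
  return ret;
}
#endif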
/////////////////////////////////////////////////////////////////////
//
// AVS_ScriptEnvironment
//
// For GetCPUFlags. These are backwards-compatible with those in VirtualDub.
enum {
/* slowest CPU to support extension */
AVS_CPU_FORCE = 0x01, // N/A
AVS_CPU_FPU = 0x02, // 386/486DX
AVS_CPU_MMX = 0x04, // P55C, K6, PII
AVS_CPU_INTEGER_SSE = 0x08, // PIII, Athlon
AVS_CPU_SSE = 0x10, // PIII, Athlon XP/MP
AVS_CPU_SSE2 = 0x20, // PIV, Hammer
AVS_CPU_3DNOW = 0x40, // K6-2
AVS_CPU_3DNOW_EXT = 0x80, // Athlon
AVS_CPU_X86_64 = 0xA0, // Hammer (note: equiv. to 3DNow + SSE2,
// which only Hammer will have anyway)
};
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error
AVSC_API(long, avs_get_cpu_flags)(AVS_ScriptEnvironment *);
AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version);
AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length);
AVSC_API(char *, avs_sprintf)(AVS_ScriptEnvironment *, const char * fmt, ...);
AVSC_API(char *, avs_vsprintf)(AVS_ScriptEnvironment *, const char * fmt, va_list val);
// note: val is really a va_list; I hope everyone typedefs va_list to a pointer
AVSC_API(int, avs_add_function)(AVS_ScriptEnvironment *,
const char * name, const char * params,
AVS_ApplyFunc apply, void * user_data);
AVSC_API(int, avs_function_exists)(AVS_ScriptEnvironment *, const char * name);
AVSC_API(AVS_Value, avs_invoke)(AVS_ScriptEnvironment *, const char * name,
AVS_Value args, const char** arg_names);
// The returned value must be released with avs_release_value
AVSC_API(AVS_Value, avs_get_var)(AVS_ScriptEnvironment *, const char* name);
// The returned value must be released with avs_release_value
AVSC_API(int, avs_set_var)(AVS_ScriptEnvironment *, const char* name, AVS_Value val);
AVSC_API(int, avs_set_global_var)(AVS_ScriptEnvironment *, const char* name, const AVS_Value val);
//void avs_push_context(AVS_ScriptEnvironment *, int level=0);
//void avs_pop_context(AVS_ScriptEnvironment *);
AVSC_API(AVS_VideoFrame *, avs_new_video_frame_a)(AVS_ScriptEnvironment *,
const AVS_VideoInfo * vi, int align);
// align should be at least 16
#if defined __cplusplus
}
#endif // __cplusplus
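// Hedged usage sketch (not part of the original header): gate an optimized code
// path on the flags reported by the environment, using the AVS_CPU_* constants
// from the enum above. The function name is hypothetical.
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE int example_env_has_sse2(AVS_ScriptEnvironment * env)
{ return (avs_get_cpu_flags(env) & AVS_CPU_SSE2) != 0; }
#endif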
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE
AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
AVSC_INLINE
AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
#endif
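// Hedged sketch (not part of the original header): allocate a frame for a given
// AVS_VideoInfo and zero-fill its (first) plane with the accessors defined
// above. The function name is hypothetical; the caller is expected to release
// the frame with avs_release_video_frame.
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_VideoFrame * example_new_blank_frame(AVS_ScriptEnvironment * env,
                                                     const AVS_VideoInfo * vi)
{
  AVS_VideoFrame * frame = avs_new_video_frame(env, vi);
  unsigned char * dst = avs_get_write_ptr(frame); // freshly allocated, so writable
  int pitch    = avs_get_pitch(frame);
  int row_size = avs_get_row_size(frame);
  int height   = avs_get_height(frame);
  int x, y;
  for (y = 0; y < height; y++)
    for (x = 0; x < row_size; x++)
      dst[y * pitch + x] = 0;
  return frame;
}
#endif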
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(int, avs_make_writable)(AVS_ScriptEnvironment *, AVS_VideoFrame * * pvf);
AVSC_API(void, avs_bit_blt)(AVS_ScriptEnvironment *, unsigned char* dstp, int dst_pitch, const unsigned char* srcp, int src_pitch, int row_size, int height);
typedef void (AVSC_CC *AVS_ShutdownFunc)(void* user_data, AVS_ScriptEnvironment * env);
AVSC_API(void, avs_at_exit)(AVS_ScriptEnvironment *, AVS_ShutdownFunc function, void * user_data);
AVSC_API(AVS_VideoFrame *, avs_subframe)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height);
// The returned video frame must be released
AVSC_API(int, avs_set_memory_max)(AVS_ScriptEnvironment *, int mem);
AVSC_API(int, avs_set_working_dir)(AVS_ScriptEnvironment *, const char * newdir);
// avisynth.dll exports this; it's a way to use it as a library, without
// writing an AVS script or going through AVIFile.
AVSC_API(AVS_ScriptEnvironment *, avs_create_script_environment)(int version);
#if defined __cplusplus
}
#endif // __cplusplus
// this symbol is the entry point for the plugin and must
// be defined
AVSC_EXPORT
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment* env);
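// Hedged sketch (not part of the original header) of a minimal entry point, as
// it would appear in a plugin's own source file rather than here. The filter
// name, the parameter string ("c" = a single clip argument) and the callback
// are all hypothetical; example_create_filter refers to the AVS_ApplyFunc
// sketched earlier.
//
//   AVSC_EXPORT
//   const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment * env)
//   {
//     avs_add_function(env, "ExampleFilter", "c", example_create_filter, 0);
//     return "ExampleFilter sample plugin";
//   }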
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_delete_script_environment)(AVS_ScriptEnvironment *);
AVSC_API(AVS_VideoFrame *, avs_subframe_planar)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height, int rel_offsetU, int rel_offsetV, int new_pitchUV);
// The returned video frame must be released
#if defined __cplusplus
}
#endif // __cplusplus
#endif //__AVXSYNTH_C__

View File

@@ -0,0 +1,85 @@
#ifndef __DATA_TYPE_CONVERSIONS_H__
#define __DATA_TYPE_CONVERSIONS_H__
#include <stdint.h>
#include <wchar.h>
#ifdef __cplusplus
namespace avxsynth {
#endif // __cplusplus
typedef int64_t __int64;
typedef int32_t __int32;
#ifdef __cplusplus
typedef bool BOOL;
#else
typedef uint32_t BOOL;
#endif // __cplusplus
typedef void* HMODULE;
typedef void* LPVOID;
typedef void* PVOID;
typedef PVOID HANDLE;
typedef HANDLE HWND;
typedef HANDLE HINSTANCE;
typedef void* HDC;
typedef void* HBITMAP;
typedef void* HICON;
typedef void* HFONT;
typedef void* HGDIOBJ;
typedef void* HBRUSH;
typedef void* HMMIO;
typedef void* HACMSTREAM;
typedef void* HACMDRIVER;
typedef void* HIC;
typedef void* HACMOBJ;
typedef HACMSTREAM* LPHACMSTREAM;
typedef void* HACMDRIVERID;
typedef void* LPHACMDRIVER;
typedef unsigned char BYTE;
typedef BYTE* LPBYTE;
typedef char TCHAR;
typedef TCHAR* LPTSTR;
typedef const TCHAR* LPCTSTR;
typedef char* LPSTR;
typedef LPSTR LPOLESTR;
typedef const char* LPCSTR;
typedef LPCSTR LPCOLESTR;
typedef wchar_t WCHAR;
typedef unsigned short WORD;
typedef unsigned int UINT;
typedef UINT MMRESULT;
typedef uint32_t DWORD;
typedef DWORD COLORREF;
typedef DWORD FOURCC;
typedef DWORD HRESULT;
typedef DWORD* LPDWORD;
typedef DWORD* DWORD_PTR;
typedef int32_t LONG;
typedef int32_t* LONG_PTR;
typedef LONG_PTR LRESULT;
typedef uint32_t ULONG;
typedef uint32_t* ULONG_PTR;
//typedef __int64_t intptr_t;
typedef uint64_t _fsize_t;
//
// Structures
//
typedef struct _GUID {
DWORD Data1;
WORD Data2;
WORD Data3;
BYTE Data4[8];
} GUID;
typedef GUID REFIID;
typedef GUID CLSID;
typedef CLSID* LPCLSID;
typedef GUID IID;
#ifdef __cplusplus
}; // namespace avxsynth
#endif // __cplusplus
#endif // __DATA_TYPE_CONVERSIONS_H__

View File

@@ -0,0 +1,77 @@
#ifndef __WINDOWS2LINUX_H__
#define __WINDOWS2LINUX_H__
/*
* LINUX SPECIFIC DEFINITIONS
*/
//
// Data types conversions
//
#include <stdlib.h>
#include <string.h>
#include "basicDataTypeConversions.h"
#ifdef __cplusplus
namespace avxsynth {
#endif // __cplusplus
//
// purposefully define the following MSFT definitions
// to mean nothing (as they do not mean anything on Linux)
//
#define __stdcall
#define __cdecl
#define noreturn
#define __declspec(x)
#define STDAPI extern "C" HRESULT
#define STDMETHODIMP HRESULT __stdcall
#define STDMETHODIMP_(x) x __stdcall
#define STDMETHOD(x) virtual HRESULT x
#define STDMETHOD_(a, x) virtual a x
#ifndef TRUE
#define TRUE true
#endif
#ifndef FALSE
#define FALSE false
#endif
#define S_OK (0x00000000)
#define S_FALSE (0x00000001)
#define E_NOINTERFACE (0X80004002)
#define E_POINTER (0x80004003)
#define E_FAIL (0x80004005)
#define E_OUTOFMEMORY (0x8007000E)
#define INVALID_HANDLE_VALUE ((HANDLE)((LONG_PTR)-1))
#define FAILED(hr) ((hr) & 0x80000000)
#define SUCCEEDED(hr) (!FAILED(hr))
//
// Functions
//
#define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
#define MAKEWORD(a,b) (((a) << 8) | (b))
#define lstrlen strlen
#define lstrcpy strcpy
#define lstrcmpi strcasecmp
#define _stricmp strcasecmp
#define InterlockedIncrement(x) __sync_fetch_and_add((x), 1)
#define InterlockedDecrement(x) __sync_fetch_and_sub((x), 1)
// Windows uses (new, old) ordering but GCC has (old, new)
#define InterlockedCompareExchange(x,y,z) __sync_val_compare_and_swap(x,z,y)
#define UInt32x32To64(a, b) ( (uint64_t) ( ((uint64_t)((uint32_t)(a))) * ((uint32_t)(b)) ) )
#define Int64ShrlMod32(a, b) ( (uint64_t) ( (uint64_t)(a) >> (b) ) )
#define Int32x32To64(a, b) ((__int64)(((__int64)((long)(a))) * ((long)(b))))
#define MulDiv(nNumber, nNumerator, nDenominator) (int32_t) (((int64_t) (nNumber) * (int64_t) (nNumerator) + (int64_t) ((nDenominator)/2)) / (int64_t) (nDenominator))
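/* Hedged worked example (not part of the original header): MulDiv widens to
 * 64 bits before the rounded divide, so the intermediate product cannot
 * overflow 32 bits:
 *   MulDiv(90000, 1001, 30000) == (int32_t)((90000LL*1001 + 15000) / 30000) == 3003
 *   UInt32x32To64(0xFFFFFFFFu, 2) == 0x1FFFFFFFEULL
 */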
#ifdef __cplusplus
}; // namespace avxsynth
#endif // __cplusplus
#endif // __WINDOWS2LINUX_H__

View File

@@ -1,195 +0,0 @@
/*
* Minimum CUDA compatibility definitions header
*
* Copyright (c) 2019 rcombs
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_CUDA_CUDA_RUNTIME_H
#define COMPAT_CUDA_CUDA_RUNTIME_H
// Common macros
#define __global__ __attribute__((global))
#define __device__ __attribute__((device))
#define __device_builtin__ __attribute__((device_builtin))
#define __align__(N) __attribute__((aligned(N)))
#define __inline__ __inline__ __attribute__((always_inline))
#define max(a, b) ((a) > (b) ? (a) : (b))
#define min(a, b) ((a) < (b) ? (a) : (b))
#define abs(x) ((x) < 0 ? -(x) : (x))
#define atomicAdd(a, b) (__atomic_fetch_add(a, b, __ATOMIC_SEQ_CST))
// Basic typedefs
typedef __device_builtin__ unsigned long long cudaTextureObject_t;
typedef struct __device_builtin__ __align__(2) uchar2
{
unsigned char x, y;
} uchar2;
typedef struct __device_builtin__ __align__(4) ushort2
{
unsigned short x, y;
} ushort2;
typedef struct __device_builtin__ __align__(8) float2
{
float x, y;
} float2;
typedef struct __device_builtin__ __align__(8) int2
{
int x, y;
} int2;
typedef struct __device_builtin__ uint3
{
unsigned int x, y, z;
} uint3;
typedef struct uint3 dim3;
typedef struct __device_builtin__ __align__(4) uchar4
{
unsigned char x, y, z, w;
} uchar4;
typedef struct __device_builtin__ __align__(8) ushort4
{
unsigned short x, y, z, w;
} ushort4;
typedef struct __device_builtin__ __align__(16) int4
{
int x, y, z, w;
} int4;
typedef struct __device_builtin__ __align__(16) float4
{
float x, y, z, w;
} float4;
// Accessors for special registers
#define GETCOMP(reg, comp) \
asm("mov.u32 %0, %%" #reg "." #comp ";" : "=r"(tmp)); \
ret.comp = tmp;
#define GET(name, reg) static inline __device__ uint3 name() {\
uint3 ret; \
unsigned tmp; \
GETCOMP(reg, x) \
GETCOMP(reg, y) \
GETCOMP(reg, z) \
return ret; \
}
GET(getBlockIdx, ctaid)
GET(getBlockDim, ntid)
GET(getThreadIdx, tid)
// Instead of externs for these registers, we turn access to them into calls into trivial ASM
#define blockIdx (getBlockIdx())
#define blockDim (getBlockDim())
#define threadIdx (getThreadIdx())
// Basic initializers (simple macros rather than inline functions)
#define make_int2(a, b) ((int2){.x = a, .y = b})
#define make_uchar2(a, b) ((uchar2){.x = a, .y = b})
#define make_ushort2(a, b) ((ushort2){.x = a, .y = b})
#define make_float2(a, b) ((float2){.x = a, .y = b})
#define make_int4(a, b, c, d) ((int4){.x = a, .y = b, .z = c, .w = d})
#define make_uchar4(a, b, c, d) ((uchar4){.x = a, .y = b, .z = c, .w = d})
#define make_ushort4(a, b, c, d) ((ushort4){.x = a, .y = b, .z = c, .w = d})
#define make_float4(a, b, c, d) ((float4){.x = a, .y = b, .z = c, .w = d})
// Conversions from the tex instruction's 4-register output to various types
#define TEX2D(type, ret) static inline __device__ void conv(type* out, unsigned a, unsigned b, unsigned c, unsigned d) {*out = (ret);}
TEX2D(unsigned char, a & 0xFF)
TEX2D(unsigned short, a & 0xFFFF)
TEX2D(float, a)
TEX2D(uchar2, make_uchar2(a & 0xFF, b & 0xFF))
TEX2D(ushort2, make_ushort2(a & 0xFFFF, b & 0xFFFF))
TEX2D(float2, make_float2(a, b))
TEX2D(uchar4, make_uchar4(a & 0xFF, b & 0xFF, c & 0xFF, d & 0xFF))
TEX2D(ushort4, make_ushort4(a & 0xFFFF, b & 0xFFFF, c & 0xFFFF, d & 0xFFFF))
TEX2D(float4, make_float4(a, b, c, d))
// Template calling tex instruction and converting the output to the selected type
template<typename T>
inline __device__ T tex2D(cudaTextureObject_t texObject, float x, float y)
{
T ret;
unsigned ret1, ret2, ret3, ret4;
asm("tex.2d.v4.u32.f32 {%0, %1, %2, %3}, [%4, {%5, %6}];" :
"=r"(ret1), "=r"(ret2), "=r"(ret3), "=r"(ret4) :
"l"(texObject), "f"(x), "f"(y));
conv(&ret, ret1, ret2, ret3, ret4);
return ret;
}
template<>
inline __device__ float4 tex2D<float4>(cudaTextureObject_t texObject, float x, float y)
{
float4 ret;
asm("tex.2d.v4.f32.f32 {%0, %1, %2, %3}, [%4, {%5, %6}];" :
"=r"(ret.x), "=r"(ret.y), "=r"(ret.z), "=r"(ret.w) :
"l"(texObject), "f"(x), "f"(y));
return ret;
}
template<>
inline __device__ float tex2D<float>(cudaTextureObject_t texObject, float x, float y)
{
return tex2D<float4>(texObject, x, y).x;
}
template<>
inline __device__ float2 tex2D<float2>(cudaTextureObject_t texObject, float x, float y)
{
float4 ret = tex2D<float4>(texObject, x, y);
return make_float2(ret.x, ret.y);
}
// Math helper functions
static inline __device__ float floorf(float a) { return __builtin_floorf(a); }
static inline __device__ float floor(float a) { return __builtin_floorf(a); }
static inline __device__ double floor(double a) { return __builtin_floor(a); }
static inline __device__ float ceilf(float a) { return __builtin_ceilf(a); }
static inline __device__ float ceil(float a) { return __builtin_ceilf(a); }
static inline __device__ double ceil(double a) { return __builtin_ceil(a); }
static inline __device__ float truncf(float a) { return __builtin_truncf(a); }
static inline __device__ float trunc(float a) { return __builtin_truncf(a); }
static inline __device__ double trunc(double a) { return __builtin_trunc(a); }
static inline __device__ float fabsf(float a) { return __builtin_fabsf(a); }
static inline __device__ float fabs(float a) { return __builtin_fabsf(a); }
static inline __device__ double fabs(double a) { return __builtin_fabs(a); }
static inline __device__ float sqrtf(float a) { return __builtin_sqrtf(a); }
static inline __device__ float __saturatef(float a) { return __nvvm_saturate_f(a); }
static inline __device__ float __sinf(float a) { return __nvvm_sin_approx_f(a); }
static inline __device__ float __cosf(float a) { return __nvvm_cos_approx_f(a); }
static inline __device__ float __expf(float a) { return __nvvm_ex2_approx_f(a * (float)__builtin_log2(__builtin_exp(1))); }
static inline __device__ float __powf(float a, float b) { return __nvvm_ex2_approx_f(__nvvm_lg2_approx_f(a) * b); }
// Misc helper functions
extern "C" __device__ int printf(const char*, ...);
#endif /* COMPAT_CUDA_CUDA_RUNTIME_H */

View File

@@ -16,8 +16,8 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_CUDA_DYNLINK_LOADER_H
#define COMPAT_CUDA_DYNLINK_LOADER_H
#ifndef AV_COMPAT_CUDA_DYNLINK_LOADER_H
#define AV_COMPAT_CUDA_DYNLINK_LOADER_H
#include "libavutil/log.h"
#include "compat/w32dlfcn.h"
@@ -30,4 +30,4 @@
#include <ffnvcodec/dynlink_loader.h>
#endif /* COMPAT_CUDA_DYNLINK_LOADER_H */
#endif

compat/cuda/ptx2c.sh Executable file
View File

@@ -0,0 +1,36 @@
#!/bin/sh
# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
set -e
OUT="$1"
IN="$2"
NAME="$(basename "$IN" | sed 's/\..*//')"
printf "const char %s_ptx[] = \\" "$NAME" > "$OUT"
while read LINE
do
printf "\n\t\"%s\\\n\"" "$(printf "%s" "$LINE" | sed -e 's/\r//g' -e 's/["\\]/\\&/g')" >> "$OUT"
done < "$IN"
printf ";\n" >> "$OUT"
exit 0
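/* Hedged sketch of how the generated file is consumed from C (the module name
 * "vf_example_cuda" is hypothetical): the script turns vf_example_cuda.ptx into
 * a translation unit defining a single string constant, which callers declare
 * extern and hand to the CUDA driver at load time, e.g. via cuModuleLoadData(). */
extern const char vf_example_cuda_ptx[];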

View File

@@ -1,47 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <math.h>
#define FUN(name, type, op) \
type name(type x, type y) \
{ \
if (fpclassify(x) == FP_NAN) return y; \
if (fpclassify(y) == FP_NAN) return x; \
return x op y ? x : y; \
}
FUN(fmin, double, <)
FUN(fmax, double, >)
FUN(fminf, float, <)
FUN(fmaxf, float, >)
long double fmodl(long double x, long double y)
{
return fmod(x, y);
}
long double scalbnl(long double x, int exp)
{
return scalbn(x, exp);
}
long double copysignl(long double x, long double y)
{
return copysign(x, y);
}
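/* Hedged note (not part of the original file): the FUN macro above gives these
 * replacements C99 NaN semantics -- when one argument is a NaN the other is
 * returned, so fmin(NAN, 2.0) == 2.0 and fmaxf(NAN, 1.0f) == 1.0f. */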

View File

@@ -1,25 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
double fmin(double, double);
double fmax(double, double);
float fminf(float, float);
float fmaxf(float, float);
long double fmodl(long double, long double);
long double scalbnl(long double, int);
long double copysignl(long double, long double);

View File

@@ -38,7 +38,7 @@ static int optind = 1;
static int optopt;
static char *optarg;
static int getopt(int argc, char *argv[], const char *opts)
static int getopt(int argc, char *argv[], char *opts)
{
static int sp = 1;
int c;

View File

@@ -59,7 +59,7 @@ int avpriv_vsnprintf(char *s, size_t n, const char *fmt,
* recommends to provide _snprintf/_vsnprintf() a buffer size that
* is one less than the actual buffer, and zero it before calling
* _snprintf/_vsnprintf() to workaround this problem.
* See https://web.archive.org/web/20151214111935/http://msdn.microsoft.com/en-us/library/1kt27hek(v=vs.80).aspx */
* See http://msdn.microsoft.com/en-us/library/1kt27hek(v=vs.80).aspx */
memset(s, 0, n);
va_copy(ap_copy, ap);
ret = _vsnprintf(s, n - 1, fmt, ap_copy);

View File

@@ -27,19 +27,15 @@
#define COMPAT_OS2THREADS_H
#define INCL_DOS
#define INCL_DOSERRORS
#include <os2.h>
#undef __STRICT_ANSI__ /* for _beginthread() */
#include <stdlib.h>
#include <time.h>
#include <sys/builtin.h>
#include <sys/fmutex.h>
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/time.h"
typedef struct {
TID tid;
@@ -167,28 +163,6 @@ static av_always_inline int pthread_cond_broadcast(pthread_cond_t *cond)
return 0;
}
static av_always_inline int pthread_cond_timedwait(pthread_cond_t *cond,
pthread_mutex_t *mutex,
const struct timespec *abstime)
{
int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
ULONG t = av_clip64(abs_milli - av_gettime() / 1000, 0, ULONG_MAX);
__atomic_increment(&cond->wait_count);
pthread_mutex_unlock(mutex);
APIRET ret = DosWaitEventSem(cond->event_sem, t);
__atomic_decrement(&cond->wait_count);
DosPostEventSem(cond->ack_sem);
pthread_mutex_lock(mutex);
return (ret == ERROR_TIMEOUT) ? ETIMEDOUT : 0;
}
static av_always_inline int pthread_cond_wait(pthread_cond_t *cond,
pthread_mutex_t *mutex)
{

View File

@@ -218,7 +218,7 @@ while (<F>) {
# Lines of the form '} SOME_VERSION_NAME_1.0;'
if (/^[ \t]*\}[ \tA-Z0-9_.a-z]+;[ \t]*$/) {
$glob = 'glob';
# We tried to match symbols against this version, but none matched.
# We tried to match symbols agains this version, but none matched.
# Emit dummy hidden symbol to avoid marking this version WEAK.
if ($matches_attempted && $matched_symbols == 0) {
print " hidden:\n";

View File

@@ -1,599 +0,0 @@
/*
* Copyright (C) 2023 Rémi Denis-Courmont
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301, USA.
*/
#ifndef __STDC_VERSION_STDBIT_H__
#define __STDC_VERSION_STDBIT_H__ 202311L
#include <stdbool.h>
#include <limits.h> /* CHAR_BIT */
#define __STDC_ENDIAN_LITTLE__ 1234
#define __STDC_ENDIAN_BIG__ 4321
#ifdef __BYTE_ORDER__
# if (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
# define __STDC_ENDIAN_NATIVE__ __STDC_ENDIAN_LITTLE__
# elif (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
# define __STDC_ENDIAN_NATIVE__ __STDC_ENDIAN_BIG__
# else
# define __STDC_ENDIAN_NATIVE__ 3412
# endif
#elif defined(_MSC_VER)
# define __STDC_ENDIAN_NATIVE__ __STDC_ENDIAN_LITTLE__
#else
# error Not implemented.
#endif
#define __stdbit_generic_type_func(func, value) \
_Generic (value, \
unsigned long long: stdc_##func##_ull((unsigned long long)(value)), \
unsigned long: stdc_##func##_ul((unsigned long)(value)), \
unsigned int: stdc_##func##_ui((unsigned int)(value)), \
unsigned short: stdc_##func##_us((unsigned short)(value)), \
unsigned char: stdc_##func##_uc((unsigned char)(value)))
#if defined (__GNUC__) || defined (__clang__)
static inline unsigned int stdc_leading_zeros_ull(unsigned long long value)
{
return value ? __builtin_clzll(value) : (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_leading_zeros_ul(unsigned long value)
{
return value ? __builtin_clzl(value) : (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_leading_zeros_ui(unsigned int value)
{
return value ? __builtin_clz(value) : (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_leading_zeros_us(unsigned short value)
{
return stdc_leading_zeros_ui(value)
- CHAR_BIT * (sizeof (int) - sizeof (value));
}
static inline unsigned int stdc_leading_zeros_uc(unsigned char value)
{
return stdc_leading_zeros_ui(value) - (CHAR_BIT * (sizeof (int) - 1));
}
#else
static inline unsigned int __stdc_leading_zeros(unsigned long long value,
unsigned int size)
{
unsigned int zeros = size * CHAR_BIT;
while (value != 0) {
value >>= 1;
zeros--;
}
return zeros;
}
static inline unsigned int stdc_leading_zeros_ull(unsigned long long value)
{
return __stdc_leading_zeros(value, sizeof (value));
}
static inline unsigned int stdc_leading_zeros_ul(unsigned long value)
{
return __stdc_leading_zeros(value, sizeof (value));
}
static inline unsigned int stdc_leading_zeros_ui(unsigned int value)
{
return __stdc_leading_zeros(value, sizeof (value));
}
static inline unsigned int stdc_leading_zeros_us(unsigned short value)
{
return __stdc_leading_zeros(value, sizeof (value));
}
static inline unsigned int stdc_leading_zeros_uc(unsigned char value)
{
return __stdc_leading_zeros(value, sizeof (value));
}
#endif
#define stdc_leading_zeros(value) \
__stdbit_generic_type_func(leading_zeros, value)
static inline unsigned int stdc_leading_ones_ull(unsigned long long value)
{
return stdc_leading_zeros_ull(~value);
}
static inline unsigned int stdc_leading_ones_ul(unsigned long value)
{
return stdc_leading_zeros_ul(~value);
}
static inline unsigned int stdc_leading_ones_ui(unsigned int value)
{
return stdc_leading_zeros_ui(~value);
}
static inline unsigned int stdc_leading_ones_us(unsigned short value)
{
return stdc_leading_zeros_us(~value);
}
static inline unsigned int stdc_leading_ones_uc(unsigned char value)
{
return stdc_leading_zeros_uc(~value);
}
#define stdc_leading_ones(value) \
__stdbit_generic_type_func(leading_ones, value)
#if defined (__GNUC__) || defined (__clang__)
static inline unsigned int stdc_trailing_zeros_ull(unsigned long long value)
{
return value ? (unsigned int)__builtin_ctzll(value)
: (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_ul(unsigned long value)
{
return value ? (unsigned int)__builtin_ctzl(value)
: (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_ui(unsigned int value)
{
return value ? (unsigned int)__builtin_ctz(value)
: (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_us(unsigned short value)
{
return value ? (unsigned int)__builtin_ctz(value)
: (CHAR_BIT * sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_uc(unsigned char value)
{
return value ? (unsigned int)__builtin_ctz(value)
: (CHAR_BIT * sizeof (value));
}
#else
static inline unsigned int __stdc_trailing_zeros(unsigned long long value,
unsigned int size)
{
unsigned int zeros = 0;
if (!value)
return size * CHAR_BIT;
while ((value & 1) == 0) {
value >>= 1;
zeros++;
}
return zeros;
}
static inline unsigned int stdc_trailing_zeros_ull(unsigned long long value)
{
return __stdc_trailing_zeros(value, sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_ul(unsigned long value)
{
return __stdc_trailing_zeros(value, sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_ui(unsigned int value)
{
return __stdc_trailing_zeros(value, sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_us(unsigned short value)
{
return __stdc_trailing_zeros(value, sizeof (value));
}
static inline unsigned int stdc_trailing_zeros_uc(unsigned char value)
{
return __stdc_trailing_zeros(value, sizeof (value));
}
#endif
#define stdc_trailing_zeros(value) \
__stdbit_generic_type_func(trailing_zeros, value)
static inline unsigned int stdc_trailing_ones_ull(unsigned long long value)
{
return stdc_trailing_zeros_ull(~value);
}
static inline unsigned int stdc_trailing_ones_ul(unsigned long value)
{
return stdc_trailing_zeros_ul(~value);
}
static inline unsigned int stdc_trailing_ones_ui(unsigned int value)
{
return stdc_trailing_zeros_ui(~value);
}
static inline unsigned int stdc_trailing_ones_us(unsigned short value)
{
return stdc_trailing_zeros_us(~value);
}
static inline unsigned int stdc_trailing_ones_uc(unsigned char value)
{
return stdc_trailing_zeros_uc(~value);
}
#define stdc_trailing_ones(value) \
__stdbit_generic_type_func(trailing_ones, value)
static inline unsigned int stdc_first_leading_one_ull(unsigned long long value)
{
return value ? (stdc_leading_zeros_ull(value) + 1) : 0;
}
static inline unsigned int stdc_first_leading_one_ul(unsigned long value)
{
return value ? (stdc_leading_zeros_ul(value) + 1) : 0;
}
static inline unsigned int stdc_first_leading_one_ui(unsigned int value)
{
return value ? (stdc_leading_zeros_ui(value) + 1) : 0;
}
static inline unsigned int stdc_first_leading_one_us(unsigned short value)
{
return value ? (stdc_leading_zeros_us(value) + 1) : 0;
}
static inline unsigned int stdc_first_leading_one_uc(unsigned char value)
{
return value ? (stdc_leading_zeros_uc(value) + 1) : 0;
}
#define stdc_first_leading_one(value) \
__stdbit_generic_type_func(first_leading_one, value)
static inline unsigned int stdc_first_leading_zero_ull(unsigned long long value)
{
return stdc_leading_ones_ull(~value);
}
static inline unsigned int stdc_first_leading_zero_ul(unsigned long value)
{
return stdc_leading_ones_ul(~value);
}
static inline unsigned int stdc_first_leading_zero_ui(unsigned int value)
{
return stdc_leading_ones_ui(~value);
}
static inline unsigned int stdc_first_leading_zero_us(unsigned short value)
{
return stdc_leading_ones_us(~value);
}
static inline unsigned int stdc_first_leading_zero_uc(unsigned char value)
{
return stdc_leading_ones_uc(~value);
}
#define stdc_first_leading_zero(value) \
__stdbit_generic_type_func(first_leading_zero, value)
#if defined (__GNUC__) || defined (__clang__)
static inline unsigned int stdc_first_trailing_one_ull(unsigned long long value)
{
return __builtin_ffsll(value);
}
static inline unsigned int stdc_first_trailing_one_ul(unsigned long value)
{
return __builtin_ffsl(value);
}
static inline unsigned int stdc_first_trailing_one_ui(unsigned int value)
{
return __builtin_ffs(value);
}
static inline unsigned int stdc_first_trailing_one_us(unsigned short value)
{
return __builtin_ffs(value);
}
static inline unsigned int stdc_first_trailing_one_uc(unsigned char value)
{
return __builtin_ffs(value);
}
#else
static inline unsigned int stdc_first_trailing_one_ull(unsigned long long value)
{
return value ? (1 + stdc_trailing_zeros_ull(value)) : 0;
}
static inline unsigned int stdc_first_trailing_one_ul(unsigned long value)
{
return value ? (1 + stdc_trailing_zeros_ul(value)) : 0;
}
static inline unsigned int stdc_first_trailing_one_ui(unsigned int value)
{
return value ? (1 + stdc_trailing_zeros_ui(value)) : 0;
}
static inline unsigned int stdc_first_trailing_one_us(unsigned short value)
{
return value ? (1 + stdc_trailing_zeros_us(value)) : 0;
}
static inline unsigned int stdc_first_trailing_one_uc(unsigned char value)
{
return value ? (1 + stdc_trailing_zeros_uc(value)) : 0;
}
#endif
#define stdc_first_trailing_one(value) \
__stdbit_generic_type_func(first_trailing_one, value)
static inline unsigned int stdc_first_trailing_zero_ull(unsigned long long value)
{
return stdc_first_trailing_one_ull(~value);
}
static inline unsigned int stdc_first_trailing_zero_ul(unsigned long value)
{
return stdc_first_trailing_one_ul(~value);
}
static inline unsigned int stdc_first_trailing_zero_ui(unsigned int value)
{
return stdc_first_trailing_one_ui(~value);
}
static inline unsigned int stdc_first_trailing_zero_us(unsigned short value)
{
return stdc_first_trailing_one_us(~value);
}
static inline unsigned int stdc_first_trailing_zero_uc(unsigned char value)
{
return stdc_first_trailing_one_uc(~value);
}
#define stdc_first_trailing_zero(value) \
__stdbit_generic_type_func(first_trailing_zero, value)
#if defined (__GNUC__) || defined (__clang__)
static inline unsigned int stdc_count_ones_ull(unsigned long long value)
{
return __builtin_popcountll(value);
}
static inline unsigned int stdc_count_ones_ul(unsigned long value)
{
return __builtin_popcountl(value);
}
static inline unsigned int stdc_count_ones_ui(unsigned int value)
{
return __builtin_popcount(value);
}
static inline unsigned int stdc_count_ones_us(unsigned short value)
{
return __builtin_popcount(value);
}
static inline unsigned int stdc_count_ones_uc(unsigned char value)
{
return __builtin_popcount(value);
}
#else
static inline unsigned int __stdc_count_ones(unsigned long long value,
unsigned int size)
{
unsigned int ones = 0;
for (unsigned int c = 0; c < (size * CHAR_BIT); c++) {
ones += value & 1;
value >>= 1;
}
return ones;
}
static inline unsigned int stdc_count_ones_ull(unsigned long long value)
{
return __stdc_count_ones(value, sizeof (value));
}
static inline unsigned int stdc_count_ones_ul(unsigned long value)
{
return __stdc_count_ones(value, sizeof (value));
}
static inline unsigned int stdc_count_ones_ui(unsigned int value)
{
return __stdc_count_ones(value, sizeof (value));
}
static inline unsigned int stdc_count_ones_us(unsigned short value)
{
return __stdc_count_ones(value, sizeof (value));
}
static inline unsigned int stdc_count_ones_uc(unsigned char value)
{
return __stdc_count_ones(value, sizeof (value));
}
#endif
#define stdc_count_ones(value) \
__stdbit_generic_type_func(count_ones, value)
static inline unsigned int stdc_count_zeros_ull(unsigned long long value)
{
return stdc_count_ones_ull(~value);
}
static inline unsigned int stdc_count_zeros_ul(unsigned long value)
{
return stdc_count_ones_ul(~value);
}
static inline unsigned int stdc_count_zeros_ui(unsigned int value)
{
return stdc_count_ones_ui(~value);
}
static inline unsigned int stdc_count_zeros_us(unsigned short value)
{
return stdc_count_ones_us(~value);
}
static inline unsigned int stdc_count_zeros_uc(unsigned char value)
{
return stdc_count_ones_uc(~value);
}
#define stdc_count_zeros(value) \
__stdbit_generic_type_func(count_zeros, value)
static inline bool stdc_has_single_bit_ull(unsigned long long value)
{
return value && (value & (value - 1)) == 0;
}
static inline bool stdc_has_single_bit_ul(unsigned long value)
{
return value && (value & (value - 1)) == 0;
}
static inline bool stdc_has_single_bit_ui(unsigned int value)
{
return value && (value & (value - 1)) == 0;
}
static inline bool stdc_has_single_bit_us(unsigned short value)
{
return value && (value & (value - 1)) == 0;
}
static inline bool stdc_has_single_bit_uc(unsigned char value)
{
return value && (value & (value - 1)) == 0;
}
#define stdc_has_single_bit(value) \
__stdbit_generic_type_func(has_single_bit, value)
static inline unsigned int stdc_bit_width_ull(unsigned long long value)
{
return (CHAR_BIT * sizeof (value)) - stdc_leading_zeros_ull(value);
}
static inline unsigned int stdc_bit_width_ul(unsigned long value)
{
return (CHAR_BIT * sizeof (value)) - stdc_leading_zeros_ul(value);
}
static inline unsigned int stdc_bit_width_ui(unsigned int value)
{
return (CHAR_BIT * sizeof (value)) - stdc_leading_zeros_ui(value);
}
static inline unsigned int stdc_bit_width_us(unsigned short value)
{
return (CHAR_BIT * sizeof (value)) - stdc_leading_zeros_us(value);
}
static inline unsigned int stdc_bit_width_uc(unsigned char value)
{
return (CHAR_BIT * sizeof (value)) - stdc_leading_zeros_uc(value);
}
#define stdc_bit_width(value) \
__stdbit_generic_type_func(bit_width, value)
static inline unsigned long long stdc_bit_floor_ull(unsigned long long value)
{
return value ? (1ULL << (stdc_bit_width_ull(value) - 1)) : 0ULL;
}
static inline unsigned long stdc_bit_floor_ul(unsigned long value)
{
return value ? (1UL << (stdc_bit_width_ul(value) - 1)) : 0UL;
}
static inline unsigned int stdc_bit_floor_ui(unsigned int value)
{
return value ? (1U << (stdc_bit_width_ui(value) - 1)) : 0U;
}
static inline unsigned short stdc_bit_floor_us(unsigned short value)
{
return value ? (1U << (stdc_bit_width_us(value) - 1)) : 0U;
}
static inline unsigned int stdc_bit_floor_uc(unsigned char value)
{
return value ? (1U << (stdc_bit_width_uc(value) - 1)) : 0U;
}
#define stdc_bit_floor(value) \
__stdbit_generic_type_func(bit_floor, value)
/* NOTE: the result of bit ceiling is undefined on overflow. */
static inline unsigned long long stdc_bit_ceil_ull(unsigned long long value)
{
return 1ULL << (value ? stdc_bit_width_ull(value - 1) : 0);
}
static inline unsigned long stdc_bit_ceil_ul(unsigned long value)
{
return 1UL << (value ? stdc_bit_width_ul(value - 1) : 0);
}
static inline unsigned int stdc_bit_ceil_ui(unsigned int value)
{
return 1U << (value ? stdc_bit_width_ui(value - 1) : 0);
}
static inline unsigned short stdc_bit_ceil_us(unsigned short value)
{
return 1U << (value ? stdc_bit_width_us(value - 1) : 0);
}
static inline unsigned int stdc_bit_ceil_uc(unsigned char value)
{
return 1U << (value ? stdc_bit_width_uc(value - 1) : 0);
}
#define stdc_bit_ceil(value) \
__stdbit_generic_type_func(bit_ceil, value)
#endif /* __STDC_VERSION_STDBIT_H__ */
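/* Hedged usage sketch (not part of the original header): the type-generic macros
 * dispatch on the argument's unsigned type via _Generic, so the same spelling
 * works at every width. The function name is hypothetical. */
#include <assert.h>

static inline void stdbit_example(void)
{
    assert(stdc_leading_zeros((unsigned char)0x10) == 3); /* 8-bit operand width */
    assert(stdc_count_ones(0xF0u) == 4);
    assert(stdc_bit_width(1000u) == 10);
    assert(stdc_bit_ceil(5u) == 8u);
    assert(stdc_has_single_bit(64u));
}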

View File

@@ -20,41 +20,11 @@
#define COMPAT_W32DLFCN_H
#ifdef _WIN32
#include <stdint.h>
#include <windows.h>
#include "config.h"
#include "libavutil/macros.h"
#include "libavutil/mem.h"
#if (_WIN32_WINNT < 0x0602) || HAVE_WINRT
#include "libavutil/wchar_filename.h"
static inline wchar_t *get_module_filename(HMODULE module)
{
wchar_t *path = NULL, *new_path;
DWORD path_size = 0, path_len;
do {
path_size = path_size ? FFMIN(2 * path_size, INT16_MAX + 1) : MAX_PATH;
new_path = av_realloc_array(path, path_size, sizeof *path);
if (!new_path) {
av_free(path);
return NULL;
}
path = new_path;
// Returns path_size in case of insufficient buffer.
// Whether the error is set or not and whether the output
// is null-terminated or not depends on the version of Windows.
path_len = GetModuleFileNameW(module, path, path_size);
} while (path_len && path_size <= INT16_MAX && path_size <= path_len);
if (!path_len) {
av_free(path);
return NULL;
}
return path;
}
#endif
/**
* Safe function used to open dynamic libs. This attempts to improve program security
* by removing the current directory from the dll search path. Only dll's found in the
@@ -64,53 +34,29 @@ static inline wchar_t *get_module_filename(HMODULE module)
*/
static inline HMODULE win32_dlopen(const char *name)
{
wchar_t *name_w;
HMODULE module = NULL;
if (utf8towchar(name, &name_w))
name_w = NULL;
#if _WIN32_WINNT < 0x0602
// On Win7 and earlier we check if KB2533623 is available
// Need to check if KB2533623 is available
if (!GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "SetDefaultDllDirectories")) {
wchar_t *path = NULL, *new_path;
DWORD pathlen, pathsize, namelen;
if (!name_w)
HMODULE module = NULL;
wchar_t *path = NULL, *name_w = NULL;
DWORD pathlen;
if (utf8towchar(name, &name_w))
goto exit;
namelen = wcslen(name_w);
path = (wchar_t *)av_mallocz_array(MAX_PATH, sizeof(wchar_t));
// Try local directory first
path = get_module_filename(NULL);
if (!path)
pathlen = GetModuleFileNameW(NULL, path, MAX_PATH);
pathlen = wcsrchr(path, '\\') - path;
if (pathlen == 0 || pathlen + wcslen(name_w) + 2 > MAX_PATH)
goto exit;
new_path = wcsrchr(path, '\\');
if (!new_path)
goto exit;
pathlen = new_path - path;
pathsize = pathlen + namelen + 2;
new_path = av_realloc_array(path, pathsize, sizeof *path);
if (!new_path)
goto exit;
path = new_path;
path[pathlen] = '\\';
wcscpy(path + pathlen + 1, name_w);
module = LoadLibraryExW(path, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
if (module == NULL) {
// Next try System32 directory
pathlen = GetSystemDirectoryW(path, pathsize);
if (!pathlen)
pathlen = GetSystemDirectoryW(path, MAX_PATH);
if (pathlen == 0 || pathlen + wcslen(name_w) + 2 > MAX_PATH)
goto exit;
// Buffer is not enough in two cases:
// 1. system directory + \ + module name
// 2. system directory even without the module name.
if (pathlen + namelen + 2 > pathsize) {
pathsize = pathlen + namelen + 2;
new_path = av_realloc_array(path, pathsize, sizeof *path);
if (!new_path)
goto exit;
path = new_path;
// Query again to handle the case #2.
pathlen = GetSystemDirectoryW(path, pathsize);
if (!pathlen)
goto exit;
}
path[pathlen] = L'\\';
path[pathlen] = '\\';
wcscpy(path + pathlen + 1, name_w);
module = LoadLibraryExW(path, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
}
@@ -127,19 +73,16 @@ exit:
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
#endif
#if HAVE_WINRT
if (!name_w)
wchar_t *name_w = NULL;
int ret;
if (utf8towchar(name, &name_w))
return NULL;
module = LoadPackagedLibrary(name_w, 0);
#else
#define LOAD_FLAGS (LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32)
/* filename may be be in CP_ACP */
if (!name_w)
return LoadLibraryExA(name, NULL, LOAD_FLAGS);
module = LoadLibraryExW(name_w, NULL, LOAD_FLAGS);
#undef LOAD_FLAGS
#endif
ret = LoadPackagedLibrary(name_w, 0);
av_free(name_w);
return module;
return ret;
#else
return LoadLibraryExA(name, NULL, LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32);
#endif
}
#define dlopen(name, flags) win32_dlopen(name)
#define dlclose FreeLibrary

View File

@@ -35,23 +35,21 @@
* As most functions here are used without checking return values,
* only implement return values as necessary. */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <process.h>
#include <time.h>
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/internal.h"
#include "libavutil/mem.h"
#include "libavutil/time.h"
#include "libavutil/wchar_filename.h"
typedef struct w32pthread_t {
typedef struct pthread_t {
void *handle;
void *(*func)(void* arg);
void *arg;
void *ret;
} *pthread_t;
} pthread_t;
/* use light weight mutex/condition variable API for Windows Vista and later */
typedef SRWLOCK pthread_mutex_t;
@@ -63,55 +61,31 @@ typedef CONDITION_VARIABLE pthread_cond_t;
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
#define PTHREAD_CANCEL_ENABLE 1
#define PTHREAD_CANCEL_DISABLE 0
#if HAVE_WINRT
#define THREADFUNC_RETTYPE DWORD
#else
#define THREADFUNC_RETTYPE unsigned
#endif
av_unused static THREADFUNC_RETTYPE
__stdcall attribute_align_arg win32thread_worker(void *arg)
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
pthread_t h = (pthread_t)arg;
pthread_t *h = (pthread_t*)arg;
h->ret = h->func(h->arg);
return 0;
}
av_unused static int pthread_create(pthread_t *thread, const void *unused_attr,
static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
void *(*start_routine)(void*), void *arg)
{
pthread_t ret;
ret = (pthread_t)av_mallocz(sizeof(*ret));
if (!ret)
return EAGAIN;
ret->func = start_routine;
ret->arg = arg;
thread->func = start_routine;
thread->arg = arg;
#if HAVE_WINRT
ret->handle = (void*)CreateThread(NULL, 0, win32thread_worker, ret,
0, NULL);
thread->handle = (void*)CreateThread(NULL, 0, win32thread_worker, thread,
0, NULL);
#else
ret->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, ret,
0, NULL);
thread->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, thread,
0, NULL);
#endif
if (!ret->handle) {
av_free(ret);
return EAGAIN;
}
*thread = ret;
return 0;
return !thread->handle;
}
av_unused static int pthread_join(pthread_t thread, void **value_ptr)
static av_unused int pthread_join(pthread_t thread, void **value_ptr)
{
DWORD ret = WaitForSingleObject(thread->handle, INFINITE);
DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
if (ret != WAIT_OBJECT_0) {
if (ret == WAIT_ABANDONED)
return EINVAL;
@@ -119,9 +93,8 @@ av_unused static int pthread_join(pthread_t thread, void **value_ptr)
return EDEADLK;
}
if (value_ptr)
*value_ptr = thread->ret;
CloseHandle(thread->handle);
av_free(thread);
*value_ptr = thread.ret;
CloseHandle(thread.handle);
return 0;
}
@@ -149,7 +122,7 @@ static inline int pthread_mutex_unlock(pthread_mutex_t *m)
typedef INIT_ONCE pthread_once_t;
#define PTHREAD_ONCE_INIT INIT_ONCE_STATIC_INIT
av_unused static int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
{
BOOL pending = FALSE;
InitOnceBeginInitialize(once_control, 0, &pending, NULL);
@@ -183,65 +156,10 @@ static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex
return 0;
}
static inline int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
const struct timespec *abstime)
{
int64_t abs_milli = abstime->tv_sec * 1000LL + abstime->tv_nsec / 1000000;
DWORD t = av_clip64(abs_milli - av_gettime() / 1000, 0, UINT32_MAX);
if (!SleepConditionVariableSRW(cond, mutex, t, 0)) {
DWORD err = GetLastError();
if (err == ERROR_TIMEOUT)
return ETIMEDOUT;
else
return EINVAL;
}
return 0;
}
static inline int pthread_cond_signal(pthread_cond_t *cond)
{
WakeConditionVariable(cond);
return 0;
}
static inline int pthread_setcancelstate(int state, int *oldstate)
{
return 0;
}
static inline int win32_thread_setname(const char *name)
{
#if !HAVE_UWP
typedef HRESULT (WINAPI *SetThreadDescriptionFn)(HANDLE, PCWSTR);
// Although SetThreadDescription lives in kernel32.dll, on Windows Server 2016,
// Windows 10 LTSB 2016 and Windows 10 version 1607, it was only available in
// kernelbase.dll. So, load it from there for maximum coverage.
HMODULE kernelbase = GetModuleHandleW(L"kernelbase.dll");
if (!kernelbase)
return AVERROR(ENOSYS);
SetThreadDescriptionFn pSetThreadDescription =
(SetThreadDescriptionFn)GetProcAddress(kernelbase, "SetThreadDescription");
if (!pSetThreadDescription)
return AVERROR(ENOSYS);
wchar_t *wname;
if (utf8towchar(name, &wname) < 0)
return AVERROR(ENOMEM);
HRESULT hr = pSetThreadDescription(GetCurrentThread(), wname);
av_free(wname);
return SUCCEEDED(hr) ? 0 : AVERROR(EINVAL);
#else
// UWP is not supported because we cannot use LoadLibrary/GetProcAddress to
// detect the availability of the SetThreadDescription API. There is a small
// gap in Windows builds 1507-1607 where it was not available. UWP allows
// querying the availability of APIs with QueryOptionalDelayLoadedAPI, but it
// requires /DELAYLOAD:kernel32.dll during linking, and we cannot enforce that.
return AVERROR(ENOSYS);
#endif
}
#endif /* COMPAT_W32PTHREADS_H */

View File

@@ -48,7 +48,7 @@ trap 'rm -f -- $libname' EXIT
if [ -n "$AR" ]; then
$AR rcs ${libname} $@ >/dev/null
else
lib.exe -out:${libname} $@ >/dev/null
lib -out:${libname} $@ >/dev/null
fi
if [ $? != 0 ]; then
echo "Could not create temporary library." >&2
@@ -108,7 +108,7 @@ if [ -n "$NM" ]; then
cut -d' ' -f3 |
sed -e "s/^${prefix}//")
else
dump=$(dumpbin.exe -linkermember:1 ${libname} |
dump=$(dumpbin -linkermember:1 ${libname} |
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
tail -n +2 |
cut -d' ' -f3)

View File

@@ -4,6 +4,6 @@ LINK_EXE_PATH=$(dirname "$(command -v cl)")/link
if [ -x "$LINK_EXE_PATH" ]; then
"$LINK_EXE_PATH" $@
else
link.exe $@
link $@
fi
exit $?

View File

@@ -1,32 +0,0 @@
#!/bin/sh
if [ "$1" = "--version" ]; then
rc.exe -?
exit $?
fi
if [ $# -lt 2 ]; then
echo "Usage: mswindres [-I/include/path ...] [-DSOME_DEFINE ...] [-o output.o] input.rc [output.o]" >&2
exit 0
fi
EXTRA_OPTS="-nologo"
while [ $# -gt 2 ]; do
case $1 in
-D*) EXTRA_OPTS="$EXTRA_OPTS -d$(echo $1 | sed -e "s/^..//" -e "s/ /\\\\ /g")" ;;
-I*) EXTRA_OPTS="$EXTRA_OPTS -i$(echo $1 | sed -e "s/^..//" -e "s/ /\\\\ /g")" ;;
-o) OPT_OUT="$2"; shift ;;
esac
shift
done
IN="$1"
if [ -z "$OPT_OUT" ]; then
OUT="$2"
else
OUT="$OPT_OUT"
fi
eval set -- $EXTRA_OPTS
rc.exe "$@" -fo "$OUT" "$IN"

configure vendored

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -38,7 +38,7 @@ PROJECT_NAME = FFmpeg
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER =
PROJECT_NUMBER = 4.1.2
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -1093,7 +1093,7 @@ HTML_STYLESHEET =
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# standard style sheet and is therefor more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra stylesheet files is of importance (e.g. the last
# stylesheet in the list overrules the setting of the previous ones in the
@@ -1636,7 +1636,7 @@ EXTRA_PACKAGES =
# Note: Only use a user-defined header if you know what you are doing! The
# following commands have a special meaning inside the header: $title,
# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
# $projectbrief, $projectlogo. Doxygen will replace $title with the empty string,
# $projectbrief, $projectlogo. Doxygen will replace $title with the empy string,
# for the replacement values of the other commands the user is referred to
# HTML_HEADER.
# This tag requires that the tag GENERATE_LATEX is set to YES.
@@ -1980,7 +1980,6 @@ PREDEFINED = __attribute__(x)= \
av_alloc_size(...)= \
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
AV_GCC_VERSION_AT_MOST(x,y)=0 \
"FF_PAD_STRUCTURE(name,size,...)=typedef struct name { __VA_ARGS__ } name;" \
__GNUC__
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this

View File

@@ -19,7 +19,6 @@ MANPAGES3 = $(LIBRARIES-yes:%=doc/%.3)
MANPAGES = $(MANPAGES1) $(MANPAGES3)
PODPAGES = $(AVPROGS-yes:%=doc/%.pod) $(AVPROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
doc/community.html \
doc/developer.html \
doc/faq.html \
doc/fate.html \
@@ -28,10 +27,6 @@ HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMP
doc/mailing-list-faq.html \
doc/nut.html \
doc/platform.html \
doc/drawvg-reference.html \
$(SRC_PATH)/doc/bootstrap.min.css \
$(SRC_PATH)/doc/style.min.css \
$(SRC_PATH)/doc/default.css \
TXTPAGES = doc/fate.txt \
@@ -61,7 +56,7 @@ GENTEXI := $(GENTEXI:%=doc/avoptions_%.texi)
$(GENTEXI): TAG = GENTEXI
$(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
$(M)doc/print_options$(HOSTEXESUF) $* > $@
$(M)doc/print_options $* > $@
doc/%.html: TAG = HTML
doc/%-all.html: TAG = HTML
@@ -107,7 +102,7 @@ DOXY_INPUT_DEPS = $(addprefix $(SRC_PATH)/, $(DOXY_INPUT)) ffbuild/config.mak
doc/doxy/html: TAG = DOXY
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT_DEPS)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $$PWD/doc/doxy $(SRC_PATH) doc/Doxyfile $(DOXYGEN) $(DOXY_INPUT);
$(M)OUT_DIR=$$PWD/doc/doxy; cd $(SRC_PATH); ./doc/doxy-wrapper.sh $$OUT_DIR $< $(DOXYGEN) $(DOXY_INPUT);
install-doc: install-html install-man


@@ -3,9 +3,9 @@
The FFmpeg developers.
For details about the authorship, see the Git history of the project
(https://git.ffmpeg.org/ffmpeg), e.g. by typing the command
(git://source.ffmpeg.org/ffmpeg), e.g. by typing the command
@command{git log} in the FFmpeg source directory, or browsing the
online repository at @url{https://git.ffmpeg.org/ffmpeg}.
online repository at @url{http://source.ffmpeg.org}.
Maintainers for the specific components are listed in the file
@file{MAINTAINERS} in the source code tree.


@@ -81,15 +81,12 @@ Top-left position.
@end table
@item tick_rate
Set the tick rate (@emph{time_scale / num_units_in_display_tick}) in
Set the tick rate (@emph{num_units_in_display_tick / time_scale}) in
the timing info in the sequence header.
@item num_ticks_per_picture
Set the number of ticks in each picture, to indicate that the stream
has a fixed framerate. Ignored if @option{tick_rate} is not also set.
@item delete_padding
Deletes Padding OBUs.
@end table
@section chomp
@@ -101,34 +98,9 @@ Remove zero padding at the end of a packet.
Extract the core from a DCA/DTS stream, dropping extensions such as
DTS-HD.
@section dovi_rpu
Manipulate Dolby Vision metadata in a HEVC/AV1 bitstream, optionally enabling
metadata compression.
@table @option
@item strip
If enabled, strip all Dolby Vision metadata (configuration record + RPU data
blocks) from the stream.
@item compression
Which compression level to enable.
@table @samp
@item none
No metadata compression.
@item limited
Limited metadata compression scheme. Should be compatible with most devices.
This is the default.
@item extended
Extended metadata compression. Devices are not required to support this. Note
that this level currently behaves the same as @samp{limited} in libavcodec.
@end table
@end table
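A minimal sketch of the @option{strip} option, with placeholder file names and a stream-copied video track:
@example
# sketch only: strip all Dolby Vision metadata without re-encoding
ffmpeg -i INPUT -c:v copy -bsf:v dovi_rpu=strip=1 OUTPUT.mp4
@end example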
@section dump_extra
Add extradata to the beginning of the filtered packets except when
said packets already exactly begin with the extradata that is intended
to be added.
Add extradata to the beginning of the filtered packets.
@table @option
@item freq
@@ -145,7 +117,7 @@ add extradata to all packets
@end table
@end table
If not specified it is assumed @samp{k}.
If not specified it is assumed @samp{e}.
For example the following @command{ffmpeg} command forces a global
header (thus disabling individual packet headers) in the H.264 packets
@@ -155,86 +127,10 @@ the header stored in extradata to the key packets:
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
@end example
@section dv_error_marker
Blocks in DV which are marked as damaged are replaced by blocks of the specified color.
@table @option
@item color
The color to replace damaged blocks by
@item sta
A 16 bit mask which specifies which of the 16 possible error status values are
to be replaced by colored blocks. 0xFFFE is the default which replaces all non 0
error status values.
@table @samp
@item ok
No error, no concealment
@item err
Error, No concealment
@item res
Reserved
@item notok
Error or concealment
@item notres
Not reserved
@item Aa, Ba, Ca, Ab, Bb, Cb, A, B, C, a, b, erri, erru
The specific error status code
@end table
see page 44-46 or section 5.5 of
@url{http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf}
@end table
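A minimal sketch, keeping the default @option{sta} mask and painting damaged blocks green (placeholder files):
@example
# sketch only: replace damaged DV blocks with green while stream copying
ffmpeg -i INPUT -c:v copy -bsf:v dv_error_marker=color=green OUTPUT.avi
@end example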
@section eac3_core
Extract the core from a E-AC-3 stream, dropping extra channels.
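For instance, a sketch with placeholder file names:
@example
# sketch only: keep just the E-AC-3 core while stream copying the audio
ffmpeg -i INPUT -c:a copy -bsf:a eac3_core OUTPUT.mkv
@end example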
@section eia608_to_smpte436m
Convert from a @code{EIA_608} stream to a @code{SMPTE_436M_ANC} data stream, wrapping the closed captions in CTA-708 CDP VANC packets.
@table @option
@item line_number
Choose which line number the generated VANC packets should go on. You generally want either line 9 (the default) or 11.
@item wrapping_type
Choose the SMPTE 436M wrapping type, defaults to @samp{vanc_frame}.
It accepts the values:
@table @samp
@item vanc_frame
VANC frame (interlaced or segmented progressive frame)
@item vanc_field_1
@item vanc_field_2
@item vanc_progressive_frame
@end table
@item sample_coding
Choose the SMPTE 436M sample coding, defaults to @samp{8bit_luma}.
It accepts the values:
@table @samp
@item 8bit_luma
8-bit component luma samples
@item 8bit_color_diff
8-bit component color difference samples
@item 8bit_luma_and_color_diff
8-bit component luma and color difference samples
@item 10bit_luma
10-bit component luma samples
@item 10bit_color_diff
10-bit component color difference samples
@item 10bit_luma_and_color_diff
10-bit component luma and color difference samples
@item 8bit_luma_parity_error
8-bit component luma samples with parity error
@item 8bit_color_diff_parity_error
8-bit component color difference samples with parity error
@item 8bit_luma_and_color_diff_parity_error
8-bit component luma and color difference samples with parity error
@end table
@item initial_cdp_sequence_cntr
The initial value of the CDP's 16-bit unsigned integer @code{cdp_hdr_sequence_cntr} and @code{cdp_ftr_sequence_cntr} fields. Defaults to 0.
@item cdp_frame_rate
Set the CDP's @code{cdp_frame_rate} field. This doesn't actually change the timing of the data stream, it just changes the values inserted in that field in the generated CDP packets. Defaults to @samp{30000/1001}.
@end table
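A rough sketch using only the options documented above; the subtitle stream specifier and the MXF output are assumptions:
@example
# sketch only: wrap EIA-608 captions as SMPTE 436M VANC on line 9
ffmpeg -i INPUT -c copy -bsf:s eia608_to_smpte436m=line_number=9 OUTPUT.mxf
@end example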
@section extract_extradata
Extract the in-band extradata.
@@ -268,13 +164,6 @@ Identical to @option{pass_types}, except the units in the given set
removed and all others passed through.
@end table
The types used by pass_types and remove_types correspond to NAL unit types
(nal_unit_type) in H.264, HEVC and H.266 (see Table 7-1 in the H.264
and HEVC specifications or Table 5 in the H.266 specification), to
marker values for JPEG (without 0xFF prefix) and to start codes without
start code prefix (i.e. the byte following the 0x000001) for MPEG-2.
For VP8 and VP9, every unit has type zero.
Extradata is unchanged by this transformation, but note that if the stream
contains inline parameter sets then the output may be unusable if they are
removed.
@@ -289,21 +178,6 @@ To remove all AUDs, SEI and filler from an H.265 stream:
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=35|38-40' OUTPUT
@end example
To remove all user data from a MPEG-2 stream, including Closed Captions:
@example
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=178' OUTPUT
@end example
To remove all SEI from a H264 stream, including Closed Captions:
@example
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=6' OUTPUT
@end example
To remove all prefix and suffix SEI from a HEVC stream, including Closed Captions and dynamic HDR:
@example
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=39|40' OUTPUT
@end example
@section hapqa_extract
Extract Rgb or Alpha part of an HAPQA file, without recompression, in order to create an HAPQ or an HAPAlphaOnly file.
@@ -338,20 +212,12 @@ Modify metadata embedded in an H.264 stream.
Insert or remove AUD NAL units in all access units of the stream.
@table @samp
@item pass
@item insert
@item remove
@end table
Default is pass.
@item sample_aspect_ratio
Set the sample aspect ratio of the stream in the VUI parameters.
See H.264 table E-1.
@item overscan_appropriate_flag
Set whether the stream is suitable for display using overscan
or not (see H.264 section E.2.1).
@item video_format
@item video_full_range_flag
@@ -369,7 +235,7 @@ Set the chroma sample location in the stream (see H.264 section
E.2.1 and figure E-1).
@item tick_rate
Set the tick rate (time_scale / num_units_in_tick) in the VUI
Set the tick rate (num_units_in_tick / time_scale) in the VUI
parameters. This is the smallest time unit representable in the
stream, and in many cases represents the field rate of the stream
(double the frame rate).
@@ -378,11 +244,6 @@ Set whether the stream has fixed framerate - typically this indicates
that the framerate is exactly half the tick rate, but the exact
meaning is dependent on interlacing and the picture structure (see
H.264 section E.2.1 and table E-6).
@item zero_new_constraint_set_flags
Zero constraint_set4_flag and constraint_set5_flag in the SPS. These
bits were reserved in a previous version of the H.264 spec, and thus
some hardware decoders require these to be zero. The result of zeroing
this is still a valid bitstream.
@item crop_left
@item crop_right
@@ -406,37 +267,6 @@ insert the string ``hello'' associated with the given UUID.
@item delete_filler
Deletes both filler NAL units and filler SEI messages.
@item display_orientation
Insert, extract or remove Display orientation SEI messages.
See H.264 section D.1.27 and D.2.27 for syntax and semantics.
@table @samp
@item pass
@item insert
@item remove
@item extract
@end table
Default is pass.
Insert mode works in conjunction with @code{rotate} and @code{flip} options.
Any pre-existing Display orientation messages will be removed in insert or remove mode.
Extract mode attaches the display matrix to the packet as side data.
@item rotate
Set rotation in display orientation SEI (anticlockwise angle in degrees).
Range is -360 to +360. Default is NaN.
@item flip
Set flip in display orientation SEI.
@table @samp
@item horizontal
@item vertical
@end table
Default is unset.
@item level
Set the level in the SPS. Refer to H.264 section A.3 and tables A-1
to A-5.
@@ -469,21 +299,12 @@ Please note that this filter is auto-inserted for MPEG-TS (muxer
@section h264_redundant_pps
This applies a specific fixup to some Blu-ray BDMV H264 streams
which contain redundant PPSs. The PPSs modify irrelevant parameters
of the stream, confusing other transformations which require
the correct extradata.
This applies a specific fixup to some Blu-ray streams which contain
redundant PPSs modifying irrelevant parameters of the stream which
confuse other transformations which require correct extradata.
The encoder used on these impacted streams adds extra PPSs throughout
the stream, varying the initial QP and whether weighted prediction
was enabled. This causes issues after copying the stream into
a global header container, as the starting PPS is not suitable
for the rest of the stream. One side effect, for example,
is seeking will return garbled output until a new PPS appears.
This BSF removes the extra PPSs and rewrites the slice headers
such that the stream uses a single leading PPS in the global header,
which resolves the issue.
A new single global PPS is created, and all of the redundant PPSs
within the stream are removed.
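A minimal sketch, assuming an affected Blu-ray style H.264 input (placeholder file names):
@example
# sketch only: collapse redundant PPSs into a single global PPS while stream copying
ffmpeg -i INPUT -c:v copy -bsf:v h264_redundant_pps OUTPUT.mp4
@end example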
@section hevc_metadata
@@ -517,8 +338,8 @@ Set the chroma sample location in the stream (see H.265 section
E.3.1 and figure E.1).
@item tick_rate
Set the tick rate in the VPS and VUI parameters (time_scale /
num_units_in_tick). Combined with @option{num_ticks_poc_diff_one}, this can
Set the tick rate in the VPS and VUI parameters (num_units_in_tick /
time_scale). Combined with @option{num_ticks_poc_diff_one}, this can
set a constant framerate in the stream. Note that it is likely to be
overridden by container parameters when the stream is in a container.
@@ -537,19 +358,6 @@ will replace the current ones if the stream is already cropped.
These fields are set in pixels. Note that some sizes may not be
representable if the chroma is subsampled (H.265 section 7.4.3.2.1).
@item width
@item height
Set width and height after crop.
@item level
Set the level in the VPS and SPS. See H.265 section A.4 and tables
A.6 and A.7.
The argument must be the name of a level (for example, @samp{5.1}), a
@emph{general_level_idc} value (for example, @samp{153} for level 5.1),
or the special name @samp{auto} indicating that the filter should
attempt to guess the level from the input stream properties.
@end table
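For instance, a sketch (placeholder files) that rewrites the signalled level without re-encoding:
@example
# sketch only: set the H.265 level to 5.1 in the parameter sets
ffmpeg -i INPUT -c:v copy -bsf:v hevc_metadata=level=5.1 OUTPUT.mkv
@end example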
@section hevc_mp4toannexb
@@ -635,6 +443,10 @@ metadata header from each subtitle packet.
See also the @ref{text2movsub} filter.
@section mp3decomp
Decompress non-standard compressed MP3 audio headers.
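A minimal sketch with placeholder file names:
@example
# sketch only: decompress non-standard MP3 headers while stream copying
ffmpeg -i INPUT -c:a copy -bsf:a mp3decomp OUTPUT.mp3
@end example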
@section mpeg2_metadata
Modify metadata embedded in an MPEG-2 stream.
@@ -699,185 +511,25 @@ container. Can be used for fuzzing or testing error resilience/concealment.
Parameters:
@table @option
@item amount
Accepts an expression whose evaluation per-packet determines how often bytes in that
packet will be modified. A value below 0 will result in a variable frequency.
Default is 0 which results in no modification. However, if neither amount nor drop is specified,
amount will be set to @var{-1}. See below for accepted variables.
@item drop
Accepts an expression evaluated per-packet whose value determines whether that packet is dropped.
Evaluation to a positive value results in the packet being dropped. Evaluation to a negative
value results in a variable chance of it being dropped, roughly inverse in proportion to the magnitude
of the value. Default is 0 which results in no drops. See below for accepted variables.
A numeral string, whose value is related to how often output bytes will
be modified. Therefore, values below or equal to 0 are forbidden, and
the lower the more frequent bytes will be modified, with 1 meaning
every byte is modified.
@item dropamount
Accepts a non-negative integer, which assigns a variable chance of it being dropped, roughly inverse
in proportion to the value. Default is 0 which results in no drops. This option is kept for backwards
compatibility and is equivalent to setting drop to a negative value with the same magnitude
i.e. @code{dropamount=4} is the same as @code{drop=-4}. Ignored if drop is also specified.
A numeral string, whose value is related to how often packets will be dropped.
Therefore, values below or equal to 0 are forbidden, and the lower the more
frequent packets will be dropped, with 1 meaning every packet is dropped.
@end table
Both @code{amount} and @code{drop} accept expressions containing the following variables:
@table @samp
@item n
The index of the packet, starting from zero.
@item tb
The timebase for packet timestamps.
@item pts
Packet presentation timestamp.
@item dts
Packet decoding timestamp.
@item nopts
Constant representing AV_NOPTS_VALUE.
@item startpts
First non-AV_NOPTS_VALUE PTS seen in the stream.
@item startdts
First non-AV_NOPTS_VALUE DTS seen in the stream.
@item duration
@itemx d
Packet duration, in timebase units.
@item pos
Packet position in input; may be -1 when unknown or not set.
@item size
Packet size, in bytes.
@item key
Whether packet is marked as a keyframe.
@item state
A pseudo random integer, primarily derived from the content of packet payload.
@end table
@subsection Examples
Apply modification to every byte but don't drop any packets.
The following example applies the modification to every byte but does not drop
any packets.
@example
ffmpeg -i INPUT -c copy -bsf noise=1 output.mkv
@end example
Drop every video packet not marked as a keyframe after timestamp 30s but do not
modify any of the remaining packets.
@example
ffmpeg -i INPUT -c copy -bsf:v noise=drop='gt(pts*tb\,30)*not(key)' output.mkv
@end example
Drop one second of audio every 10 seconds and add some random noise to the rest.
@example
ffmpeg -i INPUT -c copy -bsf:a noise=amount=-1:drop='between(mod(pts*tb\,10)\,9\,10)' output.mkv
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@end example
@section null
This bitstream filter passes the packets through unchanged.
@section pcm_rechunk
Repacketize PCM audio to a fixed number of samples per packet or a fixed packet
rate per second. This is similar to the @ref{asetnsamples,,asetnsamples audio
filter,ffmpeg-filters} but works on audio packets instead of audio frames.
@table @option
@item nb_out_samples, n
Set the number of samples per each output audio packet. The number is intended
as the number of samples @emph{per each channel}. Default value is 1024.
@item pad, p
If set to 1, the filter will pad the last audio packet with silence, so that it
will contain the same number of samples (or roughly the same number of samples,
see @option{frame_rate}) as the previous ones. Default value is 1.
@item frame_rate, r
This option makes the filter output a fixed number of packets per second instead
of a fixed number of samples per packet. If the audio sample rate is not
divisible by the frame rate then the number of samples will not be constant but
will vary slightly so that each packet will start as close to the frame
boundary as possible. Using this option has precedence over @option{nb_out_samples}.
@end table
You can generate the well known 1602-1601-1602-1601-1602 pattern of 48kHz audio
for NTSC frame rate using the @option{frame_rate} option.
@example
ffmpeg -f lavfi -i sine=r=48000:d=1 -c pcm_s16le -bsf pcm_rechunk=r=30000/1001 -f framecrc -
@end example
@section pgs_frame_merge
Merge a sequence of PGS Subtitle segments ending with an "end of display set"
segment into a single packet.
This is required by some containers that support PGS subtitles
(muxer @code{matroska}).
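A sketch, assuming PGS subtitles are stream-copied into Matroska (placeholder files):
@example
# sketch only: merge PGS segments into single packets before muxing
ffmpeg -i INPUT -c:s copy -bsf:s pgs_frame_merge OUTPUT.mkv
@end example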
@section prores_metadata
Modify color property metadata embedded in prores stream.
@table @option
@item color_primaries
Set the color primaries.
Available values are:
@table @samp
@item auto
Keep the same color primaries property (default).
@item unknown
@item bt709
@item bt470bg
BT601 625
@item smpte170m
BT601 525
@item bt2020
@item smpte431
DCI P3
@item smpte432
P3 D65
@end table
@item transfer_characteristics
Set the color transfer.
Available values are:
@table @samp
@item auto
Keep the same transfer characteristics property (default).
@item unknown
@item bt709
BT 601, BT 709, BT 2020
@item smpte2084
SMPTE ST 2084
@item arib-std-b67
ARIB STD-B67
@end table
@item matrix_coefficients
Set the matrix coefficient.
Available values are:
@table @samp
@item auto
Keep the same colorspace property (default).
@item unknown
@item bt709
@item smpte170m
BT 601
@item bt2020nc
@end table
@end table
Set Rec709 colorspace for each frame of the file
@example
ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt709:color_trc=bt709:colorspace=bt709 output.mov
@end example
Set Hybrid Log-Gamma parameters for each frame of the file
@example
ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt2020:color_trc=arib-std-b67:colorspace=bt2020nc output.mov
@end example
@section remove_extra
Remove extradata from packets.
@@ -900,105 +552,6 @@ Remove extradata from all frames.
@end table
@end table
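For instance, a sketch using the @samp{all} frequency from the (truncated) table above:
@example
# sketch only: drop extradata from every packet while stream copying
ffmpeg -i INPUT -c copy -bsf remove_extra=freq=all OUTPUT.mkv
@end example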
@section setts
Set PTS and DTS in packets.
It accepts the following parameters:
@table @option
@item ts
@item pts
@item dts
Set expressions for PTS, DTS or both.
@item duration
Set expression for duration.
@item time_base
Set output time base.
@end table
The expressions are evaluated through the eval API and can contain the following
constants:
@table @option
@item N
The count of the input packet. Starting from 0.
@item TS
The demux timestamp in input in case of @code{ts} or @code{dts} option or presentation
timestamp in case of @code{pts} option.
@item POS
The original position in the file of the packet, or undefined if undefined
for the current packet
@item DTS
The demux timestamp in input.
@item PTS
The presentation timestamp in input.
@item DURATION
The duration in input.
@item STARTDTS
The DTS of the first packet.
@item STARTPTS
The PTS of the first packet.
@item PREV_INDTS
The previous input DTS.
@item PREV_INPTS
The previous input PTS.
@item PREV_INDURATION
The previous input duration.
@item PREV_OUTDTS
The previous output DTS.
@item PREV_OUTPTS
The previous output PTS.
@item PREV_OUTDURATION
The previous output duration.
@item NEXT_DTS
The next input DTS.
@item NEXT_PTS
The next input PTS.
@item NEXT_DURATION
The next input duration.
@item TB
The timebase of the stream the packet belongs to.
@item TB_OUT
The output timebase.
@item SR
The sample rate of the stream the packet belongs to.
@item NOPTS
The AV_NOPTS_VALUE constant.
@end table
For example, to set PTS equal to DTS (not recommended if B-frames are involved):
@example
ffmpeg -i INPUT -c:a copy -bsf:a setts=pts=DTS out.mkv
@end example
@section showinfo
Log basic packet information. Mainly useful for testing, debugging,
and development.
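A sketch, assuming a placeholder input and the null muxer for a dry run:
@example
# sketch only: log per-packet information while stream copying to the null muxer
ffmpeg -i INPUT -c copy -bsf:v showinfo -f null -
@end example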
@section smpte436m_to_eia608
Convert from a @code{SMPTE_436M_ANC} data stream to a @code{EIA_608} stream,
extracting the closed captions from CTA-708 CDP VANC packets, and ignoring all other data.
@anchor{text2movsub}
@section text2movsub
@@ -1013,12 +566,7 @@ Log trace output containing all syntax elements in the coded stream
headers (everything above the level of individual coded blocks).
This can be useful for debugging low-level stream issues.
Supports AV1, H.264, H.265, (M)JPEG, MPEG-2 and VP9, but depending
on the build only a subset of these may be available.
@section truehd_core
Extract the core from a TrueHD stream, dropping ATMOS data.
Supports H.264, H.265, MPEG-2 and VP9.
@section vp9_metadata
@@ -1026,9 +574,7 @@ Modify metadata embedded in a VP9 stream.
@table @option
@item color_space
Set the color space value in the frame header. Note that any frame
set to RGB will be implicitly set to PC range and that RGB is
incompatible with profiles 0 and 2.
Set the color space value in the frame header.
@table @samp
@item unknown
@item bt601
@@ -1040,8 +586,8 @@ incompatible with profiles 0 and 2.
@end table
@item color_range
Set the color range value in the frame header. Note that any value
imposed by the color space will take precedence over this value.
Set the color range value in the frame header. Note that this cannot
be set in RGB streams.
@table @samp
@item tv
@item pc

File diff suppressed because one or more lines are too long


@@ -30,24 +30,17 @@ fate
fate-list
List all fate/regression test targets.
fate-list-failing
List the fate tests that failed the last time they were executed.
fate-clear-reports
Remove the test reports from previous test executions (getting rid of
potentially stale results from fate-list-failing).
install
Install headers, libraries and programs.
examples
Build all examples located in doc/examples.
checkheaders
Check headers dependencies.
libavformat/output-example
Build the libavformat basic example.
alltools
Build all tools in tools directory.
libswscale/swscale-test
Build the swscale self-test (useful also as an example).
config
Reconfigure the project with the current configuration.
@@ -55,8 +48,6 @@ config
tools/target_dec_<decoder>_fuzzer
Build fuzzer to fuzz the specified decoder.
tools/target_bsf_<filter>_fuzzer
Build fuzzer to fuzz the specified bitstream filter.
Useful standard make commands:
make -t <target>
@@ -70,3 +61,4 @@ make -j<num>
make -k
Continue build in case of errors, this is useful for the regression tests
sometimes but note that it will still not run all reg tests.
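For instance (a sketch; a configured build tree with downloaded FATE samples is assumed), a parallel run followed by the failing-test listing looks like:
make -j8 -k fate
make fate-list-failing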


@@ -3,7 +3,7 @@
@c man begin CODEC OPTIONS
libavcodec provides some generic global options, which can be set on
all the encoders and decoders. In addition, each codec may support
all the encoders and decoders. In addition each codec may support
so-called private options, which are specific for a given codec.
Sometimes, a global option may only affect a specific kind of codec,
@@ -50,13 +50,11 @@ Use internal 2pass ratecontrol in first pass mode.
Use internal 2pass ratecontrol in second pass mode.
@item gray
Only decode/encode grayscale.
@item emu_edge
Do not draw edges.
@item psnr
Set error[?] variables during encoding.
@item truncated
Input bitstream might be randomly truncated.
@item drop_changed
Don't output frames whose parameters differ from first decoded frame in stream.
Error AVERROR_INPUT_CHANGED is returned when a frame is dropped.
@item ildct
Use interlaced DCT.
@@ -70,14 +68,50 @@ This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@item aic
Apply H263 advanced intra coding / mpeg4 ac prediction.
@item cbp
Deprecated, use mpegvideo private options instead.
@item qprd
Deprecated, use mpegvideo private options instead.
@item ilme
Apply interlaced motion estimation.
@item cgop
Use closed gop.
@item output_corrupt
Output even potentially corrupted frames.
@end table
@item me_method @var{integer} (@emph{encoding,video})
Set motion estimation method.
Possible values:
@table @samp
@item zero
zero motion estimation (fastest)
@item full
full motion estimation (slowest)
@item epzs
EPZS motion estimation (default)
@item esa
esa motion estimation (alias for full)
@item tesa
tesa motion estimation
@item dia
dia motion estimation (alias for epzs)
@item log
log motion estimation
@item phods
phods motion estimation
@item x1
X1 motion estimation
@item hex
hex motion estimation
@item umh
umh motion estimation
@item iter
iter motion estimation
@end table
@item extradata_size @var{integer}
Set extradata size.
@item time_base @var{rational number}
Set codec time base.
@@ -144,6 +178,24 @@ Default value is 0.
@item b_qfactor @var{float} (@emph{encoding,video})
Set qp factor between P and B frames.
@item rc_strategy @var{integer} (@emph{encoding,video})
Set ratecontrol method.
@item b_strategy @var{integer} (@emph{encoding,video})
Set strategy to choose between I/P/B-frames.
@item ps @var{integer} (@emph{encoding,video})
Set RTP payload size in bytes.
@item mv_bits @var{integer}
@item header_bits @var{integer}
@item i_tex_bits @var{integer}
@item p_tex_bits @var{integer}
@item i_count @var{integer}
@item p_count @var{integer}
@item skip_count @var{integer}
@item misc_bits @var{integer}
@item frame_bits @var{integer}
@item codec_tag @var{integer}
@item bug @var{flags} (@emph{decoding,video})
Workaround not auto detected encoder bugs.
@@ -152,6 +204,8 @@ Possible values:
@table @samp
@item autodetect
@item old_msmpeg4
some old lavc generated msmpeg4v3 files (no autodetection)
@item xvid_ilace
Xvid interlacing bug (autodetected if fourcc==XVIX)
@item ump4
@@ -160,6 +214,8 @@ Xvid interlacing bug (autodetected if fourcc==XVIX)
padding bug (autodetected)
@item amv
@item ac_vlc
illegal vlc bug (autodetected per fourcc)
@item qpel_chroma
@item std_qpel
@@ -180,6 +236,14 @@ Workaround various bugs in microsoft broken decoders.
truncated frames
@end table
@item lelim @var{integer} (@emph{encoding,video})
Set single coefficient elimination threshold for luminance (negative
values also consider DC coefficient).
@item celim @var{integer} (@emph{encoding,video})
Set single coefficient elimination threshold for chrominance (negative
values also consider dc coefficient)
@item strict @var{integer} (@emph{decoding/encoding,audio,video})
Specify how strictly to follow the standards.
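A sketch of relaxing compliance for an encoder flagged experimental (the native Opus encoder is assumed here; file names are placeholders):
@example
# sketch only: allow an experimental encoder
ffmpeg -i INPUT -c:a opus -strict experimental OUTPUT.ogg
@end example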
@@ -233,8 +297,29 @@ consider things that a sane encoder should not do as an error
@item block_align @var{integer}
@item mpeg_quant @var{integer} (@emph{encoding,video})
Use MPEG quantizers instead of H.263.
@item qsquish @var{float} (@emph{encoding,video})
How to keep quantizer between qmin and qmax (0 = clip, 1 = use
differentiable function).
@item rc_qmod_amp @var{float} (@emph{encoding,video})
Set experimental quantizer modulation.
@item rc_qmod_freq @var{integer} (@emph{encoding,video})
Set experimental quantizer modulation.
@item rc_override_count @var{integer}
@item rc_eq @var{string} (@emph{encoding,video})
Set rate control equation. When computing the expression, besides the
standard functions defined in the section 'Expression Evaluation', the
following functions are available: bits2qp(bits), qp2bits(qp). Also
the following constants are available: iTex pTex tex mv fCode iCount
mcVar var isI isP isB avgQP qComp avgIITex avgPITex avgPPTex avgBPTex
avgTex.
@item maxrate @var{integer} (@emph{encoding,audio,video})
Set max bitrate tolerance (in bits/s). Requires bufsize to be set.
@@ -245,12 +330,18 @@ encode. It is of little use elsewise.
@item bufsize @var{integer} (@emph{encoding,audio,video})
Set ratecontrol buffer size (in bits).
@item rc_buf_aggressivity @var{float} (@emph{encoding,video})
Currently useless.
@item i_qfactor @var{float} (@emph{encoding,video})
Set QP factor between P and I frames.
@item i_qoffset @var{float} (@emph{encoding,video})
Set QP offset between P and I frames.
@item rc_init_cplx @var{float} (@emph{encoding,video})
Set initial complexity for 1-pass encoding.
@item dct @var{integer} (@emph{encoding,video})
Set DCT algorithm.
@@ -315,7 +406,11 @@ Automatically pick a IDCT compatible with the simple one
@item simpleneon
@item xvid
@item simplealpha
@item ipp
@item xvidmmx
@item faani
floating point AAN IDCT
@@ -338,6 +433,19 @@ favor predicting from the previous frame instead of the current
@item bits_per_coded_sample @var{integer}
@item pred @var{integer} (@emph{encoding,video})
Set prediction method.
Possible values:
@table @samp
@item left
@item plane
@item median
@end table
@item aspect @var{rational number} (@emph{encoding,video})
Set sample aspect ratio.
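On the @command{ffmpeg} command line this is normally driven through the @option{-aspect} output option, from which the sample aspect ratio is derived; a sketch with placeholder files:
@example
# sketch only: request a 16:9 display aspect ratio for the encoded stream
ffmpeg -i INPUT -c:v mpeg2video -aspect 16:9 OUTPUT.mpg
@end example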
@@ -532,28 +640,13 @@ noise preserving sum of squared differences
@item dia_size @var{integer} (@emph{encoding,video})
Set diamond type & size for motion estimation.
@table @samp
@item (1024, INT_MAX)
full motion estimation(slowest)
@item (768, 1024]
umh motion estimation
@item (512, 768]
hex motion estimation
@item (256, 512]
l2s diamond motion estimation
@item [2,256]
var diamond motion estimation
@item (-1, 2)
small diamond motion estimation
@item -1
funny diamond motion estimation
@item (INT_MIN, -1)
sab diamond motion estimation
@end table
@item last_pred @var{integer} (@emph{encoding,video})
Set amount of motion predictors from the previous frame.
@item preme @var{integer} (@emph{encoding,video})
Set pre motion estimation.
@item precmp @var{integer} (@emph{encoding,video})
Set pre motion estimation compare function.
@@ -597,11 +690,40 @@ Set diamond type & size for motion estimation pre-pass.
@item subq @var{integer} (@emph{encoding,video})
Set sub pel motion estimation quality.
@item dtg_active_format @var{integer}
@item me_range @var{integer} (@emph{encoding,video})
Set limit motion vectors range (1023 for DivX player).
@item ibias @var{integer} (@emph{encoding,video})
Set intra quant bias.
@item pbias @var{integer} (@emph{encoding,video})
Set inter quant bias.
@item color_table_id @var{integer}
@item global_quality @var{integer} (@emph{encoding,audio,video})
@item coder @var{integer} (@emph{encoding,video})
Possible values:
@table @samp
@item vlc
variable length coder / huffman coder
@item ac
arithmetic coder
@item raw
raw (no encoding)
@item rle
run-length coder
@item deflate
deflate-based coder
@end table
@item context @var{integer} (@emph{encoding,video})
Set context model.
@item slice_flags @var{integer}
@item mbd @var{integer} (@emph{encoding,video})
@@ -617,16 +739,32 @@ use fewest bits
use best rate distortion
@end table
@item stream_codec_tag @var{integer}
@item sc_threshold @var{integer} (@emph{encoding,video})
Set scene change threshold.
@item lmin @var{integer} (@emph{encoding,video})
Set min lagrange factor (VBR).
@item lmax @var{integer} (@emph{encoding,video})
Set max lagrange factor (VBR).
@item nr @var{integer} (@emph{encoding,video})
Set noise reduction.
@item rc_init_occupancy @var{integer} (@emph{encoding,video})
Set number of bits which should be loaded into the rc buffer before
decoding starts.
@item flags2 @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
@item flags2 @var{flags} (@emph{decoding/encoding,audio,video})
Possible values:
@table @samp
@item fast
Allow non spec compliant speedup tricks.
@item sgop
Deprecated, use mpegvideo private options instead.
@item noout
Skip bitstream encoding.
@item ignorecrop
@@ -637,36 +775,17 @@ Place global headers at every keyframe instead of in extradata.
Frame data might be split into multiple chunks.
@item showall
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@item skip_manual
Do not skip samples and export skip information as frame side data.
@item ass_ro_flush_noop
Do not reset ASS ReadOrder field on flush.
@item icc_profiles
Generate/parse embedded ICC profiles from/to colorimetry tags.
@end table
@item export_side_data @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
@item error @var{integer} (@emph{encoding,video})
Possible values:
@table @samp
@item mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@item prft
Export encoder Producer Reference Time into packet side-data (see @code{AV_PKT_DATA_PRFT})
for codecs that support it.
@item venc_params
Export video encoding parameters through frame side data (see @code{AV_FRAME_DATA_VIDEO_ENC_PARAMS})
for codecs that support it. At present, those are H.264 and VP9.
@item film_grain
Export film grain parameters through frame side data (see @code{AV_FRAME_DATA_FILM_GRAIN_PARAMS}).
Supported at present by AV1 decoders.
@item enhancements
Export picture enhancement metadata through frame side data, e.g. LCEVC (see @code{AV_FRAME_DATA_LCEVC}).
@end table
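A sketch of the motion-vector export path, assuming the @code{codecview} filter from the filters manual is available in the build:
@example
# sketch only: export motion vectors from the decoder and draw them
ffmpeg -flags2 +export_mvs -i INPUT -vf codecview=mv=pf+bf+bb OUTPUT.mp4
@end example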
@item qns @var{integer} (@emph{encoding,video})
Deprecated, use mpegvideo private options instead.
@item threads @var{integer} (@emph{decoding/encoding,video})
Set the number of threads to be used, in case the selected codec
@@ -680,6 +799,12 @@ automatically select the number of threads to set
Default value is @samp{auto}.
@item me_threshold @var{integer} (@emph{encoding,video})
Set motion estimation threshold.
@item mb_threshold @var{integer} (@emph{encoding,video})
Set macroblock threshold.
@item dc @var{integer} (@emph{encoding,video})
Set intra_dc_precision.
@@ -694,29 +819,122 @@ Set number of macroblock rows at the bottom which are skipped.
@item profile @var{integer} (@emph{encoding,audio,video})
Set encoder codec profile. Default value is @samp{unknown}. Encoder specific
profiles are documented in the relevant encoder documentation.
Possible values:
@table @samp
@item unknown
@item aac_main
@item aac_low
@item aac_ssr
@item aac_ltp
@item aac_he
@item aac_he_v2
@item aac_ld
@item aac_eld
@item mpeg2_aac_low
@item mpeg2_aac_he
@item mpeg4_sp
@item mpeg4_core
@item mpeg4_main
@item mpeg4_asp
@item dts
@item dts_es
@item dts_96_24
@item dts_hd_hra
@item dts_hd_ma
@end table
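A sketch selecting one of the AAC profiles listed above (placeholder files; the native @code{aac} encoder is assumed):
@example
# sketch only: request the AAC-LC profile
ffmpeg -i INPUT -c:a aac -profile:a aac_low OUTPUT.m4a
@end example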
@item level @var{integer} (@emph{encoding,audio,video})
Set the encoder level. This level depends on the specific codec, and
might correspond to the profile level. It is set by default to
@samp{unknown}.
Possible values:
@table @samp
@item unknown
@end table
@item lowres @var{integer} (@emph{decoding,audio,video})
Decode at 1= 1/2, 2=1/4, 3=1/8 resolutions.
@item skip_threshold @var{integer} (@emph{encoding,video})
Set frame skip threshold.
@item skip_factor @var{integer} (@emph{encoding,video})
Set frame skip factor.
@item skip_exp @var{integer} (@emph{encoding,video})
Set frame skip exponent.
Negative values behave identical to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.
@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
Possible values:
@table @samp
@item sad
sum of absolute differences, fast (default)
@item sse
sum of squared errors
@item satd
sum of absolute Hadamard transformed differences
@item dct
sum of absolute DCT transformed differences
@item psnr
sum of squared quantization errors (avoid, low quality)
@item bit
number of bits needed for the block
@item rd
rate distortion optimal, slow
@item zero
0
@item vsad
sum of absolute vertical differences
@item vsse
sum of squared vertical differences
@item nsse
noise preserving sum of squared differences
@item w53
5/3 wavelet, only used in snow
@item w97
9/7 wavelet, only used in snow
@item dctmax
@item chroma
@end table
@item border_mask @var{float} (@emph{encoding,video})
Increase the quantizer for macroblocks close to borders.
@item mblmin @var{integer} (@emph{encoding,video})
Set min macroblock lagrange factor (VBR).
@item mblmax @var{integer} (@emph{encoding,video})
Set max macroblock lagrange factor (VBR).
@item mepc @var{integer} (@emph{encoding,video})
Set motion estimation bitrate penalty compensation (1.0 = 256).
@item skip_loop_filter @var{integer} (@emph{decoding,video})
@item skip_idct @var{integer} (@emph{decoding,video})
@item skip_frame @var{integer} (@emph{decoding,video})
@@ -744,9 +962,6 @@ Discard all bidirectional frames.
@item nokey
Discard all frames excepts keyframes.
@item nointra
Discard all frames except I frames.
@item all
Discard all frames.
@end table
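A sketch of the decoder-side skip options, keeping only keyframes (placeholder input, output discarded):
@example
# sketch only: discard everything except keyframes before decoding
ffmpeg -skip_frame nokey -i INPUT -f null -
@end example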
@@ -756,24 +971,48 @@ Default value is @samp{default}.
@item bidir_refine @var{integer} (@emph{encoding,video})
Refine the two motion vectors used in bidirectional macroblocks.
@item brd_scale @var{integer} (@emph{encoding,video})
Downscale frames for dynamic B-frame decision.
@item keyint_min @var{integer} (@emph{encoding,video})
Set minimum interval between IDR-frames.
@item refs @var{integer} (@emph{encoding,video})
Set reference frames to consider for motion compensation.
@item chromaoffset @var{integer} (@emph{encoding,video})
Set chroma qp offset from luma.
@item trellis @var{integer} (@emph{encoding,audio,video})
Set rate-distortion optimal quantization.
@item mv0_threshold @var{integer} (@emph{encoding,video})
@item b_sensitivity @var{integer} (@emph{encoding,video})
Adjust sensitivity of b_frame_strategy 1.
@item compression_level @var{integer} (@emph{encoding,audio,video})
@item min_prediction_order @var{integer} (@emph{encoding,audio})
@item max_prediction_order @var{integer} (@emph{encoding,audio})
@item timecode_frame_start @var{integer} (@emph{encoding,video})
Set GOP timecode frame start number, in non drop frame format.
@item request_channels @var{integer} (@emph{decoding,audio})
Set desired number of audio channels.
@item bits_per_raw_sample @var{integer}
@item channel_layout @var{integer} (@emph{decoding/encoding,audio})
See @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}
for the required syntax.
Possible values:
@table @samp
@end table
@item request_channel_layout @var{integer} (@emph{decoding,audio})
Possible values:
@table @samp
@end table
@item rc_max_vbv_use @var{float} (@emph{encoding,video})
@item rc_min_vbv_use @var{float} (@emph{encoding,video})
@item ticks_per_frame @var{integer} (@emph{decoding/encoding,audio,video})
@item color_primaries @var{integer} (@emph{decoding/encoding,video})
Possible values:
@@ -873,12 +1112,6 @@ BT.2020 NCL
BT.2020 CL
@item smpte2085
SMPTE 2085
@item chroma-derived-nc
Chroma-derived NCL
@item chroma-derived-c
Chroma-derived CL
@item ictcp
ICtCp
@end table
@item color_range @var{integer} (@emph{decoding/encoding,video})
@@ -888,11 +1121,9 @@ Possible values:
@table @samp
@item tv
@item mpeg
@item limited
MPEG (219*2^(n-8))
@item pc
@item jpeg
@item full
JPEG (2^n-1)
@end table
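A sketch tagging colorimetry on an encode, using the option names above (placeholder files; @code{libx264} assumed available):
@example
# sketch only: signal BT.709 primaries and limited (tv) range
ffmpeg -i INPUT -c:v libx264 -color_primaries bt709 -color_range tv OUTPUT.mp4
@end example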
@@ -913,14 +1144,6 @@ Possible values:
@end table
@item alpha_mode @var{integer} (@emph{decoding/encoding,video})
Possible values:
@table @samp
@item premultiplied
@item straight
@end table
@item log_level_offset @var{integer}
Set the log level offset.
@@ -1009,7 +1232,7 @@ instead of alpha. Default is 0.
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example, to separate the fields with newlines and indentation:
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg


@@ -1,182 +0,0 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Community
@titlepage
@center @titlefont{Community}
@end titlepage
@top
@contents
@anchor{Organisation}
@chapter Organisation
The FFmpeg project is organized through a community working on global consensus.
Decisions are taken by the ensemble of active members, through voting and are aided by two committees.
@anchor{General Assembly}
@chapter General Assembly
The ensemble of active members is called the General Assembly (GA).
The General Assembly is sovereign and legitimate for all its decisions regarding the FFmpeg project.
The General Assembly is made up of active contributors.
Contributors are considered "active contributors" if they have authored more than 20 patches in the last 36 months in the main FFmpeg repository, or if they have been voted in by the GA.
The list of active contributors is updated twice each year, on 1st January and 1st July, 0:00 UTC.
Additional members are added to the General Assembly through a vote after proposal by a member of the General Assembly. They are part of the GA for two years, after which they need a confirmation by the GA.
A script to generate the current members of the general assembly (minus members voted in) can be found in `tools/general_assembly.pl`.
@anchor{Voting}
@chapter Voting
Voting is done using a ranked voting system, currently running on https://vote.ffmpeg.org/ .
Majority vote means more than 50% of the expressed ballots.
@anchor{Technical Committee}
@chapter Technical Committee
The Technical Committee (TC) is here to arbitrate and make decisions when technical conflicts occur in the project. They will consider the merits of all the positions, judge them and make a decision.
The TC resolves technical conflicts but is not a technical steering committee.
Decisions by the TC are binding for all the contributors.
Decisions made by the TC can be re-opened after 1 year or by a majority vote of the General Assembly, requested by one of the member of the GA.
The TC is elected by the General Assembly for a duration of 1 year, and is composed of 5 members. Members can be re-elected if they wish. A majority vote in the General Assembly can trigger a new election of the TC.
The members of the TC can be elected from outside of the GA. Candidates for election can either be suggested or self-nominated.
The conflict resolution process is detailed in the resolution process document.
The TC can be contacted at <tc@@ffmpeg>.
@anchor{Resolution Process}
@section Resolution Process
The Technical Committee (TC) is here to arbitrate and make decisions when technical conflicts occur in the project.
The TC main role is to resolve technical conflicts. It is therefore not a technical steering committee, but it is understood that some decisions might impact the future of the project.
@subsection Seizing
The TC can take possession of any technical matter that it sees fit.
To involve the TC in a matter, email tc@ or CC them on an ongoing discussion.
As members of TC are developers, they also can email tc@ to raise an issue.
@subsection Announcement
The TC, once seized, must announce itself on the main mailing list, with a [TC] tag.
The TC has 2 modes of operation: a RFC one and an internal one.
If the TC thinks it needs the input from the larger community, the TC can call for a RFC. Else, it can decide by itself.
The decision to use a RFC process or an internal discussion is a discretionary decision of the TC.
The TC can also reject a seizure for a few reasons such as: the matter was not discussed enough previously; it lacks expertise to reach a beneficial decision on the matter; or the matter is too trivial.
@subsection RFC call
In the RFC mode, one person from the TC posts on the mailing list the technical question and will request input from the community.
The mail will have the following specification:
a precise title
a specific tag [TC RFC]
a top-level email
contain a precise question that does not exceed 100 words and that is answerable by developers
may have an extra description, or a link to a previous discussion, if deemed necessary,
contain a precise end date for the answers.
The answers from the community must be on the main mailing list and must have the following specification:
keep the tag and the title unchanged
limited to 400 words
a first-level, answering directly to the main email
answering to the question.
Further replies to answers are permitted, as long as they conform to the community standards of politeness, they are limited to 100 words, and are not nested more than once. (max-depth=2)
After the end-date, mails on the thread will be ignored.
Violations of those rules will be escalated through the Community Committee.
After all the emails are in, the TC has 96 hours to give its final decision. Exceptionally, the TC can request an extra delay, that will be notified on the mailing list.
@subsection Within TC
In the internal case, the TC has 96 hours to give its final decision. Exceptionally, the TC can request an extra delay.
@subsection Decisions
The decisions from the TC will be sent on the mailing list, with the [TC] tag.
Internally, the TC should take decisions with a majority, or using ranked-choice voting.
Each TC member must vote on such decision according to what is, in their view, best for the project.
If a TC member feels they are affected by a conflict of interest with regards to the case, they should announce it and recuse themselves from the TC
discussion and vote.
A conflict of interest is presumed to occur when a TC member has a personal interest (e.g. financial) in a specific outcome of the case.
The decision from the TC should be published with a summary of the reasons that lead to this decision.
The decisions from the TC are final, until the matters are reopened after no less than one year.
@anchor{Community Committee}
@chapter Community Committee
The Community Committee (CC) is here to arbitrate and make decisions when inter-personal conflicts occur in the project. It will decide quickly and take actions, for the sake of the project.
The CC can remove privileges of offending members, including removal of commit access and temporary ban from the community.
Decisions made by the CC can be re-opened after 1 year or by a majority vote of the General Assembly. Indefinite bans from the community must be confirmed by the General Assembly, in a majority vote.
The CC is elected by the General Assembly for a duration of 1 year, and is composed of 5 members. Members can be re-elected if they wish. A majority vote in the General Assembly can trigger a new election of the CC.
The members of the CC can be elected from outside of the GA. Candidates for election can either be suggested or self-nominated.
The CC is governed by and responsible for enforcing the Code of Conduct.
The CC can be contacted at <cc@@ffmpeg>.
@anchor{Code of Conduct}
@chapter Code of Conduct
Be friendly and respectful towards others and third parties.
Treat others the way you yourself want to be treated.
Be considerate. Not everyone shares the same viewpoint and priorities as you do.
Different opinions and interpretations help the project.
Looking at issues from a different perspective assists development.
Do not assume malice for things that can be attributed to incompetence. Even if
it is malice, it's rarely good to start with that as initial assumption.
Stay friendly even if someone acts contrarily. Everyone has a bad day
once in a while.
If you yourself have a bad day or are angry then try to take a break and reply
once you are calm and without anger if you have to.
Try to help other team members and cooperate if you can.
The goal of software development is to create technical excellence, not for any
individual to be better and "win" against the others. Large software projects
are only possible and successful through teamwork.
If someone struggles do not put them down. Give them a helping hand
instead and point them in the right direction.
Finally, keep in mind the immortal words of Bill and Ted,
"Be excellent to each other."
@bye


@@ -25,64 +25,6 @@ enabled decoders.
A description of some of the currently available video decoders
follows.
@section av1
AOMedia Video 1 (AV1) decoder.
@subsection Options
@table @option
@item operating_point
Select an operating point of a scalable AV1 bitstream (0 - 31). Default is 0.
@end table
@section hevc
HEVC (AKA ITU-T H.265 or ISO/IEC 23008-2) decoder.
The decoder supports MV-HEVC multiview streams with at most two views. Views to
be output are selected by supplying a list of view IDs to the decoder (the
@option{view_ids} option). This option may be set either statically before
decoder init, or from the @code{get_format()} callback - useful for the case
when the view count or IDs change dynamically during decoding.
Only the base layer is decoded by default.
Note that if you are using the @code{ffmpeg} CLI tool, you should be using view
specifiers as documented in its manual, rather than the options documented here.
@subsection Options
@table @option
@item view_ids (MV-HEVC)
Specify a list of view IDs that should be output. This option can also be set to
a single '-1', which will cause all views defined in the VPS to be decoded and
output.
@item view_ids_available (MV-HEVC)
This option may be read by the caller to retrieve an array of view IDs available
in the active VPS. The array is empty for single-layer video.
The value of this option is guaranteed to be accurate when read from the
@code{get_format()} callback. It may also be set at other times (e.g. after
opening the decoder), but the value is informational only and may be incorrect
(e.g. when the stream contains multiple distinct VPS NALUs).
@item view_pos_available (MV-HEVC)
This option may be read by the caller to retrieve an array of view positions
(left, right, or unspecified) available in the active VPS, as
@code{AVStereo3DView} values. When the array is available, its elements apply to
the corresponding elements of @option{view_ids_available}, i.e.
@code{view_pos_available[i]} contains the position of view with ID
@code{view_ids_available[i]}.
Same validity restrictions as for @option{view_ids_available} apply to
this option.
@end table
@section rawvideo
Raw video decoder.
@@ -105,39 +47,6 @@ top-field-first is assumed
@end table
@section libdav1d
dav1d AV1 decoder.
libdav1d allows libavcodec to decode the AOMedia Video 1 (AV1) codec.
Requires the presence of the libdav1d headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libdav1d}.
@subsection Options
The following options are supported by the libdav1d wrapper.
@table @option
@item max_frame_delay
Set max amount of frames the decoder may buffer internally. The default value is 0
(autodetect).
@item filmgrain
Apply film grain to the decoded video if present in the bitstream. Defaults to the
internal default of the library.
This option is deprecated and will be removed in the future. See the global option
@code{export_side_data} to export Film Grain parameters instead of applying it.
@item oppoint
Select an operating point of a scalable AV1 bitstream (0 - 31). Defaults to the
internal default of the library.
@item alllayers
Output all spatial layers of a scalable AV1 bitstream. The default value is false.
@end table
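A sketch exercising the wrapper options above (placeholder input, output discarded):
@example
# sketch only: force libdav1d, pick operating point 0 and disable film grain
ffmpeg -c:v libdav1d -oppoint 0 -filmgrain 0 -i INPUT -f null -
@end example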
@section libdavs2
AVS2-P2/IEEE1857.4 video decoder wrapper.
@@ -146,108 +55,6 @@ This decoder allows libavcodec to decode AVS2 streams with davs2 library.
@c man end VIDEO DECODERS
@section libuavs3d
AVS3-P2/IEEE1857.10 video decoder.
libuavs3d allows libavcodec to decode AVS3 streams.
Requires the presence of the libuavs3d headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libuavs3d}.
@subsection Options
The following option is supported by the libuavs3d wrapper.
@table @option
@item frame_threads
Set amount of frame threads to use during decoding. The default value is 0 (autodetect).
@end table
@section libxevd
eXtra-fast Essential Video Decoder (XEVD) MPEG-5 EVC decoder wrapper.
This decoder requires the presence of the libxevd headers and library
during configuration. You need to explicitly configure the build with
@option{--enable-libxevd}.
The xevd project website is at @url{https://github.com/mpeg5/xevd}.
@subsection Options
The following options are supported by the libxevd wrapper.
The xevd-equivalent options or values are listed in parentheses for easy migration.
To get a more accurate and extensive documentation of the libxevd options,
invoke the command @code{xevd_app --help} or consult the libxevd documentation.
@table @option
@item threads (@emph{threads})
Force to use a specific number of threads
@end table
@section QSV Decoders
The family of Intel QuickSync Video decoders (VC1, MPEG-2, H.264, HEVC,
JPEG/MJPEG, VP8, VP9, AV1, VVC).
@subsection Common Options
The following options are supported by all qsv decoders.
@table @option
@item @var{async_depth}
Internal parallelization depth, the higher the value the higher the latency.
@item @var{gpu_copy}
A GPU-accelerated copy between video and system memory
@table @samp
@item default
@item on
@item off
@end table
@end table
@subsection HEVC Options
Extra options for hevc_qsv.
@table @option
@item @var{load_plugin}
A user plugin to load in an internal session
@table @samp
@item none
@item hevc_sw
@item hevc_hw
@end table
@item @var{load_plugins}
A :-separate list of hexadecimal plugin UIDs to load in an internal session
@end table
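A sketch, assuming a working QSV setup and an H.264 input (placeholder file name):
@example
# sketch only: QuickSync decode with a deeper async queue and GPU copy enabled
ffmpeg -hwaccel qsv -c:v h264_qsv -async_depth 4 -gpu_copy on -i INPUT -f null -
@end example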
@section v210
Uncompressed 4:2:2 10-bit decoder.
@subsection Options
@table @option
@item custom_stride
Set the line size of the v210 data in bytes. The default value is 0
(autodetect). You can use the special -1 value for a strideless v210 as seen in
BOXX files.
@end table
@c man end VIDEO DECODERS
@chapter Audio Decoders
@c man begin AUDIO DECODERS
@@ -267,7 +74,7 @@ the undocumented RealAudio 3 (a.k.a. dnet).
@item -drc_scale @var{value}
Dynamic Range Scale Factor. The factor to apply to dynamic range values
from the AC-3 stream. This factor is applied exponentially. The default value is 1.
from the AC-3 stream. This factor is applied exponentially.
There are 3 notable scale factor ranges:
@table @option
@item drc_scale == 0
@@ -346,15 +153,6 @@ value is 0 (disabled).
@end table
@section libmpeghdec
libmpeghdec decoder wrapper.
libmpeghdec allows libmpeghdec to decode the MPEG-H 3D audio codec.
Requires the presence of the libmpeghdec headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libmpeghdec --enable-nonfree}.
@section libopencore-amrnb
libopencore-amrnb decoder wrapper.
@@ -394,195 +192,7 @@ without this library.
@c man end AUDIO DECODERS
@chapter Subtitles Decoders
@c man begin SUBTITLES DECODERS
@section libaribb24
ARIB STD-B24 caption decoder.
Implements profiles A and C of the ARIB STD-B24 standard.
@subsection libaribb24 Decoder Options
@table @option
@item -aribb24-base-path @var{path}
Sets the base path for the libaribb24 library. This is utilized for reading of
configuration files (for custom unicode conversions), and for dumping of
non-text symbols as images under that location.
Unset by default.
@item -aribb24-skip-ruby-text @var{boolean}
Tells the decoder wrapper to skip text blocks that contain half-height ruby
text.
Enabled by default.
@end table
@section libaribcaption
Yet another ARIB STD-B24 caption decoder using external @dfn{libaribcaption}
library.
Implements profiles A and C of the Japanese ARIB STD-B24 standard,
Brazilian ABNT NBR 15606-1, and Philippines version of ISDB-T.
Requires the presence of the libaribcaption headers and library
(@url{https://github.com/xqq/libaribcaption}) during configuration.
You need to explicitly configure the build with @code{--enable-libaribcaption}.
If both @dfn{libaribb24} and @dfn{libaribcaption} are enabled, @dfn{libaribcaption}
decoder takes precedence.
@subsection libaribcaption Decoder Options
@table @option
@item -sub_type @var{subtitle_type}
Specifies the format of the decoded subtitles.
@table @samp
@item bitmap
Graphical image.
@item ass
ASS formatted text.
@item text
Simple text based output without formatting.
@end table
The default is @dfn{ass}, the same as the @dfn{libaribb24} decoder.
Some present players (e.g., @dfn{mpv}) expect ASS format for ARIB caption.
@item -caption_encoding @var{encoding_scheme}
Specifies the encoding scheme of input subtitle text.
@table @samp
@item auto
Automatically detect text encoding (default).
@item jis
8bit-char JIS encoding defined in ARIB STD B24.
This encoding is used in Japan for ISDB captions.
@item utf8
UTF-8 encoding defined in ARIB STD B24.
This encoding is used in Philippines for ISDB-T captions.
@item latin
Latin character encoding defined in ABNT NBR 15606-1.
This encoding is used in South America for SBTVD / ISDB-Tb captions.
@end table
@item -font @var{font_name[,font_name2,...]}
Specify a comma-separated list of font family names to be used for @dfn{bitmap}
or @dfn{ass} type subtitle rendering.
Only the first font name is used for @dfn{ass} type subtitles.
If not specified, an internally defined default font family is used.
@item -ass_single_rect @var{boolean}
ARIB STD-B24 specifies that some captions may be displayed at different
positions at a time (multi-rectangle subtitle).
Since some players (e.g., old @dfn{mpv}) can't handle multiple ASS rectangles
in a single AVSubtitle, or multiple ASS rectangles of indeterminate duration
with the same start timestamp, this option can change the behavior so that
all the texts are displayed in a single ASS rectangle.
The default is @var{false}.
If your player cannot handle AVSubtitles with multiple ASS rectangles properly,
set this option to @var{true} or define @env{ASS_SINGLE_RECT=1} to change
default behavior at compilation.
@item -force_outline_text @var{boolean}
Specify whether to always render outline text for all characters, regardless of
the indication given by the character style.
The default is @var{false}.
@item -outline_width @var{number} (0.0 - 3.0)
Specify width for outline text, in dots (relative).
The default is @var{1.5}.
@item -ignore_background @var{boolean}
Specify whether to ignore background color rendering.
The default is @var{false}.
@item -ignore_ruby @var{boolean}
Specify whether to ignore rendering for ruby-like (furigana) characters.
The default is @var{false}.
@item -replace_drcs @var{boolean}
Specify whether to render replaced DRCS characters as Unicode characters.
The default is @var{true}.
@item -replace_msz_ascii @var{boolean}
Specify whether to replace MSZ (Middle Size; half width) fullwidth
alphanumerics with halfwidth alphanumerics.
The default is @var{true}.
@item -replace_msz_japanese @var{boolean}
Specify whether to replace some MSZ (Middle Size; half width) fullwidth
Japanese special characters with halfwidth ones.
The default is @var{true}.
@item -replace_msz_glyph @var{boolean}
Specify whether to replace MSZ (Middle Size; half width) characters
with halfwidth glyphs if the font supports it.
This option works under FreeType or DirectWrite renderer
with Adobe-Japan1 compliant fonts.
e.g., IBM Plex Sans JP, Morisawa BIZ UDGothic, Morisawa BIZ UDMincho,
Yu Gothic, Yu Mincho, and Meiryo.
The default is @var{true}.
@item -canvas_size @var{image_size}
Specify the resolution of the canvas to render subtitles to; usually, this
should be the frame size of the input video.
This only applies when @code{-subtitle_type} is set to @var{bitmap}.
The libaribcaption decoder assumes the following input frame sizes for bitmap rendering:
@enumerate
@item
PROFILE_A : 1440 x 1080 with SAR (PAR) 4:3
@item
PROFILE_C : 320 x 180 with SAR (PAR) 1:1
@end enumerate
If the actual frame size of the input video does not match the above assumption,
the rendered captions may be distorted.
To avoid distortion, add the @code{-canvas_size} option to specify the actual
input video size.
Note that @code{-canvas_size} is not required for video of a different size but
the same aspect ratio; in that case, the captions are stretched or shrunk to the
actual video size if @code{-canvas_size} is not specified.
If @code{-canvas_size} is specified with a different size, the captions are
stretched or shrunk to the specified size using the calculated SAR.
@end table
@subsection libaribcaption decoder usage examples
Display an MPEG-TS file with ARIB subtitles using the @code{ffplay} tool:
@example
ffplay -sub_type bitmap MPEG.TS
@end example
Display an MPEG-TS file with an input frame size of 1920x1080 using the @code{ffplay} tool:
@example
ffplay -sub_type bitmap -canvas_size 1920x1080 MPEG.TS
@end example
Embed ARIB subtitle in transcoded video:
@example
ffmpeg -sub_type bitmap -i src.m2t -filter_complex "[0:v][0:s]overlay" -vcodec h264 dest.mp4
@end example
@c man begin SUBTILES DECODERS
@section dvbsub
@@ -591,8 +201,6 @@ ffmpeg -sub_type bitmap -i src.m2t -filter_complex "[0:v][0:s]overlay" -vcodec h
@table @option
@item compute_clut
@table @option
@item -2
Compute clut once if no matching CLUT is in the stream.
@item -1
Compute clut if no matching CLUT is in the stream.
@item 0
@@ -621,7 +229,7 @@ palette is stored in the IFO file, and therefore not available when reading
from dumped VOB files.
The format for this option is a string containing 16 24-bits hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
numbers (without 0x prefix) separated by comas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@@ -650,11 +258,6 @@ List of teletext page numbers to decode. Pages that do not match the specified
list are dropped. You may use the special @code{*} string to match all pages,
or @code{subtitle} to match all subtitle pages.
Default value is *.
@item txt_default_region
Set default character set used for decoding, a value between 0 and 87 (see
ETS 300 706, Section 15, Table 32). Default value is -1, which does not
override the libzvbi default. This option is needed for some legacy level 1.0
transmissions which cannot signal the proper charset.
@item txt_chop_top
Discards the top teletext line. Default value is 1.
@item txt_format
@@ -695,4 +298,4 @@ box and an end box, typically subtitles. Default value is 0 if
@end table
@c man end SUBTITLES DECODERS
@c man end SUBTILES DECODERS


@@ -25,12 +25,16 @@ Audible Format 2, 3, and 4 demuxer.
This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.
@section aac
@section applehttp
Raw Audio Data Transport Stream AAC demuxer.
Apple HTTP Live Streaming demuxer.
This demuxer is used to demux an ADTS input containing a single AAC stream
along with any ID3v1/2 or APE tags in it.
This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing 'a' or 'v' in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
@section apng
@@ -44,15 +48,12 @@ between the last fcTL and IEND chunks.
@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set. Default is enabled.
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second. Default of 0 imposes no limit.
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible). Default is 15.
(0 meaning as fast as possible).
@end table
@section asf
@@ -103,7 +104,8 @@ backslash or single quotes.
All subsequent file-related directives apply to that file.
@item @code{ffconcat version 1.0}
Identify the script type and version.
Identify the script type and version. It also sets the @option{safe} option
to 1 if it was -1.
To make FFmpeg recognize the format automatically, this directive must
appear exactly as is (no extra space or byte-order-mark) on the very first
@@ -157,16 +159,6 @@ directive) will be reduced based on their specified Out point.
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.
This directive is deprecated, use @code{file_packet_meta} instead.
@item @code{file_packet_meta @var{key} @var{value}}
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.
@item @code{option @var{key} @var{value}}
Option to access, open and probe the file.
Can be present multiple times.
@item @code{stream}
Introduce a stream in the virtual file.
@@ -184,20 +176,6 @@ subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
@item @code{stream_meta @var{key} @var{value}}
Metadata for the stream.
Can be present multiple times.
@item @code{stream_codec @var{value}}
Codec for the stream.
@item @code{stream_extradata @var{hex_string}}
Extradata for the stream, encoded in hexadecimal.
@item @code{chapter @var{id} @var{start} @var{end}}
Add a chapter. @var{id} is a unique identifier, possibly small and
consecutive.
@end table
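For illustration only, a small script combining several of the directives above
might look like this (file names, metadata and chapter values are hypothetical):
@example
ffconcat version 1.0
file intro.mkv
file_packet_meta source intro
file main.mkv
chapter 1 0 60
@end example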
@subsection Options
@@ -207,8 +185,7 @@ This demuxer accepts the following option:
@table @option
@item safe
If set to 1, reject unsafe file paths and directives.
A file path is considered safe if it
If set to 1, reject unsafe file paths. A file path is considered safe if it
does not contain a protocol specification and is relative and all components
only contain characters from the portable character set (letters, digits,
period, underscore and hyphen) and have no period at the beginning of a
@@ -218,6 +195,9 @@ If set to 0, any file name is accepted.
The default is 1.
-1 is equivalent to 1 if the format was automatically
probed and 0 otherwise.
@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
@@ -274,214 +254,11 @@ which streams to actually receive.
Each stream mirrors the @code{id} and @code{bandwidth} properties from the
@code{<Representation>} as metadata keys named "id" and "variant_bitrate" respectively.
@subsection Options
This demuxer accepts the following option:
@table @option
@item cenc_decryption_key
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
@end table
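For illustration only (the key and manifest below are placeholders):
@example
ffmpeg -cenc_decryption_key 00112233445566778899aabbccddeeff -i http://example.com/manifest.mpd -c copy out.mp4
@end example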
@section dvdvideo
DVD-Video demuxer, powered by libdvdnav and libdvdread.
Can directly ingest DVD titles, specifically sequential PGCs, into
a conversion pipeline. Menu assets, such as background video or audio,
can also be demuxed given the menu's coordinates (at best effort).
Block devices (DVD drives), ISO files, and directory structures are accepted.
Activate with @code{-f dvdvideo} in front of one of these inputs.
This demuxer does NOT have decryption code of any kind. You are on your own
working with encrypted DVDs, and should not expect support on the matter.
Underlying playback is handled by libdvdnav, and structure parsing by libdvdread.
FFmpeg must be built with GPL library support available as well as the
configure switches @code{--enable-libdvdnav} and @code{--enable-libdvdread}.
You will need to provide either the desired "title number" or exact PGC/PG coordinates.
Many open-source DVD players and tools can aid in providing this information.
If not specified, the demuxer will default to title 1 which works for many discs.
However, due to the flexibility of the format, it is recommended to check manually.
There are many discs that are authored strangely or with invalid headers.
If the input is a real DVD drive, please note that there are some drives which may
silently fail on reading bad sectors from the disc, returning random bits instead
which is effectively corrupt data. This is especially prominent on aging or rotting discs.
A second pass and integrity checks would be needed to detect the corruption.
This is not an FFmpeg issue.
@subsection Background
DVD-Video is not a directly accessible, linear container format in the
traditional sense. Instead, it allows for complex and programmatic playback of
carefully muxed MPEG-PS streams that are stored in headerless VOB files.
To the end-user, these streams are known simply as "titles", but the actual
logical playback sequence is defined by one or more "PGCs", or Program Group Chains,
within the title. The PGC is in turn comprised of multiple "PGs", or "Programs",
which are the actual video segments (and for a typical video feature, sequentially
ordered). The PGC structure, along with stream layout and metadata, are stored in
IFO files that need to be parsed. PGCs can be thought of as playlists in easier terms.
An actual DVD player relies on user GUI interaction via menus and an internal VM
to drive the direction of demuxing. Generally, the user would either navigate (via menus)
or automatically be redirected to the PGC of their choice. During this process and
the subsequent playback, the DVD player's internal VM also maintains a state and
executes instructions that can create jumps to different sectors during playback.
This is why libdvdnav is involved, as a linear read of the MPEG-PS blobs on the
disc (VOBs) is not enough to produce the right sequence in many cases.
There are many other DVD structures (a long subject) that will not be discussed here.
NAV packets, in particular, are handled by this demuxer to build accurate timing
but not emitted as a stream. For a good high-level understanding, refer to:
@url{https://code.videolan.org/videolan/libdvdnav/-/blob/master/doc/dvd_structures}
@subsection Options
This demuxer accepts the following options:
@table @option
@item title @var{int}
The title number to play. Must be set if @option{pgc} and @option{pg} are not set.
Not applicable to menus.
Default is 0 (auto), which currently only selects the first available title (title 1)
and notifies the user about the implications.
@item chapter_start @var{int}
The chapter, or PTT (part-of-title), number to start at. Not applicable to menus.
Default is 1.
@item chapter_end @var{int}
The chapter, or PTT (part-of-title), number to end at. Not applicable to menus.
Default is 0, which is a special value to signal end at the last possible chapter.
@item angle @var{int}
The video angle number, referring to what is essentially an additional
video stream that is composed from alternate frames interleaved in the VOBs.
Not applicable to menus.
Default is 1.
@item region @var{int}
The region code to use for playback. Some discs may use this to default playback
at a particular angle in different regions. This option will not affect the region code
of a real DVD drive, if used as an input. Not applicable to menus.
Default is 0, "world".
@item menu @var{bool}
Demux menu assets instead of navigating a title. Requires exact coordinates
of the menu (@option{menu_lu}, @option{menu_vts}, @option{pgc}, @option{pg}).
Default is false.
@item menu_lu @var{int}
The menu language to demux. In DVD, menus are grouped by language.
Default is 1, the first language unit.
@item menu_vts @var{int}
The VTS where the menu lives, or 0 if it is a VMG menu (root-level).
Default is 1, menu of the first VTS.
@item pgc @var{int}
The entry PGC to start playback, in conjunction with @option{pg}.
Alternative to setting @option{title}.
Chapter markers are not supported at this time.
Must be explicitly set for menus.
Default is 0, automatically resolve from value of @option{title}.
@item pg @var{int}
The entry PG to start playback, in conjunction with @option{pgc}.
Alternative to setting @option{title}.
Chapter markers are not supported at this time.
Default is 1, the first PG of the PGC.
@item preindex @var{bool}
Enable this to have accurate chapter (PTT) markers and duration measurement,
which requires a slow second pass read in order to index the chapter marker
timestamps from NAV packets. This is non-ideal extra work for real optical drives.
It is recommended and faster to use this option with a backup of the DVD structure
stored on a hard drive. Not compatible with @option{pgc} and @option{pg}.
Default is 0, false.
@item trim @var{bool}
Skip padding cells (i.e. cells shorter than 1 second) from the beginning.
There exist many discs with filler segments at the beginning of the PGC,
often with junk data intended for controlling a real DVD player's
buffering speed and with no other material data value.
Not applicable to menus.
Default is 1, true.
@end table
@subsection Examples
@itemize
@item
Open title 3 from a given DVD structure:
@example
ffmpeg -f dvdvideo -title 3 -i <path to DVD> ...
@end example
@item
Open chapters 3-6 from title 1 from a given DVD structure:
@example
ffmpeg -f dvdvideo -chapter_start 3 -chapter_end 6 -title 1 -i <path to DVD> ...
@end example
@item
Open only chapter 5 from title 1 from a given DVD structure:
@example
ffmpeg -f dvdvideo -chapter_start 5 -chapter_end 5 -title 1 -i <path to DVD> ...
@end example
@item
Demux menu with language 1 from VTS 1, PGC 1, starting at PG 1:
@example
ffmpeg -f dvdvideo -menu 1 -menu_lu 1 -menu_vts 1 -pgc 1 -pg 1 -i <path to DVD> ...
@end example
@end itemize
@section ea
Electronic Arts Multimedia format demuxer.
This format is used by various Electronic Arts games.
@subsection Options
@table @option
@item merge_alpha @var{bool}
Normally the VP6 alpha channel (if present) is returned as a secondary video
stream. By setting this option you can make the demuxer return a single video
stream which contains the alpha channel in addition to the ordinary video.
@end table
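For illustration only, merging the alpha channel into the main video stream of
a hypothetical EA file and storing it in a codec that keeps alpha could look
like this:
@example
ffmpeg -merge_alpha 1 -i input.vp6 -c:v ffv1 output.mkv
@end example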
@section imf
Interoperable Master Format demuxer.
This demuxer presents audio and video streams found in an IMF Composition, as
specified in @url{https://doi.org/10.5594/SMPTE.ST2067-2.2020, SMPTE ST 2067-2}.
@example
ffmpeg [-assetmaps <path of ASSETMAP1>,<path of ASSETMAP2>,...] -i <path of CPL> ...
@end example
If @code{-assetmaps} is not specified, the demuxer looks for a file called
@file{ASSETMAP.xml} in the same directory as the CPL.
@section flv, live_flv, kux
@section flv, live_flv
Adobe Flash Video Format demuxer.
This demuxer is used to demux FLV files and RTMP network streams. In case of live network streams, if you force the format, you may use the live_flv option instead of flv to survive timestamp discontinuities.
KUX is a flv variant used on the Youku platform.
@example
ffmpeg -f flv -i myfile.flv ...
@@ -543,42 +320,19 @@ infinitely.
HLS demuxer
Apple HTTP Live Streaming demuxer.
This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing 'a' or 'v' in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
It accepts the following options:
@table @option
@item live_start_index
segment index to start live streams at (negative values are from the end).
@item prefer_x_start
prefer to use #EXT-X-START if it's in playlist instead of live_start_index.
@item allowed_extensions
',' separated list of file extensions that hls is allowed to access.
@item extension_picky
This blocks disallowed extensions from probing.
It also requires all available segments to have extensions matching the format,
except mpegts, which is always allowed.
It is recommended to set the whitelists correctly instead of depending on extensions.
Enabled by default.
@item max_reload
Maximum number of times an insufficient list is attempted to be reloaded.
Default value is 1000.
@item m3u8_hold_counters
The maximum number of times to load m3u8 when it refreshes without new segments.
Default value is 1000.
@item http_persistent
Use persistent HTTP connections. Applicable only for HTTP streams.
Enabled by default.
@@ -586,17 +340,6 @@ Enabled by default.
@item http_multiple
Use multiple HTTP connections for downloading HTTP segments.
Enabled by default for HTTP/1.1 servers.
@item http_seekable
Use HTTP partial requests for downloading HTTP segments.
0 = disable, 1 = enable, -1 = auto, Default is auto.
@item seg_format_options
Set options for the demuxer of media segments using a list of key=value pairs separated by @code{:}.
@item seg_max_retry
Maximum number of times to reload a segment on error, useful when segment skip on network error is not desired.
Default value is 0.
@end table
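For illustration only, starting a live stream three segments from the end (the
URL is a placeholder):
@example
ffmpeg -live_start_index -3 -i https://example.com/live/stream.m3u8 -c copy out.ts
@end example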
@section image2
@@ -664,9 +407,30 @@ Select a glob wildcard pattern type.
The pattern is interpreted like a @code{glob()} pattern. This is only
selectable if libavformat was compiled with globbing support.
@item glob_sequence @emph{(deprecated, will be removed)}
Select a mixed glob wildcard/sequence pattern.
If your version of libavformat was compiled with globbing support, and
the provided pattern contains at least one glob meta character among
@code{%*?[]@{@}} that is preceded by an unescaped "%", the pattern is
interpreted like a @code{glob()} pattern, otherwise it is interpreted
like a sequence pattern.
All glob special characters @code{%*?[]@{@}} must be prefixed
with "%". To escape a literal "%" you shall use "%%".
For example the pattern @code{foo-%*.jpeg} will match all the
filenames prefixed by "foo-" and terminating with ".jpeg", and
@code{foo-%?%?%?.jpeg} will match all the filenames prefixed with
"foo-", followed by a sequence of three characters, and terminating
with ".jpeg".
This pattern type is deprecated in favor of @var{glob} and
@var{sequence}.
@end table
Default value is @var{sequence}.
Default value is @var{glob_sequence}.
@item pixel_format
Set the pixel format of the images to read. If not specified the pixel
format is guessed from the first image file in the sequence.
@@ -686,17 +450,6 @@ nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@item export_path_metadata
If set to 1, will add two extra fields to the metadata found in input, making them
also available for other filters (see @var{drawtext} filter for examples). Default
value is 0. The extra fields are described below:
@table @option
@item lavf.image2dec.source_path
Corresponds to the full path to the input file being read.
@item lavf.image2dec.source_basename
Corresponds to the name of the file being read.
@end table
@end table
@subsection Examples
@@ -728,84 +481,14 @@ ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
The Game Music Emu library is a collection of video game music file emulators.
See @url{https://bitbucket.org/mpyne/game-music-emu/overview} for more information.
See @url{http://code.google.com/p/game-music-emu/} for more information.
It accepts the following options:
Some files have multiple tracks. The demuxer will pick the first track by
default. The @option{track_index} option can be used to select a different
track. Track indexes start at 0. The demuxer exports the number of tracks as
@var{tracks} meta data entry.
@table @option
@item track_index
Set the index of which track to demux. The demuxer can only export one track.
Track indexes start at 0. Default is to pick the first track. Number of tracks
is exported as @var{tracks} metadata entry.
@item sample_rate
Set the sampling rate of the exported track. Range is 1000 to 999999. Default is 44100.
@item max_size @emph{(bytes)}
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of files that can be read.
Default is 50 MiB.
@end table
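For illustration only, assuming a file that is detected by this demuxer (the
file name is hypothetical):
@example
ffmpeg -track_index 1 -sample_rate 48000 -i music.nsf out.flac
@end example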
@section libmodplug
ModPlug based module demuxer
See @url{https://github.com/Konstanty/libmodplug}
It will export one 2-channel 16-bit 44.1 kHz audio stream.
Optionally, a @code{pal8} 16-color video stream can be exported with or without printed metadata.
It accepts the following options:
@table @option
@item noise_reduction
Apply a simple low-pass filter. Can be 1 (on) or 0 (off). Default is 0.
@item reverb_depth
Set amount of reverb. Range 0-100. Default is 0.
@item reverb_delay
Set delay in ms, clamped to 40-250 ms. Default is 0.
@item bass_amount
Apply bass expansion a.k.a. XBass or megabass. Range is 0 (quiet) to 100 (loud). Default is 0.
@item bass_range
Set cutoff i.e. upper-bound for bass frequencies. Range is 10-100 Hz. Default is 0.
@item surround_depth
Apply a Dolby Pro-Logic surround effect. Range is 0 (quiet) to 100 (heavy). Default is 0.
@item surround_delay
Set surround delay in ms, clamped to 5-40 ms. Default is 0.
@item max_size
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of files that can be read. Range is 0 to 100 MiB.
0 removes buffer size limit (not recommended). Default is 5 MiB.
@item video_stream_expr
String which is evaluated using the eval API to assign colors to the generated video stream.
Variables which can be used are @code{x}, @code{y}, @code{w}, @code{h}, @code{t}, @code{speed},
@code{tempo}, @code{order}, @code{pattern} and @code{row}.
@item video_stream
Generate video stream. Can be 1 (on) or 0 (off). Default is 0.
@item video_stream_w
Set video frame width in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
@item video_stream_h
Set video frame height in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
@item video_stream_ptxt
Print metadata on video stream. Includes @code{speed}, @code{tempo}, @code{order}, @code{pattern},
@code{row} and @code{ts} (time in ms). Can be 1 (on) or 0 (off). Default is 1.
@end table
For very large files, the @option{max_size} option may have to be adjusted.
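For illustration only, generating the optional video stream from a hypothetical
module file could look like this:
@example
ffmpeg -video_stream 1 -i module.it output.mkv
@end example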
@section libopenmpt
@@ -834,39 +517,9 @@ Set the sample rate for libopenmpt to output.
Range is from 1000 to INT_MAX. The value default is 48000.
@end table
@anchor{mccdec}
@section mcc
@section mov/mp4/3gp/QuickTime
Demuxer for MacCaption MCC files. It supports MCC versions 1.0 and 2.0.
MCC files store VANC data, which can include closed captions (EIA-608 and CEA-708), ancillary time code, pan-scan data, etc.
By default, for backward compatibility, the MCC demuxer extracts just the EIA-608 and CEA-708 closed captions and returns a @code{EIA_608} stream, ignoring all other VANC data.
You can change it to return all VANC data in a @code{SMPTE_436M_ANC} data stream by setting @option{-eia608_extract 0}.
@subsection Examples
@itemize
@item
Convert an MCC file to Scenarist (SCC) format:
@example
ffmpeg -i CC.mcc -c:s copy CC.scc
@end example
Note that the SCC format only supports EIA-608, so this will discard all other data such as CEA-708 extensions.
@item
Merge an MCC file into an MXF file:
@example
ffmpeg -i video_and_audio.mxf -eia608_extract 0 -i CC.mcc -c copy -map 0 -map 1 out.mxf
@end example
This retains all VANC data and inserts it into the output MXF file as a @code{SMPTE_436M_ANC} data stream.
@end itemize
@section mov/mp4/3gp
Demuxer for Quicktime File Format & ISO/IEC Base Media File Format (ISO/IEC 14496-12 or MPEG-4 Part 12, ISO/IEC 15444-12 or JPEG 2000 Part 12).
Registered extensions: mov, mp4, m4a, 3gp, 3g2, mj2, psp, m4b, ism, ismv, isma, f4v
@subsection Options
QuickTime / MP4 demuxer.
This demuxer accepts the following options:
@table @option
@@ -877,95 +530,10 @@ Enabling this can theoretically leak information in some use cases.
@item use_absolute_path
Allows loading of external tracks via absolute paths, disabled by default.
Enabling this poses a security risk. It should only be enabled if the source
is known to be non-malicious.
@item seek_streams_individually
When seeking, identify the closest point in each stream individually and demux packets in
that stream from the identified point. This can lead to a different sequence of packets compared
to demuxing linearly from the beginning. Default is true.
@item ignore_editlist
Ignore any edit list atoms. The demuxer, by default, modifies the stream index to reflect the
timeline described by the edit list. Default is false.
@item advanced_editlist
Modify the stream index to reflect the timeline described by the edit list. @code{ignore_editlist}
must be set to false for this option to be effective.
If both @code{ignore_editlist} and this option are set to false, then only the
start of the stream index is modified to reflect initial dwell time or starting timestamp
described by the edit list. Default is true.
@item ignore_chapters
Don't parse chapters. This includes GoPro 'HiLight' tags/moments. Note that chapters are
only parsed when input is seekable. Default is false.
@item use_mfra_for
For seekable fragmented input, set fragment's starting timestamp from media fragment random access box, if present.
Following options are available:
@table @samp
@item auto
Auto-detect whether to set mfra timestamps as PTS or DTS @emph{(default)}
@item dts
Set mfra timestamps as DTS
@item pts
Set mfra timestamps as PTS
@item 0
Don't use mfra box to set timestamps
@end table
@item use_tfdt
For fragmented input, set fragment's starting timestamp to @code{baseMediaDecodeTime} from the @code{tfdt} box.
Default is enabled, which will prefer to use the @code{tfdt} box to set DTS. Disable to use the @code{earliest_presentation_time} from the @code{sidx} box.
In either case, the timestamp from the @code{mfra} box will be used if it's available and @code{use_mfra_for} is
set to pts or dts.
@item export_all
Export unrecognized boxes within the @var{udta} box as metadata entries. The first four
characters of the box type are set as the key. Default is false.
@item export_xmp
Export entire contents of @var{XMP_} box and @var{uuid} box as a string with key @code{xmp}. Note that
if @code{export_all} is set and this option isn't, the contents of @var{XMP_} box are still exported
but with key @code{XMP_}. Default is false.
@item activation_bytes
4-byte key required to decrypt Audible AAX and AAX+ files. See Audible AAX subsection below.
@item audible_fixed_key
Fixed key used for handling Audible AAX/AAX+ files. It has been pre-set so should not be necessary to
specify.
@item decryption_key
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
@item max_stts_delta
Very high sample deltas written in a trak's stts box may occasionally be intended but usually they are written in
error or used to store a negative value for dts correction when treated as signed 32-bit integers. This option lets
the user set an upper limit, beyond which the delta is clamped to 1. Values greater than the limit, if negative when
cast to int32, are used to adjust onward dts.
Unit is the track time scale. Range is 0 to UINT_MAX. Default is @code{UINT_MAX - 48000*10} which allows up to
a 10 second dts correction for 48 kHz audio streams while accommodating 99.9% of @code{uint32} range.
@item interleaved_read
Interleave packets from multiple tracks at demuxer level. For badly interleaved files, this prevents playback issues
caused by large gaps between packets in different tracks, as MOV/MP4 do not have packet placement requirements.
However, this can cause excessive seeking on very badly interleaved files, due to seeking between tracks, so disabling
it may prevent I/O issues, at the expense of playback.
is known to be non malicious.
@end table
@subsection Audible AAX
Audible AAX files are encrypted M4B files, and they can be decrypted by specifying a 4-byte activation secret.
@example
ffmpeg -activation_bytes 1CEB00DA -i test.aax -vn -c:a copy output.mp4
@end example
@section mpegts
MPEG-2 transport stream demuxer.
@@ -995,12 +563,8 @@ to 1 (-1 means automatic setting, 1 means enabled, 0 means
disabled). Default value is -1.
@item merge_pmt_versions
Reuse existing streams when a PMT's version is updated and elementary
Re-use existing streams when a PMT's version is updated and elementary
streams move to different PIDs. Default value is 0.
@item max_packet_size
Set maximum size, in bytes, of packet emitted by the demuxer. Payloads above this size
are split across multiple packets. Range is 1 to INT_MAX/2. Default is 204800 bytes.
@end table
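For illustration only, remuxing a hypothetical transport stream while keeping
stream identity across PMT version updates:
@example
ffmpeg -merge_pmt_versions 1 -i input.ts -c copy output.ts
@end example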
@section mpjpeg
@@ -1047,36 +611,6 @@ the command:
ffplay -f rawvideo -pixel_format rgb24 -video_size 320x240 -framerate 10 input.raw
@end example
@anchor{rcwtdec}
@section rcwt
RCWT (Raw Captions With Time) is a format native to ccextractor, a commonly
used open source tool for processing 608/708 Closed Captions (CC) sources.
For more information on the format, see @ref{rcwtenc,,,ffmpeg-formats}.
This demuxer implements the specification as of March 2024, which has
been stable and unchanged since April 2014.
@subsection Examples
@itemize
@item
Render CC to ASS using the built-in decoder:
@example
ffmpeg -i CC.rcwt.bin CC.ass
@end example
Note that if your output appears to be empty, you may have to manually
set the decoder's @option{data_field} option to pick the desired CC substream.
@item
Convert an RCWT backup to Scenarist (SCC) format:
@example
ffmpeg -i CC.rcwt.bin -c:s copy CC.scc
@end example
Note that the SCC format does not support all of the possible CC extensions
that can be stored in RCWT (such as EIA-708).
@end itemize
@section sbg
SBaGen script demuxer.
@@ -1128,43 +662,4 @@ Example: convert the captions to a format most players understand:
ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt
@end example
@section vapoursynth
Vapoursynth wrapper.
Due to security concerns, Vapoursynth scripts will not
be autodetected so the input format has to be forced. For ff* CLI tools,
add @code{-f vapoursynth} before the input @code{-i yourscript.vpy}.
This demuxer accepts the following option:
@table @option
@item max_script_size
The demuxer buffers the entire script into memory. Adjust this value to set the maximum buffer size,
which in turn, acts as a ceiling for the size of scripts that can be read.
Default is 1 MiB.
@end table
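For illustration only, a minimal invocation as described above (the script name
is hypothetical):
@example
ffmpeg -f vapoursynth -i yourscript.vpy -c:v libx264 output.mkv
@end example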
@section w64
Sony Wave64 Audio demuxer.
This demuxer accepts the following options:
@table @option
@item max_size
See the same option for the @ref{wav} demuxer.
@end table
@anchor{wav}
@section wav
RIFF Wave Audio demuxer.
This demuxer accepts the following options:
@table @option
@item max_size
Specify the maximum packet size in bytes for the demuxed packets. By default
this is set to 0, which means that a sensible value is chosen based on the
input format.
@end table
@c man end DEMUXERS


@@ -10,109 +10,44 @@
@contents
@chapter Introduction
@chapter Notes for external developers
This text is concerned with the development @emph{of} FFmpeg itself. Information
on using the FFmpeg libraries in other programs can be found elsewhere, e.g. in:
@itemize @bullet
@item
the installed header files
@item
@url{http://ffmpeg.org/doxygen/trunk/index.html, the Doxygen documentation}
generated from the headers
@item
the examples under @file{doc/examples}
@end itemize
This document is mostly useful for internal FFmpeg developers.
External developers who need to use the API in their application should
refer to the API doxygen documentation in the public headers, and
check the examples in @file{doc/examples} and in the source code to
see how the public API is employed.
You can use the FFmpeg libraries in your commercial program, but you
are encouraged to @emph{publish any patch you make}. In this case the
best way to proceed is to send your patches to the ffmpeg-devel
mailing list following the guidelines illustrated in the remainder of
this document.
For more detailed legal information about the use of FFmpeg in
external programs read the @file{LICENSE} file in the source tree and
consult @url{https://ffmpeg.org/legal.html}.
If you modify FFmpeg code for your own use case, you are highly encouraged to
@emph{submit your changes back to us}, using this document as a guide. There are
both pragmatic and ideological reasons to do so:
@chapter Contributing
There are 2 ways by which code gets into FFmpeg:
@itemize @bullet
@item
Maintaining external changes to keep up with upstream development is
time-consuming and error-prone. With your code in the main tree, it will be
maintained by FFmpeg developers.
@item
FFmpeg developers include leading experts in the field who can find bugs or
design flaws in your code.
@item
By supporting the project you find useful you ensure it continues to be
maintained and developed.
@item Submitting patches to the ffmpeg-devel mailing list.
See @ref{Submitting patches} for details.
@item Directly committing changes to the main tree.
@end itemize
All proposed code changes should be submitted for review to
@url{mailto:ffmpeg-devel@@ffmpeg.org, the development mailing list}, as
described in more detail in the @ref{Submitting patches} chapter. The code
should comply with the @ref{Development Policy} and follow the @ref{Coding Rules}.
Whichever way, changes should be reviewed by the maintainer of the code
before they are committed. And they should follow the @ref{Coding Rules}.
The developer making the commit and the author are responsible for their changes
and should try to fix issues their commit causes.
@anchor{Coding Rules}
@chapter Coding Rules
@section Language
FFmpeg is mainly programmed in the ISO C11 language, except for the public
headers which must stay C99 compatible.
Compiler-specific extensions may be used with good reason, but must not be
depended on, i.e. the code must still compile and work with compilers lacking
the extension.
The following C99 features must not be used anywhere in the codebase:
@itemize @bullet
@item
variable-length arrays;
@item
complex numbers;
@end itemize
@subsection SIMD/DSP
@anchor{SIMD/DSP}
As modern compilers are unable to generate efficient SIMD or other
performance-critical DSP code from plain C, handwritten assembly is used.
Usually such code is isolated in a separate function. Then the standard approach
is writing multiple versions of this function: a plain C one that works
everywhere and may also be useful for debugging, and potentially multiple
architecture-specific optimized implementations. Initialization code then
chooses the best available version at runtime and loads it into a function
pointer; the function in question is then always called through this pointer.
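The following sketch illustrates the general shape of this pattern; all names
in it are invented for this document and are not real FFmpeg identifiers:
@example
#include <stdint.h>

typedef struct MyDSPContext @{
    void (*add_samples)(int16_t *dst, const int16_t *src, int len);
@} MyDSPContext;

/* Plain C version: works everywhere and is handy for debugging. */
static void add_samples_c(int16_t *dst, const int16_t *src, int len)
@{
    for (int i = 0; i < len; i++)
        dst[i] += src[i];
@}

#if HAVE_MY_SIMD
/* Optimized version, written in assembly elsewhere. */
void add_samples_simd(int16_t *dst, const int16_t *src, int len);
int  my_simd_available(void);
#endif

/* Initialization picks the best available implementation once;
 * all callers then go through the function pointer. */
void my_dsp_init(MyDSPContext *c)
@{
    c->add_samples = add_samples_c;
#if HAVE_MY_SIMD
    if (my_simd_available())
        c->add_samples = add_samples_simd;
#endif
@}
@end example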
The specific syntax used for writing assembly is:
@itemize @bullet
@item
NASM on x86;
@item
GAS on ARM and RISC-V.
@end itemize
A unit testing framework for assembly called @code{checkasm} lives under
@file{tests/checkasm}. All new assembly should come with @code{checkasm} tests;
adding tests for existing assembly that lacks them is also strongly encouraged.
@subsection Other languages
Other languages than C may be used in special cases:
@itemize @bullet
@item
Compiler intrinsics or inline assembly when the code in question cannot be
written in the standard way described in the @ref{SIMD/DSP} section. This
typically applies to code that needs to be inlined.
@item
Objective-C where required for interacting with macOS-specific interfaces.
@end itemize
@section Code formatting conventions
There are the following guidelines regarding the code style in files:
There are the following guidelines regarding the indentation in files:
@itemize @bullet
@item
@@ -132,137 +67,8 @@ K&R coding style is used.
@end itemize
The presentation is one inspired by 'indent -i4 -kr -nut'.
@subsection Examples
Some notable examples to illustrate common code style in FFmpeg:
@itemize @bullet
@item
Space around assignments and after
@code{if}/@code{do}/@code{while}/@code{for} keywords:
@example c, good
// Good
if (condition)
av_foo();
@end example
@example c, good
// Good
for (size_t i = 0; i < len; i++)
av_bar(i);
@end example
@example c, good
// Good
size_t size = 0;
@end example
However, there should be no spaces between the parentheses and the condition,
unless it helps the readability of complex conditions, so the following should not be done:
@example c, bad
// Bad style
if ( condition )
av_foo();
@end example
@item
No unnecessary parentheses, unless it helps readability:
@example c, good
// Good
int fields = ilace ? 2 : 1;
@end example
@item
Don't wrap single-line blocks in braces. Use braces only if there is an accompanying else statement. This makes future code changes easier to track.
@example c, good
// Good
if (bits_pixel == 24) @{
avctx->pix_fmt = AV_PIX_FMT_BGR24;
@} else if (bits_pixel == 8) @{
avctx->pix_fmt = AV_PIX_FMT_GRAY8;
@} else
return AVERROR_INVALIDDATA;
@end example
@item
Avoid assignments in conditions where it makes sense:
@example c, good
// Good
video_enc->chroma_intra_matrix = av_mallocz(sizeof(*video_enc->chroma_intra_matrix) * 64);
if (!video_enc->chroma_intra_matrix)
return AVERROR(ENOMEM);
@end example
@example c, bad
// Bad style
if (!(video_enc->chroma_intra_matrix = av_mallocz(sizeof(*video_enc->chroma_intra_matrix) * 64)))
return AVERROR(ENOMEM);
@end example
@example c, good
// Ok
while ((entry = av_dict_iterate(options, entry)))
av_log(ctx, AV_LOG_INFO, "Item '%s': '%s'\n", entry->key, entry->value);
@end example
@item
When declaring a pointer variable, the @code{*} goes with the variable not the type:
@example c, good
// Good
AVStream *stream;
@end example
@example c, bad
// Bad style
AVStream* stream;
@end example
@end itemize
If you work on a file that does not follow these guidelines consistently,
change the parts that you are editing to follow these guidelines but do
not make unrelated changes in the file to make it conform to these.
@subsection Vim configuration
In order to configure Vim to follow FFmpeg formatting conventions, paste
the following snippet into your @file{.vimrc}:
@example
" indentation rules for FFmpeg: 4 spaces, no tabs
set expandtab
set shiftwidth=4
set softtabstop=4
set cindent
set cinoptions=(0
" Allow tabs in Makefiles.
autocmd FileType make,automake set noexpandtab shiftwidth=8 softtabstop=8
" Trailing whitespace and tabs are forbidden, so highlight them.
highlight ForbiddenWhitespace ctermbg=red guibg=red
match ForbiddenWhitespace /\s\+$\|\t/
" Do not highlight spaces at the end of line while typing on that line.
autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example
@subsection Emacs configuration
For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
(c-add-style "ffmpeg"
'("k&r"
(c-basic-offset . 4)
(indent-tabs-mode . nil)
(show-trailing-whitespace . t)
(c-offsets-alist
(statement-cont . (c-lineup-assignments +)))
)
)
(setq c-default-style "ffmpeg")
@end lisp
The main priority in FFmpeg is simplicity and small code size in order to
minimize the bug count.
@section Comments
Use the JavaDoc/Doxygen format (see examples below) so that code documentation
@@ -304,52 +110,89 @@ int myfunc(int my_parameter)
...
@end example
@anchor{Naming conventions}
@section Naming conventions
@section C language features
Names of functions, variables, and struct members must be lowercase, using
underscores (_) to separate words. For example, @samp{avfilter_get_video_buffer}
is an acceptable function name and @samp{AVFilterGetVideo} is not.
FFmpeg is programmed in the ISO C90 language with a few additional
features from ISO C99, namely:
Struct, union, enum, and typedeffed type names must use CamelCase. All structs
and unions should be typedeffed to the same name as the struct/union tag, e.g.
@code{typedef struct AVFoo @{ ... @} AVFoo;}. Enums are typically not
typedeffed.
Enumeration constants and macros must be UPPERCASE, except for macros
masquerading as functions, which should use the function naming convention.
All identifiers in the libraries should be namespaced as follows:
@itemize @bullet
@item
No namespacing for identifiers with file and lower scope (e.g. local variables,
static functions), and struct and union members,
the @samp{inline} keyword;
@item
The @code{ff_} prefix must be used for variables and functions visible outside
of file scope, but only used internally within a single library, e.g.
@samp{ff_w64_demuxer}. This prevents name collisions when FFmpeg is statically
linked.
@samp{//} comments;
@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};});
@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};}).
@item
for loops with variable definition (@samp{for (int i = 0; i < 8; i++)});
@item
Implementation defined behavior for signed integers is assumed to match the
expected behavior for two's complement. Non representable values in integer
casts are binary truncated. Shift right of signed values uses sign extension.
@end itemize
These features are supported by all compilers we care about, so we will not
accept patches to remove their use unless they absolutely do not impair
clarity and performance.
All code must compile with recent versions of GCC and a number of other
currently supported compilers. To ensure compatibility, please do not use
additional C99 features or GCC extensions. Especially watch out for:
@itemize @bullet
@item
mixing statements and declarations;
@item
@samp{long long} (use @samp{int64_t} instead);
@item
@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;
@item
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @}))}).
@end itemize
@section Naming conventions
All names should be composed with underscores (_), not CamelCase. For example,
@samp{avfilter_get_video_buffer} is an acceptable function name and
@samp{AVFilterGetVideo} is not. The exception from this are type names, like
for example structs and enums; they should always be in CamelCase.
There are the following conventions for naming variables and functions:
@itemize @bullet
@item
For local variables no prefix is required.
@item
For file-scope variables and functions declared as @code{static}, no prefix
is required.
@item
For variables and functions visible outside of file scope, but only used
internally by a library, an @code{ff_} prefix should be used,
e.g. @samp{ff_w64_demuxer}.
@item
For variables and functions visible outside of file scope, used internally
across multiple libraries, use @code{avpriv_} as prefix, for example,
@samp{avpriv_report_missing_feature}.
@item
All other internal identifiers, like private type or macro names, should be
namespaced only to avoid possible internal conflicts. E.g. @code{H264_NAL_SPS}
vs. @code{HEVC_NAL_SPS}.
@item
Each library has its own prefix for public symbols, in addition to the
commonly used @code{av_} (@code{avformat_} for libavformat,
@code{avcodec_} for libavcodec, @code{swr_} for libswresample, etc).
Check the existing code and choose names accordingly.
@item
Other public identifiers (struct, union, enum, macro, type names) must use their
library's public prefix (@code{AV}, @code{Sws}, or @code{Swr}).
Note that some symbols without these prefixes are also exported for
retro-compatibility reasons. These exceptions are declared in the
@code{lib<name>/lib<name>.v} files.
@end itemize
Furthermore, name space reserved for the system should not be invaded.
@@ -363,50 +206,50 @@ symbols. If in doubt, just avoid names starting with @code{_} altogether.
@section Miscellaneous conventions
@itemize @bullet
@item
fprintf and printf are forbidden in libavformat and libavcodec,
please use av_log() instead.
@item
Casts should be used only when necessary. Unneeded parentheses
should also be avoided if they don't make the code easier to understand.
@end itemize
@anchor{Development Policy}
@section Editor configuration
In order to configure Vim to follow FFmpeg formatting conventions, paste
the following snippet into your @file{.vimrc}:
@example
" indentation rules for FFmpeg: 4 spaces, no tabs
set expandtab
set shiftwidth=4
set softtabstop=4
set cindent
set cinoptions=(0
" Allow tabs in Makefiles.
autocmd FileType make,automake set noexpandtab shiftwidth=8 softtabstop=8
" Trailing whitespace and tabs are forbidden, so highlight them.
highlight ForbiddenWhitespace ctermbg=red guibg=red
match ForbiddenWhitespace /\s\+$\|\t/
" Do not highlight spaces at the end of line while typing on that line.
autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example
For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
(c-add-style "ffmpeg"
'("k&r"
(c-basic-offset . 4)
(indent-tabs-mode . nil)
(show-trailing-whitespace . t)
(c-offsets-alist
(statement-cont . (c-lineup-assignments +)))
)
)
(setq c-default-style "ffmpeg")
@end lisp
@chapter Development Policy
@section Code behaviour
@subheading Correctness
The code must be valid. It must not crash, abort, access invalid pointers, leak
memory, cause data races or signed integer overflow, or otherwise cause
undefined behaviour. Error codes should be checked and, when applicable,
forwarded to the caller.
@subheading Thread- and library-safety
Our libraries may be called by multiple independent callers in the same process.
These calls may happen from any number of threads and the different call sites
may not be aware of each other - e.g. a user program may be calling our
libraries directly, and use one or more libraries that also call our libraries.
The code must behave correctly under such conditions.
@subheading Robustness
The code must treat as untrusted any bytestream received from a caller or read
from a file, network, etc. It must not misbehave when arbitrary data is sent to
it - typically it should print an error message and return
@code{AVERROR_INVALIDDATA} on encountering invalid input data.
@subheading Memory allocation
The code must use the @code{av_malloc()} family of functions from
@file{libavutil/mem.h} to perform all memory allocation, except in special cases
(e.g. when interacting with an external library that requires a specific
allocator to be used).
All allocations should be checked and @code{AVERROR(ENOMEM)} returned on
failure. A common mistake is that error paths leak memory - make sure that does
not happen.
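A minimal sketch of this pattern (the helper name is hypothetical):
@example
#include <stdint.h>

#include "libavutil/error.h"
#include "libavutil/mem.h"

static int alloc_workbuf(uint8_t **out, size_t size)
@{
    uint8_t *buf = av_malloc(size);
    if (!buf)
        return AVERROR(ENOMEM); /* check the allocation and forward the error */
    *out = buf;
    return 0;
@}
@end example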
@subheading stdio
Our libraries must not access the stdio streams stdin/stdout/stderr directly
(e.g. via @code{printf()} family of functions), as that is not library-safe. For
logging, use @code{av_log()}.
@section Patches/Committing
@subheading Licenses for patches must be compatible with FFmpeg.
Contributions should be licensed under the
@@ -429,24 +272,13 @@ missing samples or an implementation with a small subset of features.
Always check the mailing list for any reviewers with issues and test
FATE before you push.
@subheading Commit messages
Commit messages are highly important tools for informing other developers on
what a given change does and why. Every commit must always have a properly
filled out commit message with the following format:
@example
area changed: short 1 line description
details describing what and why and giving references.
@end example
If the commit addresses a known bug on our bug tracker or other external issue
(e.g. CVE), the commit message should include the relevant bug ID(s) or other
external identifiers. Note that this should be done in addition to a proper
explanation and not instead of it. Comments such as "fixed!" or "Changed it."
are not acceptable.
When applying patches that have been discussed at length on the mailing list,
reference the thread in the commit message.
@subheading Keep the main commit message short with an extended description below.
The commit message should have a short first line in the form of
a @samp{topic: short description} as a header, separated by a newline
from the body consisting of an explanation of why the change is necessary.
If the commit fixes a known bug on the bug tracker, the commit message
should include its bug ID. Referring to the issue on the bug tracker does
not exempt you from writing an excerpt of the bug in the commit message.
@subheading Testing must be adequate but not excessive.
If it works for you, others, and passes FATE then it should be OK to commit
@@ -465,6 +297,15 @@ later on.
Also if you have doubts about splitting or not splitting, do not hesitate to
ask/discuss it on the developer mailing list.
@subheading Ask before you change the build system (configure, etc).
Do not commit changes to the build system (Makefiles, configure script)
which change behavior, defaults etc, without asking first. The same
applies to compiler warning fixes, trivial looking fixes and to code
maintained by other developers. We usually have a reason for doing things
the way we do. Send your changes as patches to the ffmpeg-devel mailing
list, and if the code maintainers say OK, you may commit. This does not
apply to files you wrote and/or maintain.
@subheading Cosmetic changes should be kept in separate patches.
We refuse source indentation and other cosmetic changes if they are mixed
with functional changes; such commits will be rejected and removed. Every
@@ -479,15 +320,27 @@ NOTE: If you had to put if()@{ .. @} over a large (> 5 lines) chunk of code,
then either do NOT change the indentation of the inner part within (do not
move it to the right)! or do so in a separate commit
@subheading Commit messages should always be filled out properly.
Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:
@example
area changed: Short 1 line description
details describing what and why and giving references.
@end example
@subheading Credit the author of the patch.
Make sure the author of the commit is set correctly. (see git commit --author)
If you apply a patch, send an
answer to ffmpeg-devel (or wherever you got the patch from) saying that
you applied the patch.
@subheading Credit any researchers
If a commit/patch fixes an issue found by some researcher, always credit the
researcher in the commit message for finding/reporting the issue.
@subheading Complex patches should refer to discussion surrounding them.
When applying patches that have been discussed (at length) on the mailing
list, reference the thread in the log message.
@subheading Always wait long enough before pushing changes
Do NOT commit to code actively maintained by others without permission.
@@ -497,6 +350,22 @@ time-frame (12h for build failures and security fixes, 3 days small changes,
Also note, the maintainer can simply ask for more time to review!
@section Code
@subheading API/ABI changes should be discussed before they are made.
Do not change behavior of the programs (renaming options etc) or public
API or ABI without first discussing it on the ffmpeg-devel mailing list.
Do not remove widely used functionality or features (redundant code can be removed).
@subheading Remember to check if you need to bump versions for libav*.
Depending on the change, you may need to change the version integer.
Incrementing the first component means no backward compatibility to
previous versions (e.g. removal of a function from the public API).
Incrementing the second component means backward compatible change
(e.g. addition of a function to the public API or extension of an
existing data structure).
Incrementing the third component means a noteworthy binary compatible
change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.
@subheading Warnings for correct code may be disabled if there is no other option.
Compiler warnings indicate potential bugs or code with bad style. If a type of
warning always points to correct and clean code, that warning should
@@ -506,150 +375,10 @@ If it is a bug, the bug has to be fixed. If it is not, the code should
be changed to not generate a warning unless that causes a slowdown
or obfuscates the code.
@section Library public interfaces
Every library in FFmpeg provides a set of public APIs in its installed headers,
which are those listed in the variable @code{HEADERS} in that library's
@file{Makefile}. All identifiers defined in those headers (except for those
explicitly documented otherwise), and corresponding symbols exported from
compiled shared or static libraries are considered public interfaces and must
comply with the API and ABI compatibility rules described in this section.
Public APIs must be backward compatible within a given major version. I.e. any
valid user code that compiles and works with a given library version must still
compile and work with any later version, as long as the major version number is
unchanged. "Valid user code" here means code that is calling our APIs in a
documented and/or intended manner and is not relying on any undefined behavior.
Incrementing the major version may break backward compatibility, but only to the
extent described in @ref{Major version bumps}.
We also guarantee backward ABI compatibility for shared and static libraries.
I.e. it should be possible to replace a shared or static build of our library
with a build of any later version (re-linking the user binary in the static
case) without breaking any valid user binaries, as long as the major version
number remains unchanged.
@subsection Adding new interfaces
Any new public identifiers in installed headers are considered new API - this
includes new functions, structs, macros, enum values, typedefs, new fields in
existing structs, new installed headers, etc. Consider the following
guidelines when adding new APIs.
@subsubheading Motivation
While new APIs can be added relatively easily, changing or removing them is much
harder due to the above-mentioned compatibility requirements. You should therefore
consider carefully whether the functionality you are adding really needs to be
exposed to our callers as new public API.
Your new API should have at least one well-established use case outside of the
library that cannot be easily achieved with existing APIs. Every library in
FFmpeg also has a defined scope - your new API must fit within it.
@subsubheading Replacing existing APIs
If your new API is replacing an existing one, it should be strictly superior to
it, so that the advantages of using the new API outweigh the cost to the
callers of changing their code. After adding the new API you should then
deprecate the old one and schedule it for removal, as described in
@ref{Removing interfaces}.
If you deem an existing API deficient and want to fix it, the preferred approach
in most cases is to add a differently-named replacement and deprecate the
existing API rather than modify it. It is important to make the changes visible
to our callers (e.g. through compile- or run-time deprecation warnings) and make
it clear how to transition to the new API (e.g. in the Doxygen documentation or
on the wiki).
@subsubheading API design
The FFmpeg libraries are used by a variety of callers to perform a wide range of
multimedia-related processing tasks. You should therefore - within reason - try
to design your new API for the broadest feasible set of use cases and avoid
unnecessarily limiting it to a specific type of callers (e.g. just media
playback or just transcoding).
@subsubheading Consistency
Check whether similar APIs already exist in FFmpeg. If they do, try to model
your new addition on them to achieve better overall consistency.
The naming of your new identifiers should follow the @ref{Naming conventions}
and be aligned with other similar APIs, if applicable.
@subsubheading Extensibility
You should also consider how your API might be extended in the future in a
backward-compatible way. If you are adding a new struct @code{AVFoo}, the
standard approach is to require the caller to always allocate it through a
constructor function, typically named @code{av_foo_alloc()}. This way new fields
may be added to the end of the struct without breaking ABI compatibility.
Typically you will also want a destructor - @code{av_foo_free(AVFoo**)} - that
frees the indirectly supplied object (and its contents, if applicable) and
writes @code{NULL} to the supplied pointer, thus eliminating the potential
dangling pointer in the caller's memory.
If you are adding new functions, consider whether it might be desirable to tweak
their behavior in the future - you may want to add a flags argument, even though
it would be unused initially.
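A minimal sketch of this pattern, reusing the hypothetical @code{AVFoo} and the
constructor/destructor names from above (@code{av_foo_process()} is likewise
only an illustrative name, not an existing API):
@verbatim
typedef struct AVFoo {
    int size;  /* example field */
    /* new fields may be appended here in later minor versions */
} AVFoo;

/* Constructor: callers never allocate AVFoo themselves, so the struct
 * can grow at the end without breaking ABI. */
AVFoo *av_foo_alloc(void);

/* Destructor: frees *foo and its contents and writes NULL to *foo,
 * avoiding a dangling pointer in the caller. */
void av_foo_free(AVFoo **foo);

/* A new function carrying a flags argument that is unused for now,
 * reserved for future extension. */
int av_foo_process(AVFoo *foo, int flags);
@end verbatim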
@subsubheading Documentation
All new APIs must be documented as Doxygen-formatted comments above the
identifiers you add to the public headers. You should also briefly mention the
change in @file{doc/APIchanges}.
@subsubheading Bump the version
Backward-incompatible API or ABI changes require incrementing (bumping) the
major version number, as described in @ref{Major version bumps}. Major
bumps are significant events that happen on a schedule - so if your change
strictly requires one, you should add it under @code{#if} preprocessor guards that
disable it until the next major bump happens.
New APIs that can be added without breaking API or ABI compatibility require
bumping the minor version number.
Incrementing the third (micro) version component means a noteworthy binary
compatible change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.
@anchor{Removing interfaces}
@subsection Removing interfaces
Due to the above-mentioned compatibility guarantees, removing APIs is an involved
process that should only be undertaken with good reason. Typically a deficient,
restrictive, or otherwise inadequate API is replaced by a superior one, though
it does at times happen that we remove an API without any replacement (e.g. when
the feature it provides is deemed not worth the maintenance effort, out of scope
of the project, fundamentally flawed, etc.).
The removal has two steps - first the API is deprecated and scheduled for
removal, but remains present and functional. The second step is actually
removing the API - this is described in @ref{Major version bumps}.
To deprecate an API you should signal to our users that they should stop using
it. E.g. if you intend to remove struct members or functions, you should mark
them with @code{attribute_deprecated}. When this cannot be done, it may be
possible to detect the use of the deprecated API at runtime and print a warning
(though take care not to print it too often). You should also document the
deprecation (and the replacement, if applicable) in the relevant Doxygen
documentation block.
Finally, you should define a deprecation guard along the lines of
@code{#define FF_API_<FOO> (LIBAVBAR_VERSION_MAJOR < XX)} (where XX is the major
version in which the API will be removed) in @file{libavbar/version_major.h}
(@file{version.h} in case of @code{libavutil}). Then wrap all uses of the
deprecated API in @code{#if FF_API_<FOO> .... #endif}, so that the code will
automatically get disabled once the major version reaches XX. You can also use
@code{FF_DISABLE_DEPRECATION_WARNINGS} and @code{FF_ENABLE_DEPRECATION_WARNINGS}
to suppress compiler deprecation warnings inside these guards. You should test
that the code compiles and works with the guard macro evaluating to both true
and false.
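Putting it together, a deprecation might look roughly like this (an
illustrative sketch: @code{FF_API_OLD_FOO}, @code{av_old_foo()},
@code{av_new_foo()} and the major version 60 are placeholders):
@verbatim
/* libavbar/version_major.h */
#define FF_API_OLD_FOO (LIBAVBAR_VERSION_MAJOR < 60)

/* libavbar/foo.h */
#if FF_API_OLD_FOO
/**
 * @deprecated use av_new_foo() instead
 */
attribute_deprecated
int av_old_foo(void);
#endif

/* remaining uses inside the library */
#if FF_API_OLD_FOO
FF_DISABLE_DEPRECATION_WARNINGS
    av_old_foo();
FF_ENABLE_DEPRECATION_WARNINGS
#endif
@end verbatim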
@anchor{Major version bumps}
@subsection Major version bumps
A major version bump signifies an API and/or ABI compatibility break. To reduce
the negative effects on our callers, who are required to adapt their code,
backward-incompatible changes during a major bump should be limited to:
@itemize @bullet
@item
Removing previously deprecated APIs.
@item
Performing ABI- but not API-breaking changes, like reordering struct contents.
@end itemize
@subheading Check untrusted input properly.
Never write to unallocated memory, never write over the end of arrays,
always check values read from some untrusted source before using them
as an array index or in other risky ways.
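As a minimal illustration (a sketch, not code from the tree), an index read
from an untrusted bitstream could be validated like this before it is used:
@verbatim
/* hypothetical helper: reject out-of-range indices from untrusted input */
static int get_table_entry(const uint8_t *table, int table_size, int idx)
{
    if (idx < 0 || idx >= table_size)
        return AVERROR_INVALIDDATA;
    return table[idx];
}
@end verbatim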
@section Documentation/Other
@subheading Subscribe to the ffmpeg-devel mailing list.
@@ -693,6 +422,35 @@ finding a new maintainer and also don't forget to update the @file{MAINTAINERS}
We think our rules are not too hard. If you have comments, contact us.
@chapter Code of conduct
Be friendly and respectful towards others and third parties.
Treat others the way you yourself want to be treated.
Be considerate. Not everyone shares the same viewpoint and priorities as you do.
Different opinions and interpretations help the project.
Looking at issues from a different perspective assists development.
Do not assume malice for things that can be attributed to incompetence. Even if
it is malice, it's rarely good to start with that as the initial assumption.
Stay friendly even if someone acts contrarily. Everyone has a bad day
once in a while.
If you yourself have a bad day or are angry, try to take a break and reply,
if you have to, once you are calm and without anger.
Try to help other team members and cooperate if you can.
The goal of software development is to create technical excellence, not for any
individual to be better and "win" against the others. Large software projects
are only possible and successful through teamwork.
If someone struggles do not put them down. Give them a helping hand
instead and point them in the right direction.
Finally, keep in mind the immortal words of Bill and Ted,
"Be excellent to each other."
@anchor{Submitting patches}
@chapter Submitting patches
@@ -733,27 +491,6 @@ patch is inline or attached per mail.
You can check @url{https://patchwork.ffmpeg.org}; if your patch does not show up, its mime type
was likely wrong.
@subheading How to setup git send-email?
Please see @url{https://git-send-email.io/}.
For gmail additionally see @url{https://shallowsky.com/blog/tech/email/gmail-app-passwds.html}.
@subheading Sending patches from email clients
Using @code{git send-email} might not be desirable for everyone. The
following trick allows you to send patches via email clients in a safe
way. It has been tested with Outlook and Thunderbird (with X-Unsent
extension) and might work with other applications.
Create your patch like this:
@verbatim
git format-patch -s -o "outputfolder" --add-header "X-Unsent: 1" --suffix .eml --to ffmpeg-devel@ffmpeg.org -1 1a2b3c4d
@end verbatim
Now you'll just need to open the eml file with the email application
and execute 'Send'.
@subheading Reviews
Your patch will be reviewed on the mailing list. You will likely be asked
to make some changes and are expected to send in an improved version that
incorporates the requests from the review. This process may go through
@@ -782,7 +519,7 @@ number) in @file{libavcodec/version.h} or @file{libavformat/version.h}?
Did you register it in @file{allcodecs.c} or @file{allformats.c}?
@item
Did you add the AVCodecID to @file{codec_id.h}?
Did you add the AVCodecID to @file{avcodec.h}?
When adding new codec IDs, also add an entry to the codec descriptor
list in @file{libavcodec/codec_desc.c}.
@@ -797,7 +534,7 @@ already being compiled by some other rule, like a raw demuxer.
@item
Did you add an entry to the table of supported formats or codecs in
@file{doc/general_contents.texi}?
@file{doc/general.texi}?
@item
Did you add an entry in the Changelog?
@@ -885,7 +622,7 @@ If the patch fixes a bug, did you provide a verbose analysis of the bug?
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Please note: do not attach samples >100k to mails but rather provide a
URL; you can upload to @url{https://streams.videolan.org/upload/}.
URL, you can upload to ftp://upload.ffmpeg.org.
@item
Did you provide a verbose summary about what the patch does change?
@@ -914,13 +651,15 @@ Lines with similar content should be aligned vertically when doing so
improves readability.
@item
Consider adding a regression test for your code. All new modules
should be covered by tests. That includes demuxers, muxers, decoders, encoders,
filters, bitstream filters, and parsers. If it is not possible to do that, add
an explanation of why to your patchset; it is OK not to test if there is a reason.
Consider adding a regression test for your code.
@item
If you added NASM code please check that things still work with --disable-x86asm.
If you added YASM code please check that things still work with --disable-yasm.
@item
Make sure you check the return values of functions and return appropriate
error codes. Especially memory allocation functions like @code{av_malloc()}
are notoriously left unchecked, which is a serious problem.
@item
Test your code with valgrind and/or Address Sanitizer to ensure it's free
@@ -971,8 +710,6 @@ accordingly].
@section Adding files to the fate-suite dataset
If you need a sample uploaded send a mail to samples-request.
When there is no muxer or encoder available to generate test media for a
specific test then the media has to be included in the fate-suite.
First please make sure that the sample file is as small as possible to test the
@@ -1022,25 +759,6 @@ In case you need finer control over how valgrind is invoked, use the
@code{--target-exec='valgrind <your_custom_valgrind_options>} option in
your configure line instead.
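For example (an illustrative sketch only; the valgrind flags shown are just one
possible choice):
@verbatim
./configure --target-exec='valgrind --leak-check=full --error-exitcode=1'
make fate
@end verbatim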
@anchor{Maintenance}
@chapter Maintenance process
@anchor{MAINTAINERS}
@section MAINTAINERS
The developers maintaining each part of the codebase are listed in @file{MAINTAINERS}.
Being listed in @file{MAINTAINERS} gives one the right to have git write access to
the specific repository.
@anchor{Becoming a maintainer}
@section Becoming a maintainer
People add themselves to @file{MAINTAINERS} by sending a patch like any other code
change. These get reviewed by the community like any other patch. It is expected
that, if someone has an objection to a new maintainer, she is willing to object
in public with her full name and is willing to take over maintainership for the area.
@anchor{Release process}
@chapter Release process


@@ -1,13 +1,10 @@
#!/bin/sh
OUT_DIR="${1}"
SRC_DIR="${2}"
DOXYFILE="${3}"
DOXYGEN="${4}"
DOXYFILE="${2}"
DOXYGEN="${3}"
shift 4
cd ${SRC_DIR}
shift 3
if [ -e "VERSION" ]; then
VERSION=`cat "VERSION"`

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
/avio_list_dir
/avio_dir_cmd
/avio_reading
/decode_audio
/decode_video
@@ -22,4 +22,3 @@
/transcoding
/vaapi_encode
/vaapi_transcode
/qsv_transcode


@@ -1,27 +1,26 @@
EXAMPLES-$(CONFIG_AVIO_HTTP_SERVE_FILES) += avio_http_serve_files
EXAMPLES-$(CONFIG_AVIO_LIST_DIR_EXAMPLE) += avio_list_dir
EXAMPLES-$(CONFIG_AVIO_READ_CALLBACK_EXAMPLE) += avio_read_callback
EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
EXAMPLES-$(CONFIG_DECODE_AUDIO_EXAMPLE) += decode_audio
EXAMPLES-$(CONFIG_DECODE_FILTER_AUDIO_EXAMPLE) += decode_filter_audio
EXAMPLES-$(CONFIG_DECODE_FILTER_VIDEO_EXAMPLE) += decode_filter_video
EXAMPLES-$(CONFIG_DECODE_VIDEO_EXAMPLE) += decode_video
EXAMPLES-$(CONFIG_DEMUX_DECODE_EXAMPLE) += demux_decode
EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
EXAMPLES-$(CONFIG_ENCODE_AUDIO_EXAMPLE) += encode_audio
EXAMPLES-$(CONFIG_ENCODE_VIDEO_EXAMPLE) += encode_video
EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
EXAMPLES-$(CONFIG_HTTP_MULTICLIENT_EXAMPLE) += http_multiclient
EXAMPLES-$(CONFIG_HW_DECODE_EXAMPLE) += hw_decode
EXAMPLES-$(CONFIG_MUX_EXAMPLE) += mux
EXAMPLES-$(CONFIG_QSV_DECODE_EXAMPLE) += qsv_decode
EXAMPLES-$(CONFIG_REMUX_EXAMPLE) += remux
EXAMPLES-$(CONFIG_RESAMPLE_AUDIO_EXAMPLE) += resample_audio
EXAMPLES-$(CONFIG_SCALE_VIDEO_EXAMPLE) += scale_video
EXAMPLES-$(CONFIG_SHOW_METADATA_EXAMPLE) += show_metadata
EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
EXAMPLES-$(CONFIG_TRANSCODE_EXAMPLE) += transcode
EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
EXAMPLES-$(CONFIG_VAAPI_ENCODE_EXAMPLE) += vaapi_encode
EXAMPLES-$(CONFIG_VAAPI_TRANSCODE_EXAMPLE) += vaapi_transcode
EXAMPLES-$(CONFIG_QSV_TRANSCODE_EXAMPLE) += qsv_transcode
EXAMPLES := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
EXAMPLES_G := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
@@ -38,7 +37,7 @@ $(EXAMPLES_G): %$(PROGSSUF)_g$(EXESUF): %.o
examples: $(EXAMPLES)
$(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.o): | doc/examples
OUTDIRS += doc/examples
OBJDIRS += doc/examples
DOXY_INPUT += $(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.c)


@@ -11,40 +11,33 @@ CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
# missing the following targets, since they need special options in the FFmpeg build:
# qsv_decode
# qsv_transcode
# vaapi_encode
# vaapi_transcode
EXAMPLES=\
avio_http_serve_files \
avio_list_dir \
avio_read_callback \
EXAMPLES= avio_dir_cmd \
avio_reading \
decode_audio \
decode_filter_audio \
decode_filter_video \
decode_video \
demux_decode \
demuxing_decoding \
encode_audio \
encode_video \
extract_mvs \
filtering_video \
filtering_audio \
http_multiclient \
hw_decode \
mux \
remux \
resample_audio \
scale_video \
show_metadata \
metadata \
muxing \
remuxing \
resampling_audio \
scaling_video \
transcode_aac \
transcode
transcoding \
OBJS=$(addsuffix .o,$(EXAMPLES))
# the following examples make explicit use of the math library
avcodec: LDLIBS += -lm
encode_audio: LDLIBS += -lm
mux: LDLIBS += -lm
resample_audio: LDLIBS += -lm
muxing: LDLIBS += -lm
resampling_audio: LDLIBS += -lm
.phony: all clean-test clean


@@ -7,10 +7,8 @@ that you have them installed and working on your system.
Method 1: build the installed examples in a generic read/write user directory
Copy to a read/write user directory and run:
make -f Makefile.example
It will link to the libraries on your system, assuming the PKG_CONFIG_PATH is
Copy to a read/write user directory and just use "make", it will link
to the libraries on your system, assuming the PKG_CONFIG_PATH is
correctly configured.
Method 2: build the examples in-tree
@@ -22,4 +20,4 @@ examples using "make examplesclean"
If you want to try the dedicated Makefile examples (to emulate the first
method), go into doc/examples and run a command such as
PKG_CONFIG_PATH=pc-uninstalled make -f Makefile.example
PKG_CONFIG_PATH=pc-uninstalled make.

doc/examples/avio_dir_cmd.c Normal file

@@ -0,0 +1,178 @@
/*
* Copyright (c) 2014 Lukasz Marek
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
static const char *type_string(int type)
{
switch (type) {
case AVIO_ENTRY_DIRECTORY:
return "<DIR>";
case AVIO_ENTRY_FILE:
return "<FILE>";
case AVIO_ENTRY_BLOCK_DEVICE:
return "<BLOCK DEVICE>";
case AVIO_ENTRY_CHARACTER_DEVICE:
return "<CHARACTER DEVICE>";
case AVIO_ENTRY_NAMED_PIPE:
return "<PIPE>";
case AVIO_ENTRY_SYMBOLIC_LINK:
return "<LINK>";
case AVIO_ENTRY_SOCKET:
return "<SOCKET>";
case AVIO_ENTRY_SERVER:
return "<SERVER>";
case AVIO_ENTRY_SHARE:
return "<SHARE>";
case AVIO_ENTRY_WORKGROUP:
return "<WORKGROUP>";
case AVIO_ENTRY_UNKNOWN:
default:
break;
}
return "<UNKNOWN>";
}
static int list_op(const char *input_dir)
{
AVIODirEntry *entry = NULL;
AVIODirContext *ctx = NULL;
int cnt, ret;
char filemode[4], uid_and_gid[20];
if ((ret = avio_open_dir(&ctx, input_dir, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open directory: %s.\n", av_err2str(ret));
goto fail;
}
cnt = 0;
for (;;) {
if ((ret = avio_read_dir(ctx, &entry)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot list directory: %s.\n", av_err2str(ret));
goto fail;
}
if (!entry)
break;
if (entry->filemode == -1) {
snprintf(filemode, 4, "???");
} else {
snprintf(filemode, 4, "%3"PRIo64, entry->filemode);
}
snprintf(uid_and_gid, 20, "%"PRId64"(%"PRId64")", entry->user_id, entry->group_id);
if (cnt == 0)
av_log(NULL, AV_LOG_INFO, "%-9s %12s %30s %10s %s %16s %16s %16s\n",
"TYPE", "SIZE", "NAME", "UID(GID)", "UGO", "MODIFIED",
"ACCESSED", "STATUS_CHANGED");
av_log(NULL, AV_LOG_INFO, "%-9s %12"PRId64" %30s %10s %s %16"PRId64" %16"PRId64" %16"PRId64"\n",
type_string(entry->type),
entry->size,
entry->name,
uid_and_gid,
filemode,
entry->modification_timestamp,
entry->access_timestamp,
entry->status_change_timestamp);
avio_free_directory_entry(&entry);
cnt++;
};
fail:
avio_close_dir(&ctx);
return ret;
}
static int del_op(const char *url)
{
int ret = avpriv_io_delete(url);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot delete '%s': %s.\n", url, av_err2str(ret));
return ret;
}
static int move_op(const char *src, const char *dst)
{
int ret = avpriv_io_move(src, dst);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot move '%s' into '%s': %s.\n", src, dst, av_err2str(ret));
return ret;
}
static void usage(const char *program_name)
{
fprintf(stderr, "usage: %s OPERATION entry1 [entry2]\n"
"API example program to show how to manipulate resources "
"accessed through AVIOContext.\n"
"OPERATIONS:\n"
"list list content of the directory\n"
"move rename content in directory\n"
"del delete content in directory\n",
program_name);
}
int main(int argc, char *argv[])
{
const char *op = NULL;
int ret;
av_log_set_level(AV_LOG_DEBUG);
if (argc < 2) {
usage(argv[0]);
return 1;
}
avformat_network_init();
op = argv[1];
if (strcmp(op, "list") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for list operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = list_op(argv[2]);
}
} else if (strcmp(op, "del") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for del operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = del_op(argv[2]);
}
} else if (strcmp(op, "move") == 0) {
if (argc < 4) {
av_log(NULL, AV_LOG_INFO, "Missing argument for move operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = move_op(argv[2], argv[3]);
}
} else {
av_log(NULL, AV_LOG_INFO, "Invalid operation %s\n", op);
ret = AVERROR(EINVAL);
}
avformat_network_deinit();
return ret < 0 ? 1 : 0;
}


@@ -1,155 +0,0 @@
/*
* Copyright (c) 2015 Stephan Holljes
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat multi-client network API usage example
* @example avio_http_serve_files.c
*
* Serve a file without decoding or demuxing it over the HTTP protocol. Multiple
* clients can connect and will receive the same file.
*/
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <unistd.h>
static void process_client(AVIOContext *client, const char *in_uri)
{
AVIOContext *input = NULL;
uint8_t buf[1024];
int ret, n, reply_code;
uint8_t *resource = NULL;
while ((ret = avio_handshake(client)) > 0) {
av_opt_get(client, "resource", AV_OPT_SEARCH_CHILDREN, &resource);
// checking strlen(resource) is necessary, because av_opt_get()
// may return an empty string.
if (resource && strlen(resource))
break;
av_freep(&resource);
}
if (ret < 0)
goto end;
av_log(client, AV_LOG_TRACE, "resource=%p\n", resource);
if (resource && resource[0] == '/' && !strcmp((resource + 1), in_uri)) {
reply_code = 200;
} else {
reply_code = AVERROR_HTTP_NOT_FOUND;
}
if ((ret = av_opt_set_int(client, "reply_code", reply_code, AV_OPT_SEARCH_CHILDREN)) < 0) {
av_log(client, AV_LOG_ERROR, "Failed to set reply_code: %s.\n", av_err2str(ret));
goto end;
}
av_log(client, AV_LOG_TRACE, "Set reply code to %d\n", reply_code);
while ((ret = avio_handshake(client)) > 0);
if (ret < 0)
goto end;
fprintf(stderr, "Handshake performed.\n");
if (reply_code != 200)
goto end;
fprintf(stderr, "Opening input file.\n");
if ((ret = avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) < 0) {
av_log(input, AV_LOG_ERROR, "Failed to open input: %s: %s.\n", in_uri,
av_err2str(ret));
goto end;
}
for(;;) {
n = avio_read(input, buf, sizeof(buf));
if (n < 0) {
if (n == AVERROR_EOF)
break;
av_log(input, AV_LOG_ERROR, "Error reading from input: %s.\n",
av_err2str(n));
break;
}
avio_write(client, buf, n);
avio_flush(client);
}
end:
fprintf(stderr, "Flushing client\n");
avio_flush(client);
fprintf(stderr, "Closing client\n");
avio_close(client);
fprintf(stderr, "Closing input\n");
avio_close(input);
av_freep(&resource);
}
int main(int argc, char **argv)
{
AVDictionary *options = NULL;
AVIOContext *client = NULL, *server = NULL;
const char *in_uri, *out_uri;
int ret, pid;
av_log_set_level(AV_LOG_TRACE);
if (argc < 3) {
printf("usage: %s input http://hostname[:port]\n"
"API example program to serve http to multiple clients.\n"
"\n", argv[0]);
return 1;
}
in_uri = argv[1];
out_uri = argv[2];
avformat_network_init();
if ((ret = av_dict_set(&options, "listen", "2", 0)) < 0) {
fprintf(stderr, "Failed to set listen mode for server: %s\n", av_err2str(ret));
return ret;
}
if ((ret = avio_open2(&server, out_uri, AVIO_FLAG_WRITE, NULL, &options)) < 0) {
fprintf(stderr, "Failed to open server: %s\n", av_err2str(ret));
return ret;
}
fprintf(stderr, "Entering main loop.\n");
for(;;) {
if ((ret = avio_accept(server, &client)) < 0)
goto end;
fprintf(stderr, "Accepted client, forking process.\n");
// XXX: Since we don't reap our children and don't ignore signals
// this produces zombie processes.
pid = fork();
if (pid < 0) {
perror("Fork failed");
ret = AVERROR(errno);
goto end;
}
if (pid == 0) {
fprintf(stderr, "In child.\n");
process_client(client, in_uri);
avio_close(server);
exit(0);
}
if (pid > 0)
avio_close(client);
}
end:
avio_close(server);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Some errors occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}


@@ -1,137 +0,0 @@
/*
* Copyright (c) 2014 Lukasz Marek
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat AVIOContext list directory API usage example
* @example avio_list_dir.c
*
* Show how to list directories through the libavformat AVIOContext API.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
static const char *type_string(int type)
{
switch (type) {
case AVIO_ENTRY_DIRECTORY:
return "<DIR>";
case AVIO_ENTRY_FILE:
return "<FILE>";
case AVIO_ENTRY_BLOCK_DEVICE:
return "<BLOCK DEVICE>";
case AVIO_ENTRY_CHARACTER_DEVICE:
return "<CHARACTER DEVICE>";
case AVIO_ENTRY_NAMED_PIPE:
return "<PIPE>";
case AVIO_ENTRY_SYMBOLIC_LINK:
return "<LINK>";
case AVIO_ENTRY_SOCKET:
return "<SOCKET>";
case AVIO_ENTRY_SERVER:
return "<SERVER>";
case AVIO_ENTRY_SHARE:
return "<SHARE>";
case AVIO_ENTRY_WORKGROUP:
return "<WORKGROUP>";
case AVIO_ENTRY_UNKNOWN:
default:
break;
}
return "<UNKNOWN>";
}
static int list_op(const char *input_dir)
{
AVIODirEntry *entry = NULL;
AVIODirContext *ctx = NULL;
int cnt, ret;
char filemode[4], uid_and_gid[20];
if ((ret = avio_open_dir(&ctx, input_dir, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open directory: %s.\n", av_err2str(ret));
goto fail;
}
cnt = 0;
for (;;) {
if ((ret = avio_read_dir(ctx, &entry)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot list directory: %s.\n", av_err2str(ret));
goto fail;
}
if (!entry)
break;
if (entry->filemode == -1) {
snprintf(filemode, 4, "???");
} else {
snprintf(filemode, 4, "%3"PRIo64, entry->filemode);
}
snprintf(uid_and_gid, 20, "%"PRId64"(%"PRId64")", entry->user_id, entry->group_id);
if (cnt == 0)
av_log(NULL, AV_LOG_INFO, "%-9s %12s %30s %10s %s %16s %16s %16s\n",
"TYPE", "SIZE", "NAME", "UID(GID)", "UGO", "MODIFIED",
"ACCESSED", "STATUS_CHANGED");
av_log(NULL, AV_LOG_INFO, "%-9s %12"PRId64" %30s %10s %s %16"PRId64" %16"PRId64" %16"PRId64"\n",
type_string(entry->type),
entry->size,
entry->name,
uid_and_gid,
filemode,
entry->modification_timestamp,
entry->access_timestamp,
entry->status_change_timestamp);
avio_free_directory_entry(&entry);
cnt++;
};
fail:
avio_close_dir(&ctx);
return ret;
}
static void usage(const char *program_name)
{
fprintf(stderr, "usage: %s input_dir\n"
"API example program to show how to list files in directory "
"accessed through AVIOContext.\n", program_name);
}
int main(int argc, char *argv[])
{
int ret;
av_log_set_level(AV_LOG_DEBUG);
if (argc < 2) {
usage(argv[0]);
return 1;
}
avformat_network_init();
ret = list_op(argv[1]);
avformat_network_deinit();
return ret < 0 ? 1 : 0;
}


@@ -1,135 +0,0 @@
/*
* Copyright (c) 2014 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat AVIOContext read callback API usage example
* @example avio_read_callback.c
*
* Make libavformat demuxer access media content through a custom
* AVIOContext read callback.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/file.h>
#include <libavutil/mem.h>
struct buffer_data {
uint8_t *ptr;
size_t size; ///< size left in the buffer
};
static int read_packet(void *opaque, uint8_t *buf, int buf_size)
{
struct buffer_data *bd = (struct buffer_data *)opaque;
buf_size = FFMIN(buf_size, bd->size);
if (!buf_size)
return AVERROR_EOF;
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
/* copy internal buffer data to buf */
memcpy(buf, bd->ptr, buf_size);
bd->ptr += buf_size;
bd->size -= buf_size;
return buf_size;
}
int main(int argc, char *argv[])
{
AVFormatContext *fmt_ctx = NULL;
AVIOContext *avio_ctx = NULL;
uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
size_t buffer_size, avio_ctx_buffer_size = 4096;
char *input_filename = NULL;
int ret = 0;
struct buffer_data bd = { 0 };
if (argc != 2) {
fprintf(stderr, "usage: %s input_file\n"
"API example program to show how to read from a custom buffer "
"accessed through AVIOContext.\n", argv[0]);
return 1;
}
input_filename = argv[1];
/* slurp file content into buffer */
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
if (ret < 0)
goto end;
/* fill opaque structure used by the AVIOContext read callback */
bd.ptr = buffer;
bd.size = buffer_size;
if (!(fmt_ctx = avformat_alloc_context())) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
if (!avio_ctx_buffer) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
0, &bd, &read_packet, NULL, NULL);
if (!avio_ctx) {
av_freep(&avio_ctx_buffer);
ret = AVERROR(ENOMEM);
goto end;
}
fmt_ctx->pb = avio_ctx;
ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open input\n");
goto end;
}
ret = avformat_find_stream_info(fmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Could not find stream information\n");
goto end;
}
av_dump_format(fmt_ctx, 0, input_filename, 0);
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx)
av_freep(&avio_ctx->buffer);
avio_context_free(&avio_ctx);
av_file_unmap(buffer, buffer_size);
if (ret < 0) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}

doc/examples/avio_reading.c Normal file

@@ -0,0 +1,133 @@
/*
* Copyright (c) 2014 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat AVIOContext API example.
*
* Make libavformat demuxer access media content through a custom
* AVIOContext read callback.
* @example avio_reading.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/file.h>
struct buffer_data {
uint8_t *ptr;
size_t size; ///< size left in the buffer
};
static int read_packet(void *opaque, uint8_t *buf, int buf_size)
{
struct buffer_data *bd = (struct buffer_data *)opaque;
buf_size = FFMIN(buf_size, bd->size);
if (!buf_size)
return AVERROR_EOF;
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
/* copy internal buffer data to buf */
memcpy(buf, bd->ptr, buf_size);
bd->ptr += buf_size;
bd->size -= buf_size;
return buf_size;
}
int main(int argc, char *argv[])
{
AVFormatContext *fmt_ctx = NULL;
AVIOContext *avio_ctx = NULL;
uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
size_t buffer_size, avio_ctx_buffer_size = 4096;
char *input_filename = NULL;
int ret = 0;
struct buffer_data bd = { 0 };
if (argc != 2) {
fprintf(stderr, "usage: %s input_file\n"
"API example program to show how to read from a custom buffer "
"accessed through AVIOContext.\n", argv[0]);
return 1;
}
input_filename = argv[1];
/* slurp file content into buffer */
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
if (ret < 0)
goto end;
/* fill opaque structure used by the AVIOContext read callback */
bd.ptr = buffer;
bd.size = buffer_size;
if (!(fmt_ctx = avformat_alloc_context())) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
if (!avio_ctx_buffer) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
0, &bd, &read_packet, NULL, NULL);
if (!avio_ctx) {
ret = AVERROR(ENOMEM);
goto end;
}
fmt_ctx->pb = avio_ctx;
ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open input\n");
goto end;
}
ret = avformat_find_stream_info(fmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Could not find stream information\n");
goto end;
}
av_dump_format(fmt_ctx, 0, input_filename, 0);
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx) {
av_freep(&avio_ctx->buffer);
av_freep(&avio_ctx);
}
av_file_unmap(buffer, buffer_size);
if (ret < 0) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}


@@ -21,11 +21,10 @@
*/
/**
* @file libavcodec audio decoding API usage example
* @example decode_audio.c
* @file
* audio decoding with libavcodec API example
*
* Decode data from an MP2 input file and generate a raw audio file to
* be played with ffplay.
* @example decode_audio.c
*/
#include <stdio.h>
@@ -40,35 +39,6 @@
#define AUDIO_INBUF_SIZE 20480
#define AUDIO_REFILL_THRESH 4096
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"sample format %s is not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return -1;
}
static void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,
FILE *outfile)
{
@@ -98,7 +68,7 @@ static void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,
exit(1);
}
for (i = 0; i < frame->nb_samples; i++)
for (ch = 0; ch < dec_ctx->ch_layout.nb_channels; ch++)
for (ch = 0; ch < dec_ctx->channels; ch++)
fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile);
}
}
@@ -116,9 +86,6 @@ int main(int argc, char **argv)
size_t data_size;
AVPacket *pkt;
AVFrame *decoded_frame = NULL;
enum AVSampleFormat sfmt;
int n_channels = 0;
const char *fmt;
if (argc <= 2) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
@@ -128,10 +95,6 @@ int main(int argc, char **argv)
outfilename = argv[2];
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
exit(1); /* or proper cleanup and returning */
}
/* find the MPEG audio decoder */
codec = avcodec_find_decoder(AV_CODEC_ID_MP2);
@@ -165,7 +128,7 @@ int main(int argc, char **argv)
}
outfile = fopen(outfilename, "wb");
if (!outfile) {
fprintf(stderr, "Could not open %s\n", outfilename);
av_free(c);
exit(1);
}
@@ -209,26 +172,6 @@ int main(int argc, char **argv)
pkt->size = 0;
decode(c, pkt, decoded_frame, outfile);
/* print information about the output PCM, since raw PCM carries no metadata */
sfmt = c->sample_fmt;
if (av_sample_fmt_is_planar(sfmt)) {
const char *packed = av_get_sample_fmt_name(sfmt);
printf("Warning: the sample format the decoder produced is planar "
"(%s). This example will output the first channel only.\n",
packed ? packed : "?");
sfmt = av_get_packed_sample_fmt(sfmt);
}
n_channels = c->ch_layout.nb_channels;
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
goto end;
printf("Play the output audio file with the command:\n"
"ffplay -f %s -ac %d -ar %d %s\n",
fmt, n_channels, c->sample_rate,
outfilename);
end:
fclose(outfile);
fclose(f);


@@ -1,319 +0,0 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
* Copyright (c) 2012 Clément Bœsch
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file audio decoding and filtering usage example
* @example decode_filter_audio.c
*
* Demux, decode and filter audio input file, generate a raw audio
* file to be played with ffplay.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/channel_layout.h>
#include <libavutil/mem.h>
#include <libavutil/opt.h>
static const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
static const char *player = "ffplay -f s16le -ar 8000 -ac 1 -";
static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
static int audio_stream_index = -1;
static int open_input_file(const char *filename)
{
const AVCodec *dec;
int ret;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
/* select the audio stream */
ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find an audio stream in the input file\n");
return ret;
}
audio_stream_index = ret;
/* create decoding context */
dec_ctx = avcodec_alloc_context3(dec);
if (!dec_ctx)
return AVERROR(ENOMEM);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[audio_stream_index]->codecpar);
/* init the audio decoder */
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
return ret;
}
return 0;
}
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
const AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
const AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
static const int out_sample_rate = 8000;
const AVFilterLink *outlink;
AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer audio source: the decoded frames from the decoder will be inserted here. */
if (dec_ctx->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC)
av_channel_layout_default(&dec_ctx->ch_layout, dec_ctx->ch_layout.nb_channels);
ret = snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=",
time_base.num, time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt));
av_channel_layout_describe(&dec_ctx->ch_layout, args + ret, sizeof(args) - ret);
ret = avfilter_graph_create_filter(&buffersrc_ctx, abuffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
/* buffer audio sink: to terminate the filter chain. */
buffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "out");
if (!buffersink_ctx) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
ret = AVERROR(ENOMEM);
goto end;
}
ret = av_opt_set(buffersink_ctx, "sample_formats", "s16",
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set(buffersink_ctx, "channel_layouts", "mono",
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_array(buffersink_ctx, "samplerates", AV_OPT_SEARCH_CHILDREN,
0, 1, AV_OPT_TYPE_INT, &out_sample_rate);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
ret = avfilter_init_dict(buffersink_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot initialize audio buffer sink\n");
goto end;
}
/*
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Print summary of the sink buffer
* Note: args buffer is reused to store channel layout string */
outlink = buffersink_ctx->inputs[0];
av_channel_layout_describe(&outlink->ch_layout, args, sizeof(args));
av_log(NULL, AV_LOG_INFO, "Output: srate:%dHz fmt:%s chlayout:%s\n",
(int)outlink->sample_rate,
(char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
args);
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static void print_frame(const AVFrame *frame)
{
const int n = frame->nb_samples * frame->ch_layout.nb_channels;
const uint16_t *p = (uint16_t*)frame->data[0];
const uint16_t *p_end = p + n;
while (p < p_end) {
fputc(*p & 0xff, stdout);
fputc(*p>>8 & 0xff, stdout);
p++;
}
fflush(stdout);
}
int main(int argc, char **argv)
{
int ret;
AVPacket *packet = av_packet_alloc();
AVFrame *frame = av_frame_alloc();
AVFrame *filt_frame = av_frame_alloc();
if (!packet || !frame || !filt_frame) {
fprintf(stderr, "Could not allocate frame or packet\n");
exit(1);
}
if (argc != 2) {
fprintf(stderr, "Usage: %s file | %s\n", argv[0], player);
exit(1);
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = init_filters(filter_descr)) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, packet)) < 0)
break;
if (packet->stream_index == audio_stream_index) {
ret = avcodec_send_packet(dec_ctx, packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
}
while (ret >= 0) {
ret = avcodec_receive_frame(dec_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
} else if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
goto end;
}
if (ret >= 0) {
/* push the audio data from decoded frame into the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the audio filtergraph\n");
break;
}
/* pull filtered audio from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
print_frame(filt_frame);
av_frame_unref(filt_frame);
}
av_frame_unref(frame);
}
}
}
av_packet_unref(packet);
}
if (ret == AVERROR_EOF) {
/* signal EOF to the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, NULL, 0) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while closing the filtergraph\n");
goto end;
}
/* pull remaining frames from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
print_frame(filt_frame);
av_frame_unref(filt_frame);
}
}
end:
avfilter_graph_free(&filter_graph);
avcodec_free_context(&dec_ctx);
avformat_close_input(&fmt_ctx);
av_packet_free(&packet);
av_frame_free(&frame);
av_frame_free(&filt_frame);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
exit(1);
}
exit(0);
}


@@ -1,317 +0,0 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for decoding and filtering
* @example decode_filter_video.c
*/
#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/mem.h>
#include <libavutil/opt.h>
#include <libavutil/time.h>
const char *filter_descr = "scale=78:24,transpose=cclock";
/* other way:
scale=78:24 [scl]; [scl] transpose=cclock // assumes "[in]" and "[out]" to be input output pads respectively
*/
static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
static int video_stream_index = -1;
static int64_t last_pts = AV_NOPTS_VALUE;
static int open_input_file(const char *filename)
{
const AVCodec *dec;
int ret;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
/* select the video stream */
ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
return ret;
}
video_stream_index = ret;
/* create decoding context */
dec_ctx = avcodec_alloc_context3(dec);
if (!dec_ctx)
return AVERROR(ENOMEM);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
/* init the video decoder */
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
return ret;
}
return 0;
}
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
const AVFilter *buffersrc = avfilter_get_by_name("buffer");
const AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
time_base.num, time_base.den,
dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
/* buffer video sink: to terminate the filter chain. */
buffersink_ctx = avfilter_graph_alloc_filter(filter_graph, buffersink, "out");
if (!buffersink_ctx) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
ret = AVERROR(ENOMEM);
goto end;
}
ret = av_opt_set(buffersink_ctx, "pixel_formats", "gray8",
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
ret = avfilter_init_dict(buffersink_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot initialize buffer sink\n");
goto end;
}
/*
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static void display_frame(const AVFrame *frame, AVRational time_base)
{
int x, y;
uint8_t *p0, *p;
int64_t delay;
if (frame->pts != AV_NOPTS_VALUE) {
if (last_pts != AV_NOPTS_VALUE) {
/* sleep roughly the right amount of time;
* usleep is in microseconds, just like AV_TIME_BASE. */
delay = av_rescale_q(frame->pts - last_pts,
time_base, AV_TIME_BASE_Q);
if (delay > 0 && delay < 1000000)
av_usleep(delay);
}
last_pts = frame->pts;
}
/* Trivial ASCII grayscale display. */
p0 = frame->data[0];
puts("\033c");
for (y = 0; y < frame->height; y++) {
p = p0;
for (x = 0; x < frame->width; x++)
putchar(" .-+#"[*(p++) / 52]);
putchar('\n');
p0 += frame->linesize[0];
}
fflush(stdout);
}
int main(int argc, char **argv)
{
int ret;
AVPacket *packet;
AVFrame *frame;
AVFrame *filt_frame;
if (argc != 2) {
fprintf(stderr, "Usage: %s file\n", argv[0]);
exit(1);
}
frame = av_frame_alloc();
filt_frame = av_frame_alloc();
packet = av_packet_alloc();
if (!frame || !filt_frame || !packet) {
fprintf(stderr, "Could not allocate frame or packet\n");
exit(1);
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = init_filters(filter_descr)) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, packet)) < 0)
break;
if (packet->stream_index == video_stream_index) {
ret = avcodec_send_packet(dec_ctx, packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
}
while (ret >= 0) {
ret = avcodec_receive_frame(dec_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
} else if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
goto end;
}
frame->pts = frame->best_effort_timestamp;
/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
break;
}
/* pull filtered frames from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
av_frame_unref(filt_frame);
}
av_frame_unref(frame);
}
}
av_packet_unref(packet);
}
if (ret == AVERROR_EOF) {
/* signal EOF to the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, NULL, 0) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while closing the filtergraph\n");
goto end;
}
/* pull remaining frames from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
av_frame_unref(filt_frame);
}
}
end:
avfilter_graph_free(&filter_graph);
avcodec_free_context(&dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
av_packet_free(&packet);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
exit(1);
}
exit(0);
}
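A side note on the graph setup above: avfilter_graph_parse_ptr() builds the chain from the textual filters_descr. The same topology can also be wired programmatically with public libavfilter calls. The sketch below is illustrative only, not part of the file; error paths are abbreviated and the filter options mirror the "scale=78:24,transpose=cclock" description this example uses.
static int link_filters_manually(AVFilterGraph *graph,
                                 AVFilterContext *src, AVFilterContext *sink)
{
    /* create the two filters that filters_descr would otherwise describe */
    AVFilterContext *scale_ctx = avfilter_graph_alloc_filter(graph,
                                     avfilter_get_by_name("scale"), "scale");
    AVFilterContext *trans_ctx = avfilter_graph_alloc_filter(graph,
                                     avfilter_get_by_name("transpose"), "transpose");
    int ret;
    if (!scale_ctx || !trans_ctx)
        return AVERROR(ENOMEM);
    if ((ret = avfilter_init_str(scale_ctx, "78:24")) < 0 ||
        (ret = avfilter_init_str(trans_ctx, "cclock")) < 0)
        return ret;
    /* src -> scale -> transpose -> sink */
    if ((ret = avfilter_link(src, 0, scale_ctx, 0)) < 0 ||
        (ret = avfilter_link(scale_ctx, 0, trans_ctx, 0)) < 0 ||
        (ret = avfilter_link(trans_ctx, 0, sink, 0)) < 0)
        return ret;
    return avfilter_graph_config(graph, NULL);
}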

View File

@@ -21,11 +21,10 @@
*/
/**
* @file libavcodec video decoding API usage example
* @example decode_video.c
* @file
* video decoding with libavcodec API example
*
* Read from an MPEG1 video file, decode frames, and generate PGM images as
* output.
* @example decode_video.c
*/
#include <stdio.h>
@@ -42,7 +41,7 @@ static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
FILE *f;
int i;
f = fopen(filename,"wb");
f = fopen(filename,"w");
fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
for (i = 0; i < ysize; i++)
fwrite(buf + i * wrap, 1, xsize, f);
@@ -70,12 +69,12 @@ static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt,
exit(1);
}
printf("saving frame %3"PRId64"\n", dec_ctx->frame_num);
printf("saving frame %3d\n", dec_ctx->frame_number);
fflush(stdout);
/* the picture is allocated by the decoder. no need to
free it */
snprintf(buf, sizeof(buf), "%s-%"PRId64, filename, dec_ctx->frame_num);
snprintf(buf, sizeof(buf), "%s-%d", filename, dec_ctx->frame_number);
pgm_save(frame->data[0], frame->linesize[0],
frame->width, frame->height, buf);
}
@@ -93,12 +92,10 @@ int main(int argc, char **argv)
uint8_t *data;
size_t data_size;
int ret;
int eof;
AVPacket *pkt;
if (argc <= 2) {
fprintf(stderr, "Usage: %s <input file> <output file>\n"
"And check your input file is encoded by mpeg1video please.\n", argv[0]);
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
exit(0);
}
filename = argv[1];
@@ -152,16 +149,15 @@ int main(int argc, char **argv)
exit(1);
}
do {
while (!feof(f)) {
/* read raw data from the input file */
data_size = fread(inbuf, 1, INBUF_SIZE, f);
if (ferror(f))
if (!data_size)
break;
eof = !data_size;
/* use the parser to split the data into frames */
data = inbuf;
while (data_size > 0 || eof) {
while (data_size > 0) {
ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
if (ret < 0) {
@@ -173,10 +169,8 @@ int main(int argc, char **argv)
if (pkt->size)
decode(c, frame, pkt, outfilename);
else if (eof)
break;
}
} while (!eof);
}
/* flush the decoder */
decode(c, frame, NULL, outfilename);

View File

@@ -1,380 +0,0 @@
/*
* Copyright (c) 2012 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat and libavcodec demuxing and decoding API usage example
* @example demux_decode.c
*
* Show how to use the libavformat and libavcodec API to demux and decode audio
* and video data. Write the output as raw audio and raw video files to be
* played by ffplay.
*/
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
static const char *audio_dst_filename = NULL;
static FILE *video_dst_file = NULL;
static FILE *audio_dst_file = NULL;
static uint8_t *video_dst_data[4] = {NULL};
static int video_dst_linesize[4];
static int video_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket *pkt = NULL;
static int video_frame_count = 0;
static int audio_frame_count = 0;
static int output_video_frame(AVFrame *frame)
{
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
frame->width, frame->height,
av_get_pix_fmt_name(frame->format));
return -1;
}
printf("video_frame n:%d\n",
video_frame_count++);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non-aligned data */
av_image_copy2(video_dst_data, video_dst_linesize,
frame->data, frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
return 0;
}
static int output_audio_frame(AVFrame *frame)
{
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
printf("audio_frame n:%d nb_samples:%d pts:%s\n",
audio_frame_count++, frame->nb_samples,
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
return 0;
}
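/* Editor's sketch, not part of the original example (assumes
 * #include <libswresample/swresample.h>): instead of writing only the first
 * plane of planar audio as above, libswresample can convert the frame to
 * packed S16 so that every channel ends up in the output file. */
static int write_packed_audio(const AVFrame *frame, FILE *out)
{
    struct SwrContext *swr = NULL;
    uint8_t *packed = NULL;
    int bps = av_get_bytes_per_sample(AV_SAMPLE_FMT_S16);
    int nb_channels = frame->ch_layout.nb_channels;
    int ret;
    /* same layout and rate in and out; only the sample format changes */
    ret = swr_alloc_set_opts2(&swr,
                              &frame->ch_layout, AV_SAMPLE_FMT_S16, frame->sample_rate,
                              &frame->ch_layout, frame->format,     frame->sample_rate,
                              0, NULL);
    if (ret < 0 || (ret = swr_init(swr)) < 0)
        goto end;
    packed = av_malloc((size_t)frame->nb_samples * nb_channels * bps);
    if (!packed) {
        ret = AVERROR(ENOMEM);
        goto end;
    }
    ret = swr_convert(swr, &packed, frame->nb_samples,
                      (const uint8_t **)frame->extended_data, frame->nb_samples);
    if (ret > 0)
        fwrite(packed, (size_t)nb_channels * bps, ret, out);
end:
    av_freep(&packed);
    swr_free(&swr);
    return ret < 0 ? ret : 0;
}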
static int decode_packet(AVCodecContext *dec, const AVPacket *pkt)
{
int ret = 0;
// submit the packet to the decoder
ret = avcodec_send_packet(dec, pkt);
if (ret < 0) {
fprintf(stderr, "Error submitting a packet for decoding (%s)\n", av_err2str(ret));
return ret;
}
// get all the available frames from the decoder
while (ret >= 0) {
ret = avcodec_receive_frame(dec, frame);
if (ret < 0) {
// those two return values are special and mean there is no output
// frame available, but there were no errors during decoding
if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
return 0;
fprintf(stderr, "Error during decoding (%s)\n", av_err2str(ret));
return ret;
}
// write the frame data to output file
if (dec->codec->type == AVMEDIA_TYPE_VIDEO)
ret = output_video_frame(frame);
else
ret = output_audio_frame(frame);
av_frame_unref(frame);
}
return ret;
}
static int open_codec_context(int *stream_idx,
AVCodecContext **dec_ctx, AVFormatContext *fmt_ctx, enum AVMediaType type)
{
int ret, stream_index;
AVStream *st;
const AVCodec *dec = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Allocate a codec context for the decoder */
*dec_ctx = avcodec_alloc_context3(dec);
if (!*dec_ctx) {
fprintf(stderr, "Failed to allocate the %s codec context\n",
av_get_media_type_string(type));
return AVERROR(ENOMEM);
}
/* Copy codec parameters from input stream to output codec context */
if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
av_get_media_type_string(type));
return ret;
}
/* Init the decoders */
if ((ret = avcodec_open2(*dec_ctx, dec, NULL)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
*stream_idx = stream_index;
}
return 0;
}
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"sample format %s is not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return -1;
}
int main (int argc, char **argv)
{
int ret = 0;
if (argc != 4) {
fprintf(stderr, "usage: %s input_file video_output_file audio_output_file\n"
"API example program to show how to read frames from an input file.\n"
"This program reads frames from a file, decodes them, and writes decoded\n"
"video frames to a rawvideo file named video_output_file, and decoded\n"
"audio frames to a rawaudio file named audio_output_file.\n",
argv[0]);
exit(1);
}
src_filename = argv[1];
video_dst_filename = argv[2];
audio_dst_filename = argv[3];
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
/* retrieve stream information */
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
fprintf(stderr, "Could not find stream information\n");
exit(1);
}
if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
video_dst_file = fopen(video_dst_filename, "wb");
if (!video_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
ret = 1;
goto end;
}
/* allocate image where the decoded image will be put */
width = video_dec_ctx->width;
height = video_dec_ctx->height;
pix_fmt = video_dec_ctx->pix_fmt;
ret = av_image_alloc(video_dst_data, video_dst_linesize,
width, height, pix_fmt, 1);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw video buffer\n");
goto end;
}
video_dst_bufsize = ret;
}
if (open_codec_context(&audio_stream_idx, &audio_dec_ctx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
audio_stream = fmt_ctx->streams[audio_stream_idx];
audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
ret = 1;
goto end;
}
}
/* dump input information to stderr */
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!audio_stream && !video_stream) {
fprintf(stderr, "Could not find audio or video stream in the input, aborting\n");
ret = 1;
goto end;
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate frame\n");
ret = AVERROR(ENOMEM);
goto end;
}
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate packet\n");
ret = AVERROR(ENOMEM);
goto end;
}
if (video_stream)
printf("Demuxing video from file '%s' into '%s'\n", src_filename, video_dst_filename);
if (audio_stream)
printf("Demuxing audio from file '%s' into '%s'\n", src_filename, audio_dst_filename);
/* read frames from the file */
while (av_read_frame(fmt_ctx, pkt) >= 0) {
// check if the packet belongs to a stream we are interested in, otherwise
// skip it
if (pkt->stream_index == video_stream_idx)
ret = decode_packet(video_dec_ctx, pkt);
else if (pkt->stream_index == audio_stream_idx)
ret = decode_packet(audio_dec_ctx, pkt);
av_packet_unref(pkt);
if (ret < 0)
break;
}
/* flush the decoders */
if (video_dec_ctx)
decode_packet(video_dec_ctx, NULL);
if (audio_dec_ctx)
decode_packet(audio_dec_ctx, NULL);
printf("Demuxing succeeded.\n");
if (video_stream) {
printf("Play the output video file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(pix_fmt), width, height,
video_dst_filename);
}
if (audio_stream) {
enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
int n_channels = audio_dec_ctx->ch_layout.nb_channels;
const char *fmt;
if (av_sample_fmt_is_planar(sfmt)) {
const char *packed = av_get_sample_fmt_name(sfmt);
printf("Warning: the sample format the decoder produced is planar "
"(%s). This example will output the first channel only.\n",
packed ? packed : "?");
sfmt = av_get_packed_sample_fmt(sfmt);
n_channels = 1;
}
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
goto end;
printf("Play the output audio file with the command:\n"
"ffplay -f %s -ac %d -ar %d %s\n",
fmt, n_channels, audio_dec_ctx->sample_rate,
audio_dst_filename);
}
end:
avcodec_free_context(&video_dec_ctx);
avcodec_free_context(&audio_dec_ctx);
avformat_close_input(&fmt_ctx);
if (video_dst_file)
fclose(video_dst_file);
if (audio_dst_file)
fclose(audio_dst_file);
av_packet_free(&pkt);
av_frame_free(&frame);
av_free(video_dst_data[0]);
return ret < 0;
}

View File

@@ -0,0 +1,390 @@
/*
* Copyright (c) 2012 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* Demuxing and decoding example.
*
* Show how to use the libavformat and libavcodec API to demux and
* decode audio and video data.
* @example demuxing_decoding.c
*/
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
static const char *audio_dst_filename = NULL;
static FILE *video_dst_file = NULL;
static FILE *audio_dst_file = NULL;
static uint8_t *video_dst_data[4] = {NULL};
static int video_dst_linesize[4];
static int video_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int audio_frame_count = 0;
/* Enable or disable frame reference counting. You are not supposed to support
* both paths in your application but to pick the one most appropriate to your
* needs. Look for the use of refcount in this example to see how the API
* usage differs between the two paths. */
static int refcount = 0;
static int decode_packet(int *got_frame, int cached)
{
int ret = 0;
int decoded = pkt.size;
*got_frame = 0;
if (pkt.stream_index == video_stream_idx) {
/* decode video frame */
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
if (*got_frame) {
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
frame->width, frame->height,
av_get_pix_fmt_name(frame->format));
return -1;
}
printf("video_frame%s n:%d coded_n:%d\n",
cached ? "(cached)" : "",
video_frame_count++, frame->coded_picture_number);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non-aligned data */
av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
}
} else if (pkt.stream_index == audio_stream_idx) {
/* decode audio frame */
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
return ret;
}
/* Some audio decoders decode only part of the packet, and have to be
* called again with the remainder of the packet data.
* Sample: fate-suite/lossless-audio/luckynight-partial.shn
* Also, some decoders might over-read the packet. */
decoded = FFMIN(ret, pkt.size);
if (*got_frame) {
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
cached ? "(cached)" : "",
audio_frame_count++, frame->nb_samples,
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
}
}
/* If we use frame reference counting, we own the data and need
* to de-reference it when we don't use it anymore */
if (*got_frame && refcount)
av_frame_unref(frame);
return decoded;
}
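/* Editor's note, not part of the original file: avcodec_decode_video2() and
 * avcodec_decode_audio4() used above were deprecated in favour of the
 * send/receive API, which the demux_decode.c listing earlier on this page
 * already uses. A minimal sketch of the equivalent loop for one packet: */
static int decode_packet_modern(AVCodecContext *dec, AVFrame *frm, const AVPacket *p)
{
    int ret = avcodec_send_packet(dec, p);
    if (ret < 0)
        return ret;
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frm);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0; /* no more output for now; not an error */
        if (ret < 0)
            return ret;
        /* ... process frm here ... */
        av_frame_unref(frm);
    }
    return 0;
}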
static int open_codec_context(int *stream_idx,
AVCodecContext **dec_ctx, AVFormatContext *fmt_ctx, enum AVMediaType type)
{
int ret, stream_index;
AVStream *st;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Allocate a codec context for the decoder */
*dec_ctx = avcodec_alloc_context3(dec);
if (!*dec_ctx) {
fprintf(stderr, "Failed to allocate the %s codec context\n",
av_get_media_type_string(type));
return AVERROR(ENOMEM);
}
/* Copy codec parameters from input stream to output codec context */
if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
av_get_media_type_string(type));
return ret;
}
/* Init the decoders, with or without reference counting */
av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);
if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
*stream_idx = stream_index;
}
return 0;
}
static int get_format_from_sample_fmt(const char **fmt,
enum AVSampleFormat sample_fmt)
{
int i;
struct sample_fmt_entry {
enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
} sample_fmt_entries[] = {
{ AV_SAMPLE_FMT_U8, "u8", "u8" },
{ AV_SAMPLE_FMT_S16, "s16be", "s16le" },
{ AV_SAMPLE_FMT_S32, "s32be", "s32le" },
{ AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
{ AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
};
*fmt = NULL;
for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
struct sample_fmt_entry *entry = &sample_fmt_entries[i];
if (sample_fmt == entry->sample_fmt) {
*fmt = AV_NE(entry->fmt_be, entry->fmt_le);
return 0;
}
}
fprintf(stderr,
"sample format %s is not supported as output format\n",
av_get_sample_fmt_name(sample_fmt));
return -1;
}
int main (int argc, char **argv)
{
int ret = 0, got_frame;
if (argc != 4 && argc != 5) {
fprintf(stderr, "usage: %s [-refcount] input_file video_output_file audio_output_file\n"
"API example program to show how to read frames from an input file.\n"
"This program reads frames from a file, decodes them, and writes decoded\n"
"video frames to a rawvideo file named video_output_file, and decoded\n"
"audio frames to a rawaudio file named audio_output_file.\n\n"
"If the -refcount option is specified, the program use the\n"
"reference counting frame system which allows keeping a copy of\n"
"the data for longer than one decode call.\n"
"\n", argv[0]);
exit(1);
}
if (argc == 5 && !strcmp(argv[1], "-refcount")) {
refcount = 1;
argv++;
}
src_filename = argv[1];
video_dst_filename = argv[2];
audio_dst_filename = argv[3];
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
/* retrieve stream information */
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
fprintf(stderr, "Could not find stream information\n");
exit(1);
}
if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
video_dst_file = fopen(video_dst_filename, "wb");
if (!video_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
ret = 1;
goto end;
}
/* allocate image where the decoded image will be put */
width = video_dec_ctx->width;
height = video_dec_ctx->height;
pix_fmt = video_dec_ctx->pix_fmt;
ret = av_image_alloc(video_dst_data, video_dst_linesize,
width, height, pix_fmt, 1);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw video buffer\n");
goto end;
}
video_dst_bufsize = ret;
}
if (open_codec_context(&audio_stream_idx, &audio_dec_ctx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
audio_stream = fmt_ctx->streams[audio_stream_idx];
audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
ret = 1;
goto end;
}
}
/* dump input information to stderr */
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!audio_stream && !video_stream) {
fprintf(stderr, "Could not find audio or video stream in the input, aborting\n");
ret = 1;
goto end;
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate frame\n");
ret = AVERROR(ENOMEM);
goto end;
}
/* initialize packet, set data to NULL, let the demuxer fill it */
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
if (video_stream)
printf("Demuxing video from file '%s' into '%s'\n", src_filename, video_dst_filename);
if (audio_stream)
printf("Demuxing audio from file '%s' into '%s'\n", src_filename, audio_dst_filename);
/* read frames from the file */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
AVPacket orig_pkt = pkt;
do {
ret = decode_packet(&got_frame, 0);
if (ret < 0)
break;
pkt.data += ret;
pkt.size -= ret;
} while (pkt.size > 0);
av_packet_unref(&orig_pkt);
}
/* flush cached frames */
pkt.data = NULL;
pkt.size = 0;
do {
decode_packet(&got_frame, 1);
} while (got_frame);
printf("Demuxing succeeded.\n");
if (video_stream) {
printf("Play the output video file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(pix_fmt), width, height,
video_dst_filename);
}
if (audio_stream) {
enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
int n_channels = audio_dec_ctx->channels;
const char *fmt;
if (av_sample_fmt_is_planar(sfmt)) {
const char *packed = av_get_sample_fmt_name(sfmt);
printf("Warning: the sample format the decoder produced is planar "
"(%s). This example will output the first channel only.\n",
packed ? packed : "?");
sfmt = av_get_packed_sample_fmt(sfmt);
n_channels = 1;
}
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
goto end;
printf("Play the output audio file with the command:\n"
"ffplay -f %s -ac %d -ar %d %s\n",
fmt, n_channels, audio_dec_ctx->sample_rate,
audio_dst_filename);
}
end:
avcodec_free_context(&video_dec_ctx);
avcodec_free_context(&audio_dec_ctx);
avformat_close_input(&fmt_ctx);
if (video_dst_file)
fclose(video_dst_file);
if (audio_dst_file)
fclose(audio_dst_file);
av_frame_free(&frame);
av_free(video_dst_data[0]);
return ret < 0;
}

View File

@@ -21,10 +21,10 @@
*/
/**
* @file libavcodec encoding audio API usage examples
* @example encode_audio.c
* @file
* audio encoding with libavcodec API example.
*
* Generate a synthetic audio signal and encode it to an output MP2 file.
* @example encode_audio.c
*/
#include <stdint.h>
@@ -70,25 +70,26 @@ static int select_sample_rate(const AVCodec *codec)
}
/* select layout with the highest channel count */
static int select_channel_layout(const AVCodec *codec, AVChannelLayout *dst)
static int select_channel_layout(const AVCodec *codec)
{
const AVChannelLayout *p, *best_ch_layout;
const uint64_t *p;
uint64_t best_ch_layout = 0;
int best_nb_channels = 0;
if (!codec->ch_layouts)
return av_channel_layout_copy(dst, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO);
if (!codec->channel_layouts)
return AV_CH_LAYOUT_STEREO;
p = codec->ch_layouts;
while (p->nb_channels) {
int nb_channels = p->nb_channels;
p = codec->channel_layouts;
while (*p) {
int nb_channels = av_get_channel_layout_nb_channels(*p);
if (nb_channels > best_nb_channels) {
best_ch_layout = p;
best_ch_layout = *p;
best_nb_channels = nb_channels;
}
p++;
}
return av_channel_layout_copy(dst, best_ch_layout);
return best_ch_layout;
}
static void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt,
@@ -163,9 +164,8 @@ int main(int argc, char **argv)
/* select other audio parameters supported by the encoder */
c->sample_rate = select_sample_rate(codec);
ret = select_channel_layout(codec, &c->ch_layout);
if (ret < 0)
exit(1);
c->channel_layout = select_channel_layout(codec);
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
@@ -195,9 +195,7 @@ int main(int argc, char **argv)
frame->nb_samples = c->frame_size;
frame->format = c->sample_fmt;
ret = av_channel_layout_copy(&frame->ch_layout, &c->ch_layout);
if (ret < 0)
exit(1);
frame->channel_layout = c->channel_layout;
/* allocate the data buffers */
ret = av_frame_get_buffer(frame, 0);
@@ -220,7 +218,7 @@ int main(int argc, char **argv)
for (j = 0; j < c->frame_size; j++) {
samples[2*j] = (int)(sin(t) * 10000);
for (k = 1; k < c->ch_layout.nb_channels; k++)
for (k = 1; k < c->channels; k++)
samples[2*j + k] = samples[2*j];
t += tincr;
}
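The hunks above pick a sample rate and channel layout the encoder supports; the sample format deserves the same treatment. A helper along these lines (a sketch in the spirit of the example, not shown in this diff) walks the AV_SAMPLE_FMT_NONE-terminated list in codec->sample_fmts:
static int check_sample_fmt(const AVCodec *codec, enum AVSampleFormat sample_fmt)
{
    const enum AVSampleFormat *p = codec->sample_fmts;
    if (!p)
        return 1; /* no list published: assume the format is accepted */
    while (*p != AV_SAMPLE_FMT_NONE) {
        if (*p == sample_fmt)
            return 1;
        p++;
    }
    return 0;
}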

View File

@@ -21,10 +21,10 @@
*/
/**
* @file libavcodec encoding video API usage example
* @example encode_video.c
* @file
* video encoding with libavcodec API example
*
* Generate synthetic video data and encode it to an output file.
* @example encode_video.c
*/
#include <stdio.h>
@@ -145,7 +145,7 @@ int main(int argc, char **argv)
frame->width = c->width;
frame->height = c->height;
ret = av_frame_get_buffer(frame, 0);
ret = av_frame_get_buffer(frame, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate the video frame data\n");
exit(1);
@@ -155,25 +155,12 @@ int main(int argc, char **argv)
for (i = 0; i < 25; i++) {
fflush(stdout);
/* Make sure the frame data is writable.
On the first round, the frame is fresh from av_frame_get_buffer()
and therefore we know it is writable.
But on the next rounds, encode() will have called
avcodec_send_frame(), and the codec may have kept a reference to
the frame in its internal structures, which makes the frame
unwritable.
av_frame_make_writable() checks that and allocates a new buffer
for the frame only if necessary.
*/
/* make sure the frame data is writable */
ret = av_frame_make_writable(frame);
if (ret < 0)
exit(1);
/* Prepare a dummy image.
In real code, this is where you would have your own logic for
filling the frame. FFmpeg does not care what you put in the
frame.
*/
/* prepare a dummy image */
/* Y */
for (y = 0; y < c->height; y++) {
for (x = 0; x < c->width; x++) {
@@ -198,14 +185,8 @@ int main(int argc, char **argv)
/* flush the encoder */
encode(c, NULL, pkt, f);
/* Add sequence end code to have a real MPEG file.
It only makes sense because this tiny example writes packets
directly. This is called an "elementary stream" and only works for some
codecs. To create a valid file, you usually need to write packets
into a proper file format or protocol; see mux.c.
*/
if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO)
fwrite(endcode, 1, sizeof(endcode), f);
/* add sequence end code to have a real MPEG file */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
avcodec_free_context(&c);

View File

@@ -21,16 +21,7 @@
* THE SOFTWARE.
*/
/**
* @file libavcodec motion vectors extraction API usage example
* @example extract_mvs.c
*
* Read from input file, decode video stream and print a motion vectors
* representation to stdout.
*/
#include <libavutil/motion_vector.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
@@ -69,11 +60,10 @@ static int decode_packet(const AVPacket *pkt)
const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
const AVMotionVector *mv = &mvs[i];
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64",%4d,%4d,%4d\n",
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
video_frame_count, mv->source,
mv->w, mv->h, mv->src_x, mv->src_y,
mv->dst_x, mv->dst_y, mv->flags,
mv->motion_x, mv->motion_y, mv->motion_scale);
mv->dst_x, mv->dst_y, mv->flags);
}
}
av_frame_unref(frame);
@@ -88,7 +78,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
int ret;
AVStream *st;
AVCodecContext *dec_ctx = NULL;
const AVCodec *dec = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
@@ -114,9 +104,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
/* Init the video decoder */
av_dict_set(&opts, "flags2", "+export_mvs", 0);
ret = avcodec_open2(dec_ctx, dec, &opts);
av_dict_free(&opts);
if (ret < 0) {
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
@@ -133,7 +121,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
int main(int argc, char **argv)
{
int ret = 0;
AVPacket *pkt = NULL;
AVPacket pkt = { 0 };
if (argc != 2) {
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
@@ -168,20 +156,13 @@ int main(int argc, char **argv)
goto end;
}
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
ret = AVERROR(ENOMEM);
goto end;
}
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags,motion_x,motion_y,motion_scale\n");
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
/* read frames from the file */
while (av_read_frame(fmt_ctx, pkt) >= 0) {
if (pkt->stream_index == video_stream_idx)
ret = decode_packet(pkt);
av_packet_unref(pkt);
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
if (pkt.stream_index == video_stream_idx)
ret = decode_packet(&pkt);
av_packet_unref(&pkt);
if (ret < 0)
break;
}
@@ -193,6 +174,5 @@ end:
avcodec_free_context(&video_dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_packet_free(&pkt);
return ret < 0;
}

View File

@@ -19,11 +19,13 @@
*/
/**
* @file libavfilter audio filtering API usage example
* @example filter_audio.c
* @file
* libavfilter API usage example.
*
* This example will generate a sine wave audio, pass it through a simple filter
* chain, and then compute the MD5 checksum of the output data.
* @example filter_audio.c
* This example will generate a sine wave audio,
* pass it through a simple filter chain, and then compute the MD5 checksum of
* the output data.
*
* The filter chain it uses is:
* (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
@@ -41,19 +43,19 @@
#include <stdio.h>
#include <stdlib.h>
#include <libavutil/channel_layout.h>
#include <libavutil/md5.h>
#include <libavutil/mem.h>
#include <libavutil/opt.h>
#include <libavutil/samplefmt.h>
#include "libavutil/channel_layout.h"
#include "libavutil/md5.h"
#include "libavutil/mem.h"
#include "libavutil/opt.h"
#include "libavutil/samplefmt.h"
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#define INPUT_SAMPLERATE 48000
#define INPUT_FORMAT AV_SAMPLE_FMT_FLTP
#define INPUT_CHANNEL_LAYOUT (AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT0
#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0
#define VOLUME_VAL 0.90
@@ -98,7 +100,7 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
}
/* Set the filter options through the AVOptions API. */
av_channel_layout_describe(&INPUT_CHANNEL_LAYOUT, ch_layout, sizeof(ch_layout));
av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);
av_opt_set (abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
av_opt_set (abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);
av_opt_set_q (abuffer_ctx, "time_base", (AVRational){ 1, INPUT_SAMPLERATE }, AV_OPT_SEARCH_CHILDREN);
@@ -152,8 +154,9 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
/* A third way of passing the options is in a string of the form
* key1=value1:key2=value2.... */
snprintf(options_str, sizeof(options_str),
"sample_fmts=%s:sample_rates=%d:channel_layouts=stereo",
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100);
"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%"PRIx64,
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,
(uint64_t)AV_CH_LAYOUT_STEREO);
err = avfilter_init_str(aformat_ctx, options_str);
if (err < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not initialize the aformat filter.\n");
@@ -212,7 +215,7 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
static int process_output(struct AVMD5 *md5, AVFrame *frame)
{
int planar = av_sample_fmt_is_planar(frame->format);
int channels = frame->ch_layout.nb_channels;
int channels = av_get_channel_layout_nb_channels(frame->channel_layout);
int planes = planar ? channels : 1;
int bps = av_get_bytes_per_sample(frame->format);
int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);
@@ -245,7 +248,7 @@ static int get_input(AVFrame *frame, int frame_num)
/* Set up the frame properties and allocate the buffer for the data. */
frame->sample_rate = INPUT_SAMPLERATE;
frame->format = INPUT_FORMAT;
av_channel_layout_copy(&frame->ch_layout, &INPUT_CHANNEL_LAYOUT);
frame->channel_layout = INPUT_CHANNEL_LAYOUT;
frame->nb_samples = FRAME_SIZE;
frame->pts = frame_num * FRAME_SIZE;
@@ -270,6 +273,7 @@ int main(int argc, char *argv[])
AVFilterGraph *graph;
AVFilterContext *src, *sink;
AVFrame *frame;
uint8_t errstr[1024];
float duration;
int err, nb_frames, i;
@@ -294,7 +298,6 @@ int main(int argc, char *argv[])
md5 = av_md5_alloc();
if (!md5) {
av_frame_free(&frame);
fprintf(stderr, "Error allocating the MD5 context\n");
return 1;
}
@@ -302,10 +305,8 @@ int main(int argc, char *argv[])
/* Set up the filtergraph. */
err = init_filter_graph(&graph, &src, &sink);
if (err < 0) {
av_frame_free(&frame);
av_freep(&md5);
fprintf(stderr, "Unable to init filter graph:");
return 1;
goto fail;
}
/* the main filtering loop */
@@ -356,10 +357,7 @@ int main(int argc, char *argv[])
return 0;
fail:
avfilter_graph_free(&graph);
av_frame_free(&frame);
av_freep(&md5);
fprintf(stderr, "%s\n", av_err2str(err));
av_strerror(err, errstr, sizeof(errstr));
fprintf(stderr, "%s\n", errstr);
return 1;
}

View File

@@ -0,0 +1,292 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
* Copyright (c) 2012 Clément Bœsch
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for audio decoding and filtering
* @example filtering_audio.c
*/
#include <unistd.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
static const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
static const char *player = "ffplay -f s16le -ar 8000 -ac 1 -";
static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
static int audio_stream_index = -1;
static int open_input_file(const char *filename)
{
int ret;
AVCodec *dec;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
/* select the audio stream */
ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find an audio stream in the input file\n");
return ret;
}
audio_stream_index = ret;
/* create decoding context */
dec_ctx = avcodec_alloc_context3(dec);
if (!dec_ctx)
return AVERROR(ENOMEM);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[audio_stream_index]->codecpar);
/* init the audio decoder */
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
return ret;
}
return 0;
}
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
const AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
const AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
static const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };
static const int out_sample_rates[] = { 8000, -1 };
const AVFilterLink *outlink;
AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer audio source: the decoded frames from the decoder will be inserted here. */
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout = av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
time_base.num, time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt), dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, abuffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
/* buffer audio sink: to terminate the filter chain. */
ret = avfilter_graph_create_filter(&buffersink_ctx, abuffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_int_list(buffersink_ctx, "sample_fmts", out_sample_fmts, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_int_list(buffersink_ctx, "sample_rates", out_sample_rates, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
/*
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Print summary of the sink buffer
* Note: args buffer is reused to store channel layout string */
outlink = buffersink_ctx->inputs[0];
av_get_channel_layout_string(args, sizeof(args), -1, outlink->channel_layout);
av_log(NULL, AV_LOG_INFO, "Output: srate:%dHz fmt:%s chlayout:%s\n",
(int)outlink->sample_rate,
(char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
args);
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
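/* Editor's sketch, not part of the original file: the summary above reads
 * buffersink_ctx->inputs[0] directly; the public av_buffersink_get_*()
 * accessors from libavfilter/buffersink.h expose the same data without
 * touching the link internals. */
static void print_sink_summary(const AVFilterContext *sink)
{
    char layout[64];
    av_get_channel_layout_string(layout, sizeof(layout), -1,
                                 av_buffersink_get_channel_layout(sink));
    av_log(NULL, AV_LOG_INFO, "Output: srate:%dHz fmt:%s chlayout:%s\n",
           av_buffersink_get_sample_rate(sink),
           av_get_sample_fmt_name(av_buffersink_get_format(sink)),
           layout);
}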
static void print_frame(const AVFrame *frame)
{
const int n = frame->nb_samples * av_get_channel_layout_nb_channels(frame->channel_layout);
const uint16_t *p = (uint16_t*)frame->data[0];
const uint16_t *p_end = p + n;
while (p < p_end) {
fputc(*p & 0xff, stdout);
fputc(*p>>8 & 0xff, stdout);
p++;
}
fflush(stdout);
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet;
AVFrame *frame = av_frame_alloc();
AVFrame *filt_frame = av_frame_alloc();
if (!frame || !filt_frame) {
perror("Could not allocate frame");
exit(1);
}
if (argc != 2) {
fprintf(stderr, "Usage: %s file | %s\n", argv[0], player);
exit(1);
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = init_filters(filter_descr)) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
if (packet.stream_index == audio_stream_index) {
ret = avcodec_send_packet(dec_ctx, &packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
}
while (ret >= 0) {
ret = avcodec_receive_frame(dec_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
} else if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
goto end;
}
if (ret >= 0) {
/* push the audio data from decoded frame into the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the audio filtergraph\n");
break;
}
/* pull filtered audio from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
print_frame(filt_frame);
av_frame_unref(filt_frame);
}
av_frame_unref(frame);
}
}
}
av_packet_unref(&packet);
}
end:
avfilter_graph_free(&filter_graph);
avcodec_free_context(&dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
exit(1);
}
exit(0);
}

View File

@@ -0,0 +1,291 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for decoding and filtering
* @example filtering_video.c
*/
#define _XOPEN_SOURCE 600 /* for usleep */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
const char *filter_descr = "scale=78:24,transpose=cclock";
/* other way:
scale=78:24 [scl]; [scl] transpose=cclock // assumes "[in]" and "[out]" to be input and output pads respectively
*/
static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
static int video_stream_index = -1;
static int64_t last_pts = AV_NOPTS_VALUE;
static int open_input_file(const char *filename)
{
int ret;
AVCodec *dec;
if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
/* select the video stream */
ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
return ret;
}
video_stream_index = ret;
/* create decoding context */
dec_ctx = avcodec_alloc_context3(dec);
if (!dec_ctx)
return AVERROR(ENOMEM);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
/* init the video decoder */
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
return ret;
}
return 0;
}
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
const AVFilter *buffersrc = avfilter_get_by_name("buffer");
const AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
time_base.num, time_base.den,
dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
/* buffer video sink: to terminate the filter chain. */
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
/*
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static void display_frame(const AVFrame *frame, AVRational time_base)
{
int x, y;
uint8_t *p0, *p;
int64_t delay;
if (frame->pts != AV_NOPTS_VALUE) {
if (last_pts != AV_NOPTS_VALUE) {
/* sleep roughly the right amount of time;
* usleep is in microseconds, just like AV_TIME_BASE. */
delay = av_rescale_q(frame->pts - last_pts,
time_base, AV_TIME_BASE_Q);
if (delay > 0 && delay < 1000000)
usleep(delay);
}
last_pts = frame->pts;
}
/* Trivial ASCII grayscale display. */
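/* 256/5 is roughly 51, so dividing the 8-bit luma value by 52 always yields
 * an index in 0..4 into the five-character ramp " .-+#" used below. */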
p0 = frame->data[0];
puts("\033c");
for (y = 0; y < frame->height; y++) {
p = p0;
for (x = 0; x < frame->width; x++)
putchar(" .-+#"[*(p++) / 52]);
putchar('\n');
p0 += frame->linesize[0];
}
fflush(stdout);
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet;
AVFrame *frame;
AVFrame *filt_frame;
if (argc != 2) {
fprintf(stderr, "Usage: %s file\n", argv[0]);
exit(1);
}
frame = av_frame_alloc();
filt_frame = av_frame_alloc();
if (!frame || !filt_frame) {
perror("Could not allocate frame");
exit(1);
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = init_filters(filter_descr)) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
if (packet.stream_index == video_stream_index) {
ret = avcodec_send_packet(dec_ctx, &packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
break;
}
while (ret >= 0) {
ret = avcodec_receive_frame(dec_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
} else if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
goto end;
}
frame->pts = frame->best_effort_timestamp;
/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
break;
}
/* pull filtered frames from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
goto end;
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
av_frame_unref(filt_frame);
}
av_frame_unref(frame);
}
}
av_packet_unref(&packet);
}
end:
avfilter_graph_free(&filter_graph);
avcodec_free_context(&dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
exit(1);
}
exit(0);
}

View File

@@ -0,0 +1,156 @@
/*
* Copyright (c) 2015 Stephan Holljes
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat multi-client network API usage example.
*
* @example http_multiclient.c
* This example will serve a file over HTTP without decoding or demuxing it.
* Multiple clients can connect and will receive the same file.
*/
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <unistd.h>
static void process_client(AVIOContext *client, const char *in_uri)
{
AVIOContext *input = NULL;
uint8_t buf[1024];
int ret, n, reply_code;
uint8_t *resource = NULL;
while ((ret = avio_handshake(client)) > 0) {
av_opt_get(client, "resource", AV_OPT_SEARCH_CHILDREN, &resource);
// The check for strlen(resource) is necessary because av_opt_get()
// may return an empty string.
if (resource && strlen(resource))
break;
av_freep(&resource);
}
if (ret < 0)
goto end;
av_log(client, AV_LOG_TRACE, "resource=%p\n", resource);
if (resource && resource[0] == '/' && !strcmp((resource + 1), in_uri)) {
reply_code = 200;
} else {
reply_code = AVERROR_HTTP_NOT_FOUND;
}
if ((ret = av_opt_set_int(client, "reply_code", reply_code, AV_OPT_SEARCH_CHILDREN)) < 0) {
av_log(client, AV_LOG_ERROR, "Failed to set reply_code: %s.\n", av_err2str(ret));
goto end;
}
av_log(client, AV_LOG_TRACE, "Set reply code to %d\n", reply_code);
while ((ret = avio_handshake(client)) > 0);
if (ret < 0)
goto end;
fprintf(stderr, "Handshake performed.\n");
if (reply_code != 200)
goto end;
fprintf(stderr, "Opening input file.\n");
if ((ret = avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) < 0) {
av_log(input, AV_LOG_ERROR, "Failed to open input: %s: %s.\n", in_uri,
av_err2str(ret));
goto end;
}
for(;;) {
n = avio_read(input, buf, sizeof(buf));
if (n < 0) {
if (n == AVERROR_EOF)
break;
av_log(input, AV_LOG_ERROR, "Error reading from input: %s.\n",
av_err2str(n));
break;
}
avio_write(client, buf, n);
avio_flush(client);
}
end:
fprintf(stderr, "Flushing client\n");
avio_flush(client);
fprintf(stderr, "Closing client\n");
avio_close(client);
fprintf(stderr, "Closing input\n");
avio_close(input);
av_freep(&resource);
}
int main(int argc, char **argv)
{
AVDictionary *options = NULL;
AVIOContext *client = NULL, *server = NULL;
const char *in_uri, *out_uri;
int ret, pid;
av_log_set_level(AV_LOG_TRACE);
if (argc < 3) {
printf("usage: %s input http://hostname[:port]\n"
"API example program to serve http to multiple clients.\n"
"\n", argv[0]);
return 1;
}
in_uri = argv[1];
out_uri = argv[2];
avformat_network_init();
if ((ret = av_dict_set(&options, "listen", "2", 0)) < 0) {
fprintf(stderr, "Failed to set listen mode for server: %s\n", av_err2str(ret));
return ret;
}
if ((ret = avio_open2(&server, out_uri, AVIO_FLAG_WRITE, NULL, &options)) < 0) {
fprintf(stderr, "Failed to open server: %s\n", av_err2str(ret));
return ret;
}
fprintf(stderr, "Entering main loop.\n");
for(;;) {
if ((ret = avio_accept(server, &client)) < 0)
goto end;
fprintf(stderr, "Accepted client, forking process.\n");
// XXX: Since we don't reap our children and don't ignore signals
// this produces zombie processes.
pid = fork();
if (pid < 0) {
perror("Fork failed");
ret = AVERROR(errno);
goto end;
}
if (pid == 0) {
fprintf(stderr, "In child.\n");
process_client(client, in_uri);
avio_close(server);
exit(0);
}
if (pid > 0)
avio_close(client);
}
end:
avio_close(server);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Some errors occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
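
Since the server above speaks plain HTTP, any libavformat-based client can fetch the served file. A minimal sketch of such a client, assuming the server from this example is already listening; the output filename is arbitrary:

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/error.h>

int main(int argc, char **argv)
{
    AVIOContext *in = NULL;
    FILE *out;
    uint8_t buf[4096];
    int n, ret;

    if (argc < 3) {
        fprintf(stderr, "usage: %s http://hostname[:port]/resource output_file\n", argv[0]);
        return 1;
    }
    avformat_network_init();

    if ((ret = avio_open(&in, argv[1], AVIO_FLAG_READ)) < 0) {
        fprintf(stderr, "Failed to open %s: %s\n", argv[1], av_err2str(ret));
        return 1;
    }
    out = fopen(argv[2], "wb");
    if (!out) {
        avio_closep(&in);
        return 1;
    }
    /* copy until the server closes the connection */
    while ((n = avio_read(in, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, out);
    if (n < 0 && n != AVERROR_EOF)
        fprintf(stderr, "Read error: %s\n", av_err2str(n));

    fclose(out);
    avio_closep(&in);
    avformat_network_deinit();
    return 0;
}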


@@ -24,18 +24,18 @@
*/
/**
* @file HW-accelerated decoding API usage example
* @example hw_decode.c
* @file
* HW-Accelerated decoding example.
*
* Perform HW-accelerated decoding with output frames from HW video
* surfaces.
* @example hw_decode.c
* This example shows how to do HW-accelerated decoding with output
* frames from the HW video surfaces.
*/
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <libavutil/pixdesc.h>
#include <libavutil/hwcontext.h>
#include <libavutil/opt.h>
@@ -152,8 +152,8 @@ int main(int argc, char *argv[])
int video_stream, ret;
AVStream *video = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder = NULL;
AVPacket *packet = NULL;
AVCodec *decoder = NULL;
AVPacket packet;
enum AVHWDeviceType type;
int i;
@@ -172,12 +172,6 @@ int main(int argc, char *argv[])
return -1;
}
packet = av_packet_alloc();
if (!packet) {
fprintf(stderr, "Failed to allocate AVPacket\n");
return -1;
}
/* open the input file */
if (avformat_open_input(&input_ctx, argv[2], NULL, NULL) != 0) {
fprintf(stderr, "Cannot open input file '%s'\n", argv[2]);
@@ -229,25 +223,27 @@ int main(int argc, char *argv[])
}
/* open the file to dump raw data */
output_file = fopen(argv[3], "w+b");
output_file = fopen(argv[3], "w+");
/* actual decoding and dump the raw data */
while (ret >= 0) {
if ((ret = av_read_frame(input_ctx, packet)) < 0)
if ((ret = av_read_frame(input_ctx, &packet)) < 0)
break;
if (video_stream == packet->stream_index)
ret = decode_write(decoder_ctx, packet);
if (video_stream == packet.stream_index)
ret = decode_write(decoder_ctx, &packet);
av_packet_unref(packet);
av_packet_unref(&packet);
}
/* flush the decoder */
ret = decode_write(decoder_ctx, NULL);
packet.data = NULL;
packet.size = 0;
ret = decode_write(decoder_ctx, &packet);
av_packet_unref(&packet);
if (output_file)
fclose(output_file);
av_packet_free(&packet);
avcodec_free_context(&decoder_ctx);
avformat_close_input(&input_ctx);
av_buffer_unref(&hw_device_ctx);
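
The hunk above moves between a stack-allocated AVPacket flushed with a blank packet and a heap AVPacket flushed by sending NULL; both variants end up in the same drain loop. A sketch of that drain, assuming an already opened decoder context; handle_frame() is a made-up placeholder consumer, not part of the example:

#include <stdio.h>
#include <libavcodec/avcodec.h>

static void handle_frame(const AVFrame *frame)
{
    /* placeholder: a real program would display or store the frame */
    printf("decoded frame %dx%d\n", frame->width, frame->height);
}

/* Send the flush packet (NULL) and keep receiving frames until the
 * decoder reports AVERROR_EOF, i.e. it has been fully drained. */
static int flush_decoder(AVCodecContext *dec, AVFrame *frame)
{
    int ret = avcodec_send_packet(dec, NULL);
    if (ret < 0)
        return ret;
    while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
        handle_frame(frame);
        av_frame_unref(frame);
    }
    return ret == AVERROR_EOF ? 0 : ret;
}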

55
doc/examples/metadata.c Normal file

@@ -0,0 +1,55 @@
/*
* Copyright (c) 2011 Reinhard Tartler
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* Shows how the metadata API can be used in application programs.
* @example metadata.c
*/
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
int main (int argc, char **argv)
{
AVFormatContext *fmt_ctx = NULL;
AVDictionaryEntry *tag = NULL;
int ret;
if (argc != 2) {
printf("usage: %s <input_file>\n"
"example program to demonstrate the use of the libavformat metadata API.\n"
"\n", argv[0]);
return 1;
}
if ((ret = avformat_open_input(&fmt_ctx, argv[1], NULL, NULL)))
return ret;
while ((tag = av_dict_get(fmt_ctx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX)))
printf("%s=%s\n", tag->key, tag->value);
avformat_close_input(&fmt_ctx);
return 0;
}
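
The loop above walks every tag using the empty key and AV_DICT_IGNORE_SUFFIX; looking up one specific tag, or attaching tags of your own, uses the same dictionary API. A small self-contained sketch, assuming nothing beyond libavutil:

#include <stdio.h>
#include <libavutil/dict.h>

int main(void)
{
    AVDictionary *meta = NULL;
    AVDictionaryEntry *e;

    /* av_dict_set() copies both key and value, so stack strings are fine */
    av_dict_set(&meta, "title",  "Example title", 0);
    av_dict_set(&meta, "artist", "Example artist", 0);

    /* exact-match lookup of a single key */
    e = av_dict_get(meta, "title", NULL, 0);
    if (e)
        printf("%s=%s\n", e->key, e->value);

    av_dict_free(&meta);
    return 0;
}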


@@ -1,643 +0,0 @@
/*
* Copyright (c) 2003 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat muxing API usage example
* @example mux.c
*
* Generate a synthetic audio and video signal and mux them to a media file in
* any supported libavformat format. The default codecs are used.
*/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/timestamp.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 10.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
AVPacket *tmp_pkt;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
AVStream *st, AVFrame *frame, AVPacket *pkt)
{
int ret;
// send the frame to the encoder
ret = avcodec_send_frame(c, frame);
if (ret < 0) {
fprintf(stderr, "Error sending a frame to the encoder: %s\n",
av_err2str(ret));
exit(1);
}
while (ret >= 0) {
ret = avcodec_receive_packet(c, pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
else if (ret < 0) {
fprintf(stderr, "Error encoding a frame: %s\n", av_err2str(ret));
exit(1);
}
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, c->time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
ret = av_interleaved_write_frame(fmt_ctx, pkt);
/* pkt is now blank (av_interleaved_write_frame() takes ownership of
* its contents and resets pkt), so that no unreferencing is necessary.
* This would be different if one used av_write_frame(). */
if (ret < 0) {
fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret));
exit(1);
}
}
return ret == AVERROR_EOF ? 1 : 0;
}
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
const AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
int i;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
ost->tmp_pkt = av_packet_alloc();
if (!ost->tmp_pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
exit(1);
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
ost->st->id = oc->nb_streams-1;
c = avcodec_alloc_context3(*codec);
if (!c) {
fprintf(stderr, "Could not alloc an encoding context\n");
exit(1);
}
ost->enc = c;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
if ((*codec)->supported_samplerates) {
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
av_channel_layout_copy(&c->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break;
case AVMEDIA_TYPE_VIDEO:
c->codec_id = codec_id;
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = 352;
c->height = 288;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
c->time_base = ost->st->time_base;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B-frames */
c->max_b_frames = 2;
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
/* Needed to avoid using macroblocks in which some coeffs overflow.
* This does not happen with normal video, it just happens here as
* the motion of the chroma plane does not match the luma plane. */
c->mb_decision = 2;
}
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
/**************************************************************/
/* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
const AVChannelLayout *channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Error allocating an audio frame\n");
exit(1);
}
frame->format = sample_fmt;
av_channel_layout_copy(&frame->ch_layout, channel_layout);
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
if (nb_samples) {
if (av_frame_get_buffer(frame, 0) < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
}
return frame;
}
static void open_audio(AVFormatContext *oc, const AVCodec *codec,
OutputStream *ost, AVDictionary *opt_arg)
{
AVCodecContext *c;
int nb_samples;
int ret;
AVDictionary *opt = NULL;
c = ost->enc;
/* open it */
av_dict_copy(&opt, opt_arg, 0);
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1);
}
/* init signal generator */
ost->t = 0;
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
nb_samples = 10000;
else
nb_samples = c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, &c->ch_layout,
c->sample_rate, nb_samples);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, &c->ch_layout,
c->sample_rate, nb_samples);
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
/* create resampler context */
ost->swr_ctx = swr_alloc();
if (!ost->swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
exit(1);
}
/* set options */
av_opt_set_chlayout (ost->swr_ctx, "in_chlayout", &c->ch_layout, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_chlayout (ost->swr_ctx, "out_chlayout", &c->ch_layout, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(ost->swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1);
}
}
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
* 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
{
AVFrame *frame = ost->tmp_frame;
int j, i, v;
int16_t *q = (int16_t*)frame->data[0];
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, ost->enc->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
return NULL;
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->enc->ch_layout.nb_channels; i++)
*q++ = v;
ost->t += ost->tincr;
ost->tincr += ost->tincr2;
}
frame->pts = ost->next_pts;
ost->next_pts += frame->nb_samples;
return frame;
}
/*
* encode one audio frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
AVCodecContext *c;
AVFrame *frame;
int ret;
int dst_nb_samples;
c = ost->enc;
frame = get_audio_frame(ost);
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */
/* compute destination number of samples */
dst_nb_samples = swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples;
av_assert0(dst_nb_samples == frame->nb_samples);
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(ost->frame);
if (ret < 0)
exit(1);
/* convert to destination format */
ret = swr_convert(ost->swr_ctx,
ost->frame->data, dst_nb_samples,
(const uint8_t **)frame->data, frame->nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
exit(1);
}
frame = ost->frame;
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
ost->samples_count += dst_nb_samples;
}
return write_frame(oc, c, ost->st, frame, ost->tmp_pkt);
}
/**************************************************************/
/* video output */
static AVFrame *alloc_frame(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *frame;
int ret;
frame = av_frame_alloc();
if (!frame)
return NULL;
frame->format = pix_fmt;
frame->width = width;
frame->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return frame;
}
static void open_video(AVFormatContext *oc, const AVCodec *codec,
OutputStream *ost, AVDictionary *opt_arg)
{
int ret;
AVCodecContext *c = ost->enc;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
/* open the codec */
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a reusable frame */
ost->frame = alloc_frame(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_frame(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary video frame\n");
exit(1);
}
}
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
}
/* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index,
int width, int height)
{
int x, y, i;
i = frame_index;
/* Y */
for (y = 0; y < height; y++)
for (x = 0; x < width; x++)
pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
/* Cb and Cr */
for (y = 0; y < height / 2; y++) {
for (x = 0; x < width / 2; x++) {
pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
}
}
}
static AVFrame *get_video_frame(OutputStream *ost)
{
AVCodecContext *c = ost->enc;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
return NULL;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally; make sure we do not overwrite it here */
if (av_frame_make_writable(ost->frame) < 0)
exit(1);
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!ost->sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_YUV420P,
c->width, c->height,
c->pix_fmt,
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
ost->frame->linesize);
} else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
}
ost->frame->pts = ost->next_pts++;
return ost->frame;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
return write_frame(oc, ost->enc, ost->st, get_video_frame(ost), ost->tmp_pkt);
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
{
avcodec_free_context(&ost->enc);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
av_packet_free(&ost->tmp_pkt);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
}
/**************************************************************/
/* media file output */
int main(int argc, char **argv)
{
OutputStream video_st = { 0 }, audio_st = { 0 };
const AVOutputFormat *fmt;
const char *filename;
AVFormatContext *oc;
const AVCodec *audio_codec, *video_codec;
int ret;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
int i;
if (argc < 2) {
printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n"
"muxes them into a file named output_file.\n"
"The output format is automatically guessed according to the file extension.\n"
"Raw images can also be output by using '%%d' in the filename.\n"
"\n", argv[0]);
return 1;
}
filename = argv[1];
for (i = 2; i+1 < argc; i+=2) {
if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
}
/* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
if (!oc) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
}
if (!oc)
return 1;
fmt = oc->oformat;
/* Add the audio and video streams using the default format codecs
* and initialize the codecs. */
if (fmt->video_codec != AV_CODEC_ID_NONE) {
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
have_video = 1;
encode_video = 1;
}
if (fmt->audio_codec != AV_CODEC_ID_NONE) {
add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
have_audio = 1;
encode_audio = 1;
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (have_video)
open_video(oc, video_codec, &video_st, opt);
if (have_audio)
open_audio(oc, audio_codec, &audio_st, opt);
av_dump_format(oc, 0, filename, 1);
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open '%s': %s\n", filename,
av_err2str(ret));
return 1;
}
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret));
return 1;
}
while (encode_video || encode_audio) {
/* select the stream to encode */
if (encode_video &&
(!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,
audio_st.next_pts, audio_st.enc->time_base) <= 0)) {
encode_video = !write_video_frame(oc, &video_st);
} else {
encode_audio = !write_audio_frame(oc, &audio_st);
}
}
av_write_trailer(oc);
/* Close each codec. */
if (have_video)
close_stream(oc, &video_st);
if (have_audio)
close_stream(oc, &audio_st);
if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_closep(&oc->pb);
/* free the stream */
avformat_free_context(oc);
return 0;
}
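
The main loop above always encodes whichever stream has the earlier next timestamp; av_compare_ts() makes that comparison valid even though the audio and video streams count time in different units. The decision in isolation, with arbitrary example numbers:

#include <stdio.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>

int main(void)
{
    /* video at 25 fps, audio at 44100 Hz: incompatible tick sizes */
    AVRational video_tb = { 1, 25 };
    AVRational audio_tb = { 1, 44100 };
    int64_t video_pts = 50;     /* 2.000 s of video generated so far */
    int64_t audio_pts = 88199;  /* just under 2 s of audio generated so far */

    /* negative: first ts earlier, 0: equal, positive: first ts later */
    int cmp = av_compare_ts(video_pts, video_tb, audio_pts, audio_tb);
    printf("encode %s next\n", cmp <= 0 ? "video" : "audio");
    return 0;
}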

667
doc/examples/muxing.c Normal file

@@ -0,0 +1,667 @@
/*
* Copyright (c) 2003 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat API example.
*
* Output a media file in any supported libavformat format. The default
* codecs are used.
* @example muxing.c
*/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 10.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
return av_interleaved_write_frame(fmt_ctx, pkt);
}
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
int i;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
ost->st->id = oc->nb_streams-1;
c = avcodec_alloc_context3(*codec);
if (!c) {
fprintf(stderr, "Could not alloc an encoding context\n");
exit(1);
}
ost->enc = c;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
if ((*codec)->supported_samplerates) {
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break;
case AVMEDIA_TYPE_VIDEO:
c->codec_id = codec_id;
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = 352;
c->height = 288;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
c->time_base = ost->st->time_base;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B-frames */
c->max_b_frames = 2;
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
/* Needed to avoid using macroblocks in which some coeffs overflow.
* This does not happen with normal video, it just happens here as
* the motion of the chroma plane does not match the luma plane. */
c->mb_decision = 2;
}
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
/**************************************************************/
/* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
uint64_t channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
int ret;
if (!frame) {
fprintf(stderr, "Error allocating an audio frame\n");
exit(1);
}
frame->format = sample_fmt;
frame->channel_layout = channel_layout;
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
if (nb_samples) {
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
}
return frame;
}
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
AVCodecContext *c;
int nb_samples;
int ret;
AVDictionary *opt = NULL;
c = ost->enc;
/* open it */
av_dict_copy(&opt, opt_arg, 0);
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1);
}
/* init signal generator */
ost->t = 0;
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
nb_samples = 10000;
else
nb_samples = c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
c->sample_rate, nb_samples);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
c->sample_rate, nb_samples);
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
/* create resampler context */
ost->swr_ctx = swr_alloc();
if (!ost->swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
exit(1);
}
/* set options */
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(ost->swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1);
}
}
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
* 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
{
AVFrame *frame = ost->tmp_frame;
int j, i, v;
int16_t *q = (int16_t*)frame->data[0];
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, ost->enc->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->enc->channels; i++)
*q++ = v;
ost->t += ost->tincr;
ost->tincr += ost->tincr2;
}
frame->pts = ost->next_pts;
ost->next_pts += frame->nb_samples;
return frame;
}
/*
* encode one audio frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0;
AVFrame *frame;
int ret;
int got_packet;
int dst_nb_samples;
av_init_packet(&pkt);
c = ost->enc;
frame = get_audio_frame(ost);
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP);
av_assert0(dst_nb_samples == frame->nb_samples);
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(ost->frame);
if (ret < 0)
exit(1);
/* convert to destination format */
ret = swr_convert(ost->swr_ctx,
ost->frame->data, dst_nb_samples,
(const uint8_t **)frame->data, frame->nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
exit(1);
}
frame = ost->frame;
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
ost->samples_count += dst_nb_samples;
}
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
}
return (frame || got_packet) ? 0 : 1;
}
/**************************************************************/
/* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *picture;
int ret;
picture = av_frame_alloc();
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
int ret;
AVCodecContext *c = ost->enc;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
/* open the codec */
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary picture\n");
exit(1);
}
}
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
}
/* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index,
int width, int height)
{
int x, y, i;
i = frame_index;
/* Y */
for (y = 0; y < height; y++)
for (x = 0; x < width; x++)
pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
/* Cb and Cr */
for (y = 0; y < height / 2; y++) {
for (x = 0; x < width / 2; x++) {
pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
}
}
}
static AVFrame *get_video_frame(OutputStream *ost)
{
AVCodecContext *c = ost->enc;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally; make sure we do not overwrite it here */
if (av_frame_make_writable(ost->frame) < 0)
exit(1);
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!ost->sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_YUV420P,
c->width, c->height,
c->pix_fmt,
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
ost->frame->linesize);
} else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
}
ost->frame->pts = ost->next_pts++;
return ost->frame;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
AVPacket pkt = { 0 };
c = ost->enc;
frame = get_video_frame(ost);
av_init_packet(&pkt);
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
} else {
ret = 0;
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
return (frame || got_packet) ? 0 : 1;
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
{
avcodec_free_context(&ost->enc);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
}
/**************************************************************/
/* media file output */
int main(int argc, char **argv)
{
OutputStream video_st = { 0 }, audio_st = { 0 };
const char *filename;
AVOutputFormat *fmt;
AVFormatContext *oc;
AVCodec *audio_codec, *video_codec;
int ret;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
int i;
if (argc < 2) {
printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n"
"muxes them into a file named output_file.\n"
"The output format is automatically guessed according to the file extension.\n"
"Raw images can also be output by using '%%d' in the filename.\n"
"\n", argv[0]);
return 1;
}
filename = argv[1];
for (i = 2; i+1 < argc; i+=2) {
if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
}
/* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
if (!oc) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
}
if (!oc)
return 1;
fmt = oc->oformat;
/* Add the audio and video streams using the default format codecs
* and initialize the codecs. */
if (fmt->video_codec != AV_CODEC_ID_NONE) {
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
have_video = 1;
encode_video = 1;
}
if (fmt->audio_codec != AV_CODEC_ID_NONE) {
add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
have_audio = 1;
encode_audio = 1;
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (have_video)
open_video(oc, video_codec, &video_st, opt);
if (have_audio)
open_audio(oc, audio_codec, &audio_st, opt);
av_dump_format(oc, 0, filename, 1);
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open '%s': %s\n", filename,
av_err2str(ret));
return 1;
}
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret));
return 1;
}
while (encode_video || encode_audio) {
/* select the stream to encode */
if (encode_video &&
(!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,
audio_st.next_pts, audio_st.enc->time_base) <= 0)) {
encode_video = !write_video_frame(oc, &video_st);
} else {
encode_audio = !write_audio_frame(oc, &audio_st);
}
}
/* Write the trailer, if any. The trailer must be written before you
* close the CodecContexts open when you wrote the header; otherwise
* av_write_trailer() may try to use memory that was freed on
* av_codec_close(). */
av_write_trailer(oc);
/* Close each codec. */
if (have_video)
close_stream(oc, &video_st);
if (have_audio)
close_stream(oc, &audio_st);
if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_closep(&oc->pb);
/* free the stream */
avformat_free_context(oc);
return 0;
}
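
fill_yuv_image() above indexes each plane through linesize rather than width, because av_frame_get_buffer() may pad rows for alignment. The same pattern in a self-contained sketch that fills a frame with flat grey; make_grey_frame() is a made-up helper, not part of the example:

#include <libavutil/frame.h>

/* Fill a YUV420P frame with mid-grey, stepping through each plane by
 * linesize: the buffer allocator is free to pad every row. */
static int make_grey_frame(AVFrame *frame, int width, int height)
{
    int ret, x, y;

    frame->format = AV_PIX_FMT_YUV420P;
    frame->width  = width;
    frame->height = height;
    if ((ret = av_frame_get_buffer(frame, 0)) < 0)
        return ret;

    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            frame->data[0][y * frame->linesize[0] + x] = 128;
    /* chroma planes are subsampled by two in both directions */
    for (y = 0; y < height / 2; y++)
        for (x = 0; x < width / 2; x++) {
            frame->data[1][y * frame->linesize[1] + x] = 128;
            frame->data[2][y * frame->linesize[2] + x] = 128;
        }
    return 0;
}

int main(void)
{
    AVFrame *f = av_frame_alloc();
    int ret = f ? make_grey_frame(f, 352, 288) : -1;
    av_frame_free(&f);
    return ret < 0;
}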


@@ -1,238 +0,0 @@
/*
* Copyright (c) 2015 Anton Khirnov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file Intel QSV-accelerated H.264 decoding API usage example
* @example qsv_decode.c
*
* Perform QSV-accelerated H.264 decoding with output frames in the
* GPU video surfaces, write the decoded frames to an output file.
*/
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/buffer.h>
#include <libavutil/error.h>
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_qsv.h>
#include <libavutil/mem.h>
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int decode_packet(AVCodecContext *decoder_ctx,
AVFrame *frame, AVFrame *sw_frame,
AVPacket *pkt, AVIOContext *output_ctx)
{
int ret = 0;
ret = avcodec_send_packet(decoder_ctx, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
while (ret >= 0) {
int i, j;
ret = avcodec_receive_frame(decoder_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
else if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
/* A real program would do something useful with the decoded frame here.
* We just retrieve the raw data and write it to a file, which is rather
* useless but pedagogic. */
ret = av_hwframe_transfer_data(sw_frame, frame, 0);
if (ret < 0) {
fprintf(stderr, "Error transferring the data to system memory\n");
goto fail;
}
for (i = 0; i < FF_ARRAY_ELEMS(sw_frame->data) && sw_frame->data[i]; i++)
for (j = 0; j < (sw_frame->height >> (i > 0)); j++)
avio_write(output_ctx, sw_frame->data[i] + j * sw_frame->linesize[i], sw_frame->width);
fail:
av_frame_unref(sw_frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
}
return 0;
}
int main(int argc, char **argv)
{
AVFormatContext *input_ctx = NULL;
AVStream *video_st = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder;
AVPacket *pkt = NULL;
AVFrame *frame = NULL, *sw_frame = NULL;
AVIOContext *output_ctx = NULL;
int ret, i;
AVBufferRef *device_ref = NULL;
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
/* open the input file */
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
goto finish;
}
/* find the first H.264 video stream */
for (i = 0; i < input_ctx->nb_streams; i++) {
AVStream *st = input_ctx->streams[i];
if (st->codecpar->codec_id == AV_CODEC_ID_H264 && !video_st)
video_st = st;
else
st->discard = AVDISCARD_ALL;
}
if (!video_st) {
fprintf(stderr, "No H.264 video stream in the input file\n");
goto finish;
}
/* open the hardware device */
ret = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_QSV,
"auto", NULL, 0);
if (ret < 0) {
fprintf(stderr, "Cannot open the hardware device\n");
goto finish;
}
/* initialize the decoder */
decoder = avcodec_find_decoder_by_name("h264_qsv");
if (!decoder) {
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
goto finish;
}
decoder_ctx = avcodec_alloc_context3(decoder);
if (!decoder_ctx) {
ret = AVERROR(ENOMEM);
goto finish;
}
decoder_ctx->codec_id = AV_CODEC_ID_H264;
if (video_st->codecpar->extradata_size) {
decoder_ctx->extradata = av_mallocz(video_st->codecpar->extradata_size +
AV_INPUT_BUFFER_PADDING_SIZE);
if (!decoder_ctx->extradata) {
ret = AVERROR(ENOMEM);
goto finish;
}
memcpy(decoder_ctx->extradata, video_st->codecpar->extradata,
video_st->codecpar->extradata_size);
decoder_ctx->extradata_size = video_st->codecpar->extradata_size;
}
decoder_ctx->hw_device_ctx = av_buffer_ref(device_ref);
decoder_ctx->get_format = get_format;
ret = avcodec_open2(decoder_ctx, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Error opening the decoder: ");
goto finish;
}
/* open the output stream */
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Error opening the output context: ");
goto finish;
}
frame = av_frame_alloc();
sw_frame = av_frame_alloc();
pkt = av_packet_alloc();
if (!frame || !sw_frame || !pkt) {
ret = AVERROR(ENOMEM);
goto finish;
}
/* actual decoding */
while (ret >= 0) {
ret = av_read_frame(input_ctx, pkt);
if (ret < 0)
break;
if (pkt->stream_index == video_st->index)
ret = decode_packet(decoder_ctx, frame, sw_frame, pkt, output_ctx);
av_packet_unref(pkt);
}
/* flush the decoder */
ret = decode_packet(decoder_ctx, frame, sw_frame, NULL, output_ctx);
finish:
if (ret < 0)
fprintf(stderr, "%s\n", av_err2str(ret));
avformat_close_input(&input_ctx);
av_frame_free(&frame);
av_frame_free(&sw_frame);
av_packet_free(&pkt);
avcodec_free_context(&decoder_ctx);
av_buffer_unref(&device_ref);
avio_close(output_ctx);
return ret;
}
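
This decoder hard-codes AV_HWDEVICE_TYPE_QSV; which hardware device types a particular libavutil build actually offers can be listed at run time with av_hwdevice_iterate_types(). A short sketch (the printed names depend entirely on how the library was configured):

#include <stdio.h>
#include <libavutil/hwcontext.h>

int main(void)
{
    enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;

    /* iterate all device types this build of libavutil knows about */
    while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
        printf("%s\n", av_hwdevice_get_type_name(type));
    return 0;
}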


@@ -1,436 +0,0 @@
/*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file Intel QSV-accelerated video transcoding API usage example
* @example qsv_transcode.c
*
* Perform QSV-accelerated transcoding and show how to dynamically change
* the encoder's options.
*
* Usage: qsv_transcode input_stream codec output_stream initial option
* { frame_number new_option }
* e.g: - qsv_transcode input.mp4 h264_qsv output_h264.mp4 "g 60"
* - qsv_transcode input.mp4 hevc_qsv output_hevc.mp4 "g 60 async_depth 1"
* 100 "g 120"
* (initialize codec with gop_size 60 and change it to 120 after 100
* frames)
*/
#include <stdio.h>
#include <errno.h>
#include <libavutil/hwcontext.h>
#include <libavutil/mem.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
static AVBufferRef *hw_device_ctx = NULL;
static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
static int video_stream = -1;
typedef struct DynamicSetting {
int frame_number;
char* optstr;
} DynamicSetting;
static DynamicSetting *dynamic_setting;
static int setting_number;
static int current_setting_number;
static int str_to_dict(char* optstr, AVDictionary **opt)
{
char *key, *value;
if (strlen(optstr) == 0)
return 0;
key = strtok(optstr, " ");
if (key == NULL)
return AVERROR(EINVAL);
value = strtok(NULL, " ");
if (value == NULL)
return AVERROR(EINVAL);
av_dict_set(opt, key, value, 0);
do {
key = strtok(NULL, " ");
if (key == NULL)
return 0;
value = strtok(NULL, " ");
if (value == NULL)
return AVERROR(EINVAL);
av_dict_set(opt, key, value, 0);
} while(1);
}
static int dynamic_set_parameter(AVCodecContext *avctx)
{
AVDictionary *opts = NULL;
int ret = 0;
static int frame_number = 0;
frame_number++;
if (current_setting_number < setting_number &&
frame_number == dynamic_setting[current_setting_number].frame_number) {
AVDictionaryEntry *e = NULL;
ret = str_to_dict(dynamic_setting[current_setting_number++].optstr, &opts);
if (ret < 0) {
fprintf(stderr, "The dynamic parameter is wrong\n");
goto fail;
}
/* Set common option. The dictionary will be freed and replaced
* by a new one containing all options not found in common option list.
* Then this new dictionary is used to set private option. */
if ((ret = av_opt_set_dict(avctx, &opts)) < 0)
goto fail;
/* Set codec specific option */
if ((ret = av_opt_set_dict(avctx->priv_data, &opts)) < 0)
goto fail;
/* There is no "framerate" option in common option list. Use "-r" to set
* framerate, which is compatible with ffmpeg commandline. The video is
* assumed to be average frame rate, so set time_base to 1/framerate. */
e = av_dict_get(opts, "r", NULL, 0);
if (e) {
avctx->framerate = av_d2q(atof(e->value), INT_MAX);
encoder_ctx->time_base = av_inv_q(encoder_ctx->framerate);
}
}
fail:
av_dict_free(&opts);
return ret;
}
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int open_input_file(char *filename)
{
int ret;
const AVCodec *decoder = NULL;
AVStream *video = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
fprintf(stderr, "Cannot open input file '%s', Error code: %s\n",
filename, av_err2str(ret));
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
fprintf(stderr, "Cannot find input stream information. Error code: %s\n",
av_err2str(ret));
return ret;
}
ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Cannot find a video stream in the input file. "
"Error code: %s\n", av_err2str(ret));
return ret;
}
video_stream = ret;
video = ifmt_ctx->streams[video_stream];
switch(video->codecpar->codec_id) {
case AV_CODEC_ID_H264:
decoder = avcodec_find_decoder_by_name("h264_qsv");
break;
case AV_CODEC_ID_HEVC:
decoder = avcodec_find_decoder_by_name("hevc_qsv");
break;
case AV_CODEC_ID_VP9:
decoder = avcodec_find_decoder_by_name("vp9_qsv");
break;
case AV_CODEC_ID_VP8:
decoder = avcodec_find_decoder_by_name("vp8_qsv");
break;
case AV_CODEC_ID_AV1:
decoder = avcodec_find_decoder_by_name("av1_qsv");
break;
case AV_CODEC_ID_MPEG2VIDEO:
decoder = avcodec_find_decoder_by_name("mpeg2_qsv");
break;
case AV_CODEC_ID_MJPEG:
decoder = avcodec_find_decoder_by_name("mjpeg_qsv");
break;
default:
fprintf(stderr, "Codec is not supported by qsv\n");
return AVERROR(EINVAL);
}
if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
return AVERROR(ENOMEM);
if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
fprintf(stderr, "avcodec_parameters_to_context error. Error code: %s\n",
av_err2str(ret));
return ret;
}
decoder_ctx->framerate = av_guess_frame_rate(ifmt_ctx, video, NULL);
decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
if (!decoder_ctx->hw_device_ctx) {
fprintf(stderr, "A hardware device reference create failed.\n");
return AVERROR(ENOMEM);
}
decoder_ctx->get_format = get_format;
decoder_ctx->pkt_timebase = video->time_base;
if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
fprintf(stderr, "Failed to open codec for decoding. Error code: %s\n",
av_err2str(ret));
return ret;
}
static int encode_write(AVPacket *enc_pkt, AVFrame *frame)
{
int ret = 0;
av_packet_unref(enc_pkt);
if((ret = dynamic_set_parameter(encoder_ctx)) < 0) {
fprintf(stderr, "Failed to set dynamic parameter. Error code: %s\n",
av_err2str(ret));
goto end;
}
if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
fprintf(stderr, "Error during encoding. Error code: %s\n", av_err2str(ret));
goto end;
}
while (1) {
if (ret = avcodec_receive_packet(encoder_ctx, enc_pkt))
break;
enc_pkt->stream_index = 0;
av_packet_rescale_ts(enc_pkt, encoder_ctx->time_base,
ofmt_ctx->streams[0]->time_base);
if ((ret = av_interleaved_write_frame(ofmt_ctx, enc_pkt)) < 0) {
fprintf(stderr, "Error during writing data to output file. "
"Error code: %s\n", av_err2str(ret));
return ret;
}
}
end:
if (ret == AVERROR_EOF)
return 0;
ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
return ret;
}
static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec, char *optstr)
{
AVFrame *frame;
int ret = 0;
ret = avcodec_send_packet(decoder_ctx, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding. Error code: %s\n", av_err2str(ret));
return ret;
}
while (ret >= 0) {
if (!(frame = av_frame_alloc()))
return AVERROR(ENOMEM);
ret = avcodec_receive_frame(decoder_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
av_frame_free(&frame);
return 0;
} else if (ret < 0) {
fprintf(stderr, "Error while decoding. Error code: %s\n", av_err2str(ret));
goto fail;
}
if (!encoder_ctx->hw_frames_ctx) {
AVDictionaryEntry *e = NULL;
AVDictionary *opts = NULL;
AVStream *ost;
/* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.
Only after we get a decoded frame, can we obtain its hw_frames_ctx */
encoder_ctx->hw_frames_ctx = av_buffer_ref(decoder_ctx->hw_frames_ctx);
if (!encoder_ctx->hw_frames_ctx) {
ret = AVERROR(ENOMEM);
goto fail;
}
/* set AVCodecContext parameters for the encoder; here we keep them
* the same as the decoder's.
*/
encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
encoder_ctx->pix_fmt = AV_PIX_FMT_QSV;
encoder_ctx->width = decoder_ctx->width;
encoder_ctx->height = decoder_ctx->height;
if ((ret = str_to_dict(optstr, &opts)) < 0) {
fprintf(stderr, "Failed to set encoding parameter.\n");
goto fail;
}
/* There is no "framerate" option in common option list. Use "-r" to
* set framerate, which is compatible with ffmpeg commandline. The
* video is assumed to be average frame rate, so set time_base to
* 1/framerate. */
e = av_dict_get(opts, "r", NULL, 0);
if (e) {
encoder_ctx->framerate = av_d2q(atof(e->value), INT_MAX);
encoder_ctx->time_base = av_inv_q(encoder_ctx->framerate);
}
if ((ret = avcodec_open2(encoder_ctx, enc_codec, &opts)) < 0) {
fprintf(stderr, "Failed to open encode codec. Error code: %s\n",
av_err2str(ret));
av_dict_free(&opts);
goto fail;
}
av_dict_free(&opts);
if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
fprintf(stderr, "Failed to allocate stream for output format.\n");
ret = AVERROR(ENOMEM);
goto fail;
}
ost->time_base = encoder_ctx->time_base;
ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
if (ret < 0) {
fprintf(stderr, "Failed to copy the stream parameters. "
"Error code: %s\n", av_err2str(ret));
goto fail;
}
/* write the stream header */
if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
fprintf(stderr, "Error while writing stream header. "
"Error code: %s\n", av_err2str(ret));
goto fail;
}
}
frame->pts = av_rescale_q(frame->pts, decoder_ctx->pkt_timebase,
encoder_ctx->time_base);
if ((ret = encode_write(pkt, frame)) < 0)
fprintf(stderr, "Error during encoding and writing.\n");
fail:
av_frame_free(&frame);
}
return ret;
}
int main(int argc, char **argv)
{
const AVCodec *enc_codec;
int ret = 0;
AVPacket *dec_pkt = NULL;
if (argc < 5 || (argc - 5) % 2) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <encoder> <output file>"
" <\"encoding option set 0\"> [<frame_number> <\"encoding options set 1\">]...\n", argv[0]);
return 1;
}
setting_number = (argc - 5) / 2;
dynamic_setting = av_malloc(setting_number * sizeof(*dynamic_setting));
current_setting_number = 0;
for (int i = 0; i < setting_number; i++) {
dynamic_setting[i].frame_number = atoi(argv[i*2 + 5]);
dynamic_setting[i].optstr = argv[i*2 + 6];
}
ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_QSV, NULL, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Failed to create a QSV device. Error code: %s\n", av_err2str(ret));
goto end;
}
dec_pkt = av_packet_alloc();
if (!dec_pkt) {
fprintf(stderr, "Failed to allocate decode packet\n");
goto end;
}
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
fprintf(stderr, "Could not find encoder '%s'\n", argv[2]);
ret = -1;
goto end;
}
if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
fprintf(stderr, "Failed to deduce output format from file extension. Error code: "
"%s\n", av_err2str(ret));
goto end;
}
if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
ret = AVERROR(ENOMEM);
goto end;
}
ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Cannot open output file. "
"Error code: %s\n", av_err2str(ret));
goto end;
}
/* read all packets and transcode only the video stream */
while (ret >= 0) {
if ((ret = av_read_frame(ifmt_ctx, dec_pkt)) < 0)
break;
if (video_stream == dec_pkt->stream_index)
ret = dec_enc(dec_pkt, enc_codec, argv[4]);
av_packet_unref(dec_pkt);
}
/* flush decoder */
av_packet_unref(dec_pkt);
if ((ret = dec_enc(dec_pkt, enc_codec, argv[4])) < 0) {
fprintf(stderr, "Failed to flush decoder %s\n", av_err2str(ret));
goto end;
}
/* flush encoder */
if ((ret = encode_write(dec_pkt, NULL)) < 0) {
fprintf(stderr, "Failed to flush encoder %s\n", av_err2str(ret));
goto end;
}
/* write the trailer for output stream */
if ((ret = av_write_trailer(ofmt_ctx)) < 0)
fprintf(stderr, "Failed to write trailer %s\n", av_err2str(ret));
end:
avformat_close_input(&ifmt_ctx);
avformat_close_input(&ofmt_ctx);
avcodec_free_context(&decoder_ctx);
avcodec_free_context(&encoder_ctx);
av_buffer_unref(&hw_device_ctx);
av_packet_free(&dec_pkt);
av_freep(&dynamic_setting);
return ret;
}
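The option string given on the command line reaches avcodec_open2() through the example's str_to_dict() helper, which is defined earlier in the file and not shown in this hunk. As a hedged illustration only (parse_option_string() and its '=' / ' ' separators are assumptions, not the example's actual code), a helper of the same shape can be built on av_dict_parse_string():

#include <libavutil/dict.h>

/* Sketch: turn an "opt1=val1 opt2=val2" style string into an AVDictionary
 * suitable for avcodec_open2(). The separator choice is an assumption here. */
static int parse_option_string(const char *optstr, AVDictionary **opts)
{
    return av_dict_parse_string(opts, optstr, "=", " ", 0);
}

Whatever entries avcodec_open2() leaves in the dictionary afterwards are options the encoder did not recognize, which is worth checking before freeing it.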

271
doc/examples/qsvdec.c Normal file

@@ -0,0 +1,271 @@
/*
* Copyright (c) 2015 Anton Khirnov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* Intel QSV-accelerated H.264 decoding example.
*
* @example qsvdec.c
* This example shows how to do QSV-accelerated H.264 decoding with output
* frames in the GPU video surfaces.
*/
#include "config.h"
#include <stdio.h>
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include "libavcodec/avcodec.h"
#include "libavutil/buffer.h"
#include "libavutil/error.h"
#include "libavutil/hwcontext.h"
#include "libavutil/hwcontext_qsv.h"
#include "libavutil/mem.h"
typedef struct DecodeContext {
AVBufferRef *hw_device_ref;
} DecodeContext;
static enum AVPixelFormat get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
DecodeContext *decode = avctx->opaque;
AVHWFramesContext *frames_ctx;
AVQSVFramesContext *frames_hwctx;
int ret;
/* create a pool of surfaces to be used by the decoder */
avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
if (!avctx->hw_frames_ctx)
return AV_PIX_FMT_NONE;
frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
frames_hwctx = frames_ctx->hwctx;
frames_ctx->format = AV_PIX_FMT_QSV;
frames_ctx->sw_format = avctx->sw_pix_fmt;
frames_ctx->width = FFALIGN(avctx->coded_width, 32);
frames_ctx->height = FFALIGN(avctx->coded_height, 32);
frames_ctx->initial_pool_size = 32;
frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
if (ret < 0)
return AV_PIX_FMT_NONE;
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
AVFrame *frame, AVFrame *sw_frame,
AVPacket *pkt, AVIOContext *output_ctx)
{
int ret = 0;
ret = avcodec_send_packet(decoder_ctx, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
while (ret >= 0) {
int i, j;
ret = avcodec_receive_frame(decoder_ctx, frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
else if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
/* A real program would do something useful with the decoded frame here.
* We just retrieve the raw data and write it to a file, which is rather
* useless but pedagogic. */
ret = av_hwframe_transfer_data(sw_frame, frame, 0);
if (ret < 0) {
fprintf(stderr, "Error transferring the data to system memory\n");
goto fail;
}
for (i = 0; i < FF_ARRAY_ELEMS(sw_frame->data) && sw_frame->data[i]; i++)
for (j = 0; j < (sw_frame->height >> (i > 0)); j++)
avio_write(output_ctx, sw_frame->data[i] + j * sw_frame->linesize[i], sw_frame->width);
fail:
av_frame_unref(sw_frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
}
return 0;
}
int main(int argc, char **argv)
{
AVFormatContext *input_ctx = NULL;
AVStream *video_st = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder;
AVPacket pkt = { 0 };
AVFrame *frame = NULL, *sw_frame = NULL;
DecodeContext decode = { NULL };
AVIOContext *output_ctx = NULL;
int ret, i;
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
/* open the input file */
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
goto finish;
}
/* find the first H.264 video stream */
for (i = 0; i < input_ctx->nb_streams; i++) {
AVStream *st = input_ctx->streams[i];
if (st->codecpar->codec_id == AV_CODEC_ID_H264 && !video_st)
video_st = st;
else
st->discard = AVDISCARD_ALL;
}
if (!video_st) {
fprintf(stderr, "No H.264 video stream in the input file\n");
goto finish;
}
/* open the hardware device */
ret = av_hwdevice_ctx_create(&decode.hw_device_ref, AV_HWDEVICE_TYPE_QSV,
"auto", NULL, 0);
if (ret < 0) {
fprintf(stderr, "Cannot open the hardware device\n");
goto finish;
}
/* initialize the decoder */
decoder = avcodec_find_decoder_by_name("h264_qsv");
if (!decoder) {
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
goto finish;
}
decoder_ctx = avcodec_alloc_context3(decoder);
if (!decoder_ctx) {
ret = AVERROR(ENOMEM);
goto finish;
}
decoder_ctx->codec_id = AV_CODEC_ID_H264;
if (video_st->codecpar->extradata_size) {
decoder_ctx->extradata = av_mallocz(video_st->codecpar->extradata_size +
AV_INPUT_BUFFER_PADDING_SIZE);
if (!decoder_ctx->extradata) {
ret = AVERROR(ENOMEM);
goto finish;
}
memcpy(decoder_ctx->extradata, video_st->codecpar->extradata,
video_st->codecpar->extradata_size);
decoder_ctx->extradata_size = video_st->codecpar->extradata_size;
}
decoder_ctx->opaque = &decode;
decoder_ctx->get_format = get_format;
ret = avcodec_open2(decoder_ctx, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Error opening the decoder: ");
goto finish;
}
/* open the output stream */
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Error opening the output context: ");
goto finish;
}
frame = av_frame_alloc();
sw_frame = av_frame_alloc();
if (!frame || !sw_frame) {
ret = AVERROR(ENOMEM);
goto finish;
}
/* actual decoding */
while (ret >= 0) {
ret = av_read_frame(input_ctx, &pkt);
if (ret < 0)
break;
if (pkt.stream_index == video_st->index)
ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
av_packet_unref(&pkt);
}
/* flush the decoder */
pkt.data = NULL;
pkt.size = 0;
ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
finish:
if (ret < 0) {
char buf[1024];
av_strerror(ret, buf, sizeof(buf));
fprintf(stderr, "%s\n", buf);
}
avformat_close_input(&input_ctx);
av_frame_free(&frame);
av_frame_free(&sw_frame);
avcodec_free_context(&decoder_ctx);
av_buffer_unref(&decode.hw_device_ref);
avio_close(output_ctx);
return ret;
}
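qsvdec.c hard-codes AV_HWDEVICE_TYPE_QSV when creating the device. As a hedged aside (not part of the example), libavutil can report which device types a given build actually supports, which helps diagnose av_hwdevice_ctx_create() failures; list_hw_device_types() below is an illustrative name:

#include <stdio.h>
#include <libavutil/hwcontext.h>

/* Print every hardware device type compiled into this libavutil build. */
static void list_hw_device_types(void)
{
    enum AVHWDeviceType type = AV_HWDEVICE_TYPE_NONE;
    while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
        printf("available hw device type: %s\n", av_hwdevice_get_type_name(type));
}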

doc/examples/remux.c

@@ -1,199 +0,0 @@
/*
* Copyright (c) 2013 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file libavformat/libavcodec demuxing and muxing API usage example
* @example remux.c
*
* Remux streams from one container format to another. Data is copied from the
* input to the output without transcoding.
*/
#include <libavutil/mem.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
tag,
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
int main(int argc, char **argv)
{
const AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket *pkt = NULL;
const char *in_filename, *out_filename;
int ret, i;
int stream_index = 0;
int *stream_mapping = NULL;
int stream_mapping_size = 0;
if (argc < 3) {
printf("usage: %s input output\n"
"API example program to remux a media file with libavformat and libavcodec.\n"
"The output format is guessed according to the file extension.\n"
"\n", argv[0]);
return 1;
}
in_filename = argv[1];
out_filename = argv[2];
pkt = av_packet_alloc();
if (!pkt) {
fprintf(stderr, "Could not allocate AVPacket\n");
return 1;
}
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = av_calloc(stream_mapping_size, sizeof(*stream_mapping));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
goto end;
}
ofmt = ofmt_ctx->oformat;
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *out_stream;
AVStream *in_stream = ifmt_ctx->streams[i];
AVCodecParameters *in_codecpar = in_stream->codecpar;
if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
stream_mapping[i] = -1;
continue;
}
stream_mapping[i] = stream_index++;
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
if (ret < 0) {
fprintf(stderr, "Failed to copy codec parameters\n");
goto end;
}
out_stream->codecpar->codec_tag = 0;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
while (1) {
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt->stream_index];
if (pkt->stream_index >= stream_mapping_size ||
stream_mapping[pkt->stream_index] < 0) {
av_packet_unref(pkt);
continue;
}
pkt->stream_index = stream_mapping[pkt->stream_index];
out_stream = ofmt_ctx->streams[pkt->stream_index];
log_packet(ifmt_ctx, pkt, "in");
/* copy packet */
av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
pkt->pos = -1;
log_packet(ofmt_ctx, pkt, "out");
ret = av_interleaved_write_frame(ofmt_ctx, pkt);
/* pkt is now blank (av_interleaved_write_frame() takes ownership of
* its contents and resets pkt), so that no unreferencing is necessary.
* This would be different if one used av_write_frame(). */
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
}
av_write_trailer(ofmt_ctx);
end:
av_packet_free(&pkt);
avformat_close_input(&ifmt_ctx);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
av_freep(&stream_mapping);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
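remux.c converts packet timing with a single av_packet_rescale_ts() call, where the older remuxing.c added below spells the conversion out per field. A hedged sketch of roughly what that call does (the real library function covers additional details and is the one to use in practice):

#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>

/* Approximate equivalent of av_packet_rescale_ts(): rescale a packet's
 * timing fields from src_tb to dst_tb. Sketch only. */
static void rescale_packet_timing(AVPacket *pkt, AVRational src_tb, AVRational dst_tb)
{
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q_rnd(pkt->pts, src_tb, dst_tb,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q_rnd(pkt->dts, src_tb, dst_tb,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt->duration = av_rescale_q(pkt->duration, src_tb, dst_tb);
}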

191
doc/examples/remuxing.c Normal file

@@ -0,0 +1,191 @@
/*
* Copyright (c) 2013 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat/libavcodec demuxing and muxing API example.
*
* Remux streams from one container format to another.
* @example remuxing.c
*/
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
tag,
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
int main(int argc, char **argv)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
const char *in_filename, *out_filename;
int ret, i;
int stream_index = 0;
int *stream_mapping = NULL;
int stream_mapping_size = 0;
if (argc < 3) {
printf("usage: %s input output\n"
"API example program to remux a media file with libavformat and libavcodec.\n"
"The output format is guessed according to the file extension.\n"
"\n", argv[0]);
return 1;
}
in_filename = argv[1];
out_filename = argv[2];
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
goto end;
}
ofmt = ofmt_ctx->oformat;
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *out_stream;
AVStream *in_stream = ifmt_ctx->streams[i];
AVCodecParameters *in_codecpar = in_stream->codecpar;
if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
stream_mapping[i] = -1;
continue;
}
stream_mapping[i] = stream_index++;
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
if (ret < 0) {
fprintf(stderr, "Failed to copy codec parameters\n");
goto end;
}
out_stream->codecpar->codec_tag = 0;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
while (1) {
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
if (pkt.stream_index >= stream_mapping_size ||
stream_mapping[pkt.stream_index] < 0) {
av_packet_unref(&pkt);
continue;
}
pkt.stream_index = stream_mapping[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
log_packet(ifmt_ctx, &pkt, "in");
/* copy packet */
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out");
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
av_packet_unref(&pkt);
}
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
av_freep(&stream_mapping);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
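remuxing.c maps every audio, video and subtitle stream through stream_mapping. As a hedged variation (not something the example does), av_find_best_stream() lets libavformat pick a single preferred stream when only one is wanted; pick_best_video_stream() is an illustrative wrapper:

#include <stdio.h>
#include <libavformat/avformat.h>

/* Return the index of the video stream libavformat ranks best, or a
 * negative AVERROR code if none is found. Illustration only. */
static int pick_best_video_stream(AVFormatContext *ifmt_ctx)
{
    int idx = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (idx < 0)
        fprintf(stderr, "No suitable video stream found\n");
    return idx;
}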

Some files were not shown because too many files have changed in this diff.