Mirror of https://git.ffmpeg.org/ffmpeg.git (synced 2025-12-06 14:59:59 +01:00)
Compare commits
513 Commits
(Commit listing: 513 entries, 03292829aa through e0064df4ff; only the abbreviated SHA1 column of the Author / SHA1 / Date table survived this capture, so the author, date and commit-message cells are empty.)
.gitignore (vendored, 7 changed lines)

@@ -18,9 +18,6 @@
*.so.*
*.swp
*.ver
*.version
*.ptx
*.ptx.c
*_g
\#*
.\#*
@@ -29,8 +26,8 @@
/ffmpeg
/ffplay
/ffprobe
/config.asm
/config.h
/ffserver
/config.*
/coverage.info
/avversion.h
/lcov/
.travis.yml

@@ -6,22 +6,18 @@ os:
addons:
  apt:
    packages:
      - nasm
      - yasm
      - diffutils
compiler:
  - clang
  - gcc
matrix:
  exclude:
    - os: osx
      compiler: gcc
cache:
  directories:
    - ffmpeg-samples
before_install:
  - if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update --all; fi
install:
  - if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install nasm; fi
  - if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install yasm; fi
script:
  - mkdir -p ffmpeg-samples
  - ./configure --samples=ffmpeg-samples --cc=$CC
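The Travis CI hunk above switches the assembler dependency from yasm to nasm and points configure at a cached ffmpeg-samples directory. As a rough local sketch of the same build-and-test flow (not part of the diff; it assumes a Linux host with nasm and the usual build prerequisites already installed):

    ./configure --samples=ffmpeg-samples --cc=gcc   # nasm is detected automatically when present
    make -j"$(nproc)"                               # build the ffmpeg tools and libraries
    make fate-rsync                                 # download the FATE sample suite into the --samples directory
    make fate                                       # run the regression tests against those samples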
Changelog (570 changed lines)

@@ -1,113 +1,472 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.

version 4.0:
- Bitstream filters for editing metadata in H.264, HEVC and MPEG-2 streams
- Dropped support for OpenJPEG versions 2.0 and below. Using OpenJPEG now
  requires 2.1 (or later) and pkg-config.
- VDA dropped (use VideoToolbox instead)
- MagicYUV encoder
- Raw AMR-NB and AMR-WB demuxers
- TiVo ty/ty+ demuxer
- Intel QSV-accelerated MJPEG encoding
- PCE support for extended channel layouts in the AAC encoder
- native aptX and aptX HD encoder and decoder
- Raw aptX and aptX HD muxer and demuxer
- NVIDIA NVDEC-accelerated H.264, HEVC, MJPEG, MPEG-1/2/4, VC1, VP8/9 hwaccel decoding
- Intel QSV-accelerated overlay filter
- mcompand audio filter
- acontrast audio filter
- OpenCL overlay filter
- video mix filter
- video normalize filter
- audio lv2 wrapper filter
- VAAPI MJPEG and VP8 decoding
- AMD AMF H.264 and HEVC encoders
- video fillborders filter
- video setrange filter
- nsp demuxer
- support LibreSSL (via libtls)
- AVX-512/ZMM support added
- Dropped support for building for Windows XP. The minimum supported Windows
  version is Windows Vista.
- deconvolve video filter
- entropy video filter
- hilbert audio filter source
- aiir audio filter
- aiff: add support for CD-ROM XA ADPCM
- Removed the ffserver program
- Removed the ffmenc and ffmdec muxer and demuxer
- VideoToolbox HEVC encoder and hwaccel
- VAAPI-accelerated ProcAmp (color balance), denoise and sharpness filters
- Add android_camera indev
- codec2 en/decoding via libcodec2
- muxer/demuxer for raw codec2 files and .c2 files
- Moved nvidia codec headers into an external repository.
  They can be found at http://git.videolan.org/?p=ffmpeg/nv-codec-headers.git
- native SBC encoder and decoder
- drmeter audio filter
- hapqa_extract bitstream filter
- filter_units bitstream filter
- AV1 Support through libaom
- E-AC-3 dependent frames support
- bitstream filter for extracting E-AC-3 core
- Haivision SRT protocol via libsrt
- segafilm muxer
- vfrdet filter
version 3.3.6:
- x264: Support version 153
- avcodec/exr: Check buf_size more completely
- avcodec/flacdec: Fix overflow in multiplication in decode_subframe_fixed()
- avcodec/hevcdsp_template: Fix Invalid shifts in put_hevc_qpel_bi_w_h() and put_hevc_qpel_bi_w_w()
- avcodec/flacdec: avoid undefined shift
- avcodec/hevcdsp_template.c: Fix undefined shift in FUNC(dequant)
- avcodec/dirac_dwt: Fix integer overflow in COMPOSE_DD97iH0() and COMPOSE_DD137iL0()
- avcodec/hevc_cabac: Fix integer overflow in ff_hevc_cu_qp_delta_abs()
- tests/audiomatch: Add missing return code at the end of main()
- avcodec/hevc_sei: Fix integer overflows in decode_nal_sei_message()
- avcodec/hevcdsp_template: Fix undefined shift in put_hevc_qpel_bi_w_hv()
- libavfilter/af_dcshift.c: Fixed repeated spelling error
- avfilter/formats: fix wrong function name in error message
- avcodec/amrwbdec: Fix division by 0 in voice_factor()
- avcodec/diracdsp: Fix integer overflow in PUT_SIGNED_RECT_CLAMPED()
- avcodec/dirac_dwt: Fix integer overflows in COMPOSE_DAUB97*
- avcodec/extract_extradata_bsf: Fix leak discovered via fuzzing
- avcodec/vorbis: Fix another 1 << 31 > int32_t::max() with 1u.
- Don't manipulate duration when it's AV_NOPTS_VALUE.
- avcodec/vorbis: 1 << 31 > int32_t::max(), so use 1u << 31 instead.
- avformat/utils: Prevent undefined shift with wrap_bits > 64.
- avcodec/j2kenc: Fix out of array access in encode_cblk()
- avcodec/hevcdsp_template: Fix undefined shift in put_hevc_epel_bi_w_h()
- avcodec/mlpdsp: Fix signed integer overflow, 2nd try
- avcodec/kgv1dec: Check that there is enough input for maximum RLE compression
- avcodec/dirac_dwt: Fix integer overflow in COMPOSE_FIDELITYi*
- avcodec/mpeg4videodec: Check also for negative versions in the validity check
- Close ogg stream upon error when using AV_EF_EXPLODE.
- Fix undefined shift on assumed 8-bit input.
- Use ff_thread_once for fixed, float table init.
- Fix leak of frame_duration_buffer in mov_fix_index().
- avformat/mov: Propagate errors in mov_switch_root.
- avcodec/hevcdsp_template: Fix invalid shift in put_hevc_epel_bi_w_v()
- avcodec/mlpdsp: Fix undefined shift ff_mlp_pack_output()
- avcodec/zmbv: Check that the buffer is large enough for mvec
- avcodec/dirac_dwt: Fix integer overflow in COMPOSE_DD137iL0()
- avcodec/wmv2dec: Check end of bitstream in parse_mb_skip() and ff_wmv2_decode_mb()
- avcodec/snowdec: Check for remaining bitstream in decode_blocks()
- avcodec/snowdec: Check intra block dc differences.
- avformat/mov: Check size of STSC allocation
- avcodec/vc2enc: Clear coef_buf on allocation
- avcodec/h264dec: Fix potential array overread
- avcodec/x86/mpegvideodsp: Fix signedness bug in need_emu
- avcodec/aacpsdsp_template: Fix integer overflows in ps_decorrelate_c()
- avcodec/aacdec_fixed: Fix undefined shift
- avcodec/mdct_*: Fix integer overflow in addition in RESCALE()
- avcodec/snowdec: Fix integer overflow in header parsing
- avcodec/cngdec: Fix integer clipping
- avcodec/sbrdsp_fixed: Fix integer overflow in shift in sbr_hf_g_filt_c()
- avcodec/aacsbr_fixed: Fix division by zero in sbr_gain_calc()
- avutil/softfloat: Add FLOAT_MIN
- avcodec/h264idct_template: Fix integer overflows in ff_h264_idct8_add()
- avcodec/xan: Check for bitstream end in xan_huffman_decode()
- avcodec/exr: fix undefined shift in pxr24_uncompress()
- avformat: Free the internal codec context at the end
- avcodec/h264idct_template: Fix integer overflows in ff_h264_idct8_add()
- avcodec/xan: Improve overlapping check
- avcodec/aacdec_fixed: Fix integer overflow in apply_dependent_coupling_fixed()
- avcodec/aacdec_fixed: Fix integer overflow in predict()
- avcodec/jpeglsdec: Check for end of bitstream in ls_decode_line()
- avcodec/jpeglsdec: Check ilv for being a supported value
- lavfi/af_pan: fix sign handling in channel coefficient parser
- vc2enc_dwt: pad the temporary buffer by the slice siz
version 3.3.5:
- ffserver: Fix off by 1 error in path
- avcodec/snowdec: Check mv_scale
- avcodec/pafvideo: Check for bitstream end in decode_0()
- avcodec/ffv1dec: Fix out of array read in slice counting
- avcodec/dirac_dwt: Fix integer overflow in COMPOSE_53iL0()
- avcodec/mpeg_er: Clear mcsel in mpeg_er_decode_mb()
- avcodec/mpeg4videodec: Use 64 bit intermediates for sprite delta
- avcodec/x86/lossless_videoencdsp: Fix warning: signed dword value exceeds bounds
- avcodec/x86/lossless_videoencdsp: Fix handling of small widths
- avcodec/truemotion2: Fix integer overflows in tm2_high_chroma()
- avcodec/aacdec_template: Clear tns present flag on error
- avcodec/proresdec2: SKIP_BITS() does not work with len=32
- avcodec/hevcdsp_template: Fix undefined shift
- avcodec/jpeg2000: Check that codsty->log2_prec_widths/heights has been initialized
- avcodec/takdec: Fix integer overflow in decode_lpc()
- avcodec/proresdec2: Check bits in DECODE_CODEWORD(), fixes invalid shift
- avcodec/takdec: Fix integer overflows in decode_subframe()
- avcodec/dirac_dwt: Fix integer overflow in COMPOSE_FIDELITYi*()
- avcodec/ffv1dec: Fix integer overflow in read_quant_table()
- avcodec/svq3: Fix overflow in svq3_add_idct_c()
- avcodec/pngdec: Clean up on av_frame_ref() failure
version 3.3.4:
- avcodec/hevc_ps: improve check for missing default display window bitstream
- avcodec/hevc_ps: Fix c?_qp_offset_list size
- avcodec/shorten: Move buffer allocation and offset init to end of read_header()
- avcodec/jpeg2000dsp: Fix multiple integer overflows in ict_int()
- avcodec/hevcdsp_template: Fix undefined shift in put_hevc_pel_bi_w_pixels
- avcodec/diracdec: Fix overflow in DC computation
- avcodec/scpr: optimize shift loop.
- avcodec/dirac_vlc: limit res_bits in APPEND_RESIDUE()
- libavcodec/h264_parse: don't use uninitialized value when chroma_format_idc==0
- avformat/asfdec: Fix DoS in asf_build_simple_index()
- avformat/mov: Fix DoS in read_tfra()
- avcodec/dirac_vlc: Fix invalid shift in ff_dirac_golomb_read_32bit()
- avcodec/dirac_dwt: Fix multiple overflows in 9/7 lifting
- avcodec/diracdec: Fix integer overflow in INTRA_DC_PRED()
- avformat/mxfdec: Fix Sign error in mxf_read_primer_pack()
- avformat/mxfdec: Fix DoS issues in mxf_read_index_entry_array()
- avformat/nsvdec: Fix DoS due to lack of eof check in nsvs_file_offset loop.
- avcodec/snowdec: Fix integer overflow in decode_subband_slice_buffered()
- avcodec/hevc_ps: Fix undefined shift in pcm code
- avcodec/sbrdsp_fixed: Fix undefined overflows in autocorrelate()
- avformat/mvdec: Fix DoS due to lack of eof check
- avformat/rl2: Fix DoS due to lack of eof check
- avformat/rmdec: Fix DoS due to lack of eof check
- avformat/cinedec: Fix DoS due to lack of eof check
- avformat/asfdec: Fix DoS due to lack of eof check
- avformat/hls: Fix DoS due to infinite loop
- ffprobe: Fix NULL pointer handling in color parameter printing
- ffprobe: Fix null pointer dereference with color primaries
- avcodec/hevc_ps: Check delta_pocs in ff_hevc_decode_short_term_rps()
- avformat/rtpdec_h264: Fix heap-buffer-overflow
- avformat/aviobuf: Fix signed integer overflow in avio_seek()
- avformat/mov: Fix signed integer overflows with total_size
- avcodec/utils: Fix signed integer overflow in rc_initial_buffer_occupancy initialization
- avcodec/aacdec_template: Fix running cleanup in decode_ics_info()
- avcodec/me_cmp: Fix crashes on ARM due to misalignment
- avcodec/pixlet: Fixes: undefined shift in av_mod_uintp2()
- avcodec/dirac_dwt_template: Fix integer overflow in vertical_compose53iL0()
- avcodec/fic: Fixes signed integer overflow
- avcodec/snowdec: Fix off by 1 error
- avcodec/pixlet: fixes integer overflow in read_highpass()
- avcodec/zmbv: Check decomp_size
- avcodec/diracdec: Fixes integer overflow
- avcodec/diracdec: Check perspective_exp and zrs_exp.
- avcodec/ffv1dec_template: Fix undefined shift
- avcodec/mpeg4videodec: Clear mcsel before decoding an image
- avcodec/dirac_dwt: Fixes integer overflows in COMPOSE_DAUB97*
- avcodec/aacdec_fixed: fix invalid shift in predict()
- avcodec/h264_slice: Fix overflow in slice offset
- avformat/utils: fix memory leak in avformat_free_context
- swscale: fix gbrap16 alpha channel issues
- avcodec/h264idct_template: Fix integer overflow in ff_h264_idct_add()
- avcodec/diracdsp: fix integer overflow
- avcodec/diracdec: Check weight_log2denom
- avcodec/nvenc: only push cuda context on encoder close if encoder exists
- avfilter/vf_ssim: fix temp size calculation
version 3.3.3:
- avcodec/dirac_dwt: Fix multiple integer overflows in COMPOSE_DD97iH0()
- avcodec/diracdec: Fix integer overflow in divide3()
- avcodec/takdec: Fix integer overflow in decode_subframe()
- avformat/rtmppkt: Convert ff_amf_get_field_value() to bytestream2
- avformat/rtmppkt: Convert ff_amf_tag_size() to bytestream2
- avcodec/diracdec: Fix integer overflow in signed multiplication in UNPACK_ARITH()
- avcodec/pixlet: Simplify nbits computation
- avcodec/dnxhddec: Move mb height check out of non hr branch
- avcodec/hevc_ps: fix integer overflow in log2_parallel_merge_level_minus2
- avformat/oggparsecelt: Do not re-allocate os->private
- avcodec/ylc: Fix shift overflow
- avcodec/aacps: Fix multiple integer overflow in map_val_34_to_20()
- avcodec/aacdec_fixed: fix: left shift of negative value -1
- avcodec/dirac_vlc: Fix undefined shift
- doc/filters: typo in frei0r
- avcodec/cfhd: Fix decoding regression due to height check
- avcodec/aacdec_template (fixed point): Check gain in decode_cce() to avoid undefined shifts later
- avcodec/ffv1dec_template: Fix signed integer overflow
- avcodec/aacdec_template: Fix undefined integer overflow in apply_tns()
- avcodec/magicyuv: Check that vlc len is not too large
- avcodec/mjpegdec: Clip DC also on the negative side.
- avcodec/aacps (fixed point): Fix multiple signed integer overflows
- avcodec/ylc: Fix vlc of 31 bits
- avcodec/sbrdsp_fixed: Fix integer overflow in sbr_hf_apply_noise()
- avcodec/hevcdec: do not let updated extradata corrupt state
- avcodec/wavpack: Fix invalid shift
- avcodec/h264_slice: Fix signed integer overflow
- avcodec/hevc_ps: Fix integer overflow with beta/tc offsets
- avcodec/cfhd: Fix invalid left shift of negative value
- avcodec/vb: Check vertical GMC component before multiply
- avcodec/hevcdec: do basic validity check on delta_chroma_weight and offset
- avcodec/jpeg2000dwt: Fix integer overflow in dwt_decode97_int()
- avcodec/apedec: Fix integer overflow
- avcodec/wavpack: Fix integer overflow in wv_unpack_stereo()
- avcodec/hevc_ps: Fix max_dec_buffer check
- avcodec/mpeg4videodec: Fix GMC with videos of dimension 1
- avcodec/wavpack: Fix integer overflow
- avcodec/takdec: Fix integer overflow
- avcodec/tiff: Update pointer only when the result is used
- avcodec/cfhd: Check bpc before setting bpc in context
- avcodec/cfhd: Fix undefined shift
- avcodec/hevc_filter: Fix invalid shift
- avcodec/mpeg4videodec: Fix overflow in virtual_ref computation
- avcodec/lpc: signed integer overflow in compute_lpc_coefs() (aacdec_fixed)
- avcodec/wavpack: Fix undefined integer negation
- avcodec/aacdec_fixed: Check s for being too small
- avcodec/htmlsubtitles: Replace very slow redundant sscanf() calls by cleaner and faster code
- avcodec/h264: Fix mix of lossless and lossy MBs decoding
- avcodec/h264_mb: Fix 8x8dct in lossless for new versions of x264
- avcodec/h264_cabac: Fix CABAC+8x8dct in 4:4:4
- avcodec/takdec: Fixes: integer overflow in AV_SAMPLE_FMT_U8P output
- avcodec/jpeg2000dsp: Reorder operations in ict_int() to avoid 2 integer overflows
- avcodec/hevcpred_template: Fix left shift of negative value
- avcodec/hevcdec: Fix signed integer overflow in decode_lt_rps()
- avcodec/jpeg2000dec: Check nonzerobits more completely
- avcodec/shorten: Sanity check maxnlpc
- avcodec/truemotion2: Move skip computation after checks
- avcodec/jpeg2000: Fixes integer overflow in ff_jpeg2000_ceildivpow2()
- avcodec/dnxhd_parser: Do not return invalid value from dnxhd_find_frame_end() on error
- avcodec/hevcdec: Check nb_sps
- avcodec/hevc_refs: Check nb_refs in add_candidate_ref()
- avcodec/mpeg4videodec: Check sprite delta upshift against overflowing.
- avcodec/mpeg4videodec: Fix integer overflow in num_sprite_warping_points=2 case
- avcodec/aacsbr_fixed: Check shift in sbr_hf_assemble()
- avcodec/sbrdsp_fixed: Return an error from sbr_hf_apply_noise() if operations are impossible
- avcodec/libvpxdec: Check that display dimensions fit in the storage dimensions
- avcodec/jpeg2000dwt: Fix runtime error: left shift of negative value -123
- avcodec/wavpack: Fix runtime error: signed integer overflow: 1886191616 + 277872640 cannot be represented in type 'int'
- avcodec/snowdec: Fix runtime error: left shift of negative value -1
- avcodec/aacdec_fixed: Fix runtime error: left shift of negative value -1297616
- avcodec/tiff: Fix leak of geotags[].val
- avcodec/ra144: Fix runtime error: signed integer overflow: -2200 * 1033073 cannot be represented in type 'int'
- avcodec/flicvideo: Fix runtime error: signed integer overflow: 4864 * 459296 cannot be represented in type 'int'
- avcodec/cfhd: Check band parameters before storing them
- avcodec/h264_parse: Check picture structure when initializig weight table
- avcodec/indeo4: Check remaining data in Pic hdr extension parsing code
- avcodec/ac3dec_fixed: Fix multiple runtime error: signed integer overflow: -39271008 * 59 cannot be represented in type 'int'
- lavc/aarch64/simple_idct: fix idct_col4_top coefficient
version 3.4:
- deflicker video filter
- doubleweave video filter
- lumakey video filter
- pixscope video filter
- oscilloscope video filter
- config.log and other configuration files moved into ffbuild/ directory
- update cuvid/nvenc headers to Video Codec SDK 8.0.14
- afir audio filter
- scale_cuda CUDA based video scale filter
- librsvg support for svg rasterization
- crossfeed audio filter
- spec compliant VP9 muxing support in MP4
- remove the libnut muxer/demuxer wrappers
- remove the libschroedinger encoder/decoder wrappers
- surround audio filter
- sofalizer filter switched to libmysofa
- Gremlin Digital Video demuxer and decoder
- headphone audio filter
- superequalizer audio filter
- roberts video filter
- The x86 assembler default switched from yasm to nasm, pass
  --x86asmexe=yasm to configure to restore the old behavior.
- additional frame format support for Interplay MVE movies
- support for decoding through D3D11VA in ffmpeg
- limiter video filter
- libvmaf video filter
- Dolby E decoder and SMPTE 337M demuxer
- unpremultiply video filter
- tlut2 video filter
- floodfill video filter
- pseudocolor video filter
- raw G.726 muxer and demuxer, left- and right-justified
- NewTek NDI input/output device
- Some video filters with several inputs now use a common set of options:
  blend, libvmaf, lut3d, overlay, psnr, ssim.
  They must always be used by name.
- FITS demuxer and decoder
- FITS muxer and encoder
- add --disable-autodetect build switch
- drop deprecated qtkit input device (use avfoundation instead)
- despill video filter
- haas audio filter
- SUP/PGS subtitle muxer
- convolve video filter
- VP9 tile threading support
- KMS screen grabber
- CUDA thumbnail filter
- V4L2 mem2mem HW assisted codecs
- Rockchip MPP hardware decoding
- vmafmotion video filter
- use MIME type "G726" for little-endian G.726, "AAL2-G726" for big-endian G.726
version 3.3.2:
- avcodec/mpeg4videodec: Fix runtime error: signed integer overflow: 53098 * 40448 cannot be represented in type 'int'
- avcodec/pafvideo: Fix assertion failure
- avcodec/takdec: Fix multiple runtime error: signed integer overflow: 637072 * 4096 cannot be represented in type 'int'
- avcodec/mjpegdec: Check that reference frame matches the current frame
- avcodec/tiff: Avoid loosing allocated geotag values
- avcodec/cavs: Fix runtime error: signed integer overflow: -12648062 * 256 cannot be represented in type 'int'
- avformat/hls: Check local file extensions
- avcodec/qdrw: Fix null pointer dereference
- avutil/softfloat: Fix sign error in and improve documentation of av_int2sf()
- avcodec/hevc_ps: Fix runtime error: index 32 out of bounds for type 'uint8_t [32]'
- avcodec/dxv: Check remaining bytes in dxv_decompress_raw()
- avcodec/pafvideo: Check packet size and frame code before ff_reget_buffer()
- avcodec/ac3dec_fixed: Fix runtime error: left shift of 419 by 23 places cannot be represented in type 'int'
- avformat/options: log filename on open
- avcodec/aacps: Fix runtime error: left shift of 1073741824 by 1 places cannot be represented in type 'INTFLOAT' (aka 'int')
- avcodec/wavpack: Fix runtime error: shift exponent 32 is too large for 32-bit type 'int'
- avcodec/cfhd: Fix runtime error: signed integer overflow: 65280 * 65288 cannot be represented in type 'int'
- avcodec/wavpack: Fix runtime error: signed integer overflow: 2013265955 - -134217694 cannot be represented in type 'int'
- avcodec/cinepak: Check input packet size before frame reallocation
- avcodec/hevc_ps: Fix runtime error: signed integer overflow: 2147483628 + 256 cannot be represented in type 'int'
- avcodec/ra144: Fixes runtime error: signed integer overflow: 7160 * 327138 cannot be represented in type 'int'
- avcodec/pnm: Use ff_set_dimensions()
- avcodec/cavsdec: Fix runtime error: signed integer overflow: 59 + 2147483600 cannot be represented in type 'int'
- avcodec/nvenc: fix hw accelerated transcode with bframes
- libavformat/hls: Observe Set-Cookie headers
- libavformat/http: Ignore expired cookies
- avformat/avidec: Limit formats in gab2 to srt and ass/ssa
- avcodec/acelp_pitch_delay: Fix runtime error: value 4.83233e+39 is outside the range of representable values of type 'float'
- avcodec/wavpack: Check float_shift
- avcodec/wavpack: Fix runtime error: signed integer overflow: 24 * -2147483648 cannot be represented in type 'int'
- avcodec/ansi: Fix frame memleak
- avcodec/dds: Fix runtime error: left shift of 145 by 24 places cannot be represented in type 'int'
- avcodec/jpeg2000dec: Use ff_set_dimensions()
- avcodec/truemotion2: Fix passing null pointer to memset()
- avcodec/truemotion2: Fix runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
- avcodec/ra144: Fix runtime error: signed integer overflow: -2449 * 1398101 cannot be represented in type 'int'
- avcodec/ra144: Fix runtime error: signed integer overflow: 11184810 * 404 cannot be represented in type 'int'
- avcodec/aac_defines: Add missing () to AAC_HALF_SUM() macro
- avcodec/webp: Fixes null pointer dereference
- avcodec/aacdec_fixed: Fix runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
- avcodec/ylc: Check count in build_vlc()
- avcodec/snow: Fix runtime error: signed integer overflow: 1086573993 + 1086573994 cannot be represented in type 'int'
- avcodec/jpeg2000: Fix runtime error: signed integer overflow: 4185 + 2147483394 cannot be represented in type 'int'
- avcodec/jpeg2000dec: Check tile offsets more completely
- avcodec/sheervideo: Check input buffer size before allocating and decoding
- avcodec/aacdec_fixed: Fix multiple runtime error: shift exponent 127 is too large for 32-bit type 'int'
- avcodec/wnv1: More strict buffer size check
- avcodec/libfdk-aacdec: Correct buffer_size parameter
- avcodec/sbrdsp_template: Fix: runtime error: signed integer overflow: 849815297 + 1315389781 cannot be represented in type 'int'
- avcodec/ivi_dsp: Fix runtime error: left shift of negative value -2
- doc/filters: Clarify scale2ref example
- avcodec/mlpdec: Do not leave invalid values in matrix_out_ch[] on error
- avcodec/ra144dec: Fix runtime error: left shift of negative value -17
- avcodec/pixlet: Fix runtime error: signed integer overflow: 2147483647 + 32 cannot be represented in type 'int'
- avformat/mux: Fix copy an paste typo
- avutil/internal: Do not enable CHECKED with DEBUG
- avcodec/clearvideo: Check buf_size before decoding frame
- avcodec/aacdec_fixed: Fix runtime error: signed integer overflow: -2147483648 * -1 cannot be represented in type 'int'
- avcodec/smc: Check remaining input
- avcodec/diracdec: Fix off by 1 error in quant check
- avcodec/jpeg2000dec: Fix copy and paste error
- avcodec/jpeg2000dec: Check tile offsets
- avcodec/sanm: Fix uninitialized reference frames
- avcodec/jpeglsdec: Check get_bits_left() before decoding a picture
- avcodec/fmvc: Fix use of uninitialized memory when the first frame is not a keyframe
- avcodec/ivi_dsp: Fix multiple runtime error: left shift of negative value -71
- avcodec/mjpegdec: Fix runtime error: signed integer overflow: -32767 * 130560 cannot be represented in type 'int'
- avcodec/aacdec_fixed: Fix runtime error: shift exponent 34 is too large for 32-bit type 'int'
- avcodec/mpeg4videodec: Check for multiple VOL headers
- avcodec/vp9block: fix runtime error: signed integer overflow: 196675 * 20670 cannot be represented in type 'int'
- avcodec/vmnc: Check location before use
- avcodec/takdec: Fix runtime error: signed integer overflow: 8192 * 524308 cannot be represented in type 'int'
- avcodec/aac_defines: Fix: runtime error: left shift of negative value -2
- avcodec/takdec: Fix runtime error: left shift of negative value -63
- avcodec/mlpdsp: Fix runtime error: signed integer overflow: -24419392 * 128 cannot be represented in type 'int'
- avcodec/sbrdsp_fixed: fix runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
- avcodec/aacsbr_fixed: Fix multiple runtime error: shift exponent 170 is too large for 32-bit type 'int'
- avcodec/mlpdec: Do not leave a invalid num_primitive_matrices in the context
- avcodec/aacsbr_fixed: Fix multiple runtime error: shift exponent 150 is too large for 32-bit type 'int'
- avcodec/mimic: Use ff_set_dimensions() to set the dimensions
- avcodec/fic: Fix multiple runtime error: signed integer overflow: 5793 * 419752 cannot be represented in type 'int'
- avcodec/pixlet: Fix reading invalid numbers of bits
- avcodec/mlpdec: Fix: runtime error: left shift of negative value -8
- avcodec/dfa: Fix: runtime error: signed integer overflow: -14202 * 196877 cannot be represented in type 'int'
- avcodec/aacdec: Fix runtime error: signed integer overflow: 2147483520 + 255 cannot be represented in type 'int'
- avcodec/aacdec_template: Fix fixed point scale in decode_cce()
- avcodec/fmvc: Fix off by 1 error
- avcodec/flicvideo: Check frame_size before decrementing
- avcodec/mlpdec: Fix runtime error: left shift of negative value -1
- avcodec/takdec: Fix runtime error: left shift of negative value -42
- avcodec/hq_hqa: Fix: runtime error: signed integer overflow: -255 * 10180917 cannot be represented in type 'int'
- avcodec/scpr: mask bits to prevent out of array read
- avcodec/truemotion1: Fix multiple runtime error: signed integer overflow: 1246906962 * 2 cannot be represented in type 'int'
- avcodec/svq3: Fix runtime error: left shift of negative value -6
- avcodec/tiff: reset sampling[] if its invalid
- configure: Fix the msvcrt version check for mingw32
- lavf/mov: make invalid m{d,v}hd time_scale default to 1 instead of erroring out
- lavc/ffjni: add missing '\n'
- lavc/mediacodec_wrapper: do not declare JNIAMedia{Codec,CodecList,Format}Fields on the stack
- lavc/mediacodec_wrapper: fix local reference leaks
- avcodec/nvenc: remove unnecessary alignment
- Use AVOnce as a static variable consistently
- avfilter: take_samples: do not directly return frame when samples are skipped
- avutil/hwcontext_dxva2: Don't improperly free IDirect3DSurface9 objects
version 3.3.1:
- libswscale/tests/swscale: Fix uninitialized variables
- avcodec/ffv1dec: Fix runtime error: signed integer overflow: 1550964438 + 1550964438 cannot be represented in type 'int'
- avcodec/webp: Fix signedness in prefix_code check
- avcodec/svq3: Fix runtime error: signed integer overflow: 169 * 12717677 cannot be represented in type 'int'
- avcodec/mlpdec: Check that there is enough data for headers
- avcodec/ac3dec: Keep track of band structure
- avcodec/webp: Add missing input padding
- avcodec/aacdec_fixed: Fix runtime error: left shift of negative value -1
- avcodec/aacsbr_template: Do not change bs_num_env before its checked
- avcodec/scpr: Fix multiple runtime error: index 256 out of bounds for type 'unsigned int [256]'
- avcodec/mlp: Fix multiple runtime error: left shift of negative value -1
- avcodec/xpmdec: Fix multiple pointer/memory issues
- avcodec/vp8dsp: vp7_luma_dc_wht_c: Fix multiple runtime error: signed integer overflow: -1366381240 + -1262413604 cannot be represented in type 'int'
- avcodec/avcodec: Limit the number of side data elements per packet
- avcodec/texturedsp: Fix runtime error: left shift of 255 by 24 places cannot be represented in type 'int'
- avcodec/g723_1dec: Fix runtime error: left shift of negative value -1
- avcodec/wmv2dsp: Fix runtime error: signed integer overflow: 181 * -17047030 cannot be represented in type 'int'
- avcodec/diracdec: Fix Assertion frame->buf[0] failed at libavcodec/decode.c:610
- avcodec/msmpeg4dec: Check for cbpy VLC errors
- avcodec/cllc: Check num_bits
- avcodec/cllc: Factor VLC_BITS/DEPTH out, do not use repeated literal numbers
- avcodec/scpr: Check y in first line loop in decompress_i()
- avcodec/dvbsubdec: Check entry_id
- avcodec/aacdec_fixed: Fix multiple shift exponent 33 is too large for 32-bit type 'int'
- avcodec/mpeg12dec: Fixes runtime error: division by zero
- avcodec/pixlet: Fix runtime error: signed integer overflow: 436207616 * -5160230545260541 cannot be represented in type 'long'
- avcodec/webp: Always set pix_fmt
- avfilter/vf_uspp: Fix currently unused input frame dimensions
- avcodec/truemotion1: Fix multiple runtime error: left shift of negative value -1
- avcodec/eatqi: Fix runtime error: signed integer overflow: 4466147 * 1075 cannot be represented in type 'int'
- avcodec/dss_sp: Fix runtime error: signed integer overflow: 2147481189 + 4096 cannot be represented in type 'int'
- avformat/wavdec: Check chunk_size
- avcodec/cavs: Check updated MV
- avcodec/y41pdec: Fix width in input buffer size check
- avcodec/svq3: Fix multiple runtime error: signed integer overflow: -237341 * 24552 cannot be represented in type 'int'
- avcodec/texturedsp: Fix runtime error: left shift of 218 by 24 places cannot be represented in type 'int'
- avcodec/lagarith: Check scale_factor
- avcodec/lagarith: Fix runtime error: left shift of negative value -1
- avcodec/takdec: Fix multiple runtime error: left shift of negative value -1
- avcodec/indeo2: Check for invalid VLCs
- avcodec/g723_1dec: Fix several integer related cases of undefined behaviour
- avcodec/htmlsubtitles: Check for string truncation and return error
- avcodec/bmvvideo: Fix runtime error: left shift of 137 by 24 places cannot be represented in type 'int'
- avcodec/dss_sp: Fix multiple runtime error: signed integer overflow: -15699 * -164039 cannot be represented in type 'int'
- avcodec/dvbsubdec: check region dimensions
- avcodec/vp8dsp: Fixes: runtime error: signed integer overflow: 1330143360 - -1023040530 cannot be represented in type 'int'
- avcodec/hqxdsp: Fix multiple runtime error: signed integer overflow: 248220 * 21407 cannot be represented in type 'int' in idct_col()
- avcodec/cavsdec: Check sym_factor
- avcodec/cdxl: Check format for BGR24
- avcodec/ffv1dec: Fix copying planes of paletted formats
- avcodec/wmv2dsp: Fix runtime error: signed integer overflow: 181 * -12156865 cannot be represented in type 'int'
- avcodec/xwddec: Check bpp more completely
- avcodec/aacdec_template: Do not decode 2nd PCE if it will lead to failure
- avcodec/s302m: Fix left shift of 8 by 28 places cannot be represented in type 'int'
- avcodec/eamad: Fix runtime error: signed integer overflow: 49674 * 49858 cannot be represented in type 'int'
- avcodec/g726: Fix runtime error: left shift of negative value -2
- avcodec/magicyuv: Check len to be supported
- avcodec/ra144: Fix runtime error: left shift of negative value -798
- avcodec/mss34dsp: Fix multiple signed integer overflow
- avcodec/targa_y216dec: Fix width type
- avcodec/texturedsp: Fix multiple runtime error: left shift of 255 by 24 places cannot be represented in type 'int'
- avcodec/ivi_dsp: Fix multiple left shift of negative value -2
- avcodec/svq3: Fix multiple runtime error: signed integer overflow: 44161 * 61694 cannot be represented in type 'int'
- avcodec/msmpeg4dec: Correct table depth
- avcodec/dds: Fix runtime error: left shift of 1 by 31 places cannot be represented in type 'int'
- avcodec/cdxl: Check format parameter
- avutil/softfloat: Fix overflow in av_div_sf()
- avcodec/hq_hqa: Fix runtime error: left shift of negative value -207
- avcodec/mss3: Change types in rac_get_model_sym() to match the types they are initialized from
- avcodec/shorten: Check k in get_uint()
- avcodec/webp: Fix null pointer dereference
- avcodec/dfa: Fix signed integer overflow: -2147483648 - 1 cannot be represented in type 'int'
- avcodec/g723_1: Fix multiple runtime error: left shift of negative value
- avcodec/mimic: Fix runtime error: left shift of negative value -1
- avcodec/clearvideo: Fix multiple runtime error: left shift of negative value -1024
- avcodec/fic: Fix multiple left shift of negative value -15
- avcodec/mlpdec: Fix runtime error: left shift of negative value -22
- avcodec/snowdec: Check qbias
- avutil/softfloat: Fix multiple runtime error: left shift of negative value -8
- avcodec/aacsbr_template: Do not leave bs_num_env invalid
- avcodec/mdec: Fix signed integer overflow: 28835400 * 83 cannot be represented in type 'int'
- avcodec/dfa: Fix off by 1 error
- avcodec/nellymoser: Fix multiple left shift of negative value -8591
- avcodec/cdxl: Fix signed integer overflow: 14243456 * 164 cannot be represented in type 'int'
- avcodec/g722: Fix multiple runtime error: left shift of negative value -1
- avcodec/dss_sp: Fix multiple left shift of negative value -466
- avcodec/wnv1: Fix runtime error: left shift of negative value -1
- avcodec/tiertexseqv: set the fixed dimenasions, do not depend on the demuxer doing so
- avcodec/mjpegdec: Fix runtime error: signed integer overflow: -24543 * 2031616 cannot be represented in type 'int'
- avcodec/cavsdec: Fix undefined behavior from integer overflow
- avcodec/dvdsubdec: Fix runtime error: left shift of 242 by 24 places cannot be represented in type 'int'
- libavcodec/mpeg4videodec: Convert sprite_offset to 64bit
- avcodec/pngdec: Use ff_set_dimensions()
- avcodec/msvideo1: Check buffer size before re-getting the frame
- avcodec/h264_cavlc: Fix undefined behavior on qscale overflow
- avcodec/dcadsp: Fix runtime error: signed integer overflow
- avcodec/svq3: Reject dx/dy beyond 16bit
- avcodec/svq3: Increase offsets to prevent integer overflows
- avcodec/indeo2: Check remaining bits in ir2_decode_plane()
- avcodec/vp3: Check remaining bits in unpack_dct_coeffs()
- doc/developer: Add terse documentation of assumed C implementation defined behavior
- avcodec/bmp: Use ff_set_dimensions()
- avcodec/mdec: Fix runtime error: left shift of negative value -127
- avcodec/x86/vc1dsp_init: Fix build failure with --disable-optimizations and clang
- libavcodec/exr : fix float to uint16 conversion for negative float value
- avformat/webmdashenc: Validate the 'streams' adaptation sets parameter
- avformat/webmdashenc: Require the 'adaptation_sets' option to be set
- lavfi/avfiltergraph: only return EOF in avfilter_graph_request_oldest if all sinks EOFed
- ffmpeg: check for unconnected outputs
- avformat/utils: free AVStream.codec properly in free_stream()
- avcodec/options: do a more thorough clean up in avcodec_copy_context()
- avcodec/options: factorize avcodec_copy_context() cleanup code
- ffmpeg: count packets when queued
- avformat/concatdec: fix the h264 annexb extradata check
- avcodec/dnxhd_parser: fix parsing interlaced video, simplify code
- ffmpeg; check return code of avcodec_send_frame when flushing encoders
- avcodec/g723_1dec: Fix LCG type
- avcodec/hqxdsp: Fix runtime error: signed integer overflow: -196264 * 11585 cannot be represented in type 'int'
- avcodec/ac3dec: Fix: runtime error: index -1 out of bounds for type 'INTFLOAT [2]'
- avcodec/mpeg4videodec: Clear sprite wraping on unsupported cases in VOP decode
- avcodec/pixlet: Fixes: runtime error: signed integer overflow: 9203954323419769657 + 29897660706736950 cannot be represented in type 'long'
- avcodec/dds: Fix runtime error: left shift of 210 by 24 places cannot be represented in type 'int'
- avcodec/rscc: Check pixel_size for overflow
- avcodec/fmvc: Check nb_blocks
- avcodec/cllc: Check prefix
- avcodec/webp: Factor update_canvas_size() out
- avcodec/webp: Update canvas size in vp8_lossy_decode_frame() as in vp8_lossless_decode_frame()
- avcodec/snowdec: Check width
- avcodec/flacdec: Return error code instead of 0 for failures
- avcodec/opus_silk: Fix integer overflow and out of array read
- avcodec/aacps: Fix undefined behavior
- avcodec/pixlet: Fix shift exponent 4294967268 is too large for 32-bit type 'int'
- doc/general: fix project name after 2b1a6b1ae


version 3.3:
@@ -143,7 +502,6 @@ version 3.3:
- MPEG-7 Video Signature filter
- Removed asyncts filter (use af_aresample instead)
- Intel QSV-accelerated VP8 video decoding
- VAAPI-accelerated deinterlacing


version 3.2:
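One of the headline items in the version 4.0 changelog above is the set of bitstream filters for editing metadata in H.264, HEVC and MPEG-2 streams. As an illustrative sketch only (the command and option names below come from general FFmpeg 4.0 usage, not from this diff), such a filter can rewrite stream-level signalling without re-encoding:

    ffmpeg -i input.mp4 -c copy -bsf:v h264_metadata=video_full_range_flag=1 output.mp4

Here -c copy leaves the encoded video untouched while the h264_metadata bitstream filter patches the VUI signalling carried in the parameter sets.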
MAINTAINERS (45 changed lines)

@@ -29,6 +29,9 @@ ffplay:
ffprobe:
ffprobe.c Stefano Sabatini

ffserver:
ffserver.c Reynaldo H. Verdejo Pinochet

Commandline utility code:
cmdutils.c, cmdutils.h Michael Niedermayer

@@ -39,7 +42,7 @@ QuickTime faststart:
Miscellaneous Areas
===================

documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan, Gyan Doshi
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Nikolay Aleksandrov
presets Robert Swain
metadata subsystem Aurelien Jacobs
@@ -139,7 +142,6 @@ Codecs:
aacenc*, aaccoder.c Rostislav Pehlivanov
alacenc.c Jaikrishnan Menon
alsdec.c Thilo Borgmann, Umair Khan
aptx.c Aurelien Jacobs
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3plus* Maxim Poliakovski
@@ -156,11 +158,9 @@ Codecs:
cpia.c Stephan Hilb
crystalhd.c Philip Langdale
cscd.c Reimar Doeffinger
cuviddec.c Timo Rothenpieler
dca* foo86
cuvid.c Timo Rothenpieler
dirac* Rostislav Pehlivanov
dnxhd* Baptiste Coudurier
dolby_e* foo86
dpcm.c Mike Melanson
dss_sp.c Oleksij Rempel
dv.c Roman Shaposhnik
@@ -168,7 +168,6 @@ Codecs:
eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann
exr.c Martin Vignali
ffv1* Michael Niedermayer
ffwavesynth.c Nicolas George
fifo.c Jan Sebechlebsky
@@ -188,12 +187,12 @@ Codecs:
jvdec.c Peter Ross
lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George
libcodec2.c Tomas Härdin
libdirac* David Conrad
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw
libschroedinger* David Conrad
libtheoraenc.c David Conrad
libvorbis.c David Conrad
libvpx* James Zern
@@ -212,7 +211,7 @@ Codecs:
msrle.c Mike Melanson
msvideo1.c Mike Melanson
nuv.c Reimar Doeffinger
nvdec*, nvenc* Timo Rothenpieler
nvenc* Timo Rothenpieler
opus* Rostislav Pehlivanov
paf.* Paul B Mahol
pcx.c Ivo van Poorten
@@ -233,7 +232,6 @@ Codecs:
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
speedhq.c Steinar H. Gunderson
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
svq3.c Michael Niedermayer
@@ -242,10 +240,10 @@ Codecs:
tta.c Alex Beregszaszi, Jaikrishnan Menon
ttaenc.c Paul B Mahol
txd.c Ivo van Poorten
v4l2_* Jorge Ramirez-Ortiz
vc2* Rostislav Pehlivanov
vcr1.c Michael Niedermayer
videotoolboxenc.c Rick Kern, Aman Gupta
vda_h264_dec.c Xidorn Quan
videotoolboxenc.c Rick Kern
vima.c Paul B Mahol
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
@@ -268,11 +266,11 @@ Hardware acceleration:
crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar, Steve Lhomme
d3d11va* Steve Lhomme
mediacodec* Matthieu Bouron, Aman Gupta
mediacodec* Matthieu Bouron
vaapi* Gwenole Beauchesne
vaapi_encode* Mark Thompson
vdpau* Philip Langdale, Carl Eugen Hoyos
videotoolbox* Rick Kern, Aman Gupta
videotoolbox* Rick Kern


libavdevice
@@ -282,8 +280,7 @@ libavdevice


avfoundation.m Thilo Borgmann
android_camera.c Felix Matouschek
decklink* Marton Balint
decklink* Deti Fliegl
dshow.c Roger Pack (CC rogerdpack@gmail.com)
fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
@@ -292,6 +289,7 @@ libavdevice
libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini
sdl2.c Josh de Kock
v4l2.c Giorgio Vazzana
@@ -330,7 +328,6 @@ Filters:
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_bwdif Thomas Mundt (CC <thomas.mundt@hr.de>)
vf_chromakey.c Timo Rothenpieler
vf_colorchannelmixer.c Paul B Mahol
vf_colorbalance.c Paul B Mahol
@@ -346,7 +343,6 @@ Filters:
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_(t)interlace Thomas Mundt (CC <thomas.mundt@hr.de>)
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol
vf_mestimate.c Davinder Singh
@@ -396,13 +392,9 @@ Muxers/Demuxers:
brstm.c Paul B Mahol
caf* Peter Ross
cdxl.c Paul B Mahol
codec2.c Tomas Härdin
crc.c Michael Niedermayer
dashdec.c Steven Liu
dashenc.c Karthick Jeyapal
daud.c Reimar Doeffinger
dss.c Oleksij Rempel
dtsdec.c foo86
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
electronicarts.c Peter Ross
@@ -414,7 +406,7 @@ Muxers/Demuxers:
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula
hlsenc.c Christian Suloway, Steven Liu
hls encryption (hlsenc.c) Christian Suloway, Steven Liu
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
@@ -424,6 +416,7 @@ Muxers/Demuxers:
iss.c Stefan Gehrer
jvdec.c Peter Ross
libmodplug.c Clément Bœsch
libnut.c Oded Shimon
libopenmpt.c Josh de Kock
lmlm4.c Ivo van Poorten
lvfdec.c Paul B Mahol
@@ -446,6 +439,7 @@ Muxers/Demuxers:
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
@@ -474,7 +468,6 @@ Muxers/Demuxers:
rtpdec_vc2hq.*, rtpenc_vc2hq.* Thomas Volkert
rtpdec_vp9.c Thomas Volkert
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
s337m.c foo86
sbgdec.c Nicolas George
sdp.c Martin Storsjo
segafilm.c Mike Melanson
@@ -524,7 +517,7 @@ Operating systems / CPU architectures
=====================================

Alpha Falk Hueffner
MIPS Manojkumar Bhosale
MIPS Nedeljko Babic
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Windows MinGW Alex Beregszaszi, Ramiro Polla
@@ -550,9 +543,7 @@ Ganesh Ajjanagadde
Henrik Gramner
Ivan Uskov
James Darnley
Jan Ekström
Joakim Plate
Jun Zhao
Kieran Kunhya
Kirill Gavrilov
Martin Storsjö
@@ -591,7 +582,6 @@ FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Ganesh Ajjanagadde C96A 848E 97C3 CEA2 AB72 5CE4 45F9 6A2D 3C36 FB1B
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
James Almer 7751 2E8C FD94 A169 57E6 9A7A 1463 01AD 7376 59E0
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lou Logan 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
@@ -607,7 +597,6 @@ Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Steinar H. Gunderson C2E9 004F F028 C18E 4EAD DB83 7F61 7561 7797 8F76
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
139
Makefile
@@ -1,5 +1,5 @@
|
||||
MAIN_MAKEFILE=1
|
||||
include ffbuild/config.mak
|
||||
include config.mak
|
||||
|
||||
vpath %.c $(SRC_PATH)
|
||||
vpath %.cpp $(SRC_PATH)
|
||||
@@ -11,12 +11,40 @@ vpath %.asm $(SRC_PATH)
|
||||
vpath %.rc $(SRC_PATH)
|
||||
vpath %.v $(SRC_PATH)
|
||||
vpath %.texi $(SRC_PATH)
|
||||
vpath %.cu $(SRC_PATH)
|
||||
vpath %.ptx $(SRC_PATH)
|
||||
vpath %/fate_config.sh.template $(SRC_PATH)
|
||||
|
||||
AVPROGS-$(CONFIG_FFMPEG) += ffmpeg
|
||||
AVPROGS-$(CONFIG_FFPLAY) += ffplay
|
||||
AVPROGS-$(CONFIG_FFPROBE) += ffprobe
|
||||
AVPROGS-$(CONFIG_FFSERVER) += ffserver
|
||||
|
||||
AVPROGS := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
|
||||
INSTPROGS = $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
|
||||
PROGS += $(AVPROGS)
|
||||
|
||||
AVBASENAMES = ffmpeg ffplay ffprobe ffserver
|
||||
ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
|
||||
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
|
||||
|
||||
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
|
||||
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))
|
||||
|
||||
OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
|
||||
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += ffmpeg_videotoolbox.o
|
||||
OBJS-ffmpeg-$(CONFIG_LIBMFX) += ffmpeg_qsv.o
|
||||
OBJS-ffmpeg-$(CONFIG_VAAPI) += ffmpeg_vaapi.o
|
||||
ifndef CONFIG_VIDEOTOOLBOX
|
||||
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_videotoolbox.o
|
||||
endif
|
||||
OBJS-ffmpeg-$(CONFIG_CUVID) += ffmpeg_cuvid.o
|
||||
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
|
||||
OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
|
||||
OBJS-ffserver += ffserver_config.o
|
||||
|
||||
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64 audiomatch
|
||||
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
|
||||
TOOLS = qt-faststart trasher uncoded_frame
|
||||
TOOLS-$(CONFIG_ZLIB) += cws2fws
|
||||
|
||||
# $(FFLIBS-yes) needs to be in linking order
|
||||
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
|
||||
@@ -31,45 +59,41 @@ FFLIBS-$(CONFIG_SWSCALE) += swscale
|
||||
FFLIBS := avutil
|
||||
|
||||
DATA_FILES := $(wildcard $(SRC_PATH)/presets/*.ffpreset) $(SRC_PATH)/doc/ffprobe.xsd
|
||||
EXAMPLES_FILES := $(wildcard $(SRC_PATH)/doc/examples/*.c) $(SRC_PATH)/doc/examples/Makefile $(SRC_PATH)/doc/examples/README
|
||||
|
||||
SKIPHEADERS = compat/w32pthreads.h
|
||||
SKIPHEADERS = cmdutils_common_opts.h \
|
||||
compat/w32pthreads.h
|
||||
|
||||
# first so "all" becomes default target
|
||||
all: all-yes
|
||||
|
||||
include $(SRC_PATH)/tools/Makefile
|
||||
include $(SRC_PATH)/ffbuild/common.mak
|
||||
include $(SRC_PATH)/common.mak
|
||||
|
||||
FF_EXTRALIBS := $(FFEXTRALIBS)
|
||||
FF_DEP_LIBS := $(DEP_LIBS)
|
||||
FF_STATIC_DEP_LIBS := $(STATIC_DEP_LIBS)
|
||||
|
||||
$(TOOLS): %$(EXESUF): %.o
|
||||
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(EXTRALIBS-$(*F)) $(EXTRALIBS) $(ELIBS)
|
||||
all: $(AVPROGS)
|
||||
|
||||
target_dec_%_fuzzer$(EXESUF): target_dec_%_fuzzer.o $(FF_DEP_LIBS)
|
||||
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(FF_EXTRALIBS) $(LIBFUZZER_PATH)
|
||||
$(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
|
||||
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS)
|
||||
|
||||
tools/sofa2wavs$(EXESUF): ELIBS = $(FF_EXTRALIBS)
|
||||
tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
|
||||
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
|
||||
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)
|
||||
tools/target_dec_%_fuzzer$(EXESUF): $(FF_DEP_LIBS)
|
||||
|
||||
CONFIGURABLE_COMPONENTS = \
|
||||
$(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c)) \
|
||||
$(SRC_PATH)/libavcodec/bitstream_filters.c \
|
||||
$(SRC_PATH)/libavformat/protocols.c \
|
||||
|
||||
config.h: ffbuild/.config
|
||||
ffbuild/.config: $(CONFIGURABLE_COMPONENTS)
|
||||
config.h: .config
|
||||
.config: $(CONFIGURABLE_COMPONENTS)
|
||||
@-tput bold 2>/dev/null
|
||||
@-printf '\nWARNING: $(?) newer than config.h, rerun configure\n\n'
|
||||
@-printf '\nWARNING: $(?F) newer than config.h, rerun configure\n\n'
|
||||
@-tput sgr0 2>/dev/null
|
||||
|
||||
SUBDIR_VARS := CLEANFILES FFLIBS HOSTPROGS TESTPROGS TOOLS \
|
||||
SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
|
||||
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
|
||||
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
|
||||
ALTIVEC-OBJS VSX-OBJS MMX-OBJS X86ASM-OBJS \
|
||||
ALTIVEC-OBJS VSX-OBJS MMX-OBJS YASM-OBJS \
|
||||
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSP-OBJS MSA-OBJS \
|
||||
MMI-OBJS OBJS SLIBOBJS HOSTOBJS TESTOBJS
|
||||
|
||||
@@ -84,32 +108,41 @@ SUBDIR := $(1)/
|
||||
include $(SRC_PATH)/$(1)/Makefile
|
||||
-include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
|
||||
-include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
|
||||
include $(SRC_PATH)/ffbuild/library.mak
|
||||
include $(SRC_PATH)/library.mak
|
||||
endef
|
||||
|
||||
$(foreach D,$(FFLIBS),$(eval $(call DOSUBDIR,lib$(D))))
|
||||
|
||||
include $(SRC_PATH)/fftools/Makefile
|
||||
include $(SRC_PATH)/doc/Makefile
|
||||
include $(SRC_PATH)/doc/examples/Makefile
|
||||
|
||||
libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h
|
||||
define DOPROG
|
||||
OBJS-$(1) += $(1).o $(EXEOBJS) $(OBJS-$(1)-yes)
|
||||
$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
|
||||
$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1))
|
||||
$(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1))
|
||||
$(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(LIBS-$(1))
|
||||
-include $$(OBJS-$(1):.o=.d)
|
||||
endef
|
||||
|
||||
$(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))
|
||||
|
||||
ffprobe.o cmdutils.o libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h
|
||||
|
||||
$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
|
||||
ifeq ($(STRIPTYPE),direct)
|
||||
$(STRIP) -o $@ $<
|
||||
else
|
||||
$(CP) $< $@
|
||||
$(STRIP) $@
|
||||
endif
|
||||
|
||||
%$(PROGSSUF)_g$(EXESUF): $(FF_DEP_LIBS)
|
||||
%$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
|
||||
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
|
||||
|
||||
VERSION_SH = $(SRC_PATH)/ffbuild/version.sh
|
||||
OBJDIRS += tools
|
||||
|
||||
-include $(wildcard tools/*.d)
|
||||
|
||||
VERSION_SH = $(SRC_PATH)/version.sh
|
||||
GIT_LOG = $(SRC_PATH)/.git/logs/HEAD
|
||||
|
||||
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) ffbuild/config.mak
|
||||
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) config.mak
|
||||
.version: M=@
|
||||
|
||||
libavutil/ffversion.h .version:
|
||||
@@ -119,32 +152,47 @@ libavutil/ffversion.h .version:
|
||||
# force version.sh to run whenever version might have changed
|
||||
-include .version
|
||||
|
||||
ifdef AVPROGS
|
||||
install: install-progs install-data
|
||||
endif
|
||||
|
||||
install: install-libs install-headers
|
||||
|
||||
install-libs: install-libs-yes
|
||||
|
||||
install-data: $(DATA_FILES)
|
||||
$(Q)mkdir -p "$(DATADIR)"
|
||||
$(INSTALL) -m 644 $(DATA_FILES) "$(DATADIR)"
|
||||
install-progs-yes:
|
||||
install-progs-$(CONFIG_SHARED): install-libs
|
||||
|
||||
uninstall: uninstall-data uninstall-headers uninstall-libs uninstall-pkgconfig
|
||||
install-progs: install-progs-yes $(AVPROGS)
|
||||
$(Q)mkdir -p "$(BINDIR)"
|
||||
$(INSTALL) -c -m 755 $(INSTPROGS) "$(BINDIR)"
|
||||
|
||||
install-data: $(DATA_FILES) $(EXAMPLES_FILES)
|
||||
$(Q)mkdir -p "$(DATADIR)/examples"
|
||||
$(INSTALL) -m 644 $(DATA_FILES) "$(DATADIR)"
|
||||
$(INSTALL) -m 644 $(EXAMPLES_FILES) "$(DATADIR)/examples"
|
||||
|
||||
uninstall: uninstall-libs uninstall-headers uninstall-progs uninstall-data
|
||||
|
||||
uninstall-progs:
|
||||
$(RM) $(addprefix "$(BINDIR)/", $(ALLAVPROGS))
|
||||
|
||||
uninstall-data:
|
||||
$(RM) -r "$(DATADIR)"
|
||||
|
||||
clean::
|
||||
$(RM) $(ALLAVPROGS) $(ALLAVPROGS_G)
|
||||
$(RM) $(CLEANSUFFIXES)
|
||||
$(RM) $(addprefix compat/,$(CLEANSUFFIXES)) $(addprefix compat/*/,$(CLEANSUFFIXES))
|
||||
$(RM) $(CLEANSUFFIXES:%=tools/%)
|
||||
$(RM) $(CLEANSUFFIXES:%=compat/msvcrt/%)
|
||||
$(RM) $(CLEANSUFFIXES:%=compat/atomics/pthread/%)
|
||||
$(RM) $(CLEANSUFFIXES:%=compat/%)
|
||||
$(RM) -r coverage-html
|
||||
$(RM) -rf coverage.info coverage.info.in lcov
|
||||
|
||||
distclean:: clean
|
||||
$(RM) .version avversion.h config.asm config.h mapfile \
|
||||
ffbuild/.config ffbuild/config.* libavutil/avconfig.h \
|
||||
version.h libavutil/ffversion.h libavcodec/codec_names.h \
|
||||
libavcodec/bsf_list.c libavformat/protocol_list.c \
|
||||
libavcodec/codec_list.c libavcodec/parser_list.c \
|
||||
libavformat/muxer_list.c libavformat/demuxer_list.c
|
||||
distclean::
|
||||
$(RM) $(DISTCLEANSUFFIXES)
|
||||
$(RM) config.* .config libavutil/avconfig.h .version mapfile avversion.h version.h libavutil/ffversion.h libavcodec/codec_names.h libavcodec/bsf_list.c libavformat/protocol_list.c
|
||||
ifeq ($(SRC_LINK),src)
|
||||
$(RM) src
|
||||
endif
|
||||
@@ -153,7 +201,6 @@ endif
|
||||
config:
|
||||
$(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION)
|
||||
|
||||
build: all alltools examples testprogs
|
||||
check: all alltools examples testprogs fate
|
||||
|
||||
include $(SRC_PATH)/tests/Makefile
|
||||
@@ -169,5 +216,5 @@ $(sort $(OBJDIRS)):
|
||||
# so this saves some time on slow systems.
|
||||
.SUFFIXES:
|
||||
|
||||
.PHONY: all all-yes alltools build check config testprogs
|
||||
.PHONY: *clean install* uninstall*
|
||||
.PHONY: all all-yes alltools check *clean config install*
|
||||
.PHONY: testprogs uninstall*
|
||||
|
||||
@@ -21,6 +21,8 @@ such as audio, video, subtitles and related metadata.
* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
  multimedia content.
* [ffserver](https://ffmpeg.org/ffserver.html) is a multimedia streaming server
  for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.

## Documentation

@@ -1,13 +1,13 @@

┌───────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 4.0 "Wu" │
└───────────────────────────────────┘
┌────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 3.3 "Hilbert" │
└────────────────────────────────────────┘

The FFmpeg Project proudly presents FFmpeg 4.0 "Wu", about 6
months after the release of FFmpeg 3.4.
The FFmpeg Project proudly presents FFmpeg 3.3 "Hilbert", about 5
months after the release of FFmpeg 3.2.

A complete Changelog is available at the root of the project, and the
complete Git history on https://git.ffmpeg.org/gitweb/ffmpeg.git
complete Git history on http://source.ffmpeg.org.

We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,

@@ -14,4 +14,4 @@ OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
|
||||
OBJS-$(HAVE_VSX) += $(VSX-OBJS) $(VSX-OBJS-yes)
|
||||
|
||||
OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
|
||||
OBJS-$(HAVE_X86ASM) += $(X86ASM-OBJS) $(X86ASM-OBJS-yes)
|
||||
OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)
|
||||
@@ -38,7 +38,6 @@
|
||||
#include "libswscale/swscale.h"
|
||||
#include "libswresample/swresample.h"
|
||||
#include "libpostproc/postprocess.h"
|
||||
#include "libavutil/attributes.h"
|
||||
#include "libavutil/avassert.h"
|
||||
#include "libavutil/avstring.h"
|
||||
#include "libavutil/bprint.h"
|
||||
@@ -881,54 +880,28 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
|
||||
{ "debug" , AV_LOG_DEBUG },
|
||||
{ "trace" , AV_LOG_TRACE },
|
||||
};
|
||||
const char *token;
|
||||
char *tail;
|
||||
int flags = av_log_get_flags();
|
||||
int level = av_log_get_level();
|
||||
int cmd, i = 0;
|
||||
int level;
|
||||
int flags;
|
||||
int i;
|
||||
|
||||
av_assert0(arg);
|
||||
while (*arg) {
|
||||
token = arg;
|
||||
if (*token == '+' || *token == '-') {
|
||||
cmd = *token++;
|
||||
} else {
|
||||
cmd = 0;
|
||||
}
|
||||
if (!i && !cmd) {
|
||||
flags = 0; /* missing relative prefix, build absolute value */
|
||||
}
|
||||
if (!strncmp(token, "repeat", 6)) {
|
||||
if (cmd == '-') {
|
||||
flags |= AV_LOG_SKIP_REPEATED;
|
||||
} else {
|
||||
flags &= ~AV_LOG_SKIP_REPEATED;
|
||||
}
|
||||
arg = token + 6;
|
||||
} else if (!strncmp(token, "level", 5)) {
|
||||
if (cmd == '-') {
|
||||
flags &= ~AV_LOG_PRINT_LEVEL;
|
||||
} else {
|
||||
flags |= AV_LOG_PRINT_LEVEL;
|
||||
}
|
||||
arg = token + 5;
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
i++;
|
||||
}
|
||||
if (!*arg) {
|
||||
goto end;
|
||||
} else if (*arg == '+') {
|
||||
arg++;
|
||||
} else if (!i) {
|
||||
flags = av_log_get_flags(); /* level value without prefix, reset flags */
|
||||
}
|
||||
flags = av_log_get_flags();
|
||||
tail = strstr(arg, "repeat");
|
||||
if (tail)
|
||||
flags &= ~AV_LOG_SKIP_REPEATED;
|
||||
else
|
||||
flags |= AV_LOG_SKIP_REPEATED;
|
||||
|
||||
av_log_set_flags(flags);
|
||||
if (tail == arg)
|
||||
arg += 6 + (arg[6]=='+');
|
||||
if(tail && !*arg)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < FF_ARRAY_ELEMS(log_levels); i++) {
|
||||
if (!strcmp(log_levels[i].name, arg)) {
|
||||
level = log_levels[i].level;
|
||||
goto end;
|
||||
av_log_set_level(log_levels[i].level);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -940,9 +913,6 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
|
||||
av_log(NULL, AV_LOG_FATAL, "\"%s\"\n", log_levels[i].name);
|
||||
exit_program(1);
|
||||
}
|
||||
|
||||
end:
|
||||
av_log_set_flags(flags);
|
||||
av_log_set_level(level);
|
||||
return 0;
|
||||
}
|
||||
@@ -1288,10 +1258,8 @@ static int is_device(const AVClass *avclass)
|
||||
|
||||
static int show_formats_devices(void *optctx, const char *opt, const char *arg, int device_only, int muxdemuxers)
|
||||
{
|
||||
void *ifmt_opaque = NULL;
|
||||
const AVInputFormat *ifmt = NULL;
|
||||
void *ofmt_opaque = NULL;
|
||||
const AVOutputFormat *ofmt = NULL;
|
||||
AVInputFormat *ifmt = NULL;
|
||||
AVOutputFormat *ofmt = NULL;
|
||||
const char *last_name;
|
||||
int is_dev;
|
||||
|
||||
@@ -1307,8 +1275,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
const char *long_name = NULL;
|
||||
|
||||
if (muxdemuxers !=SHOW_DEMUXERS) {
|
||||
ofmt_opaque = NULL;
|
||||
while ((ofmt = av_muxer_iterate(&ofmt_opaque))) {
|
||||
while ((ofmt = av_oformat_next(ofmt))) {
|
||||
is_dev = is_device(ofmt->priv_class);
|
||||
if (!is_dev && device_only)
|
||||
continue;
|
||||
@@ -1321,8 +1288,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
}
|
||||
}
|
||||
if (muxdemuxers != SHOW_MUXERS) {
|
||||
ifmt_opaque = NULL;
|
||||
while ((ifmt = av_demuxer_iterate(&ifmt_opaque))) {
|
||||
while ((ifmt = av_iformat_next(ifmt))) {
|
||||
is_dev = is_device(ifmt->priv_class);
|
||||
if (!is_dev && device_only)
|
||||
continue;
|
||||
@@ -1636,7 +1602,7 @@ int show_bsfs(void *optctx, const char *opt, const char *arg)
|
||||
void *opaque = NULL;
|
||||
|
||||
printf("Bitstream filters:\n");
|
||||
while ((bsf = av_bsf_iterate(&opaque)))
|
||||
while ((bsf = av_bsf_next(&opaque)))
|
||||
printf("%s\n", bsf->name);
|
||||
printf("\n");
|
||||
return 0;
|
||||
@@ -1662,7 +1628,6 @@ int show_filters(void *optctx, const char *opt, const char *arg)
|
||||
#if CONFIG_AVFILTER
|
||||
const AVFilter *filter = NULL;
|
||||
char descr[64], *descr_cur;
|
||||
void *opaque = NULL;
|
||||
int i, j;
|
||||
const AVFilterPad *pad;
|
||||
|
||||
@@ -1674,7 +1639,7 @@ int show_filters(void *optctx, const char *opt, const char *arg)
|
||||
" V = Video input/output\n"
|
||||
" N = Dynamic number and/or type of input/output\n"
|
||||
" | = Source or sink filter\n");
|
||||
while ((filter = av_filter_iterate(&opaque))) {
|
||||
while ((filter = avfilter_next(filter))) {
|
||||
descr_cur = descr;
|
||||
for (i = 0; i < 2; i++) {
|
||||
if (i) {
|
||||
@@ -1737,7 +1702,7 @@ int show_pix_fmts(void *optctx, const char *opt, const char *arg)
|
||||
#endif
|
||||
|
||||
while ((pix_desc = av_pix_fmt_desc_next(pix_desc))) {
|
||||
enum AVPixelFormat av_unused pix_fmt = av_pix_fmt_desc_get_id(pix_desc);
|
||||
enum AVPixelFormat pix_fmt = av_pix_fmt_desc_get_id(pix_desc);
|
||||
printf("%c%c%c%c%c %-16s %d %2d\n",
|
||||
sws_isSupportedInput (pix_fmt) ? 'I' : '.',
|
||||
sws_isSupportedOutput(pix_fmt) ? 'O' : '.',
|
||||
@@ -1931,22 +1896,6 @@ static void show_help_filter(const char *name)
|
||||
}
|
||||
#endif
|
||||
|
||||
static void show_help_bsf(const char *name)
|
||||
{
|
||||
const AVBitStreamFilter *bsf = av_bsf_get_by_name(name);
|
||||
|
||||
if (!bsf) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Unknown bit stream filter '%s'.\n", name);
|
||||
return;
|
||||
}
|
||||
|
||||
printf("Bit stream filter %s\n", bsf->name);
|
||||
PRINT_CODEC_SUPPORTED(bsf, codec_ids, enum AVCodecID, "codecs",
|
||||
AV_CODEC_ID_NONE, GET_CODEC_NAME);
|
||||
if (bsf->priv_class)
|
||||
show_help_children(bsf->priv_class, AV_OPT_FLAG_BSF_PARAM);
|
||||
}
|
||||
|
||||
int show_help(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
char *topic, *par;
|
||||
@@ -1973,8 +1922,6 @@ int show_help(void *optctx, const char *opt, const char *arg)
|
||||
} else if (!strcmp(topic, "filter")) {
|
||||
show_help_filter(par);
|
||||
#endif
|
||||
} else if (!strcmp(topic, "bsf")) {
|
||||
show_help_bsf(par);
|
||||
} else {
|
||||
show_help_default(topic, par);
|
||||
}
|
||||
@@ -19,8 +19,8 @@
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#ifndef FFTOOLS_CMDUTILS_H
|
||||
#define FFTOOLS_CMDUTILS_H
|
||||
#ifndef CMDUTILS_H
|
||||
#define CMDUTILS_H
|
||||
|
||||
#include <stdint.h>
|
||||
|
||||
@@ -105,6 +105,12 @@ int opt_max_alloc(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
int opt_codec_debug(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
#if CONFIG_OPENCL
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
int opt_opencl_bench(void *optctx, const char *opt, const char *arg);
|
||||
#endif
|
||||
|
||||
/**
|
||||
* Limit the execution time.
|
||||
*/
|
||||
@@ -149,7 +155,6 @@ typedef struct SpecifierOpt {
|
||||
uint8_t *str;
|
||||
int i;
|
||||
int64_t i64;
|
||||
uint64_t ui64;
|
||||
float f;
|
||||
double dbl;
|
||||
} u;
|
||||
@@ -201,47 +206,6 @@ typedef struct OptionDef {
|
||||
void show_help_options(const OptionDef *options, const char *msg, int req_flags,
|
||||
int rej_flags, int alt_flags);
|
||||
|
||||
#if CONFIG_AVDEVICE
|
||||
#define CMDUTILS_COMMON_OPTIONS_AVDEVICE \
|
||||
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources }, \
|
||||
"list sources of the input device", "device" }, \
|
||||
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks }, \
|
||||
"list sinks of the output device", "device" }, \
|
||||
|
||||
#else
|
||||
#define CMDUTILS_COMMON_OPTIONS_AVDEVICE
|
||||
#endif
|
||||
|
||||
#define CMDUTILS_COMMON_OPTIONS \
|
||||
{ "L", OPT_EXIT, { .func_arg = show_license }, "show license" }, \
|
||||
{ "h", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
|
||||
{ "?", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
|
||||
{ "help", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
|
||||
{ "-help", OPT_EXIT, { .func_arg = show_help }, "show help", "topic" }, \
|
||||
{ "version", OPT_EXIT, { .func_arg = show_version }, "show version" }, \
|
||||
{ "buildconf", OPT_EXIT, { .func_arg = show_buildconf }, "show build configuration" }, \
|
||||
{ "formats", OPT_EXIT, { .func_arg = show_formats }, "show available formats" }, \
|
||||
{ "muxers", OPT_EXIT, { .func_arg = show_muxers }, "show available muxers" }, \
|
||||
{ "demuxers", OPT_EXIT, { .func_arg = show_demuxers }, "show available demuxers" }, \
|
||||
{ "devices", OPT_EXIT, { .func_arg = show_devices }, "show available devices" }, \
|
||||
{ "codecs", OPT_EXIT, { .func_arg = show_codecs }, "show available codecs" }, \
|
||||
{ "decoders", OPT_EXIT, { .func_arg = show_decoders }, "show available decoders" }, \
|
||||
{ "encoders", OPT_EXIT, { .func_arg = show_encoders }, "show available encoders" }, \
|
||||
{ "bsfs", OPT_EXIT, { .func_arg = show_bsfs }, "show available bit stream filters" }, \
|
||||
{ "protocols", OPT_EXIT, { .func_arg = show_protocols }, "show available protocols" }, \
|
||||
{ "filters", OPT_EXIT, { .func_arg = show_filters }, "show available filters" }, \
|
||||
{ "pix_fmts", OPT_EXIT, { .func_arg = show_pix_fmts }, "show available pixel formats" }, \
|
||||
{ "layouts", OPT_EXIT, { .func_arg = show_layouts }, "show standard channel layouts" }, \
|
||||
{ "sample_fmts", OPT_EXIT, { .func_arg = show_sample_fmts }, "show available audio sample formats" }, \
|
||||
{ "colors", OPT_EXIT, { .func_arg = show_colors }, "show available color names" }, \
|
||||
{ "loglevel", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
|
||||
{ "v", HAS_ARG, { .func_arg = opt_loglevel }, "set logging level", "loglevel" }, \
|
||||
{ "report", 0, { (void*)opt_report }, "generate a report" }, \
|
||||
{ "max_alloc", HAS_ARG, { .func_arg = opt_max_alloc }, "set maximum size of a single allocated block", "bytes" }, \
|
||||
{ "cpuflags", HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" }, \
|
||||
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" }, \
|
||||
CMDUTILS_COMMON_OPTIONS_AVDEVICE \
|
||||
|
||||
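The CMDUTILS_COMMON_OPTIONS block above is a brace-initialized table: each entry names an option, its flags, a handler stored in the `.func_arg` union member, and help text. The sketch below shows the same table-driven dispatch pattern in a self-contained form; the `DemoOption` struct and the OPT_EXIT/HAS_ARG values are simplified hypothetical stand-ins, not the real fftools OptionDef definitions.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified flags and option struct; they only mirror the
 * shape of the real OptionDef table, not its definition. */
#define OPT_EXIT 1
#define HAS_ARG  2

typedef struct DemoOption {
    const char *name;
    int         flags;
    int       (*func_arg)(const char *arg);
    const char *help;
} DemoOption;

static int show_version(const char *arg) { (void)arg; puts("demo 0.1"); return 0; }
static int set_loglevel(const char *arg) { printf("loglevel=%s\n", arg ? arg : "(missing)"); return 0; }

static const DemoOption options[] = {
    { "version",  OPT_EXIT, show_version, "show version"      },
    { "loglevel", HAS_ARG,  set_loglevel, "set logging level" },
    { NULL, 0, NULL, NULL },
};

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        const char *name = argv[i] + (argv[i][0] == '-');   /* tolerate a leading '-' */
        for (const DemoOption *o = options; o->name; o++) {
            if (strcmp(o->name, name))
                continue;
            /* Options flagged HAS_ARG consume the next argv entry. */
            o->func_arg((o->flags & HAS_ARG) && i + 1 < argc ? argv[++i] : NULL);
            if (o->flags & OPT_EXIT)
                return 0;
            break;
        }
    }
    return 0;
}
```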
/**
|
||||
* Show help for all options with given flags in class and all its
|
||||
* children.
|
||||
@@ -625,9 +589,6 @@ void *grow_array(void *array, int elem_size, int *size, int new_size);
|
||||
#define GET_PIX_FMT_NAME(pix_fmt)\
|
||||
const char *name = av_get_pix_fmt_name(pix_fmt);
|
||||
|
||||
#define GET_CODEC_NAME(id)\
|
||||
const char *name = avcodec_descriptor_get(id)->name;
|
||||
|
||||
#define GET_SAMPLE_FMT_NAME(sample_fmt)\
|
||||
const char *name = av_get_sample_fmt_name(sample_fmt)
|
||||
|
||||
@@ -645,4 +606,4 @@ void *grow_array(void *array, int elem_size, int *size, int new_size);
|
||||
|
||||
double get_rotation(AVStream *st);
|
||||
|
||||
#endif /* FFTOOLS_CMDUTILS_H */
|
||||
#endif /* CMDUTILS_H */
|
||||
37
cmdutils_common_opts.h
Normal file
@@ -0,0 +1,37 @@
|
||||
{ "L" , OPT_EXIT, {.func_arg = show_license}, "show license" },
|
||||
{ "h" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "?" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "-help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
|
||||
{ "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
|
||||
{ "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
|
||||
{ "muxers" , OPT_EXIT, {.func_arg = show_muxers }, "show available muxers" },
|
||||
{ "demuxers" , OPT_EXIT, {.func_arg = show_demuxers }, "show available demuxers" },
|
||||
{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
|
||||
{ "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
|
||||
{ "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
|
||||
{ "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
|
||||
{ "bsfs" , OPT_EXIT, {.func_arg = show_bsfs }, "show available bit stream filters" },
|
||||
{ "protocols" , OPT_EXIT, {.func_arg = show_protocols}, "show available protocols" },
|
||||
{ "filters" , OPT_EXIT, {.func_arg = show_filters }, "show available filters" },
|
||||
{ "pix_fmts" , OPT_EXIT, {.func_arg = show_pix_fmts }, "show available pixel formats" },
|
||||
{ "layouts" , OPT_EXIT, {.func_arg = show_layouts }, "show standard channel layouts" },
|
||||
{ "sample_fmts", OPT_EXIT, {.func_arg = show_sample_fmts }, "show available audio sample formats" },
|
||||
{ "colors" , OPT_EXIT, {.func_arg = show_colors }, "show available color names" },
|
||||
{ "loglevel" , HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
|
||||
{ "v", HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
|
||||
{ "report" , 0, {(void*)opt_report}, "generate a report" },
|
||||
{ "max_alloc" , HAS_ARG, {.func_arg = opt_max_alloc}, "set maximum size of a single allocated block", "bytes" },
|
||||
{ "cpuflags" , HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" },
|
||||
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" },
|
||||
#if CONFIG_OPENCL
|
||||
{ "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
|
||||
{ "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
|
||||
#endif
|
||||
#if CONFIG_AVDEVICE
|
||||
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources },
|
||||
"list sources of the input device", "device" },
|
||||
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks },
|
||||
"list sinks of the output device", "device" },
|
||||
#endif
|
||||
283
cmdutils_opencl.c
Normal file
@@ -0,0 +1,283 @@
|
||||
/*
|
||||
* Copyright (C) 2013 Lenny Wang
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/time.h"
|
||||
#include "libavutil/log.h"
|
||||
#include "libavutil/opencl.h"
|
||||
#include "libavutil/avstring.h"
|
||||
#include "cmdutils.h"
|
||||
|
||||
typedef struct {
|
||||
int platform_idx;
|
||||
int device_idx;
|
||||
char device_name[64];
|
||||
int64_t runtime;
|
||||
} OpenCLDeviceBenchmark;
|
||||
|
||||
const char *ocl_bench_source = AV_OPENCL_KERNEL(
|
||||
inline unsigned char clip_uint8(int a)
|
||||
{
|
||||
if (a & (~0xFF))
|
||||
return (-a)>>31;
|
||||
else
|
||||
return a;
|
||||
}
|
||||
|
||||
kernel void unsharp_bench(
|
||||
global unsigned char *src,
|
||||
global unsigned char *dst,
|
||||
global int *mask,
|
||||
int width,
|
||||
int height)
|
||||
{
|
||||
int i, j, local_idx, lc_idx, sum = 0;
|
||||
int2 thread_idx, block_idx, global_idx, lm_idx;
|
||||
thread_idx.x = get_local_id(0);
|
||||
thread_idx.y = get_local_id(1);
|
||||
block_idx.x = get_group_id(0);
|
||||
block_idx.y = get_group_id(1);
|
||||
global_idx.x = get_global_id(0);
|
||||
global_idx.y = get_global_id(1);
|
||||
local uchar data[32][32];
|
||||
local int lc[128];
|
||||
|
||||
for (i = 0; i <= 1; i++) {
|
||||
lm_idx.y = -8 + (block_idx.y + i) * 16 + thread_idx.y;
|
||||
lm_idx.y = lm_idx.y < 0 ? 0 : lm_idx.y;
|
||||
lm_idx.y = lm_idx.y >= height ? height - 1: lm_idx.y;
|
||||
for (j = 0; j <= 1; j++) {
|
||||
lm_idx.x = -8 + (block_idx.x + j) * 16 + thread_idx.x;
|
||||
lm_idx.x = lm_idx.x < 0 ? 0 : lm_idx.x;
|
||||
lm_idx.x = lm_idx.x >= width ? width - 1: lm_idx.x;
|
||||
data[i*16 + thread_idx.y][j*16 + thread_idx.x] = src[lm_idx.y*width + lm_idx.x];
|
||||
}
|
||||
}
|
||||
local_idx = thread_idx.y*16 + thread_idx.x;
|
||||
if (local_idx < 128)
|
||||
lc[local_idx] = mask[local_idx];
|
||||
barrier(CLK_LOCAL_MEM_FENCE);
|
||||
|
||||
\n#pragma unroll\n
|
||||
for (i = -4; i <= 4; i++) {
|
||||
lm_idx.y = 8 + i + thread_idx.y;
|
||||
\n#pragma unroll\n
|
||||
for (j = -4; j <= 4; j++) {
|
||||
lm_idx.x = 8 + j + thread_idx.x;
|
||||
lc_idx = (i + 4)*8 + j + 4;
|
||||
sum += (int)data[lm_idx.y][lm_idx.x] * lc[lc_idx];
|
||||
}
|
||||
}
|
||||
int temp = (int)data[thread_idx.y + 8][thread_idx.x + 8];
|
||||
int res = temp + (((temp - (int)((sum + 1<<15) >> 16))) >> 16);
|
||||
if (global_idx.x < width && global_idx.y < height)
|
||||
dst[global_idx.x + global_idx.y*width] = clip_uint8(res);
|
||||
}
|
||||
);
|
||||
|
||||
#define OCLCHECK(method, ... ) \
|
||||
do { \
|
||||
status = method(__VA_ARGS__); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, # method " error '%s'\n", \
|
||||
av_opencl_errstr(status)); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
#define CREATEBUF(out, flags, size) \
|
||||
do { \
|
||||
out = clCreateBuffer(ext_opencl_env->context, flags, size, NULL, &status); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not create OpenCL buffer\n"); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
static void fill_rand_int(int *data, int n)
|
||||
{
|
||||
int i;
|
||||
srand(av_gettime());
|
||||
for (i = 0; i < n; i++)
|
||||
data[i] = rand();
|
||||
}
|
||||
|
||||
#define OPENCL_NB_ITER 5
|
||||
static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
|
||||
{
|
||||
int i, arg = 0, width = 1920, height = 1088;
|
||||
int64_t start, ret = 0;
|
||||
cl_int status;
|
||||
size_t kernel_len;
|
||||
char *inbuf;
|
||||
int *mask;
|
||||
int buf_size = width * height * sizeof(char);
|
||||
int mask_size = sizeof(uint32_t) * 128;
|
||||
|
||||
cl_mem cl_mask, cl_inbuf, cl_outbuf;
|
||||
cl_kernel kernel = NULL;
|
||||
cl_program program = NULL;
|
||||
size_t local_work_size_2d[2] = {16, 16};
|
||||
size_t global_work_size_2d[2] = {(size_t)width, (size_t)height};
|
||||
|
||||
if (!(inbuf = av_malloc(buf_size)) || !(mask = av_malloc(mask_size))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Out of memory\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
fill_rand_int((int*)inbuf, buf_size/4);
|
||||
fill_rand_int(mask, mask_size/4);
|
||||
|
||||
CREATEBUF(cl_mask, CL_MEM_READ_ONLY, mask_size);
|
||||
CREATEBUF(cl_inbuf, CL_MEM_READ_ONLY, buf_size);
|
||||
CREATEBUF(cl_outbuf, CL_MEM_READ_WRITE, buf_size);
|
||||
|
||||
kernel_len = strlen(ocl_bench_source);
|
||||
program = clCreateProgramWithSource(ext_opencl_env->context, 1, &ocl_bench_source,
|
||||
&kernel_len, &status);
|
||||
if (status != CL_SUCCESS || !program) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
status = clBuildProgram(program, 1, &(ext_opencl_env->device_id), NULL, NULL, NULL);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to build benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
kernel = clCreateKernel(program, "unsharp_bench", &status);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark kernel\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_inbuf, CL_TRUE, 0,
|
||||
buf_size, inbuf, 0, NULL, NULL);
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_mask, CL_TRUE, 0,
|
||||
mask_size, mask, 0, NULL, NULL);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_inbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_outbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_mask);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
|
||||
|
||||
start = av_gettime_relative();
|
||||
for (i = 0; i < OPENCL_NB_ITER; i++)
|
||||
OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
|
||||
global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
|
||||
clFinish(ext_opencl_env->command_queue);
|
||||
ret = (av_gettime_relative() - start)/OPENCL_NB_ITER;
|
||||
end:
|
||||
if (kernel)
|
||||
clReleaseKernel(kernel);
|
||||
if (program)
|
||||
clReleaseProgram(program);
|
||||
if (cl_inbuf)
|
||||
clReleaseMemObject(cl_inbuf);
|
||||
if (cl_outbuf)
|
||||
clReleaseMemObject(cl_outbuf);
|
||||
if (cl_mask)
|
||||
clReleaseMemObject(cl_mask);
|
||||
av_free(inbuf);
|
||||
av_free(mask);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int compare_ocl_device_desc(const void *a, const void *b)
|
||||
{
|
||||
const OpenCLDeviceBenchmark* va = (const OpenCLDeviceBenchmark*)a;
|
||||
const OpenCLDeviceBenchmark* vb = (const OpenCLDeviceBenchmark*)b;
|
||||
return FFDIFFSIGN(va->runtime , vb->runtime);
|
||||
}
|
||||
|
||||
int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
int i, j, nb_devices = 0, count = 0, ret = 0;
|
||||
int64_t score = 0;
|
||||
AVOpenCLDeviceList *device_list;
|
||||
AVOpenCLDeviceNode *device_node = NULL;
|
||||
OpenCLDeviceBenchmark *devices = NULL;
|
||||
cl_platform_id platform;
|
||||
|
||||
ret = av_opencl_get_device_list(&device_list);
|
||||
if (ret < 0) {
|
||||
return ret;
|
||||
}
|
||||
for (i = 0; i < device_list->platform_num; i++)
|
||||
nb_devices += device_list->platform_node[i]->device_num;
|
||||
if (!nb_devices) {
|
||||
av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
|
||||
av_opencl_free_device_list(&device_list);
|
||||
return AVERROR(EINVAL);
|
||||
}
|
||||
if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
|
||||
av_opencl_free_device_list(&device_list);
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
for (i = 0; i < device_list->platform_num; i++) {
|
||||
for (j = 0; j < device_list->platform_node[i]->device_num; j++) {
|
||||
device_node = device_list->platform_node[i]->device_node[j];
|
||||
platform = device_list->platform_node[i]->platform_id;
|
||||
score = av_opencl_benchmark(device_node, platform, run_opencl_bench);
|
||||
if (score > 0) {
|
||||
devices[count].platform_idx = i;
|
||||
devices[count].device_idx = j;
|
||||
devices[count].runtime = score;
|
||||
av_strlcpy(devices[count].device_name, device_node->device_name,
|
||||
sizeof(devices[count].device_name));
|
||||
count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
qsort(devices, count, sizeof(OpenCLDeviceBenchmark), compare_ocl_device_desc);
|
||||
fprintf(stderr, "platform_idx\tdevice_idx\tdevice_name\truntime\n");
|
||||
for (i = 0; i < count; i++)
|
||||
fprintf(stdout, "%d\t%d\t%s\t%"PRId64"\n",
|
||||
devices[i].platform_idx, devices[i].device_idx,
|
||||
devices[i].device_name, devices[i].runtime);
|
||||
|
||||
av_opencl_free_device_list(&device_list);
|
||||
av_free(devices);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
char *key, *value;
|
||||
const char *opts = arg;
|
||||
int ret = 0;
|
||||
while (*opts) {
|
||||
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
ret = av_opencl_set_option(key, value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (*opts)
|
||||
opts++;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
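opt_opencl() above walks a `key=value:key=value` string with av_opt_get_key_value() and forwards each pair to av_opencl_set_option(). Below is a minimal sketch of that same parsing loop, printing the pairs instead of applying them; it assumes the libavutil headers are on the include path and the program is linked against libavutil.

```c
#include <stdio.h>
#include "libavutil/opt.h"
#include "libavutil/mem.h"

/* Parse a "key=value:key=value" option string the same way opt_opencl()
 * does, but just print each pair.  key and value are allocated by
 * av_opt_get_key_value() and must be released with av_free(). */
static int dump_pairs(const char *opts)
{
    while (*opts) {
        char *key, *value;
        int ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
        if (ret < 0)
            return ret;
        printf("%s -> %s\n", key, value);
        av_free(key);
        av_free(value);
        if (*opts)
            opts++;            /* skip the ':' separator */
    }
    return 0;
}

int main(void)
{
    return dump_pairs("platform_idx=0:device_idx=1") < 0;
}
```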
@@ -2,12 +2,15 @@
|
||||
# common bits used by all libraries
|
||||
#
|
||||
|
||||
DEFAULT_X86ASMD=.dbg
|
||||
# first so "all" becomes default target
|
||||
all: all-yes
|
||||
|
||||
DEFAULT_YASMD=.dbg
|
||||
|
||||
ifeq ($(DBG),1)
|
||||
X86ASMD=$(DEFAULT_X86ASMD)
|
||||
YASMD=$(DEFAULT_YASMD)
|
||||
else
|
||||
X86ASMD=
|
||||
YASMD=
|
||||
endif
|
||||
|
||||
ifndef SUBDIR
|
||||
@@ -15,8 +18,8 @@ ifndef SUBDIR
|
||||
ifndef V
|
||||
Q = @
|
||||
ECHO = printf "$(1)\t%s\n" $(2)
|
||||
BRIEF = CC CXX OBJCC HOSTCC HOSTLD AS X86ASM AR LD STRIP CP WINDRES NVCC
|
||||
SILENT = DEPCC DEPHOSTCC DEPAS DEPX86ASM RANLIB RM
|
||||
BRIEF = CC CXX OBJCC HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES
|
||||
SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM
|
||||
|
||||
MSG = $@
|
||||
M = @$(call ECHO,$(TAG),$@);
|
||||
@@ -37,8 +40,7 @@ OBJCFLAGS += $(EOBJCFLAGS)
|
||||
OBJCCFLAGS = $(CPPFLAGS) $(CFLAGS) $(OBJCFLAGS)
|
||||
ASFLAGS := $(CPPFLAGS) $(ASFLAGS)
|
||||
CXXFLAGS := $(CPPFLAGS) $(CFLAGS) $(CXXFLAGS)
|
||||
X86ASMFLAGS += $(IFLAGS:%=%/) -I$(<D)/ -Pconfig.asm
|
||||
NVCCFLAGS += -ptx
|
||||
YASMFLAGS += $(IFLAGS:%=%/) -Pconfig.asm
|
||||
|
||||
HOSTCCFLAGS = $(IFLAGS) $(HOSTCPPFLAGS) $(HOSTCFLAGS)
|
||||
LDFLAGS := $(ALLFFLIBS:%=$(LD_PATH)lib%) $(LDFLAGS)
|
||||
@@ -52,9 +54,7 @@ COMPILE_C = $(call COMPILE,CC)
|
||||
COMPILE_CXX = $(call COMPILE,CXX)
|
||||
COMPILE_S = $(call COMPILE,AS)
|
||||
COMPILE_M = $(call COMPILE,OBJCC)
|
||||
COMPILE_X86ASM = $(call COMPILE,X86ASM)
|
||||
COMPILE_HOSTC = $(call COMPILE,HOSTCC)
|
||||
COMPILE_NVCC = $(call COMPILE,NVCC)
|
||||
|
||||
%.o: %.c
|
||||
$(COMPILE_C)
|
||||
@@ -74,12 +74,13 @@ COMPILE_NVCC = $(call COMPILE,NVCC)
|
||||
%_host.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
|
||||
%$(DEFAULT_X86ASMD).asm: %.asm
|
||||
$(DEPX86ASM) $(X86ASMFLAGS) -M -o $@ $< > $(@:.asm=.d)
|
||||
$(X86ASM) $(X86ASMFLAGS) -e $< | sed '/^%/d;/^$$/d;' > $@
|
||||
%$(DEFAULT_YASMD).asm: %.asm
|
||||
$(DEPYASM) $(YASMFLAGS) -I $(<D)/ -M -o $@ $< > $(@:.asm=.d)
|
||||
$(YASM) $(YASMFLAGS) -I $(<D)/ -e $< | sed '/^%/d;/^$$/d;' > $@
|
||||
|
||||
%.o: %.asm
|
||||
$(COMPILE_X86ASM)
|
||||
$(DEPYASM) $(YASMFLAGS) -I $(<D)/ -M -o $@ $< > $(@:.o=.d)
|
||||
$(YASM) $(YASMFLAGS) -I $(<D)/ -o $@ $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<)
|
||||
-$(if $(ASMSTRIPFLAGS), $(STRIP) $(ASMSTRIPFLAGS) $@)
|
||||
|
||||
%.o: %.rc
|
||||
@@ -91,13 +92,7 @@ COMPILE_NVCC = $(call COMPILE,NVCC)
|
||||
%.h.c:
|
||||
$(Q)echo '#include "$*.h"' >$@
|
||||
|
||||
%.ptx: %.cu
|
||||
$(COMPILE_NVCC)
|
||||
|
||||
%.ptx.c: %.ptx
|
||||
$(Q)sh $(SRC_PATH)/compat/cuda/ptx2c.sh $@ $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<)
|
||||
|
||||
%.c %.h %.pc %.ver %.version: TAG = GEN
|
||||
%.c %.h %.ver: TAG = GEN
|
||||
|
||||
# Dummy rule to stop make trying to rebuild removed or renamed headers
|
||||
%.h:
|
||||
@@ -111,7 +106,7 @@ COMPILE_NVCC = $(call COMPILE,NVCC)
|
||||
$(OBJS):
|
||||
endif
|
||||
|
||||
include $(SRC_PATH)/ffbuild/arch.mak
|
||||
include $(SRC_PATH)/arch.mak
|
||||
|
||||
OBJS += $(OBJS-yes)
|
||||
SLIBOBJS += $(SLIBOBJS-yes)
|
||||
@@ -119,7 +114,7 @@ FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS)
|
||||
TESTPROGS += $(TESTPROGS-yes)
|
||||
|
||||
LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
|
||||
FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(foreach lib,EXTRALIBS-$(NAME) $(FFLIBS:%=EXTRALIBS-%),$($(lib))) $(EXTRALIBS)
|
||||
FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(EXTRALIBS)
|
||||
|
||||
OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
|
||||
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
|
||||
@@ -141,10 +136,8 @@ ALLHEADERS := $(subst $(SRC_DIR)/,$(SUBDIR),$(wildcard $(SRC_DIR)/*.h $(SRC_DIR)
|
||||
SKIPHEADERS += $(ARCH_HEADERS:%=$(ARCH)/%) $(SKIPHEADERS-)
|
||||
SKIPHEADERS := $(SKIPHEADERS:%=$(SUBDIR)%)
|
||||
HOBJS = $(filter-out $(SKIPHEADERS:.h=.h.o),$(ALLHEADERS:.h=.h.o))
|
||||
PTXOBJS = $(filter %.ptx.o,$(OBJS))
|
||||
$(HOBJS): CCFLAGS += $(CFLAGS_HEADERS)
|
||||
checkheaders: $(HOBJS)
|
||||
.SECONDARY: $(HOBJS:.o=.c) $(PTXOBJS:.o=.c) $(PTXOBJS:.o=)
|
||||
.SECONDARY: $(HOBJS:.o=.c)
|
||||
|
||||
alltools: $(TOOLS)
|
||||
|
||||
@@ -152,7 +145,7 @@ $(HOSTOBJS): %.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
|
||||
$(HOSTPROGS): %$(HOSTEXESUF): %.o
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTEXTRALIBS)
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTLIBS)
|
||||
|
||||
$(OBJS): | $(sort $(dir $(OBJS)))
|
||||
$(HOBJS): | $(sort $(dir $(HOBJS)))
|
||||
@@ -163,7 +156,8 @@ $(TOOLOBJS): | tools
|
||||
|
||||
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
|
||||
|
||||
CLEANSUFFIXES = *.d *.gcda *.gcno *.h.c *.ho *.map *.o *.pc *.ptx *.ptx.c *.ver *.version *$(DEFAULT_X86ASMD).asm *~
|
||||
CLEANSUFFIXES = *.d *.o *~ *.h.c *.gcda *.gcno *.map *.ver *.ho *$(DEFAULT_YASMD).asm
|
||||
DISTCLEANSUFFIXES = *.pc
|
||||
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
|
||||
|
||||
define RULES
|
||||
@@ -173,4 +167,4 @@ endef
|
||||
|
||||
$(eval $(RULES))
|
||||
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_X86ASMD).d)
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_YASMD).d)
|
||||
@@ -108,7 +108,7 @@ static inline int atomic_compare_exchange_strong(intptr_t *object, intptr_t *exp
|
||||
intptr_t desired)
|
||||
{
|
||||
intptr_t old = *expected;
|
||||
*expected = (intptr_t)atomic_cas_ptr(object, (void *)old, (void *)desired);
|
||||
*expected = atomic_cas_ptr(object, old, desired);
|
||||
return *expected == old;
|
||||
}
|
||||
|
||||
|
||||
@@ -19,7 +19,6 @@
|
||||
#ifndef COMPAT_ATOMICS_WIN32_STDATOMIC_H
|
||||
#define COMPAT_ATOMICS_WIN32_STDATOMIC_H
|
||||
|
||||
#define WIN32_LEAN_AND_MEAN
|
||||
#include <stddef.h>
|
||||
#include <stdint.h>
|
||||
#include <windows.h>
|
||||
@@ -105,8 +104,7 @@ static inline int atomic_compare_exchange_strong(intptr_t *object, intptr_t *exp
|
||||
intptr_t desired)
|
||||
{
|
||||
intptr_t old = *expected;
|
||||
*expected = (intptr_t)InterlockedCompareExchangePointer(
|
||||
(PVOID *)object, (PVOID)desired, (PVOID)old);
|
||||
*expected = InterlockedCompareExchangePointer(object, desired, old);
|
||||
return *expected == old;
|
||||
}
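Both compat shims above (Solaris atomic_cas_ptr() and Win32 InterlockedCompareExchangePointer()) emulate the semantics of C11 atomic_compare_exchange_strong(): on failure, *expected is reloaded with the value currently stored. A small sketch of the usual compare-and-swap retry loop, written against the standard <stdatomic.h> API rather than the compat layer:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Classic CAS retry loop: recompute the desired value from the latest
 * observed value until the exchange succeeds.  On failure,
 * atomic_compare_exchange_strong() updates 'expected' for us. */
static void add_clamped(atomic_int *counter, int add, int max)
{
    int expected = atomic_load(counter);
    int desired;
    do {
        desired = expected + add;
        if (desired > max)
            desired = max;
    } while (!atomic_compare_exchange_strong(counter, &expected, desired));
}

int main(void)
{
    atomic_int c;
    atomic_init(&c, 40);
    add_clamped(&c, 5, 42);
    printf("%d\n", atomic_load(&c));   /* prints 42 */
    return 0;
}
```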
|
||||
|
||||
|
||||
97
compat/cuda/dynlink_cuda.h
Normal file
@@ -0,0 +1,97 @@
|
||||
/*
|
||||
* This copyright notice applies to this header file only:
|
||||
*
|
||||
* Copyright (c) 2016
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person
|
||||
* obtaining a copy of this software and associated documentation
|
||||
* files (the "Software"), to deal in the Software without
|
||||
* restriction, including without limitation the rights to use,
|
||||
* copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the software, and to permit persons to whom the
|
||||
* software is furnished to do so, subject to the following
|
||||
* conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
|
||||
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
|
||||
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
* OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#if !defined(AV_COMPAT_DYNLINK_CUDA_H) && !defined(CUDA_VERSION)
|
||||
#define AV_COMPAT_DYNLINK_CUDA_H
|
||||
|
||||
#include <stddef.h>
|
||||
|
||||
#define CUDA_VERSION 7050
|
||||
|
||||
#if defined(_WIN32) || defined(__CYGWIN__)
|
||||
#define CUDAAPI __stdcall
|
||||
#else
|
||||
#define CUDAAPI
|
||||
#endif
|
||||
|
||||
#define CU_CTX_SCHED_BLOCKING_SYNC 4
|
||||
|
||||
typedef int CUdevice;
|
||||
typedef void* CUarray;
|
||||
typedef void* CUcontext;
|
||||
#if defined(__x86_64) || defined(AMD64) || defined(_M_AMD64)
|
||||
typedef unsigned long long CUdeviceptr;
|
||||
#else
|
||||
typedef unsigned int CUdeviceptr;
|
||||
#endif
|
||||
|
||||
typedef enum cudaError_enum {
|
||||
CUDA_SUCCESS = 0
|
||||
} CUresult;
|
||||
|
||||
typedef enum CUmemorytype_enum {
|
||||
CU_MEMORYTYPE_HOST = 1,
|
||||
CU_MEMORYTYPE_DEVICE = 2
|
||||
} CUmemorytype;
|
||||
|
||||
typedef struct CUDA_MEMCPY2D_st {
|
||||
size_t srcXInBytes;
|
||||
size_t srcY;
|
||||
CUmemorytype srcMemoryType;
|
||||
const void *srcHost;
|
||||
CUdeviceptr srcDevice;
|
||||
CUarray srcArray;
|
||||
size_t srcPitch;
|
||||
|
||||
size_t dstXInBytes;
|
||||
size_t dstY;
|
||||
CUmemorytype dstMemoryType;
|
||||
void *dstHost;
|
||||
CUdeviceptr dstDevice;
|
||||
CUarray dstArray;
|
||||
size_t dstPitch;
|
||||
|
||||
size_t WidthInBytes;
|
||||
size_t Height;
|
||||
} CUDA_MEMCPY2D;
|
||||
|
||||
typedef CUresult CUDAAPI tcuInit(unsigned int Flags);
|
||||
typedef CUresult CUDAAPI tcuDeviceGetCount(int *count);
|
||||
typedef CUresult CUDAAPI tcuDeviceGet(CUdevice *device, int ordinal);
|
||||
typedef CUresult CUDAAPI tcuDeviceGetName(char *name, int len, CUdevice dev);
|
||||
typedef CUresult CUDAAPI tcuDeviceComputeCapability(int *major, int *minor, CUdevice dev);
|
||||
typedef CUresult CUDAAPI tcuCtxCreate_v2(CUcontext *pctx, unsigned int flags, CUdevice dev);
|
||||
typedef CUresult CUDAAPI tcuCtxPushCurrent_v2(CUcontext *pctx);
|
||||
typedef CUresult CUDAAPI tcuCtxPopCurrent_v2(CUcontext *pctx);
|
||||
typedef CUresult CUDAAPI tcuCtxDestroy_v2(CUcontext ctx);
|
||||
typedef CUresult CUDAAPI tcuMemAlloc_v2(CUdeviceptr *dptr, size_t bytesize);
|
||||
typedef CUresult CUDAAPI tcuMemFree_v2(CUdeviceptr dptr);
|
||||
typedef CUresult CUDAAPI tcuMemcpy2D_v2(const CUDA_MEMCPY2D *pcopy);
|
||||
typedef CUresult CUDAAPI tcuGetErrorName(CUresult error, const char** pstr);
|
||||
typedef CUresult CUDAAPI tcuGetErrorString(CUresult error, const char** pstr);
|
||||
|
||||
#endif
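CUDA_MEMCPY2D above describes a pitched 2D copy between host and device memory. The sketch below only fills the descriptor for a single 8-bit plane upload and makes no CUDA call; in this dynlink scheme the actual copy would go through a tcuMemcpy2D_v2 function pointer resolved at runtime, and the include path assumes the FFmpeg source tree. The zero device pointer is a placeholder.

```c
#include <stdio.h>
#include <string.h>
#include "compat/cuda/dynlink_cuda.h"   /* the header added above */

/* Describe a host -> device copy of one 8-bit plane.  Unset fields
 * (offsets, array handles) are zeroed, as the struct expects. */
static CUDA_MEMCPY2D describe_plane_upload(const void *host, size_t host_pitch,
                                           CUdeviceptr dev, size_t dev_pitch,
                                           size_t width, size_t height)
{
    CUDA_MEMCPY2D cpy;
    memset(&cpy, 0, sizeof(cpy));
    cpy.srcMemoryType = CU_MEMORYTYPE_HOST;
    cpy.srcHost       = host;
    cpy.srcPitch      = host_pitch;
    cpy.dstMemoryType = CU_MEMORYTYPE_DEVICE;
    cpy.dstDevice     = dev;
    cpy.dstPitch      = dev_pitch;
    cpy.WidthInBytes  = width;          /* 8-bit samples: one byte per pixel */
    cpy.Height        = height;
    return cpy;
}

int main(void)
{
    unsigned char plane[64 * 32];
    CUDA_MEMCPY2D cpy = describe_plane_upload(plane, 64, 0 /* placeholder */, 64, 64, 32);
    printf("copy %zu x %zu bytes\n", cpy.WidthInBytes, cpy.Height);
    return 0;
}
```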
|
||||
815
compat/cuda/dynlink_cuviddec.h
Normal file
@@ -0,0 +1,815 @@
|
||||
/*
|
||||
* This copyright notice applies to this header file only:
|
||||
*
|
||||
* Copyright (c) 2010-2016 NVIDIA Corporation
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person
|
||||
* obtaining a copy of this software and associated documentation
|
||||
* files (the "Software"), to deal in the Software without
|
||||
* restriction, including without limitation the rights to use,
|
||||
* copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the software, and to permit persons to whom the
|
||||
* software is furnished to do so, subject to the following
|
||||
* conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
|
||||
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
|
||||
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
* OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* \file cuviddec.h
|
||||
* NvCuvid API provides Video Decoding interface to NVIDIA GPU devices.
|
||||
* \date 2015-2016
|
||||
* This file contains constants, structure definitions and function prototypes used for decoding.
|
||||
*/
|
||||
|
||||
#if !defined(__CUDA_VIDEO_H__)
|
||||
#define __CUDA_VIDEO_H__
|
||||
|
||||
#if defined(__x86_64) || defined(AMD64) || defined(_M_AMD64)
|
||||
#if (CUDA_VERSION >= 3020) && (!defined(CUDA_FORCE_API_VERSION) || (CUDA_FORCE_API_VERSION >= 3020))
|
||||
#define __CUVID_DEVPTR64
|
||||
#endif
|
||||
#endif
|
||||
|
||||
#if defined(__cplusplus)
|
||||
extern "C" {
|
||||
#endif /* __cplusplus */
|
||||
|
||||
#if defined(__CYGWIN__)
|
||||
typedef unsigned int tcu_ulong;
|
||||
#else
|
||||
typedef unsigned long tcu_ulong;
|
||||
#endif
|
||||
|
||||
typedef void *CUvideodecoder;
|
||||
typedef struct _CUcontextlock_st *CUvideoctxlock;
|
||||
|
||||
/**
|
||||
* \addtogroup VIDEO_DECODER Video Decoder
|
||||
* @{
|
||||
*/
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoCodec
|
||||
* Video Codec Enums
|
||||
*/
|
||||
typedef enum cudaVideoCodec_enum {
|
||||
cudaVideoCodec_MPEG1=0, /**< MPEG1 */
|
||||
cudaVideoCodec_MPEG2, /**< MPEG2 */
|
||||
cudaVideoCodec_MPEG4, /**< MPEG4 */
|
||||
cudaVideoCodec_VC1, /**< VC1 */
|
||||
cudaVideoCodec_H264, /**< H264 */
|
||||
cudaVideoCodec_JPEG, /**< JPEG */
|
||||
cudaVideoCodec_H264_SVC, /**< H264-SVC */
|
||||
cudaVideoCodec_H264_MVC, /**< H264-MVC */
|
||||
cudaVideoCodec_HEVC, /**< HEVC */
|
||||
cudaVideoCodec_VP8, /**< VP8 */
|
||||
cudaVideoCodec_VP9, /**< VP9 */
|
||||
cudaVideoCodec_NumCodecs, /**< Max codecs */
|
||||
// Uncompressed YUV
|
||||
cudaVideoCodec_YUV420 = (('I'<<24)|('Y'<<16)|('U'<<8)|('V')), /**< Y,U,V (4:2:0) */
|
||||
cudaVideoCodec_YV12 = (('Y'<<24)|('V'<<16)|('1'<<8)|('2')), /**< Y,V,U (4:2:0) */
|
||||
cudaVideoCodec_NV12 = (('N'<<24)|('V'<<16)|('1'<<8)|('2')), /**< Y,UV (4:2:0) */
|
||||
cudaVideoCodec_YUYV = (('Y'<<24)|('U'<<16)|('Y'<<8)|('V')), /**< YUYV/YUY2 (4:2:2) */
|
||||
cudaVideoCodec_UYVY = (('U'<<24)|('Y'<<16)|('V'<<8)|('Y')) /**< UYVY (4:2:2) */
|
||||
} cudaVideoCodec;
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoSurfaceFormat
|
||||
* Video Surface Formats Enums
|
||||
*/
|
||||
typedef enum cudaVideoSurfaceFormat_enum {
|
||||
cudaVideoSurfaceFormat_NV12=0, /**< NV12 */
|
||||
cudaVideoSurfaceFormat_P016=1 /**< P016 */
|
||||
} cudaVideoSurfaceFormat;
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoDeinterlaceMode
|
||||
* Deinterlacing Modes Enums
|
||||
*/
|
||||
typedef enum cudaVideoDeinterlaceMode_enum {
|
||||
cudaVideoDeinterlaceMode_Weave=0, /**< Weave both fields (no deinterlacing) */
|
||||
cudaVideoDeinterlaceMode_Bob, /**< Drop one field */
|
||||
cudaVideoDeinterlaceMode_Adaptive /**< Adaptive deinterlacing */
|
||||
} cudaVideoDeinterlaceMode;
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoChromaFormat
|
||||
* Chroma Formats Enums
|
||||
*/
|
||||
typedef enum cudaVideoChromaFormat_enum {
|
||||
cudaVideoChromaFormat_Monochrome=0, /**< MonoChrome */
|
||||
cudaVideoChromaFormat_420, /**< 4:2:0 */
|
||||
cudaVideoChromaFormat_422, /**< 4:2:2 */
|
||||
cudaVideoChromaFormat_444 /**< 4:4:4 */
|
||||
} cudaVideoChromaFormat;
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoCreateFlags
|
||||
* Decoder Flags Enums
|
||||
*/
|
||||
typedef enum cudaVideoCreateFlags_enum {
|
||||
cudaVideoCreate_Default = 0x00, /**< Default operation mode: use dedicated video engines */
|
||||
cudaVideoCreate_PreferCUDA = 0x01, /**< Use a CUDA-based decoder if faster than dedicated engines (requires a valid vidLock object for multi-threading) */
|
||||
cudaVideoCreate_PreferDXVA = 0x02, /**< Go through DXVA internally if possible (requires D3D9 interop) */
|
||||
cudaVideoCreate_PreferCUVID = 0x04 /**< Use dedicated video engines directly */
|
||||
} cudaVideoCreateFlags;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDDECODECREATEINFO
|
||||
* Struct used in create decoder
|
||||
*/
|
||||
typedef struct _CUVIDDECODECREATEINFO
|
||||
{
|
||||
tcu_ulong ulWidth; /**< Coded Sequence Width */
|
||||
tcu_ulong ulHeight; /**< Coded Sequence Height */
|
||||
tcu_ulong ulNumDecodeSurfaces; /**< Maximum number of internal decode surfaces */
|
||||
cudaVideoCodec CodecType; /**< cudaVideoCodec_XXX */
|
||||
cudaVideoChromaFormat ChromaFormat; /**< cudaVideoChromaFormat_XXX (only 4:2:0 is currently supported) */
|
||||
tcu_ulong ulCreationFlags; /**< Decoder creation flags (cudaVideoCreateFlags_XXX) */
|
||||
tcu_ulong bitDepthMinus8;
|
||||
tcu_ulong Reserved1[4]; /**< Reserved for future use - set to zero */
|
||||
/**
|
||||
* area of the frame that should be displayed
|
||||
*/
|
||||
struct {
|
||||
short left;
|
||||
short top;
|
||||
short right;
|
||||
short bottom;
|
||||
} display_area;
|
||||
|
||||
cudaVideoSurfaceFormat OutputFormat; /**< cudaVideoSurfaceFormat_XXX */
|
||||
cudaVideoDeinterlaceMode DeinterlaceMode; /**< cudaVideoDeinterlaceMode_XXX */
|
||||
tcu_ulong ulTargetWidth; /**< Post-processed Output Width (Should be aligned to 2) */
|
||||
tcu_ulong ulTargetHeight; /**< Post-processed Output Height (Should be aligned to 2) */
|
||||
tcu_ulong ulNumOutputSurfaces; /**< Maximum number of output surfaces simultaneously mapped */
|
||||
CUvideoctxlock vidLock; /**< If non-NULL, context lock used for synchronizing ownership of the cuda context */
|
||||
/**
|
||||
* target rectangle in the output frame (for aspect ratio conversion)
|
||||
* if a null rectangle is specified, {0,0,ulTargetWidth,ulTargetHeight} will be used
|
||||
*/
|
||||
struct {
|
||||
short left;
|
||||
short top;
|
||||
short right;
|
||||
short bottom;
|
||||
} target_rect;
|
||||
tcu_ulong Reserved2[5]; /**< Reserved for future use - set to zero */
|
||||
} CUVIDDECODECREATEINFO;
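As a rough illustration of how this structure might be populated, here is a hedged sketch for a 1080p 8-bit H.264 session with NV12 output and no deinterlacing. It only fills the descriptor (actual decoder creation goes through the cuvid entry points loaded at runtime), the chosen field values are illustrative, and the includes assume the FFmpeg source tree with dynlink_cuda.h providing the base CUDA types.

```c
#include <string.h>
#include "compat/cuda/dynlink_cuda.h"     /* base CUDA typedefs */
#include "compat/cuda/dynlink_cuviddec.h" /* the header added above */

/* Fill a decoder-creation descriptor for 1920x1080 H.264, 8-bit 4:2:0,
 * NV12 output, weave (no) deinterlacing.  Values are illustrative only. */
static CUVIDDECODECREATEINFO h264_1080p_info(CUvideoctxlock lock)
{
    CUVIDDECODECREATEINFO info;
    memset(&info, 0, sizeof(info));
    info.ulWidth             = 1920;
    info.ulHeight            = 1088;    /* coded height, macroblock-aligned */
    info.ulNumDecodeSurfaces = 20;      /* DPB size plus headroom */
    info.CodecType           = cudaVideoCodec_H264;
    info.ChromaFormat        = cudaVideoChromaFormat_420;
    info.ulCreationFlags     = cudaVideoCreate_PreferCUVID;
    info.bitDepthMinus8      = 0;
    info.OutputFormat        = cudaVideoSurfaceFormat_NV12;
    info.DeinterlaceMode     = cudaVideoDeinterlaceMode_Weave;
    info.ulTargetWidth       = 1920;
    info.ulTargetHeight      = 1088;
    info.ulNumOutputSurfaces = 1;
    info.vidLock             = lock;    /* may be NULL for single-threaded use */
    return info;
}

int main(void)
{
    CUVIDDECODECREATEINFO info = h264_1080p_info(NULL);
    return info.CodecType != cudaVideoCodec_H264;
}
```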
|
||||
|
||||
/*!
|
||||
* \struct CUVIDH264DPBENTRY
|
||||
* H.264 DPB Entry
|
||||
*/
|
||||
typedef struct _CUVIDH264DPBENTRY
|
||||
{
|
||||
int PicIdx; /**< picture index of reference frame */
|
||||
int FrameIdx; /**< frame_num(short-term) or LongTermFrameIdx(long-term) */
|
||||
int is_long_term; /**< 0=short term reference, 1=long term reference */
|
||||
int not_existing; /**< non-existing reference frame (corresponding PicIdx should be set to -1) */
|
||||
int used_for_reference; /**< 0=unused, 1=top_field, 2=bottom_field, 3=both_fields */
|
||||
int FieldOrderCnt[2]; /**< field order count of top and bottom fields */
|
||||
} CUVIDH264DPBENTRY;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDH264MVCEXT
|
||||
* H.264 MVC Picture Parameters Ext
|
||||
*/
|
||||
typedef struct _CUVIDH264MVCEXT
|
||||
{
|
||||
int num_views_minus1;
|
||||
int view_id;
|
||||
unsigned char inter_view_flag;
|
||||
unsigned char num_inter_view_refs_l0;
|
||||
unsigned char num_inter_view_refs_l1;
|
||||
unsigned char MVCReserved8Bits;
|
||||
int InterViewRefsL0[16];
|
||||
int InterViewRefsL1[16];
|
||||
} CUVIDH264MVCEXT;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDH264SVCEXT
|
||||
* H.264 SVC Picture Parameters Ext
|
||||
*/
|
||||
typedef struct _CUVIDH264SVCEXT
|
||||
{
|
||||
unsigned char profile_idc;
|
||||
unsigned char level_idc;
|
||||
unsigned char DQId;
|
||||
unsigned char DQIdMax;
|
||||
unsigned char disable_inter_layer_deblocking_filter_idc;
|
||||
unsigned char ref_layer_chroma_phase_y_plus1;
|
||||
signed char inter_layer_slice_alpha_c0_offset_div2;
|
||||
signed char inter_layer_slice_beta_offset_div2;
|
||||
|
||||
unsigned short DPBEntryValidFlag;
|
||||
unsigned char inter_layer_deblocking_filter_control_present_flag;
|
||||
unsigned char extended_spatial_scalability_idc;
|
||||
unsigned char adaptive_tcoeff_level_prediction_flag;
|
||||
unsigned char slice_header_restriction_flag;
|
||||
unsigned char chroma_phase_x_plus1_flag;
|
||||
unsigned char chroma_phase_y_plus1;
|
||||
|
||||
unsigned char tcoeff_level_prediction_flag;
|
||||
unsigned char constrained_intra_resampling_flag;
|
||||
unsigned char ref_layer_chroma_phase_x_plus1_flag;
|
||||
unsigned char store_ref_base_pic_flag;
|
||||
unsigned char Reserved8BitsA;
|
||||
unsigned char Reserved8BitsB;
|
||||
// For the 4 scaled_ref_layer_XX fields below,
|
||||
// if (extended_spatial_scalability_idc == 1), SPS field, G.7.3.2.1.4, add prefix "seq_"
|
||||
// if (extended_spatial_scalability_idc == 2), SLH field, G.7.3.3.4,
|
||||
short scaled_ref_layer_left_offset;
|
||||
short scaled_ref_layer_top_offset;
|
||||
short scaled_ref_layer_right_offset;
|
||||
short scaled_ref_layer_bottom_offset;
|
||||
unsigned short Reserved16Bits;
|
||||
struct _CUVIDPICPARAMS *pNextLayer; /**< Points to the picparams for the next layer to be decoded. Linked list ends at the target layer. */
|
||||
int bRefBaseLayer; /**< whether to store ref base pic */
|
||||
} CUVIDH264SVCEXT;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDH264PICPARAMS
|
||||
* H.264 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDH264PICPARAMS
|
||||
{
|
||||
// SPS
|
||||
int log2_max_frame_num_minus4;
|
||||
int pic_order_cnt_type;
|
||||
int log2_max_pic_order_cnt_lsb_minus4;
|
||||
int delta_pic_order_always_zero_flag;
|
||||
int frame_mbs_only_flag;
|
||||
int direct_8x8_inference_flag;
|
||||
int num_ref_frames; // NOTE: shall meet level 4.1 restrictions
|
||||
unsigned char residual_colour_transform_flag;
|
||||
unsigned char bit_depth_luma_minus8; // Must be 0 (only 8-bit supported)
|
||||
unsigned char bit_depth_chroma_minus8; // Must be 0 (only 8-bit supported)
|
||||
unsigned char qpprime_y_zero_transform_bypass_flag;
|
||||
// PPS
|
||||
int entropy_coding_mode_flag;
|
||||
int pic_order_present_flag;
|
||||
int num_ref_idx_l0_active_minus1;
|
||||
int num_ref_idx_l1_active_minus1;
|
||||
int weighted_pred_flag;
|
||||
int weighted_bipred_idc;
|
||||
int pic_init_qp_minus26;
|
||||
int deblocking_filter_control_present_flag;
|
||||
int redundant_pic_cnt_present_flag;
|
||||
int transform_8x8_mode_flag;
|
||||
int MbaffFrameFlag;
|
||||
int constrained_intra_pred_flag;
|
||||
int chroma_qp_index_offset;
|
||||
int second_chroma_qp_index_offset;
|
||||
int ref_pic_flag;
|
||||
int frame_num;
|
||||
int CurrFieldOrderCnt[2];
|
||||
// DPB
|
||||
CUVIDH264DPBENTRY dpb[16]; // List of reference frames within the DPB
|
||||
// Quantization Matrices (raster-order)
|
||||
unsigned char WeightScale4x4[6][16];
|
||||
unsigned char WeightScale8x8[2][64];
|
||||
// FMO/ASO
|
||||
unsigned char fmo_aso_enable;
|
||||
unsigned char num_slice_groups_minus1;
|
||||
unsigned char slice_group_map_type;
|
||||
signed char pic_init_qs_minus26;
|
||||
unsigned int slice_group_change_rate_minus1;
|
||||
union
|
||||
{
|
||||
unsigned long long slice_group_map_addr;
|
||||
const unsigned char *pMb2SliceGroupMap;
|
||||
} fmo;
|
||||
unsigned int Reserved[12];
|
||||
// SVC/MVC
|
||||
union
|
||||
{
|
||||
CUVIDH264MVCEXT mvcext;
|
||||
CUVIDH264SVCEXT svcext;
|
||||
} svcmvc;
|
||||
} CUVIDH264PICPARAMS;
|
||||
|
||||
|
||||
/*!
|
||||
* \struct CUVIDMPEG2PICPARAMS
|
||||
* MPEG-2 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDMPEG2PICPARAMS
|
||||
{
|
||||
int ForwardRefIdx; // Picture index of forward reference (P/B-frames)
|
||||
int BackwardRefIdx; // Picture index of backward reference (B-frames)
|
||||
int picture_coding_type;
|
||||
int full_pel_forward_vector;
|
||||
int full_pel_backward_vector;
|
||||
int f_code[2][2];
|
||||
int intra_dc_precision;
|
||||
int frame_pred_frame_dct;
|
||||
int concealment_motion_vectors;
|
||||
int q_scale_type;
|
||||
int intra_vlc_format;
|
||||
int alternate_scan;
|
||||
int top_field_first;
|
||||
// Quantization matrices (raster order)
|
||||
unsigned char QuantMatrixIntra[64];
|
||||
unsigned char QuantMatrixInter[64];
|
||||
} CUVIDMPEG2PICPARAMS;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// MPEG-4 Picture Parameters
|
||||
//
|
||||
|
||||
// MPEG-4 has VOP types instead of Picture types
|
||||
#define I_VOP 0
|
||||
#define P_VOP 1
|
||||
#define B_VOP 2
|
||||
#define S_VOP 3
|
||||
|
||||
/*!
|
||||
* \struct CUVIDMPEG4PICPARAMS
|
||||
* MPEG-4 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDMPEG4PICPARAMS
|
||||
{
|
||||
int ForwardRefIdx; // Picture index of forward reference (P/B-frames)
|
||||
int BackwardRefIdx; // Picture index of backward reference (B-frames)
|
||||
// VOL
|
||||
int video_object_layer_width;
|
||||
int video_object_layer_height;
|
||||
int vop_time_increment_bitcount;
|
||||
int top_field_first;
|
||||
int resync_marker_disable;
|
||||
int quant_type;
|
||||
int quarter_sample;
|
||||
int short_video_header;
|
||||
int divx_flags;
|
||||
// VOP
|
||||
int vop_coding_type;
|
||||
int vop_coded;
|
||||
int vop_rounding_type;
|
||||
int alternate_vertical_scan_flag;
|
||||
int interlaced;
|
||||
int vop_fcode_forward;
|
||||
int vop_fcode_backward;
|
||||
int trd[2];
|
||||
int trb[2];
|
||||
// Quantization matrices (raster order)
|
||||
unsigned char QuantMatrixIntra[64];
|
||||
unsigned char QuantMatrixInter[64];
|
||||
int gmc_enabled;
|
||||
} CUVIDMPEG4PICPARAMS;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDVC1PICPARAMS
|
||||
* VC1 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDVC1PICPARAMS
|
||||
{
|
||||
int ForwardRefIdx; /**< Picture index of forward reference (P/B-frames) */
|
||||
int BackwardRefIdx; /**< Picture index of backward reference (B-frames) */
|
||||
int FrameWidth; /**< Actual frame width */
|
||||
int FrameHeight; /**< Actual frame height */
|
||||
// PICTURE
|
||||
int intra_pic_flag; /**< Set to 1 for I,BI frames */
|
||||
int ref_pic_flag; /**< Set to 1 for I,P frames */
|
||||
int progressive_fcm; /**< Progressive frame */
|
||||
// SEQUENCE
|
||||
int profile;
|
||||
int postprocflag;
|
||||
int pulldown;
|
||||
int interlace;
|
||||
int tfcntrflag;
|
||||
int finterpflag;
|
||||
int psf;
|
||||
int multires;
|
||||
int syncmarker;
|
||||
int rangered;
|
||||
int maxbframes;
|
||||
// ENTRYPOINT
|
||||
int panscan_flag;
|
||||
int refdist_flag;
|
||||
int extended_mv;
|
||||
int dquant;
|
||||
int vstransform;
|
||||
int loopfilter;
|
||||
int fastuvmc;
|
||||
int overlap;
|
||||
int quantizer;
|
||||
int extended_dmv;
|
||||
int range_mapy_flag;
|
||||
int range_mapy;
|
||||
int range_mapuv_flag;
|
||||
int range_mapuv;
|
||||
int rangeredfrm; // range reduction state
|
||||
} CUVIDVC1PICPARAMS;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDJPEGPICPARAMS
|
||||
* JPEG Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDJPEGPICPARAMS
|
||||
{
|
||||
int Reserved;
|
||||
} CUVIDJPEGPICPARAMS;
|
||||
|
||||
|
||||
/*!
|
||||
* \struct CUVIDHEVCPICPARAMS
|
||||
* HEVC Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDHEVCPICPARAMS
|
||||
{
|
||||
// sps
|
||||
int pic_width_in_luma_samples;
|
||||
int pic_height_in_luma_samples;
|
||||
unsigned char log2_min_luma_coding_block_size_minus3;
|
||||
unsigned char log2_diff_max_min_luma_coding_block_size;
|
||||
unsigned char log2_min_transform_block_size_minus2;
|
||||
unsigned char log2_diff_max_min_transform_block_size;
|
||||
unsigned char pcm_enabled_flag;
|
||||
unsigned char log2_min_pcm_luma_coding_block_size_minus3;
|
||||
unsigned char log2_diff_max_min_pcm_luma_coding_block_size;
|
||||
unsigned char pcm_sample_bit_depth_luma_minus1;
|
||||
|
||||
unsigned char pcm_sample_bit_depth_chroma_minus1;
|
||||
unsigned char pcm_loop_filter_disabled_flag;
|
||||
unsigned char strong_intra_smoothing_enabled_flag;
|
||||
unsigned char max_transform_hierarchy_depth_intra;
|
||||
unsigned char max_transform_hierarchy_depth_inter;
|
||||
unsigned char amp_enabled_flag;
|
||||
unsigned char separate_colour_plane_flag;
|
||||
unsigned char log2_max_pic_order_cnt_lsb_minus4;
|
||||
|
||||
unsigned char num_short_term_ref_pic_sets;
|
||||
unsigned char long_term_ref_pics_present_flag;
|
||||
unsigned char num_long_term_ref_pics_sps;
|
||||
unsigned char sps_temporal_mvp_enabled_flag;
|
||||
unsigned char sample_adaptive_offset_enabled_flag;
|
||||
unsigned char scaling_list_enable_flag;
|
||||
unsigned char IrapPicFlag;
|
||||
unsigned char IdrPicFlag;
|
||||
|
||||
unsigned char bit_depth_luma_minus8;
|
||||
unsigned char bit_depth_chroma_minus8;
|
||||
unsigned char reserved1[14];
|
||||
|
||||
// pps
|
||||
unsigned char dependent_slice_segments_enabled_flag;
|
||||
unsigned char slice_segment_header_extension_present_flag;
|
||||
unsigned char sign_data_hiding_enabled_flag;
|
||||
unsigned char cu_qp_delta_enabled_flag;
|
||||
unsigned char diff_cu_qp_delta_depth;
|
||||
signed char init_qp_minus26;
|
||||
signed char pps_cb_qp_offset;
|
||||
signed char pps_cr_qp_offset;
|
||||
|
||||
unsigned char constrained_intra_pred_flag;
|
||||
unsigned char weighted_pred_flag;
|
||||
unsigned char weighted_bipred_flag;
|
||||
unsigned char transform_skip_enabled_flag;
|
||||
unsigned char transquant_bypass_enabled_flag;
|
||||
unsigned char entropy_coding_sync_enabled_flag;
|
||||
unsigned char log2_parallel_merge_level_minus2;
|
||||
unsigned char num_extra_slice_header_bits;
|
||||
|
||||
unsigned char loop_filter_across_tiles_enabled_flag;
|
||||
unsigned char loop_filter_across_slices_enabled_flag;
|
||||
unsigned char output_flag_present_flag;
|
||||
unsigned char num_ref_idx_l0_default_active_minus1;
|
||||
unsigned char num_ref_idx_l1_default_active_minus1;
|
||||
unsigned char lists_modification_present_flag;
|
||||
unsigned char cabac_init_present_flag;
|
||||
unsigned char pps_slice_chroma_qp_offsets_present_flag;
|
||||
|
||||
unsigned char deblocking_filter_override_enabled_flag;
|
||||
unsigned char pps_deblocking_filter_disabled_flag;
|
||||
signed char pps_beta_offset_div2;
|
||||
signed char pps_tc_offset_div2;
|
||||
unsigned char tiles_enabled_flag;
|
||||
unsigned char uniform_spacing_flag;
|
||||
unsigned char num_tile_columns_minus1;
|
||||
unsigned char num_tile_rows_minus1;
|
||||
|
||||
unsigned short column_width_minus1[21];
|
||||
unsigned short row_height_minus1[21];
|
||||
unsigned int reserved3[15];
|
||||
|
||||
// RefPicSets
|
||||
int NumBitsForShortTermRPSInSlice;
|
||||
int NumDeltaPocsOfRefRpsIdx;
|
||||
int NumPocTotalCurr;
|
||||
int NumPocStCurrBefore;
|
||||
int NumPocStCurrAfter;
|
||||
int NumPocLtCurr;
|
||||
int CurrPicOrderCntVal;
|
||||
int RefPicIdx[16]; // [refpic] Indices of valid reference pictures (-1 if unused for reference)
|
||||
int PicOrderCntVal[16]; // [refpic]
|
||||
unsigned char IsLongTerm[16]; // [refpic] 0=not a long-term reference, 1=long-term reference
|
||||
unsigned char RefPicSetStCurrBefore[8]; // [0..NumPocStCurrBefore-1] -> refpic (0..15)
|
||||
unsigned char RefPicSetStCurrAfter[8]; // [0..NumPocStCurrAfter-1] -> refpic (0..15)
|
||||
unsigned char RefPicSetLtCurr[8]; // [0..NumPocLtCurr-1] -> refpic (0..15)
|
||||
unsigned char RefPicSetInterLayer0[8];
|
||||
unsigned char RefPicSetInterLayer1[8];
|
||||
unsigned int reserved4[12];
|
||||
|
||||
// scaling lists (diag order)
|
||||
unsigned char ScalingList4x4[6][16]; // [matrixId][i]
|
||||
unsigned char ScalingList8x8[6][64]; // [matrixId][i]
|
||||
unsigned char ScalingList16x16[6][64]; // [matrixId][i]
|
||||
unsigned char ScalingList32x32[2][64]; // [matrixId][i]
|
||||
unsigned char ScalingListDCCoeff16x16[6]; // [matrixId]
|
||||
unsigned char ScalingListDCCoeff32x32[2]; // [matrixId]
|
||||
} CUVIDHEVCPICPARAMS;
|
||||
|
||||
|
||||
/*!
|
||||
* \struct CUVIDVP8PICPARAMS
|
||||
* VP8 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDVP8PICPARAMS
|
||||
{
|
||||
int width;
|
||||
int height;
|
||||
unsigned int first_partition_size;
|
||||
//Frame Indexes
|
||||
unsigned char LastRefIdx;
|
||||
unsigned char GoldenRefIdx;
|
||||
unsigned char AltRefIdx;
|
||||
union {
|
||||
struct {
|
||||
unsigned char frame_type : 1; /**< 0 = KEYFRAME, 1 = INTERFRAME */
|
||||
unsigned char version : 3;
|
||||
unsigned char show_frame : 1;
|
||||
unsigned char update_mb_segmentation_data : 1; /**< Must be 0 if segmentation is not enabled */
|
||||
unsigned char Reserved2Bits : 2;
|
||||
};
|
||||
unsigned char wFrameTagFlags;
|
||||
} tagflags;
|
||||
unsigned char Reserved1[4];
|
||||
unsigned int Reserved2[3];
|
||||
} CUVIDVP8PICPARAMS;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDVP9PICPARAMS
|
||||
* VP9 Picture Parameters
|
||||
*/
|
||||
typedef struct _CUVIDVP9PICPARAMS
|
||||
{
|
||||
unsigned int width;
|
||||
unsigned int height;
|
||||
|
||||
//Frame Indices
|
||||
unsigned char LastRefIdx;
|
||||
unsigned char GoldenRefIdx;
|
||||
unsigned char AltRefIdx;
|
||||
unsigned char colorSpace;
|
||||
|
||||
unsigned short profile : 3;
|
||||
unsigned short frameContextIdx : 2;
|
||||
unsigned short frameType : 1;
|
||||
unsigned short showFrame : 1;
|
||||
unsigned short errorResilient : 1;
|
||||
unsigned short frameParallelDecoding : 1;
|
||||
unsigned short subSamplingX : 1;
|
||||
unsigned short subSamplingY : 1;
|
||||
unsigned short intraOnly : 1;
|
||||
unsigned short allow_high_precision_mv : 1;
|
||||
unsigned short refreshEntropyProbs : 1;
|
||||
unsigned short reserved2Bits : 2;
|
||||
|
||||
unsigned short reserved16Bits;
|
||||
|
||||
unsigned char refFrameSignBias[4];
|
||||
|
||||
unsigned char bitDepthMinus8Luma;
|
||||
unsigned char bitDepthMinus8Chroma;
|
||||
unsigned char loopFilterLevel;
|
||||
unsigned char loopFilterSharpness;
|
||||
|
||||
unsigned char modeRefLfEnabled;
|
||||
unsigned char log2_tile_columns;
|
||||
unsigned char log2_tile_rows;
|
||||
|
||||
unsigned char segmentEnabled : 1;
|
||||
unsigned char segmentMapUpdate : 1;
|
||||
unsigned char segmentMapTemporalUpdate : 1;
|
||||
unsigned char segmentFeatureMode : 1;
|
||||
unsigned char reserved4Bits : 4;
|
||||
|
||||
|
||||
unsigned char segmentFeatureEnable[8][4];
|
||||
short segmentFeatureData[8][4];
|
||||
unsigned char mb_segment_tree_probs[7];
|
||||
unsigned char segment_pred_probs[3];
|
||||
unsigned char reservedSegment16Bits[2];
|
||||
|
||||
int qpYAc;
|
||||
int qpYDc;
|
||||
int qpChDc;
|
||||
int qpChAc;
|
||||
|
||||
unsigned int activeRefIdx[3];
|
||||
unsigned int resetFrameContext;
|
||||
unsigned int mcomp_filter_type;
|
||||
unsigned int mbRefLfDelta[4];
|
||||
unsigned int mbModeLfDelta[2];
|
||||
unsigned int frameTagSize;
|
||||
unsigned int offsetToDctParts;
|
||||
unsigned int reserved128Bits[4];
|
||||
|
||||
} CUVIDVP9PICPARAMS;
|
||||
|
||||
|
||||
/*!
|
||||
* \struct CUVIDPICPARAMS
|
||||
* Picture Parameters for Decoding
|
||||
*/
|
||||
typedef struct _CUVIDPICPARAMS
|
||||
{
|
||||
int PicWidthInMbs; /**< Coded Frame Size */
|
||||
int FrameHeightInMbs; /**< Coded Frame Height */
|
||||
int CurrPicIdx; /**< Output index of the current picture */
|
||||
int field_pic_flag; /**< 0=frame picture, 1=field picture */
|
||||
int bottom_field_flag; /**< 0=top field, 1=bottom field (ignored if field_pic_flag=0) */
|
||||
int second_field; /**< Second field of a complementary field pair */
|
||||
// Bitstream data
|
||||
unsigned int nBitstreamDataLen; /**< Number of bytes in bitstream data buffer */
|
||||
const unsigned char *pBitstreamData; /**< Ptr to bitstream data for this picture (slice-layer) */
|
||||
unsigned int nNumSlices; /**< Number of slices in this picture */
|
||||
const unsigned int *pSliceDataOffsets; /**< nNumSlices entries, contains offset of each slice within the bitstream data buffer */
|
||||
int ref_pic_flag; /**< This picture is a reference picture */
|
||||
int intra_pic_flag; /**< This picture is entirely intra coded */
|
||||
unsigned int Reserved[30]; /**< Reserved for future use */
|
||||
// Codec-specific data
|
||||
union {
|
||||
CUVIDMPEG2PICPARAMS mpeg2; /**< Also used for MPEG-1 */
|
||||
CUVIDH264PICPARAMS h264;
|
||||
CUVIDVC1PICPARAMS vc1;
|
||||
CUVIDMPEG4PICPARAMS mpeg4;
|
||||
CUVIDJPEGPICPARAMS jpeg;
|
||||
CUVIDHEVCPICPARAMS hevc;
|
||||
CUVIDVP8PICPARAMS vp8;
|
||||
CUVIDVP9PICPARAMS vp9;
|
||||
unsigned int CodecReserved[1024];
|
||||
} CodecSpecific;
|
||||
} CUVIDPICPARAMS;
|
||||
|
||||
|
||||
/*!
|
||||
* \struct CUVIDPROCPARAMS
|
||||
* Picture Parameters for Postprocessing
|
||||
*/
|
||||
typedef struct _CUVIDPROCPARAMS
|
||||
{
|
||||
int progressive_frame; /**< Input is progressive (deinterlace_mode will be ignored) */
|
||||
int second_field; /**< Output the second field (ignored if deinterlace mode is Weave) */
|
||||
int top_field_first; /**< Input frame is top field first (1st field is top, 2nd field is bottom) */
|
||||
int unpaired_field; /**< Input only contains one field (2nd field is invalid) */
|
||||
// The fields below are used for raw YUV input
|
||||
unsigned int reserved_flags; /**< Reserved for future use (set to zero) */
|
||||
unsigned int reserved_zero; /**< Reserved (set to zero) */
|
||||
unsigned long long raw_input_dptr; /**< Input CUdeviceptr for raw YUV extensions */
|
||||
unsigned int raw_input_pitch; /**< pitch in bytes of raw YUV input (should be aligned appropriately) */
|
||||
unsigned int raw_input_format; /**< Reserved for future use (set to zero) */
|
||||
unsigned long long raw_output_dptr; /**< Reserved for future use (set to zero) */
|
||||
unsigned int raw_output_pitch; /**< Reserved for future use (set to zero) */
|
||||
unsigned int Reserved[48];
|
||||
void *Reserved3[3];
|
||||
} CUVIDPROCPARAMS;
|
||||
|
||||
|
||||
/**
|
||||
*
|
||||
* In order to minimize decode latency, there should always be at least 2 pictures in the decode
|
||||
* queue at any time, to make sure that all decode engines are always busy.
|
||||
*
|
||||
* Overall data flow:
|
||||
* - cuvidCreateDecoder(...)
|
||||
* For each picture:
|
||||
* - cuvidDecodePicture(N)
|
||||
* - cuvidMapVideoFrame(N-4)
|
||||
* - do some processing in cuda
|
||||
* - cuvidUnmapVideoFrame(N-4)
|
||||
* - cuvidDecodePicture(N+1)
|
||||
* - cuvidMapVideoFrame(N-3)
|
||||
* ...
|
||||
* - cuvidDestroyDecoder(...)
|
||||
*
|
||||
* NOTE:
|
||||
* - When the cuda context is created from a D3D device, the D3D device must also be created
|
||||
* with the D3DCREATE_MULTITHREADED flag.
|
||||
* - There is a limit to how many pictures can be mapped simultaneously (ulNumOutputSurfaces)
|
||||
* - cuvidDecodePicture may block the calling thread if there are too many pictures pending
|
||||
* in the decode queue
|
||||
*/
|
||||
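To make the queueing scheme above concrete, here is a minimal sketch of the decode loop: submit picture N, then map, process, and unmap picture N-4 so the decode queue stays ahead of the consumer. It is illustrative only, not FFmpeg's cuvid code: it assumes the cuvid entry points are callable as plain functions (in this tree they are only reached through dynamically loaded function pointers), uses the 32-bit map/unmap variants declared below, relies on CUDA_SUCCESS from the CUDA header, and takes pre-filled CUVIDPICPARAMS (normally produced by the parser callbacks); the helper name and the fixed 4-picture lag are made up for the example.

static int decode_stream_sketch(CUVIDDECODECREATEINFO *dci,
                                CUVIDPICPARAMS *pics, int npics)
{
    CUvideodecoder dec;
    CUVIDPROCPARAMS vpp = { 0 };
    unsigned int dev_ptr, pitch;
    int i;

    if (cuvidCreateDecoder(&dec, dci) != CUDA_SUCCESS)
        return -1;

    for (i = 0; i < npics; i++) {
        cuvidDecodePicture(dec, &pics[i]);               /* enqueue picture N */

        if (i >= 4) {                                    /* consume picture N-4 */
            vpp.progressive_frame = 1;
            cuvidMapVideoFrame(dec, pics[i - 4].CurrPicIdx,
                               &dev_ptr, &pitch, &vpp);
            /* ... process dev_ptr / pitch with CUDA here ... */
            cuvidUnmapVideoFrame(dec, dev_ptr);
        }
    }

    cuvidDestroyDecoder(dec);
    return 0;
}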
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCreateDecoder(CUvideodecoder *phDecoder, CUVIDDECODECREATEINFO *pdci)
|
||||
* Create the decoder object
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCreateDecoder(CUvideodecoder *phDecoder, CUVIDDECODECREATEINFO *pdci);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidDestroyDecoder(CUvideodecoder hDecoder)
|
||||
* Destroy the decoder object
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidDestroyDecoder(CUvideodecoder hDecoder);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidDecodePicture(CUvideodecoder hDecoder, CUVIDPICPARAMS *pPicParams)
|
||||
* Decode a single picture (field or frame)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidDecodePicture(CUvideodecoder hDecoder, CUVIDPICPARAMS *pPicParams);
|
||||
|
||||
|
||||
#if !defined(__CUVID_DEVPTR64) || defined(__CUVID_INTERNAL)
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidMapVideoFrame(CUvideodecoder hDecoder, int nPicIdx, unsigned int *pDevPtr, unsigned int *pPitch, CUVIDPROCPARAMS *pVPP);
|
||||
* Post-process and map a video frame for use in cuda
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidMapVideoFrame(CUvideodecoder hDecoder, int nPicIdx,
|
||||
unsigned int *pDevPtr, unsigned int *pPitch,
|
||||
CUVIDPROCPARAMS *pVPP);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidUnmapVideoFrame(CUvideodecoder hDecoder, unsigned int DevPtr)
|
||||
* Unmap a previously mapped video frame
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidUnmapVideoFrame(CUvideodecoder hDecoder, unsigned int DevPtr);
|
||||
#endif
|
||||
|
||||
#if defined(WIN64) || defined(_WIN64) || defined(__x86_64) || defined(AMD64) || defined(_M_AMD64)
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidMapVideoFrame64(CUvideodecoder hDecoder, int nPicIdx, unsigned long long *pDevPtr, unsigned int *pPitch, CUVIDPROCPARAMS *pVPP);
|
||||
* map a video frame
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidMapVideoFrame64(CUvideodecoder hDecoder, int nPicIdx, unsigned long long *pDevPtr,
|
||||
unsigned int *pPitch, CUVIDPROCPARAMS *pVPP);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidUnmapVideoFrame64(CUvideodecoder hDecoder, unsigned long long DevPtr);
|
||||
* Unmap a previously mapped video frame
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidUnmapVideoFrame64(CUvideodecoder hDecoder, unsigned long long DevPtr);
|
||||
|
||||
#if defined(__CUVID_DEVPTR64) && !defined(__CUVID_INTERNAL)
|
||||
#define tcuvidMapVideoFrame tcuvidMapVideoFrame64
|
||||
#define tcuvidUnmapVideoFrame tcuvidUnmapVideoFrame64
|
||||
#endif
|
||||
#endif
|
||||
|
||||
|
||||
/**
|
||||
*
|
||||
* Context-locking: to facilitate multi-threaded implementations, the following 4 functions
|
||||
* provide a simple mutex-style host synchronization. If a non-NULL context is specified
|
||||
* in CUVIDDECODECREATEINFO, the codec library will acquire the mutex associated with the given
|
||||
* context before making any cuda calls.
|
||||
* A multi-threaded application could create a lock associated with a context handle so that
|
||||
* multiple threads can safely share the same cuda context:
|
||||
* - use cuCtxPopCurrent immediately after context creation in order to create a 'floating' context
|
||||
* that can be passed to cuvidCtxLockCreate.
|
||||
* - When using a floating context, all cuda calls should only be made within a cuvidCtxLock/cuvidCtxUnlock section.
|
||||
*
|
||||
* NOTE: This is a safer alternative to cuCtxPushCurrent and cuCtxPopCurrent, and is not related to video
|
||||
* decoder in any way (implemented as a critical section associated with cuCtx{Push|Pop}Current calls).
|
||||
*/
|
||||
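The locking scheme described above can be condensed into a short sketch: create a context, pop it so it 'floats', wrap it in a CUvideoctxlock, and bracket every CUDA call with cuvidCtxLock/cuvidCtxUnlock. As before, the CUDA and cuvid entry points are written as plain calls although in this tree they are resolved at runtime; the same lock would also be passed as the vidLock member of CUVIDDECODECREATEINFO so the decoder synchronizes with the application threads.

static int ctxlock_sketch(CUdevice dev)
{
    CUcontext ctx;
    CUvideoctxlock lock;

    cuCtxCreate(&ctx, 0, dev);       /* context is current on this thread ... */
    cuCtxPopCurrent(&ctx);           /* ... pop it so it becomes 'floating' */
    cuvidCtxLockCreate(&lock, ctx);  /* shared by all threads (and by the decoder via vidLock) */

    /* In any worker thread, every CUDA call is bracketed by the lock: */
    cuvidCtxLock(lock, 0);
    /* ... cuMemAlloc(), kernel launches, cuvidMapVideoFrame(), ... */
    cuvidCtxUnlock(lock, 0);

    cuvidCtxLockDestroy(lock);
    return 0;
}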
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCtxLockCreate(CUvideoctxlock *pLock, CUcontext ctx)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCtxLockCreate(CUvideoctxlock *pLock, CUcontext ctx);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCtxLockDestroy(CUvideoctxlock lck)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCtxLockDestroy(CUvideoctxlock lck);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCtxLock(CUvideoctxlock lck, unsigned int reserved_flags)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCtxLock(CUvideoctxlock lck, unsigned int reserved_flags);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCtxUnlock(CUvideoctxlock lck, unsigned int reserved_flags)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCtxUnlock(CUvideoctxlock lck, unsigned int reserved_flags);
|
||||
|
||||
/** @} */ /* End VIDEO_DECODER */
|
||||
|
||||
#if defined(__cplusplus)
|
||||
}
|
||||
#endif /* __cplusplus */
|
||||
|
||||
#endif // __CUDA_VIDEO_H__
|
||||
@@ -1,33 +1,254 @@
|
||||
/*
|
||||
* This file is part of FFmpeg.
|
||||
* This copyright notice applies to this header file only:
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
* Copyright (c) 2016
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
* Permission is hereby granted, free of charge, to any person
|
||||
* obtaining a copy of this software and associated documentation
|
||||
* files (the "Software"), to deal in the Software without
|
||||
* restriction, including without limitation the rights to use,
|
||||
* copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the software, and to permit persons to whom the
|
||||
* software is furnished to do so, subject to the following
|
||||
* conditions:
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
|
||||
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
|
||||
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
* OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#ifndef AV_COMPAT_CUDA_DYNLINK_LOADER_H
|
||||
#define AV_COMPAT_CUDA_DYNLINK_LOADER_H
|
||||
|
||||
#include "libavutil/log.h"
|
||||
#include "compat/cuda/dynlink_cuda.h"
|
||||
#include "compat/cuda/dynlink_nvcuvid.h"
|
||||
#include "compat/nvenc/nvEncodeAPI.h"
|
||||
#include "compat/w32dlfcn.h"
|
||||
|
||||
#define FFNV_LOAD_FUNC(path) dlopen((path), RTLD_LAZY)
|
||||
#define FFNV_SYM_FUNC(lib, sym) dlsym((lib), (sym))
|
||||
#define FFNV_FREE_FUNC(lib) dlclose(lib)
|
||||
#define FFNV_LOG_FUNC(logctx, msg, ...) av_log(logctx, AV_LOG_ERROR, msg, __VA_ARGS__)
|
||||
#define FFNV_DEBUG_LOG_FUNC(logctx, msg, ...) av_log(logctx, AV_LOG_DEBUG, msg, __VA_ARGS__)
|
||||
#include "libavutil/log.h"
|
||||
#include "libavutil/error.h"
|
||||
|
||||
#include <ffnvcodec/dynlink_loader.h>
|
||||
#if defined(_WIN32)
|
||||
# define LIB_HANDLE HMODULE
|
||||
#else
|
||||
# define LIB_HANDLE void*
|
||||
#endif
|
||||
|
||||
#if defined(_WIN32) || defined(__CYGWIN__)
|
||||
# define CUDA_LIBNAME "nvcuda.dll"
|
||||
# define NVCUVID_LIBNAME "nvcuvid.dll"
|
||||
# if ARCH_X86_64
|
||||
# define NVENC_LIBNAME "nvEncodeAPI64.dll"
|
||||
# else
|
||||
# define NVENC_LIBNAME "nvEncodeAPI.dll"
|
||||
# endif
|
||||
#else
|
||||
# define CUDA_LIBNAME "libcuda.so.1"
|
||||
# define NVCUVID_LIBNAME "libnvcuvid.so.1"
|
||||
# define NVENC_LIBNAME "libnvidia-encode.so.1"
|
||||
#endif
|
||||
|
||||
#define LOAD_LIBRARY(l, path) \
|
||||
do { \
|
||||
if (!((l) = dlopen(path, RTLD_LAZY))) { \
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot load %s\n", path); \
|
||||
ret = AVERROR_UNKNOWN; \
|
||||
goto error; \
|
||||
} \
|
||||
av_log(NULL, AV_LOG_TRACE, "Loaded lib: %s\n", path); \
|
||||
} while (0)
|
||||
|
||||
#define LOAD_SYMBOL(fun, symbol) \
|
||||
do { \
|
||||
if (!((f->fun) = dlsym(f->lib, symbol))) { \
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot load %s\n", symbol); \
|
||||
ret = AVERROR_UNKNOWN; \
|
||||
goto error; \
|
||||
} \
|
||||
av_log(NULL, AV_LOG_TRACE, "Loaded sym: %s\n", symbol); \
|
||||
} while (0)
|
||||
|
||||
#define GENERIC_LOAD_FUNC_PREAMBLE(T, n, N) \
|
||||
T *f; \
|
||||
int ret; \
|
||||
\
|
||||
n##_free_functions(functions); \
|
||||
\
|
||||
f = *functions = av_mallocz(sizeof(*f)); \
|
||||
if (!f) \
|
||||
return AVERROR(ENOMEM); \
|
||||
\
|
||||
LOAD_LIBRARY(f->lib, N);
|
||||
|
||||
#define GENERIC_LOAD_FUNC_FINALE(n) \
|
||||
return 0; \
|
||||
error: \
|
||||
n##_free_functions(functions); \
|
||||
return ret;
|
||||
|
||||
#define GENERIC_FREE_FUNC() \
|
||||
if (!functions) \
|
||||
return; \
|
||||
if (*functions && (*functions)->lib) \
|
||||
dlclose((*functions)->lib); \
|
||||
av_freep(functions);
|
||||
|
||||
#ifdef AV_COMPAT_DYNLINK_CUDA_H
|
||||
typedef struct CudaFunctions {
|
||||
tcuInit *cuInit;
|
||||
tcuDeviceGetCount *cuDeviceGetCount;
|
||||
tcuDeviceGet *cuDeviceGet;
|
||||
tcuDeviceGetName *cuDeviceGetName;
|
||||
tcuDeviceComputeCapability *cuDeviceComputeCapability;
|
||||
tcuCtxCreate_v2 *cuCtxCreate;
|
||||
tcuCtxPushCurrent_v2 *cuCtxPushCurrent;
|
||||
tcuCtxPopCurrent_v2 *cuCtxPopCurrent;
|
||||
tcuCtxDestroy_v2 *cuCtxDestroy;
|
||||
tcuMemAlloc_v2 *cuMemAlloc;
|
||||
tcuMemFree_v2 *cuMemFree;
|
||||
tcuMemcpy2D_v2 *cuMemcpy2D;
|
||||
tcuGetErrorName *cuGetErrorName;
|
||||
tcuGetErrorString *cuGetErrorString;
|
||||
|
||||
LIB_HANDLE lib;
|
||||
} CudaFunctions;
|
||||
#else
|
||||
typedef struct CudaFunctions CudaFunctions;
|
||||
#endif
|
||||
|
||||
typedef struct CuvidFunctions {
|
||||
tcuvidCreateDecoder *cuvidCreateDecoder;
|
||||
tcuvidDestroyDecoder *cuvidDestroyDecoder;
|
||||
tcuvidDecodePicture *cuvidDecodePicture;
|
||||
tcuvidMapVideoFrame *cuvidMapVideoFrame;
|
||||
tcuvidUnmapVideoFrame *cuvidUnmapVideoFrame;
|
||||
tcuvidCtxLockCreate *cuvidCtxLockCreate;
|
||||
tcuvidCtxLockDestroy *cuvidCtxLockDestroy;
|
||||
tcuvidCtxLock *cuvidCtxLock;
|
||||
tcuvidCtxUnlock *cuvidCtxUnlock;
|
||||
|
||||
tcuvidCreateVideoSource *cuvidCreateVideoSource;
|
||||
tcuvidCreateVideoSourceW *cuvidCreateVideoSourceW;
|
||||
tcuvidDestroyVideoSource *cuvidDestroyVideoSource;
|
||||
tcuvidSetVideoSourceState *cuvidSetVideoSourceState;
|
||||
tcuvidGetVideoSourceState *cuvidGetVideoSourceState;
|
||||
tcuvidGetSourceVideoFormat *cuvidGetSourceVideoFormat;
|
||||
tcuvidGetSourceAudioFormat *cuvidGetSourceAudioFormat;
|
||||
tcuvidCreateVideoParser *cuvidCreateVideoParser;
|
||||
tcuvidParseVideoData *cuvidParseVideoData;
|
||||
tcuvidDestroyVideoParser *cuvidDestroyVideoParser;
|
||||
|
||||
LIB_HANDLE lib;
|
||||
} CuvidFunctions;
|
||||
|
||||
typedef struct NvencFunctions {
|
||||
NVENCSTATUS (NVENCAPI *NvEncodeAPICreateInstance)(NV_ENCODE_API_FUNCTION_LIST *functionList);
|
||||
NVENCSTATUS (NVENCAPI *NvEncodeAPIGetMaxSupportedVersion)(uint32_t* version);
|
||||
|
||||
LIB_HANDLE lib;
|
||||
} NvencFunctions;
|
||||
|
||||
#ifdef AV_COMPAT_DYNLINK_CUDA_H
|
||||
static inline void cuda_free_functions(CudaFunctions **functions)
|
||||
{
|
||||
GENERIC_FREE_FUNC();
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline void cuvid_free_functions(CuvidFunctions **functions)
|
||||
{
|
||||
GENERIC_FREE_FUNC();
|
||||
}
|
||||
|
||||
static inline void nvenc_free_functions(NvencFunctions **functions)
|
||||
{
|
||||
GENERIC_FREE_FUNC();
|
||||
}
|
||||
|
||||
#ifdef AV_COMPAT_DYNLINK_CUDA_H
|
||||
static inline int cuda_load_functions(CudaFunctions **functions)
|
||||
{
|
||||
GENERIC_LOAD_FUNC_PREAMBLE(CudaFunctions, cuda, CUDA_LIBNAME);
|
||||
|
||||
LOAD_SYMBOL(cuInit, "cuInit");
|
||||
LOAD_SYMBOL(cuDeviceGetCount, "cuDeviceGetCount");
|
||||
LOAD_SYMBOL(cuDeviceGet, "cuDeviceGet");
|
||||
LOAD_SYMBOL(cuDeviceGetName, "cuDeviceGetName");
|
||||
LOAD_SYMBOL(cuDeviceComputeCapability, "cuDeviceComputeCapability");
|
||||
LOAD_SYMBOL(cuCtxCreate, "cuCtxCreate_v2");
|
||||
LOAD_SYMBOL(cuCtxPushCurrent, "cuCtxPushCurrent_v2");
|
||||
LOAD_SYMBOL(cuCtxPopCurrent, "cuCtxPopCurrent_v2");
|
||||
LOAD_SYMBOL(cuCtxDestroy, "cuCtxDestroy_v2");
|
||||
LOAD_SYMBOL(cuMemAlloc, "cuMemAlloc_v2");
|
||||
LOAD_SYMBOL(cuMemFree, "cuMemFree_v2");
|
||||
LOAD_SYMBOL(cuMemcpy2D, "cuMemcpy2D_v2");
|
||||
LOAD_SYMBOL(cuGetErrorName, "cuGetErrorName");
|
||||
LOAD_SYMBOL(cuGetErrorString, "cuGetErrorString");
|
||||
|
||||
GENERIC_LOAD_FUNC_FINALE(cuda);
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline int cuvid_load_functions(CuvidFunctions **functions)
|
||||
{
|
||||
GENERIC_LOAD_FUNC_PREAMBLE(CuvidFunctions, cuvid, NVCUVID_LIBNAME);
|
||||
|
||||
LOAD_SYMBOL(cuvidCreateDecoder, "cuvidCreateDecoder");
|
||||
LOAD_SYMBOL(cuvidDestroyDecoder, "cuvidDestroyDecoder");
|
||||
LOAD_SYMBOL(cuvidDecodePicture, "cuvidDecodePicture");
|
||||
#ifdef __CUVID_DEVPTR64
|
||||
LOAD_SYMBOL(cuvidMapVideoFrame, "cuvidMapVideoFrame64");
|
||||
LOAD_SYMBOL(cuvidUnmapVideoFrame, "cuvidUnmapVideoFrame64");
|
||||
#else
|
||||
LOAD_SYMBOL(cuvidMapVideoFrame, "cuvidMapVideoFrame");
|
||||
LOAD_SYMBOL(cuvidUnmapVideoFrame, "cuvidUnmapVideoFrame");
|
||||
#endif
|
||||
LOAD_SYMBOL(cuvidCtxLockCreate, "cuvidCtxLockCreate");
|
||||
LOAD_SYMBOL(cuvidCtxLockDestroy, "cuvidCtxLockDestroy");
|
||||
LOAD_SYMBOL(cuvidCtxLock, "cuvidCtxLock");
|
||||
LOAD_SYMBOL(cuvidCtxUnlock, "cuvidCtxUnlock");
|
||||
|
||||
LOAD_SYMBOL(cuvidCreateVideoSource, "cuvidCreateVideoSource");
|
||||
LOAD_SYMBOL(cuvidCreateVideoSourceW, "cuvidCreateVideoSourceW");
|
||||
LOAD_SYMBOL(cuvidDestroyVideoSource, "cuvidDestroyVideoSource");
|
||||
LOAD_SYMBOL(cuvidSetVideoSourceState, "cuvidSetVideoSourceState");
|
||||
LOAD_SYMBOL(cuvidGetVideoSourceState, "cuvidGetVideoSourceState");
|
||||
LOAD_SYMBOL(cuvidGetSourceVideoFormat, "cuvidGetSourceVideoFormat");
|
||||
LOAD_SYMBOL(cuvidGetSourceAudioFormat, "cuvidGetSourceAudioFormat");
|
||||
LOAD_SYMBOL(cuvidCreateVideoParser, "cuvidCreateVideoParser");
|
||||
LOAD_SYMBOL(cuvidParseVideoData, "cuvidParseVideoData");
|
||||
LOAD_SYMBOL(cuvidDestroyVideoParser, "cuvidDestroyVideoParser");
|
||||
|
||||
GENERIC_LOAD_FUNC_FINALE(cuvid);
|
||||
}
|
||||
|
||||
static inline int nvenc_load_functions(NvencFunctions **functions)
|
||||
{
|
||||
GENERIC_LOAD_FUNC_PREAMBLE(NvencFunctions, nvenc, NVENC_LIBNAME);
|
||||
|
||||
LOAD_SYMBOL(NvEncodeAPICreateInstance, "NvEncodeAPICreateInstance");
|
||||
LOAD_SYMBOL(NvEncodeAPIGetMaxSupportedVersion, "NvEncodeAPIGetMaxSupportedVersion");
|
||||
|
||||
GENERIC_LOAD_FUNC_FINALE(nvenc);
|
||||
}
|
||||
|
||||
#undef GENERIC_LOAD_FUNC_PREAMBLE
|
||||
#undef LOAD_LIBRARY
|
||||
#undef LOAD_SYMBOL
|
||||
#undef GENERIC_LOAD_FUNC_FINALE
|
||||
#undef GENERIC_FREE_FUNC
|
||||
#undef CUDA_LIBNAME
|
||||
#undef NVCUVID_LIBNAME
|
||||
#undef NVENC_LIBNAME
|
||||
#undef LIB_HANDLE
|
||||
|
||||
#endif
|
||||
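As a usage illustration for the loader above (not part of the header): a caller loads the CUDA and CUVID function tables with the single-argument loaders declared here, calls an entry point through the table, and frees both tables again. The function name is invented for the example; the loaders return 0 on success and a negative AVERROR code on failure, and CudaFunctions is only complete when compat/cuda/dynlink_cuda.h has been included first.

static int load_nvidia_functions_sketch(void)
{
    CudaFunctions  *cu    = NULL;
    CuvidFunctions *cuvid = NULL;
    int ret;

    ret = cuda_load_functions(&cu);           /* load CUDA_LIBNAME and resolve its symbols */
    if (ret < 0)
        return ret;

    ret = cuvid_load_functions(&cuvid);       /* load NVCUVID_LIBNAME and resolve its symbols */
    if (ret < 0) {
        cuda_free_functions(&cu);
        return ret;
    }

    cu->cuInit(0);                            /* every API is used through the loaded tables */

    cuvid_free_functions(&cuvid);
    cuda_free_functions(&cu);
    return 0;
}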
|
||||
|
||||
316
compat/cuda/dynlink_nvcuvid.h
Normal file
@@ -0,0 +1,316 @@
|
||||
/*
|
||||
* This copyright notice applies to this header file only:
|
||||
*
|
||||
* Copyright (c) 2010-2016 NVIDIA Corporation
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person
|
||||
* obtaining a copy of this software and associated documentation
|
||||
* files (the "Software"), to deal in the Software without
|
||||
* restriction, including without limitation the rights to use,
|
||||
* copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the software, and to permit persons to whom the
|
||||
* software is furnished to do so, subject to the following
|
||||
* conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
|
||||
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
|
||||
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
|
||||
* OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* \file nvcuvid.h
|
||||
* NvCuvid API provides a Video Decoding interface to NVIDIA GPU devices.
|
||||
* \date 2015-2015
|
||||
* This file contains the interface constants, structure definitions and function prototypes.
|
||||
*/
|
||||
|
||||
#if !defined(__NVCUVID_H__)
|
||||
#define __NVCUVID_H__
|
||||
|
||||
#include "compat/cuda/dynlink_cuviddec.h"
|
||||
|
||||
#if defined(__cplusplus)
|
||||
extern "C" {
|
||||
#endif /* __cplusplus */
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// High-level helper APIs for video sources
|
||||
//
|
||||
|
||||
typedef void *CUvideosource;
|
||||
typedef void *CUvideoparser;
|
||||
typedef long long CUvideotimestamp;
|
||||
|
||||
/**
|
||||
* \addtogroup VIDEO_PARSER Video Parser
|
||||
* @{
|
||||
*/
|
||||
|
||||
/*!
|
||||
* \enum cudaVideoState
|
||||
* Video Source State
|
||||
*/
|
||||
typedef enum {
|
||||
cudaVideoState_Error = -1, /**< Error state (invalid source) */
|
||||
cudaVideoState_Stopped = 0, /**< Source is stopped (or reached end-of-stream) */
|
||||
cudaVideoState_Started = 1 /**< Source is running and delivering data */
|
||||
} cudaVideoState;
|
||||
|
||||
/*!
|
||||
* \enum cudaAudioCodec
|
||||
* Audio compression
|
||||
*/
|
||||
typedef enum {
|
||||
cudaAudioCodec_MPEG1=0, /**< MPEG-1 Audio */
|
||||
cudaAudioCodec_MPEG2, /**< MPEG-2 Audio */
|
||||
cudaAudioCodec_MP3, /**< MPEG-1 Layer III Audio */
|
||||
cudaAudioCodec_AC3, /**< Dolby Digital (AC3) Audio */
|
||||
cudaAudioCodec_LPCM /**< PCM Audio */
|
||||
} cudaAudioCodec;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDEOFORMAT
|
||||
* Video format
|
||||
*/
|
||||
typedef struct
|
||||
{
|
||||
cudaVideoCodec codec; /**< Compression format */
|
||||
/**
|
||||
* frame rate = numerator / denominator (for example: 30000/1001)
|
||||
*/
|
||||
struct {
|
||||
unsigned int numerator; /**< frame rate numerator (0 = unspecified or variable frame rate) */
|
||||
unsigned int denominator; /**< frame rate denominator (0 = unspecified or variable frame rate) */
|
||||
} frame_rate;
|
||||
unsigned char progressive_sequence; /**< 0=interlaced, 1=progressive */
|
||||
unsigned char bit_depth_luma_minus8; /**< high bit depth Luma */
|
||||
unsigned char bit_depth_chroma_minus8; /**< high bit depth Chroma */
|
||||
unsigned char reserved1; /**< Reserved for future use */
|
||||
unsigned int coded_width; /**< coded frame width */
|
||||
unsigned int coded_height; /**< coded frame height */
|
||||
/**
|
||||
* area of the frame that should be displayed
|
||||
* typical example:
|
||||
* coded_width = 1920, coded_height = 1088
|
||||
* display_area = { 0,0,1920,1080 }
|
||||
*/
|
||||
struct {
|
||||
int left; /**< left position of display rect */
|
||||
int top; /**< top position of display rect */
|
||||
int right; /**< right position of display rect */
|
||||
int bottom; /**< bottom position of display rect */
|
||||
} display_area;
|
||||
cudaVideoChromaFormat chroma_format; /**< Chroma format */
|
||||
unsigned int bitrate; /**< video bitrate (bps, 0=unknown) */
|
||||
/**
|
||||
* Display Aspect Ratio = x:y (4:3, 16:9, etc)
|
||||
*/
|
||||
struct {
|
||||
int x;
|
||||
int y;
|
||||
} display_aspect_ratio;
|
||||
/**
|
||||
* Video Signal Description
|
||||
*/
|
||||
struct {
|
||||
unsigned char video_format : 3;
|
||||
unsigned char video_full_range_flag : 1;
|
||||
unsigned char reserved_zero_bits : 4;
|
||||
unsigned char color_primaries;
|
||||
unsigned char transfer_characteristics;
|
||||
unsigned char matrix_coefficients;
|
||||
} video_signal_description;
|
||||
unsigned int seqhdr_data_length; /**< Additional bytes following (CUVIDEOFORMATEX) */
|
||||
} CUVIDEOFORMAT;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDEOFORMATEX
|
||||
* Video format including raw sequence header information
|
||||
*/
|
||||
typedef struct
|
||||
{
|
||||
CUVIDEOFORMAT format;
|
||||
unsigned char raw_seqhdr_data[1024];
|
||||
} CUVIDEOFORMATEX;
|
||||
|
||||
/*!
|
||||
* \struct CUAUDIOFORMAT
|
||||
* Audio Formats
|
||||
*/
|
||||
typedef struct
|
||||
{
|
||||
cudaAudioCodec codec; /**< Compression format */
|
||||
unsigned int channels; /**< number of audio channels */
|
||||
unsigned int samplespersec; /**< sampling frequency */
|
||||
unsigned int bitrate; /**< For uncompressed, can also be used to determine bits per sample */
|
||||
unsigned int reserved1; /**< Reserved for future use */
|
||||
unsigned int reserved2; /**< Reserved for future use */
|
||||
} CUAUDIOFORMAT;
|
||||
|
||||
|
||||
/*!
|
||||
* \enum CUvideopacketflags
|
||||
* Data packet flags
|
||||
*/
|
||||
typedef enum {
|
||||
CUVID_PKT_ENDOFSTREAM = 0x01, /**< Set when this is the last packet for this stream */
|
||||
CUVID_PKT_TIMESTAMP = 0x02, /**< Timestamp is valid */
|
||||
CUVID_PKT_DISCONTINUITY = 0x04 /**< Set when a discontinuity has to be signalled */
|
||||
} CUvideopacketflags;
|
||||
|
||||
/*!
|
||||
* \struct CUVIDSOURCEDATAPACKET
|
||||
* Data Packet
|
||||
*/
|
||||
typedef struct _CUVIDSOURCEDATAPACKET
|
||||
{
|
||||
tcu_ulong flags; /**< Combination of CUVID_PKT_XXX flags */
|
||||
tcu_ulong payload_size; /**< number of bytes in the payload (may be zero if EOS flag is set) */
|
||||
const unsigned char *payload; /**< Pointer to packet payload data (may be NULL if EOS flag is set) */
|
||||
CUvideotimestamp timestamp; /**< Presentation timestamp (10MHz clock), only valid if CUVID_PKT_TIMESTAMP flag is set */
|
||||
} CUVIDSOURCEDATAPACKET;
|
||||
|
||||
// Callback for packet delivery
|
||||
typedef int (CUDAAPI *PFNVIDSOURCECALLBACK)(void *, CUVIDSOURCEDATAPACKET *);
|
||||
|
||||
/*!
|
||||
* \struct CUVIDSOURCEPARAMS
|
||||
* Source Params
|
||||
*/
|
||||
typedef struct _CUVIDSOURCEPARAMS
|
||||
{
|
||||
unsigned int ulClockRate; /**< Timestamp units in Hz (0=default=10000000Hz) */
|
||||
unsigned int uReserved1[7]; /**< Reserved for future use - set to zero */
|
||||
void *pUserData; /**< Parameter passed in to the data handlers */
|
||||
PFNVIDSOURCECALLBACK pfnVideoDataHandler; /**< Called to deliver video packets */
|
||||
PFNVIDSOURCECALLBACK pfnAudioDataHandler; /**< Called to deliver audio packets */
|
||||
void *pvReserved2[8]; /**< Reserved for future use - set to NULL */
|
||||
} CUVIDSOURCEPARAMS;
|
||||
|
||||
/*!
|
||||
* \enum CUvideosourceformat_flags
|
||||
* CUvideosourceformat_flags
|
||||
*/
|
||||
typedef enum {
|
||||
CUVID_FMT_EXTFORMATINFO = 0x100 /**< Return extended format structure (CUVIDEOFORMATEX) */
|
||||
} CUvideosourceformat_flags;
|
||||
|
||||
#if !defined(__APPLE__)
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCreateVideoSource(CUvideosource *pObj, const char *pszFileName, CUVIDSOURCEPARAMS *pParams)
|
||||
* Create Video Source
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCreateVideoSource(CUvideosource *pObj, const char *pszFileName, CUVIDSOURCEPARAMS *pParams);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCreateVideoSourceW(CUvideosource *pObj, const wchar_t *pwszFileName, CUVIDSOURCEPARAMS *pParams)
|
||||
* Create Video Source
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCreateVideoSourceW(CUvideosource *pObj, const wchar_t *pwszFileName, CUVIDSOURCEPARAMS *pParams);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidDestroyVideoSource(CUvideosource obj)
|
||||
* Destroy Video Source
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidDestroyVideoSource(CUvideosource obj);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidSetVideoSourceState(CUvideosource obj, cudaVideoState state)
|
||||
* Set Video Source state
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidSetVideoSourceState(CUvideosource obj, cudaVideoState state);
|
||||
|
||||
/**
|
||||
* \fn cudaVideoState CUDAAPI cuvidGetVideoSourceState(CUvideosource obj)
|
||||
* Get Video Source state
|
||||
*/
|
||||
typedef cudaVideoState CUDAAPI tcuvidGetVideoSourceState(CUvideosource obj);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidGetSourceVideoFormat(CUvideosource obj, CUVIDEOFORMAT *pvidfmt, unsigned int flags)
|
||||
* Get Video Source Format
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidGetSourceVideoFormat(CUvideosource obj, CUVIDEOFORMAT *pvidfmt, unsigned int flags);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidGetSourceAudioFormat(CUvideosource obj, CUAUDIOFORMAT *paudfmt, unsigned int flags)
|
||||
* Get Source Audio Format
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidGetSourceAudioFormat(CUvideosource obj, CUAUDIOFORMAT *paudfmt, unsigned int flags);
|
||||
|
||||
#endif
|
||||
|
||||
/**
|
||||
* \struct CUVIDPARSERDISPINFO
|
||||
*/
|
||||
typedef struct _CUVIDPARSERDISPINFO
|
||||
{
|
||||
int picture_index; /**< */
|
||||
int progressive_frame; /**< */
|
||||
int top_field_first; /**< */
|
||||
int repeat_first_field; /**< Number of additional fields (1=ivtc, 2=frame doubling, 4=frame tripling, -1=unpaired field) */
|
||||
CUvideotimestamp timestamp; /**< */
|
||||
} CUVIDPARSERDISPINFO;
|
||||
|
||||
//
|
||||
// Parser callbacks
|
||||
// The parser will call these synchronously from within cuvidParseVideoData(), whenever a picture is ready to
|
||||
// be decoded and/or displayed.
|
||||
//
|
||||
typedef int (CUDAAPI *PFNVIDSEQUENCECALLBACK)(void *, CUVIDEOFORMAT *);
|
||||
typedef int (CUDAAPI *PFNVIDDECODECALLBACK)(void *, CUVIDPICPARAMS *);
|
||||
typedef int (CUDAAPI *PFNVIDDISPLAYCALLBACK)(void *, CUVIDPARSERDISPINFO *);
|
||||
|
||||
/**
|
||||
* \struct CUVIDPARSERPARAMS
|
||||
*/
|
||||
typedef struct _CUVIDPARSERPARAMS
|
||||
{
|
||||
cudaVideoCodec CodecType; /**< cudaVideoCodec_XXX */
|
||||
unsigned int ulMaxNumDecodeSurfaces; /**< Max # of decode surfaces (parser will cycle through these) */
|
||||
unsigned int ulClockRate; /**< Timestamp units in Hz (0=default=10000000Hz) */
|
||||
unsigned int ulErrorThreshold; /**< % Error threshold (0-100) for calling pfnDecodePicture (100=always call pfnDecodePicture even if picture bitstream is fully corrupted) */
|
||||
unsigned int ulMaxDisplayDelay; /**< Max display queue delay (improves pipelining of decode with display) - 0=no delay (recommended values: 2..4) */
|
||||
unsigned int uReserved1[5]; /**< Reserved for future use - set to 0 */
|
||||
void *pUserData; /**< User data for callbacks */
|
||||
PFNVIDSEQUENCECALLBACK pfnSequenceCallback; /**< Called before decoding frames and/or whenever there is a format change */
|
||||
PFNVIDDECODECALLBACK pfnDecodePicture; /**< Called when a picture is ready to be decoded (decode order) */
|
||||
PFNVIDDISPLAYCALLBACK pfnDisplayPicture; /**< Called whenever a picture is ready to be displayed (display order) */
|
||||
void *pvReserved2[7]; /**< Reserved for future use - set to NULL */
|
||||
CUVIDEOFORMATEX *pExtVideoInfo; /**< [Optional] sequence header data from system layer */
|
||||
} CUVIDPARSERPARAMS;
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidCreateVideoParser(CUvideoparser *pObj, CUVIDPARSERPARAMS *pParams)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidCreateVideoParser(CUvideoparser *pObj, CUVIDPARSERPARAMS *pParams);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidParseVideoData(CUvideoparser obj, CUVIDSOURCEDATAPACKET *pPacket)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidParseVideoData(CUvideoparser obj, CUVIDSOURCEDATAPACKET *pPacket);
|
||||
|
||||
/**
|
||||
* \fn CUresult CUDAAPI cuvidDestroyVideoParser(CUvideoparser obj)
|
||||
*/
|
||||
typedef CUresult CUDAAPI tcuvidDestroyVideoParser(CUvideoparser obj);
|
||||
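Putting the parser pieces together, here is a minimal sketch of the callback-driven flow: fill CUVIDPARSERPARAMS, create a parser, and push CUVIDSOURCEDATAPACKETs through cuvidParseVideoData(), which invokes the three callbacks synchronously as noted in the 'Parser callbacks' comment above. The entry points are again written as plain calls although they are loaded at runtime in this tree; the callback return convention (nonzero meaning 'continue'), the surface/delay numbers and the helper names are illustrative assumptions, and cudaVideoCodec_H264 comes from the codec enum in dynlink_cuviddec.h.

static int CUDAAPI sequence_cb(void *opaque, CUVIDEOFORMAT *fmt)      { return 1; }
static int CUDAAPI decode_cb  (void *opaque, CUVIDPICPARAMS *pic)     { return 1; }
static int CUDAAPI display_cb (void *opaque, CUVIDPARSERDISPINFO *di) { return 1; }

static int parse_packet_sketch(const unsigned char *data, tcu_ulong size)
{
    CUVIDPARSERPARAMS pp = { 0 };
    CUVIDSOURCEDATAPACKET pkt = { 0 };
    CUvideoparser parser;

    pp.CodecType              = cudaVideoCodec_H264;
    pp.ulMaxNumDecodeSurfaces = 20;
    pp.ulMaxDisplayDelay      = 4;
    pp.pfnSequenceCallback    = sequence_cb;
    pp.pfnDecodePicture       = decode_cb;
    pp.pfnDisplayPicture      = display_cb;

    if (cuvidCreateVideoParser(&parser, &pp) != CUDA_SUCCESS)
        return -1;

    pkt.payload      = data;
    pkt.payload_size = size;
    pkt.flags        = size ? 0 : CUVID_PKT_ENDOFSTREAM;   /* empty packet signals end of stream */
    cuvidParseVideoData(parser, &pkt);                      /* callbacks fire from inside this call */

    cuvidDestroyVideoParser(parser);
    return 0;
}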
|
||||
/** @} */ /* END VIDEO_PARSER */
|
||||
////////////////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
#if defined(__cplusplus)
|
||||
}
|
||||
#endif /* __cplusplus */
|
||||
|
||||
#endif // __NVCUVID_H__
|
||||
|
||||
|
||||
@@ -1,36 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a
|
||||
# copy of this software and associated documentation files (the "Software"),
|
||||
# to deal in the Software without restriction, including without limitation
|
||||
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
|
||||
# and/or sell copies of the Software, and to permit persons to whom the
|
||||
# Software is furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in
|
||||
# all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
||||
# DEALINGS IN THE SOFTWARE.
|
||||
|
||||
set -e
|
||||
|
||||
OUT="$1"
|
||||
IN="$2"
|
||||
NAME="$(basename "$IN" | sed 's/\..*//')"
|
||||
|
||||
printf "const char %s_ptx[] = \\" "$NAME" > "$OUT"
|
||||
while read LINE
|
||||
do
|
||||
printf "\n\t\"%s\\\n\"" "$(printf "%s" "$LINE" | sed -e 's/\r//g' -e 's/["\\]/\\&/g')" >> "$OUT"
|
||||
done < "$IN"
|
||||
printf ";\n" >> "$OUT"
|
||||
|
||||
exit 0
|
||||
3219
compat/nvenc/nvEncodeAPI.h
Normal file
File diff suppressed because it is too large
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright (c) 2011-2017 KO Myung-Hun <komh@chollian.net>
|
||||
* Copyright (c) 2011 KO Myung-Hun <komh@chollian.net>
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
@@ -46,11 +46,9 @@ typedef struct {
|
||||
|
||||
typedef void pthread_attr_t;
|
||||
|
||||
typedef _fmutex pthread_mutex_t;
|
||||
typedef HMTX pthread_mutex_t;
|
||||
typedef void pthread_mutexattr_t;
|
||||
|
||||
#define PTHREAD_MUTEX_INITIALIZER _FMUTEX_INITIALIZER
|
||||
|
||||
typedef struct {
|
||||
HEV event_sem;
|
||||
HEV ack_sem;
|
||||
@@ -100,28 +98,28 @@ static av_always_inline int pthread_join(pthread_t thread, void **value_ptr)
|
||||
static av_always_inline int pthread_mutex_init(pthread_mutex_t *mutex,
|
||||
const pthread_mutexattr_t *attr)
|
||||
{
|
||||
_fmutex_create(mutex, 0);
|
||||
DosCreateMutexSem(NULL, (PHMTX)mutex, 0, FALSE);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_always_inline int pthread_mutex_destroy(pthread_mutex_t *mutex)
|
||||
{
|
||||
_fmutex_close(mutex);
|
||||
DosCloseMutexSem(*(PHMTX)mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_always_inline int pthread_mutex_lock(pthread_mutex_t *mutex)
|
||||
{
|
||||
_fmutex_request(mutex, 0);
|
||||
DosRequestMutexSem(*(PHMTX)mutex, SEM_INDEFINITE_WAIT);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_always_inline int pthread_mutex_unlock(pthread_mutex_t *mutex)
|
||||
{
|
||||
_fmutex_release(mutex);
|
||||
DosReleaseMutexSem(*(PHMTX)mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
10
compat/plan9/head
Executable file
@@ -0,0 +1,10 @@
|
||||
#!/bin/sh
|
||||
|
||||
n=10
|
||||
|
||||
case "$1" in
|
||||
-n) n=$2; shift 2 ;;
|
||||
-n*) n=${1#-n}; shift ;;
|
||||
esac
|
||||
|
||||
exec sed ${n}q "$@"
|
||||
@@ -16,18 +16,19 @@
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#ifndef AVCODEC_EXRDSP_H
|
||||
#define AVCODEC_EXRDSP_H
|
||||
int plan9_main(int argc, char **argv);
|
||||
|
||||
#include <stdint.h>
|
||||
#include "libavutil/common.h"
|
||||
#undef main
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
/* The setfcr() function in lib9 is broken, must use asm. */
|
||||
#ifdef __i386
|
||||
short fcr;
|
||||
__asm__ volatile ("fstcw %0 \n"
|
||||
"or $63, %0 \n"
|
||||
"fldcw %0 \n"
|
||||
: "=m"(fcr));
|
||||
#endif
|
||||
|
||||
typedef struct ExrDSPContext {
|
||||
void (*reorder_pixels)(uint8_t *dst, const uint8_t *src, ptrdiff_t size);
|
||||
void (*predictor)(uint8_t *src, ptrdiff_t size);
|
||||
} ExrDSPContext;
|
||||
|
||||
void ff_exrdsp_init(ExrDSPContext *c);
|
||||
void ff_exrdsp_init_x86(ExrDSPContext *c);
|
||||
|
||||
#endif /* AVCODEC_EXRDSP_H */
|
||||
return plan9_main(argc, argv);
|
||||
}
|
||||
2
compat/plan9/printf
Executable file
@@ -0,0 +1,2 @@
|
||||
#!/bin/sh
|
||||
exec awk "BEGIN { for (i = 2; i < ARGC; i++) printf \"$1\", ARGV[i] }" "$@"
|
||||
@@ -25,9 +25,9 @@
|
||||
#include "libavutil/avstring.h"
|
||||
#include "libavutil/mathematics.h"
|
||||
|
||||
static const char *check_nan_suffix(const char *s)
|
||||
static char *check_nan_suffix(char *s)
|
||||
{
|
||||
const char *start = s;
|
||||
char *start = s;
|
||||
|
||||
if (*s++ != '(')
|
||||
return start;
|
||||
@@ -44,7 +44,7 @@ double strtod(const char *, char **);
|
||||
|
||||
double avpriv_strtod(const char *nptr, char **endptr)
|
||||
{
|
||||
const char *end;
|
||||
char *end;
|
||||
double res;
|
||||
|
||||
/* Skip leading spaces */
|
||||
@@ -81,13 +81,13 @@ double avpriv_strtod(const char *nptr, char **endptr)
|
||||
!av_strncasecmp(nptr, "+0x", 3)) {
|
||||
/* FIXME this doesn't handle exponents, non-integers (float/double)
|
||||
* and numbers too large for long long */
|
||||
res = strtoll(nptr, (char **)&end, 16);
|
||||
res = strtoll(nptr, &end, 16);
|
||||
} else {
|
||||
res = strtod(nptr, (char **)&end);
|
||||
res = strtod(nptr, &end);
|
||||
}
|
||||
|
||||
if (endptr)
|
||||
*endptr = (char *)end;
|
||||
*endptr = end;
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
@@ -16,12 +16,15 @@
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#ifndef AVFILTER_OPENCL_SOURCE_H
|
||||
#define AVFILTER_OPENCL_SOURCE_H
|
||||
#ifndef COMPAT_TMS470_MATH_H
|
||||
#define COMPAT_TMS470_MATH_H
|
||||
|
||||
extern const char *ff_opencl_source_avgblur;
|
||||
extern const char *ff_opencl_source_convolution;
|
||||
extern const char *ff_opencl_source_overlay;
|
||||
extern const char *ff_opencl_source_unsharp;
|
||||
#include_next <math.h>
|
||||
|
||||
#endif /* AVFILTER_OPENCL_SOURCE_H */
|
||||
#undef INFINITY
|
||||
#undef NAN
|
||||
|
||||
#define INFINITY (*(const float*)((const unsigned []){ 0x7f800000 }))
|
||||
#define NAN (*(const float*)((const unsigned []){ 0x7fc00000 }))
|
||||
|
||||
#endif /* COMPAT_TMS470_MATH_H */
|
||||
@@ -21,8 +21,7 @@
|
||||
|
||||
#ifdef _WIN32
|
||||
#include <windows.h>
|
||||
#include "config.h"
|
||||
#if (_WIN32_WINNT < 0x0602) || HAVE_WINRT
|
||||
#if _WIN32_WINNT < 0x0602
|
||||
#include "libavutil/wchar_filename.h"
|
||||
#endif
|
||||
/**
|
||||
@@ -72,17 +71,7 @@ exit:
|
||||
#ifndef LOAD_LIBRARY_SEARCH_SYSTEM32
|
||||
# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800
|
||||
#endif
|
||||
#if HAVE_WINRT
|
||||
wchar_t *name_w = NULL;
|
||||
int ret;
|
||||
if (utf8towchar(name, &name_w))
|
||||
return NULL;
|
||||
ret = LoadPackagedLibrary(name_w, 0);
|
||||
av_free(name_w);
|
||||
return ret;
|
||||
#else
|
||||
return LoadLibraryExA(name, NULL, LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32);
|
||||
#endif
|
||||
}
|
||||
#define dlopen(name, flags) win32_dlopen(name)
|
||||
#define dlclose FreeLibrary
|
||||
|
||||
@@ -39,6 +39,11 @@
|
||||
#include <windows.h>
|
||||
#include <process.h>
|
||||
|
||||
#if _WIN32_WINNT < 0x0600 && defined(__MINGW32__)
|
||||
#undef MemoryBarrier
|
||||
#define MemoryBarrier __sync_synchronize
|
||||
#endif
|
||||
|
||||
#include "libavutil/attributes.h"
|
||||
#include "libavutil/common.h"
|
||||
#include "libavutil/internal.h"
|
||||
@@ -51,19 +56,28 @@ typedef struct pthread_t {
    void *ret;
} pthread_t;

/* use light weight mutex/condition variable API for Windows Vista and later */
typedef SRWLOCK pthread_mutex_t;
/* the conditional variable api for windows 6.0+ uses critical sections and
 * not mutexes */
typedef CRITICAL_SECTION pthread_mutex_t;

/* This is the CONDITION_VARIABLE typedef for using Windows' native
 * conditional variables on kernels 6.0+. */
#if HAVE_CONDITION_VARIABLE_PTR
typedef CONDITION_VARIABLE pthread_cond_t;
#else
typedef struct pthread_cond_t {
    void *Ptr;
} pthread_cond_t;
#endif

#define PTHREAD_MUTEX_INITIALIZER SRWLOCK_INIT
#define PTHREAD_COND_INITIALIZER CONDITION_VARIABLE_INIT

#if _WIN32_WINNT >= 0x0600
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
#endif

static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
    pthread_t *h = (pthread_t*)arg;
    pthread_t *h = arg;
    h->ret = h->func(h->arg);
    return 0;
}
@@ -100,25 +114,26 @@ static av_unused int pthread_join(pthread_t thread, void **value_ptr)

static inline int pthread_mutex_init(pthread_mutex_t *m, void* attr)
{
    InitializeSRWLock(m);
    InitializeCriticalSection(m);
    return 0;
}
static inline int pthread_mutex_destroy(pthread_mutex_t *m)
{
    /* Unlocked SWR locks use no resources */
    DeleteCriticalSection(m);
    return 0;
}
static inline int pthread_mutex_lock(pthread_mutex_t *m)
{
    AcquireSRWLockExclusive(m);
    EnterCriticalSection(m);
    return 0;
}
static inline int pthread_mutex_unlock(pthread_mutex_t *m)
{
    ReleaseSRWLockExclusive(m);
    LeaveCriticalSection(m);
    return 0;
}
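Both variants keep the POSIX calling convention, so portable code can use the shim without caring whether it is backed by an SRW lock or a critical section. A minimal usage sketch (names invented; nothing FFmpeg-specific assumed beyond the shim itself):

/* Illustrative caller of the mutex shim above. */
static pthread_mutex_t counter_lock;
static int counter;

static void counter_setup(void)
{
    pthread_mutex_init(&counter_lock, NULL);   /* InitializeSRWLock() or
                                                  InitializeCriticalSection() */
}

static void bump_counter(void)
{
    pthread_mutex_lock(&counter_lock);         /* Acquire.../EnterCriticalSection() */
    counter++;
    pthread_mutex_unlock(&counter_lock);       /* Release.../LeaveCriticalSection() */
}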
|
||||
#if _WIN32_WINNT >= 0x0600
|
||||
typedef INIT_ONCE pthread_once_t;
|
||||
#define PTHREAD_ONCE_INIT INIT_ONCE_STATIC_INIT
|
||||
|
||||
@@ -152,7 +167,7 @@ static inline int pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
|
||||
static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
|
||||
{
|
||||
SleepConditionVariableSRW(cond, mutex, INFINITE, 0);
|
||||
SleepConditionVariableCS(cond, mutex, INFINITE);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -162,4 +177,242 @@ static inline int pthread_cond_signal(pthread_cond_t *cond)
|
||||
return 0;
|
||||
}
|
||||
|
||||
#else // _WIN32_WINNT < 0x0600
|
||||
|
||||
/* atomic init state of dynamically loaded functions */
|
||||
static LONG w32thread_init_state = 0;
|
||||
static av_unused void w32thread_init(void);
|
||||
|
||||
/* for pre-Windows 6.0 platforms, define INIT_ONCE struct,
|
||||
* compatible to the one used in the native API */
|
||||
|
||||
typedef union pthread_once_t {
|
||||
void * Ptr; ///< For the Windows 6.0+ native functions
|
||||
LONG state; ///< For the pre-Windows 6.0 compat code
|
||||
} pthread_once_t;
|
||||
|
||||
#define PTHREAD_ONCE_INIT {0}
|
||||
|
||||
/* function pointers to init once API on windows 6.0+ kernels */
|
||||
static BOOL (WINAPI *initonce_begin)(pthread_once_t *lpInitOnce, DWORD dwFlags, BOOL *fPending, void **lpContext);
|
||||
static BOOL (WINAPI *initonce_complete)(pthread_once_t *lpInitOnce, DWORD dwFlags, void *lpContext);
|
||||
|
||||
/* pre-Windows 6.0 compat using a spin-lock */
|
||||
static inline void w32thread_once_fallback(LONG volatile *state, void (*init_routine)(void))
|
||||
{
|
||||
switch (InterlockedCompareExchange(state, 1, 0)) {
|
||||
/* Initial run */
|
||||
case 0:
|
||||
init_routine();
|
||||
InterlockedExchange(state, 2);
|
||||
break;
|
||||
/* Another thread is running init */
|
||||
case 1:
|
||||
while (1) {
|
||||
MemoryBarrier();
|
||||
if (*state == 2)
|
||||
break;
|
||||
Sleep(0);
|
||||
}
|
||||
break;
|
||||
/* Initialization complete */
|
||||
case 2:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
|
||||
{
|
||||
w32thread_once_fallback(&w32thread_init_state, w32thread_init);
|
||||
|
||||
/* Use native functions on Windows 6.0+ */
|
||||
if (initonce_begin && initonce_complete) {
|
||||
BOOL pending = FALSE;
|
||||
initonce_begin(once_control, 0, &pending, NULL);
|
||||
if (pending)
|
||||
init_routine();
|
||||
initonce_complete(once_control, 0, NULL);
|
||||
return 0;
|
||||
}
|
||||
|
||||
w32thread_once_fallback(&once_control->state, init_routine);
|
||||
return 0;
|
||||
}
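In other words, pthread_once() first resolves the InitOnce function pointers through the spin-lock fallback, then prefers the native API and only uses the spin lock for the caller's own once-control on pre-Vista systems. The calling side looks the same regardless of which path runs; a hedged sketch:

/* Illustrative one-time initialization through the shim above. */
static pthread_once_t table_once = PTHREAD_ONCE_INIT;
static int table[256];

/* Runs exactly once, no matter how many threads race into ensure_table(). */
static void build_table(void)
{
    int i;
    for (i = 0; i < 256; i++)
        table[i] = i * i;
}

static void ensure_table(void)
{
    pthread_once(&table_once, build_table);
}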
|
||||
|
||||
/* for pre-Windows 6.0 platforms we need to define and use our own condition
|
||||
* variable and api */
|
||||
|
||||
typedef struct win32_cond_t {
|
||||
pthread_mutex_t mtx_broadcast;
|
||||
pthread_mutex_t mtx_waiter_count;
|
||||
volatile int waiter_count;
|
||||
HANDLE semaphore;
|
||||
HANDLE waiters_done;
|
||||
volatile int is_broadcast;
|
||||
} win32_cond_t;
|
||||
|
||||
/* function pointers to conditional variable API on windows 6.0+ kernels */
|
||||
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_init)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
|
||||
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
|
||||
DWORD milliseconds);
|
||||
|
||||
static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
|
||||
{
|
||||
win32_cond_t *win32_cond = NULL;
|
||||
|
||||
w32thread_once_fallback(&w32thread_init_state, w32thread_init);
|
||||
|
||||
if (cond_init) {
|
||||
cond_init(cond);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* non native condition variables */
|
||||
win32_cond = av_mallocz(sizeof(win32_cond_t));
|
||||
if (!win32_cond)
|
||||
return ENOMEM;
|
||||
cond->Ptr = win32_cond;
|
||||
win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL);
|
||||
if (!win32_cond->semaphore)
|
||||
return ENOMEM;
|
||||
win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL);
|
||||
if (!win32_cond->waiters_done)
|
||||
return ENOMEM;
|
||||
|
||||
pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL);
|
||||
pthread_mutex_init(&win32_cond->mtx_broadcast, NULL);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_unused int pthread_cond_destroy(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
/* native condition variables do not destroy */
|
||||
if (cond_init)
|
||||
return 0;
|
||||
|
||||
/* non native condition variables */
|
||||
CloseHandle(win32_cond->semaphore);
|
||||
CloseHandle(win32_cond->waiters_done);
|
||||
pthread_mutex_destroy(&win32_cond->mtx_waiter_count);
|
||||
pthread_mutex_destroy(&win32_cond->mtx_broadcast);
|
||||
av_freep(&win32_cond);
|
||||
cond->Ptr = NULL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_unused int pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
int have_waiter;
|
||||
|
||||
if (cond_broadcast) {
|
||||
cond_broadcast(cond);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* non native condition variables */
|
||||
pthread_mutex_lock(&win32_cond->mtx_broadcast);
|
||||
pthread_mutex_lock(&win32_cond->mtx_waiter_count);
|
||||
have_waiter = 0;
|
||||
|
||||
if (win32_cond->waiter_count) {
|
||||
win32_cond->is_broadcast = 1;
|
||||
have_waiter = 1;
|
||||
}
|
||||
|
||||
if (have_waiter) {
|
||||
ReleaseSemaphore(win32_cond->semaphore, win32_cond->waiter_count, NULL);
|
||||
pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
|
||||
WaitForSingleObject(win32_cond->waiters_done, INFINITE);
|
||||
ResetEvent(win32_cond->waiters_done);
|
||||
win32_cond->is_broadcast = 0;
|
||||
} else
|
||||
pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
|
||||
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
int last_waiter;
|
||||
if (cond_wait) {
|
||||
cond_wait(cond, mutex, INFINITE);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* non native condition variables */
|
||||
pthread_mutex_lock(&win32_cond->mtx_broadcast);
|
||||
pthread_mutex_lock(&win32_cond->mtx_waiter_count);
|
||||
win32_cond->waiter_count++;
|
||||
pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
|
||||
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
|
||||
|
||||
// unlock the external mutex
|
||||
pthread_mutex_unlock(mutex);
|
||||
WaitForSingleObject(win32_cond->semaphore, INFINITE);
|
||||
|
||||
pthread_mutex_lock(&win32_cond->mtx_waiter_count);
|
||||
win32_cond->waiter_count--;
|
||||
last_waiter = !win32_cond->waiter_count || !win32_cond->is_broadcast;
|
||||
pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
|
||||
|
||||
if (last_waiter)
|
||||
SetEvent(win32_cond->waiters_done);
|
||||
|
||||
// lock the external mutex
|
||||
return pthread_mutex_lock(mutex);
|
||||
}
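Note that the fallback still honours the usual condition-variable contract: the external mutex is released only for the duration of the semaphore wait and re-acquired before returning, so callers keep using the standard predicate loop. A small hedged sketch of that calling pattern:

/* Canonical wait-loop usage of the API implemented above (sketch only;
 * assume pthread_mutex_init() and pthread_cond_init() ran during setup). */
static pthread_mutex_t queue_lock;
static pthread_cond_t  queue_cond;
static int             queue_len;

static void wait_for_item(void)
{
    pthread_mutex_lock(&queue_lock);
    while (queue_len == 0)                    /* re-check: wakeups may be spurious */
        pthread_cond_wait(&queue_cond, &queue_lock);
    queue_len--;                              /* consume one item */
    pthread_mutex_unlock(&queue_lock);
}

static void post_item(void)
{
    pthread_mutex_lock(&queue_lock);
    queue_len++;
    pthread_mutex_unlock(&queue_lock);
    pthread_cond_signal(&queue_cond);         /* wake one waiter */
}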
|
||||
|
||||
static av_unused int pthread_cond_signal(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
int have_waiter;
|
||||
if (cond_signal) {
|
||||
cond_signal(cond);
|
||||
return 0;
|
||||
}
|
||||
|
||||
pthread_mutex_lock(&win32_cond->mtx_broadcast);
|
||||
|
||||
/* non-native condition variables */
|
||||
pthread_mutex_lock(&win32_cond->mtx_waiter_count);
|
||||
have_waiter = win32_cond->waiter_count;
|
||||
pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
|
||||
|
||||
if (have_waiter) {
|
||||
ReleaseSemaphore(win32_cond->semaphore, 1, NULL);
|
||||
WaitForSingleObject(win32_cond->waiters_done, INFINITE);
|
||||
ResetEvent(win32_cond->waiters_done);
|
||||
}
|
||||
|
||||
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
|
||||
static av_unused void w32thread_init(void)
|
||||
{
|
||||
#if _WIN32_WINNT < 0x0600
|
||||
HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
|
||||
/* if one is available, then they should all be available */
|
||||
cond_init =
|
||||
(void*)GetProcAddress(kernel_dll, "InitializeConditionVariable");
|
||||
cond_broadcast =
|
||||
(void*)GetProcAddress(kernel_dll, "WakeAllConditionVariable");
|
||||
cond_signal =
|
||||
(void*)GetProcAddress(kernel_dll, "WakeConditionVariable");
|
||||
cond_wait =
|
||||
(void*)GetProcAddress(kernel_dll, "SleepConditionVariableCS");
|
||||
initonce_begin =
|
||||
(void*)GetProcAddress(kernel_dll, "InitOnceBeginInitialize");
|
||||
initonce_complete =
|
||||
(void*)GetProcAddress(kernel_dll, "InitOnceComplete");
|
||||
#endif
|
||||
|
||||
}
|
||||
|
||||
#endif /* COMPAT_W32PTHREADS_H */
|
||||
|
||||
@@ -45,11 +45,7 @@ libname=$(mktemp -u "library").lib
|
||||
|
||||
trap 'rm -f -- $libname' EXIT
|
||||
|
||||
if [ -n "$AR" ]; then
|
||||
$AR rcs ${libname} $@ >/dev/null
|
||||
else
|
||||
lib -out:${libname} $@ >/dev/null
|
||||
fi
|
||||
lib -out:${libname} $@ >/dev/null
|
||||
if [ $? != 0 ]; then
|
||||
echo "Could not create temporary library." >&2
|
||||
exit 1
|
||||
@@ -58,7 +54,23 @@ fi
|
||||
IFS='
|
||||
'
|
||||
|
||||
prefix="$EXTERN_PREFIX"
|
||||
# Determine if we're building for x86 or x86_64 and
|
||||
# set the symbol prefix accordingly.
|
||||
prefix=""
|
||||
arch=$(dumpbin -headers ${libname} |
|
||||
tr '\t' ' ' |
|
||||
grep '^ \+.\+machine \+(.\+)' |
|
||||
head -1 |
|
||||
sed -e 's/^ \{1,\}.\{1,\} \{1,\}machine \{1,\}(\(...\)).*/\1/')
|
||||
|
||||
if [ "${arch}" = "x86" ]; then
|
||||
prefix="_"
|
||||
else
|
||||
if [ "${arch}" != "ARM" ] && [ "${arch}" != "x64" ]; then
|
||||
echo "Unknown machine type." >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
started=0
|
||||
regex="none"
|
||||
@@ -100,19 +112,7 @@ for line in $(cat ${vscript} | tr '\t' ' '); do
|
||||
'
|
||||
done
|
||||
|
||||
if [ -n "$NM" ]; then
|
||||
# Use eval, since NM="nm -g"
|
||||
dump=$(eval "$NM --defined-only -g ${libname}" |
|
||||
grep -v : |
|
||||
grep -v ^$ |
|
||||
cut -d' ' -f3 |
|
||||
sed -e "s/^${prefix}//")
|
||||
else
|
||||
dump=$(dumpbin -linkermember:1 ${libname} |
|
||||
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
|
||||
tail -n +2 |
|
||||
cut -d' ' -f3)
|
||||
fi
|
||||
dump=$(dumpbin -linkermember:1 ${libname})
|
||||
|
||||
rm ${libname}
|
||||
|
||||
@@ -121,6 +121,9 @@ list=""
|
||||
for exp in ${regex}; do
|
||||
list="${list}"'
|
||||
'$(echo "${dump}" |
|
||||
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
|
||||
tail -n +2 |
|
||||
cut -d' ' -f3 |
|
||||
grep "^${exp}" |
|
||||
sed -e 's/^/ /')
|
||||
done
|
||||
|
||||
doc/APIchanges
@@ -2,282 +2,19 @@ Never assume the API of libav* to be stable unless at least 1 month has passed
|
||||
since the last major version increase or the API was added.
|
||||
|
||||
The last version increases were:
|
||||
libavcodec: 2017-10-21
|
||||
libavdevice: 2017-10-21
|
||||
libavfilter: 2017-10-21
|
||||
libavformat: 2017-10-21
|
||||
libavresample: 2017-10-21
|
||||
libpostproc: 2017-10-21
|
||||
libswresample: 2017-10-21
|
||||
libswscale: 2017-10-21
|
||||
libavutil: 2017-10-21
|
||||
libavcodec: 2015-08-28
|
||||
libavdevice: 2015-08-28
|
||||
libavfilter: 2015-08-28
|
||||
libavformat: 2015-08-28
|
||||
libavresample: 2015-08-28
|
||||
libpostproc: 2015-08-28
|
||||
libswresample: 2015-08-28
|
||||
libswscale: 2015-08-28
|
||||
libavutil: 2015-08-28
|
||||
|
||||
|
||||
API changes, most recent first:
|
||||
|
||||
-------- 8< --------- FFmpeg 4.0 was cut here -------- 8< ---------
|
||||
|
||||
2018-04-03 - d6fc031caf - lavu 56.13.100 - pixdesc.h
|
||||
Deprecate AV_PIX_FMT_FLAG_PSEUDOPAL and make allocating a pseudo palette
|
||||
optional for API users (see AV_PIX_FMT_FLAG_PSEUDOPAL doxygen for details).
|
||||
|
||||
2018-04-01 - 860086ee16 - lavc 58.17.100 - avcodec.h
|
||||
Add av_packet_make_refcounted().
|
||||
|
||||
2018-04-01 - f1805d160d - lavfi 7.14.100 - avfilter.h
|
||||
Deprecate use of avfilter_register(), avfilter_register_all(),
|
||||
avfilter_next(). Add av_filter_iterate().
|
||||
|
||||
2018-03-25 - b7d0d912ef - lavc 58.16.100 - avcodec.h
|
||||
Add FF_SUB_CHARENC_MODE_IGNORE.
|
||||
|
||||
2018-03-23 - db2a7c947e - lavu 56.12.100 - encryption_info.h
|
||||
Add AVEncryptionInitInfo and AVEncryptionInfo structures to hold new side-data
|
||||
for encryption info.
|
||||
|
||||
2018-03-21 - f14ca60001 - lavc 58.15.100 - avcodec.h
|
||||
Add av_packet_make_writable().
|
||||
|
||||
2018-03-18 - 4b86ac27a0 - lavu 56.11.100 - frame.h
|
||||
Add AV_FRAME_DATA_QP_TABLE_PROPERTIES and AV_FRAME_DATA_QP_TABLE_DATA.
|
||||
|
||||
2018-03-15 - e0e72539cf - lavu 56.10.100 - opt.h
|
||||
Add AV_OPT_FLAG_BSF_PARAM
|
||||
|
||||
2018-03-07 - 950170bd3b - lavu 56.9.100 - crc.h
|
||||
Add AV_CRC_8_EBU crc variant.
|
||||
|
||||
2018-03-07 - 2a0eb86857 - lavc 58.14.100 - mediacodec.h
|
||||
Change the default behavior of avcodec_flush() on mediacodec
|
||||
video decoders. To restore the previous behavior, use the new
|
||||
delay_flush=1 option.
|
||||
|
||||
2018-03-01 - 6731f60598 - lavu 56.8.100 - frame.h
|
||||
Add av_frame_new_side_data_from_buf().
|
||||
|
||||
2018-02-15 - 8a8d0b319a
|
||||
Change av_ripemd_update(), av_murmur3_update() and av_hash_update() length
|
||||
parameter type to size_t at next major bump.
|
||||
|
||||
2018-02-12 - bcab11a1a2 - lavfi 7.12.100 - avfilter.h
|
||||
Add AVFilterContext.extra_hw_frames.
|
||||
|
||||
2018-02-12 - d23fff0d8a - lavc 58.11.100 - avcodec.h
|
||||
Add AVCodecContext.extra_hw_frames.
|
||||
|
||||
2018-02-06 - 0694d87024 - lavf 58.9.100 - avformat.h
|
||||
Deprecate use of av_register_input_format(), av_register_output_format(),
|
||||
av_register_all(), av_iformat_next(), av_oformat_next().
|
||||
Add av_demuxer_iterate(), and av_muxer_iterate().
|
||||
|
||||
2018-02-06 - 36c85d6e77 - lavc 58.10.100 - avcodec.h
|
||||
Deprecate use of avcodec_register(), avcodec_register_all(),
|
||||
av_codec_next(), av_register_codec_parser(), and av_parser_next().
|
||||
Add av_codec_iterate() and av_parser_iterate().
|
||||
|
||||
2018-02-04 - ff46124b0d - lavf 58.8.100 - avformat.h
|
||||
Deprecate the current names of the RTSP "timeout", "stimeout", "user-agent"
|
||||
options. Introduce "listen_timeout" as replacement for the current "timeout"
|
||||
option, and "user_agent" as replacement for "user-agent". Once the deprecation
|
||||
is over, the old "timeout" option will be removed, and "stimeout" will be
|
||||
renamed to "timeout" (the "timeout" option will essentially change semantics).
|
||||
|
||||
2018-01-28 - ea3672b7d6 - lavf 58.7.100 - avformat.h
|
||||
Deprecate AVFormatContext filename field which had limited length, use the
|
||||
new dynamically allocated url field instead.
|
||||
|
||||
2018-01-28 - ea3672b7d6 - lavf 58.7.100 - avformat.h
|
||||
Add url field to AVFormatContext and add ff_format_set_url helper function.
|
||||
|
||||
2018-01-27 - 6194d7e564 - lavf 58.6.100 - avformat.h
|
||||
Add AVFMTCTX_UNSEEKABLE (for HLS demuxer).
|
||||
|
||||
2018-01-23 - 9f07cf7c00 - lavu 56.9.100 - aes_ctr.h
|
||||
Add method to set the 16-byte IV.
|
||||
|
||||
2018-01-16 - 631c56a8e4 - lavf 58.5.100 - avformat.h
|
||||
Explicitly make avformat_network_init() and avformat_network_deinit() optional.
|
||||
If these are not called, network initialization and deinitialization is
|
||||
automatic, and unlike in older versions, fully supported, unless libavformat
|
||||
is linked to ancient GnuTLS and OpenSSL.
|
||||
|
||||
2018-01-16 - 6512ff72f9 - lavf 58.4.100 - avformat.h
|
||||
Deprecate AVStream.recommended_encoder_configuration. It was useful only for
|
||||
FFserver, which has been removed.
|
||||
|
||||
2018-01-05 - 798dcf2432 - lavfi 7.11.101 - avfilter.h
|
||||
Deprecate avfilter_link_get_channels(). Use av_buffersink_get_channels().
|
||||
|
||||
2017-01-04 - c29038f304 - lavr 4.0.0 - avresample.h
|
||||
Deprecate the entire library. Merged years ago to provide compatibility
|
||||
with Libav, it remained unmaintained by the FFmpeg project and duplicated
|
||||
functionality provided by libswresample.
|
||||
|
||||
In order to improve consistency and reduce attack surface, it has been deprecated.
|
||||
Users of this library are asked to migrate to libswresample, which, as well as
|
||||
providing more functionality, is faster and has higher accuracy.
|
||||
|
||||
2017-12-26 - a04c2c707d - lavc 58.9.100 - avcodec.h
|
||||
Deprecate av_lockmgr_register(). You need to build FFmpeg with threading
|
||||
support enabled to get basic thread-safety (which is the default build
|
||||
configuration).
|
||||
|
||||
2017-12-24 - 8b81eabe57 - lavu 56.7.100 - cpu.h
|
||||
AVX-512 flags added.
|
||||
|
||||
2017-12-16 - 8bf4e6d3ce - lavc 58.8.100 - avcodec.h
|
||||
The MediaCodec decoders now support AVCodecContext.hw_device_ctx.
|
||||
|
||||
2017-12-16 - e4d9f05ca7 - lavu 56.6.100 - hwcontext.h hwcontext_mediacodec.h
|
||||
Add AV_HWDEVICE_TYPE_MEDIACODEC and a new installed header with
|
||||
MediaCodec-specific hwcontext definitions.
|
||||
|
||||
2017-12-14 - b945fed629 - lavc 58.7.100 - avcodec.h
|
||||
Add AV_CODEC_CAP_HARDWARE, AV_CODEC_CAP_HYBRID, and AVCodec.wrapper_name,
|
||||
and mark all AVCodecs accordingly.
|
||||
|
||||
2017-11-29 - d268094f88 - lavu 56.4.100 / 56.7.0 - stereo3d.h
|
||||
Add view field to AVStereo3D structure and AVStereo3DView enum.
|
||||
|
||||
2017-11-26 - 3a71bcc213 - lavc 58.6.100 - avcodec.h
|
||||
Add const to AVCodecContext.hwaccel.
|
||||
|
||||
2017-11-26 - 3536a3efb9 - lavc 58.5.100 - avcodec.h
|
||||
Deprecate user visibility of the AVHWAccel structure and the functions
|
||||
av_register_hwaccel() and av_hwaccel_next().
|
||||
|
||||
2017-11-26 - 24cc0a53e9 - lavc 58.4.100 - avcodec.h
|
||||
Add AVCodecHWConfig and avcodec_get_hw_config().
|
||||
|
||||
2017-11-22 - 3650cb2dfa - lavu 56.3.100 - opencl.h
|
||||
Remove experimental OpenCL API (av_opencl_*).
|
||||
|
||||
2017-11-22 - b25d8ef0a7 - lavu 56.2.100 - hwcontext.h hwcontext_opencl.h
|
||||
Add AV_HWDEVICE_TYPE_OPENCL and a new installed header with
|
||||
OpenCL-specific hwcontext definitions.
|
||||
|
||||
2017-11-22 - a050f56c09 - lavu 56.1.100 - pixfmt.h
|
||||
Add AV_PIX_FMT_OPENCL.
|
||||
|
||||
2017-11-11 - 48e4eda11d - lavc 58.3.100 - avcodec.h
|
||||
Add avcodec_get_hw_frames_parameters().
|
||||
|
||||
-------- 8< --------- FFmpeg 3.4 was cut here -------- 8< ---------
|
||||
|
||||
2017-09-28 - b6cf66ae1c - lavc 57.106.104 - avcodec.h
|
||||
Add AV_PKT_DATA_A53_CC packet side data, to export closed captions
|
||||
|
||||
2017-09-27 - 7aa6b8a68f - lavu 55.77.101 / lavu 55.31.1 - frame.h
|
||||
Allow passing the value of 0 (meaning "automatic") as the required alignment
|
||||
to av_frame_get_buffer().
|
||||
|
||||
2017-09-27 - 522f877086 - lavu 55.77.100 / lavu 55.31.0 - cpu.h
|
||||
Add av_cpu_max_align() for querying maximum required data alignment.
|
||||
|
||||
2017-09-26 - b1cf151c4d - lavc 57.106.102 - avcodec.h
|
||||
Deprecate AVCodecContext.refcounted_frames. This was useful for deprecated
|
||||
API only (avcodec_decode_video2/avcodec_decode_audio4). The new decode APIs
|
||||
(avcodec_send_packet/avcodec_receive_frame) always work with reference
|
||||
counted frames.
|
||||
|
||||
2017-09-21 - 6f15f1cdc8 - lavu 55.76.100 / 56.6.0 - pixdesc.h
|
||||
Add av_color_range_from_name(), av_color_primaries_from_name(),
|
||||
av_color_transfer_from_name(), av_color_space_from_name(), and
|
||||
av_chroma_location_from_name().
|
||||
|
||||
2017-09-13 - 82342cead1 - lavc 57.106.100 - avcodec.h
|
||||
Add AV_PKT_FLAG_TRUSTED.
|
||||
|
||||
2017-09-13 - 9cb23cd9fe - lavu 55.75.100 - hwcontext.h hwcontext_drm.h
|
||||
Add AV_HWDEVICE_TYPE_DRM and implementation.
|
||||
|
||||
2017-09-08 - 5ba2aef6ec - lavfi 6.103.100 - buffersrc.h
|
||||
Add av_buffersrc_close().
|
||||
|
||||
2017-09-04 - 6cadbb16e9 - lavc 57.105.100 - avcodec.h
|
||||
Add AV_HWACCEL_CODEC_CAP_EXPERIMENTAL, replacing the deprecated
|
||||
HWACCEL_CODEC_CAP_EXPERIMENTAL flag.
|
||||
|
||||
2017-09-01 - 5d76674756 - lavf 57.81.100 - avio.h
|
||||
Add avio_read_partial().
|
||||
|
||||
2017-09-01 - xxxxxxx - lavf 57.80.100 / 57.11.0 - avio.h
|
||||
Add avio_context_free(). From now on it must be used for freeing AVIOContext.
|
||||
|
||||
2017-08-08 - 1460408703 - lavu 55.74.100 - pixdesc.h
|
||||
Add AV_PIX_FMT_FLAG_FLOAT pixel format flag.
|
||||
|
||||
2017-08-08 - 463b81de2b - lavu 55.72.100 - imgutils.h
|
||||
Add av_image_fill_black().
|
||||
|
||||
2017-08-08 - caa12027ba - lavu 55.71.100 - frame.h
|
||||
Add av_frame_apply_cropping().
|
||||
|
||||
2017-07-25 - 24de4fddca - lavu 55.69.100 - frame.h
|
||||
Add AV_FRAME_DATA_ICC_PROFILE side data type.
|
||||
|
||||
2017-06-27 - 70143a3954 - lavc 57.100.100 - avcodec.h
|
||||
DXVA2 and D3D11 hardware accelerated decoding now supports the new hwaccel API,
|
||||
which can create the decoder context and allocate hardware frame automatically.
|
||||
See AVCodecContext.hw_device_ctx and AVCodecContext.hw_frames_ctx. For D3D11,
|
||||
the new AV_PIX_FMT_D3D11 pixfmt must be used with the new API.
|
||||
|
||||
2017-06-27 - 3303511f33 - lavu 56.67.100 - hwcontext.h
|
||||
Add AV_HWDEVICE_TYPE_D3D11VA and AV_PIX_FMT_D3D11.
|
||||
|
||||
2017-06-24 - 09891c5391 - lavf 57.75.100 - avio.h
|
||||
Add AVIO_DATA_MARKER_FLUSH_POINT to signal preferred flush points to aviobuf.
|
||||
|
||||
2017-06-14 - d59c6a3aeb - lavu 55.66.100 - hwcontext.h
|
||||
av_hwframe_ctx_create_derived() now takes some AV_HWFRAME_MAP_* combination
|
||||
as its flags argument (which was previously unused).
|
||||
|
||||
2017-06-14 - 49ae8a5e87 - lavc 57.99.100 - avcodec.h
|
||||
Add AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH.
|
||||
|
||||
2017-06-14 - 0b1794a43e - lavu 55.65.100 - hwcontext.h
|
||||
Add AV_HWDEVICE_TYPE_NONE, av_hwdevice_find_type_by_name(),
|
||||
av_hwdevice_get_type_name() and av_hwdevice_iterate_types().
|
||||
|
||||
2017-06-14 - b22172f6f3 - lavu 55.64.100 - hwcontext.h
|
||||
Add av_hwdevice_ctx_create_derived().
|
||||
|
||||
2017-05-15 - 532b23f079 - lavc 57.96.100 - avcodec.h
|
||||
VideoToolbox hardware-accelerated decoding now supports the new hwaccel API,
|
||||
which can create the decoder context and allocate hardware frames automatically.
|
||||
See AVCodecContext.hw_device_ctx and AVCodecContext.hw_frames_ctx.
|
||||
|
||||
2017-05-15 - 532b23f079 - lavu 57.63.100 - hwcontext.h
|
||||
Add AV_HWDEVICE_TYPE_VIDEOTOOLBOX and implementation.
|
||||
|
||||
2017-05-08 - f089e02fa2 - lavc 57.95.100 / 57.31.0 - avcodec.h
|
||||
Add AVCodecContext.apply_cropping to control whether cropping
|
||||
is handled by libavcodec or the caller.
|
||||
|
||||
2017-05-08 - a47bd5d77e - lavu 55.62.100 / 55.30.0 - frame.h
|
||||
Add AVFrame.crop_left/right/top/bottom fields for attaching cropping
|
||||
information to video frames.
|
||||
|
||||
2017-xx-xx - xxxxxxxxxx
|
||||
Change av_sha_update(), av_sha512_update() and av_md5_sum()/av_md5_update() length
|
||||
parameter type to size_t at next major bump.
|
||||
|
||||
2017-05-05 - c0f17a905f - lavc 57.94.100 - avcodec.h
|
||||
The cuvid decoders now support AVCodecContext.hw_device_ctx, which removes
|
||||
the requirement to set an incomplete AVCodecContext.hw_frames_ctx only to
|
||||
set the Cuda device handle.
|
||||
|
||||
2017-04-11 - 8378466507 - lavu 55.61.100 - avstring.h
|
||||
Add av_strireplace().
|
||||
|
||||
2016-04-06 - 157e57a181 - lavc 57.92.100 - avcodec.h
|
||||
Add AV_PKT_DATA_CONTENT_LIGHT_LEVEL packet side data.
|
||||
|
||||
2016-04-06 - b378f5bd64 - lavu 55.60.100 - mastering_display_metadata.h
|
||||
Add AV_FRAME_DATA_CONTENT_LIGHT_LEVEL value, av_content_light_metadata_alloc()
|
||||
and av_content_light_metadata_create_side_data() API, and AVContentLightMetadata
|
||||
type to export content light level video properties.
|
||||
|
||||
2017-03-31 - 9033e8723c - lavu 55.57.100 - spherical.h
|
||||
Add av_spherical_projection_name().
|
||||
Add av_spherical_from_name().
|
||||
@@ -889,7 +626,7 @@ API changes, most recent first:
|
||||
Add av_opt_get_dict_val/set_dict_val with AV_OPT_TYPE_DICT to support
|
||||
dictionary types being set as options.
|
||||
|
||||
2014-08-13 - afbd4b7e09 - lavf 56.01.0 - avformat.h
|
||||
2014-08-13 - afbd4b8 - lavf 56.01.0 - avformat.h
|
||||
Add AVFormatContext.event_flags and AVStream.event_flags for signaling to
|
||||
the user when events happen in the file/stream.
|
||||
|
||||
@@ -906,7 +643,7 @@ API changes, most recent first:
|
||||
2014-08-08 - 5c3c671 - lavf 55.53.100 - avio.h
|
||||
Add avio_feof() and deprecate url_feof().
|
||||
|
||||
2014-08-07 - bb789016d4 - lsws 2.1.3 - swscale.h
|
||||
2014-08-07 - bb78903 - lsws 2.1.3 - swscale.h
|
||||
sws_getContext is not going to be removed in the future.
|
||||
|
||||
2014-08-07 - a561662 / ad1ee5f - lavc 55.73.101 / 55.57.3 - avcodec.h
|
||||
|
||||
@@ -38,7 +38,7 @@ PROJECT_NAME = FFmpeg
|
||||
# could be handy for archiving the generated documentation or if some version
|
||||
# control system is used.
|
||||
|
||||
PROJECT_NUMBER = 4.0
|
||||
PROJECT_NUMBER = 3.3.6
|
||||
|
||||
# Using the PROJECT_BRIEF tag one can provide an optional one line description
|
||||
# for a project that appears at the top of each page and should give viewer a
|
||||
|
||||
doc/Makefile
@@ -24,7 +24,6 @@ HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMP
|
||||
doc/fate.html \
|
||||
doc/general.html \
|
||||
doc/git-howto.html \
|
||||
doc/mailing-list-faq.html \
|
||||
doc/nut.html \
|
||||
doc/platform.html \
|
||||
|
||||
@@ -37,6 +36,33 @@ DOCS-$(CONFIG_MANPAGES) += $(MANPAGES)
|
||||
DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES)
|
||||
DOCS = $(DOCS-yes)
|
||||
|
||||
DOC_EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
|
||||
DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
|
||||
DOC_EXAMPLES-$(CONFIG_DECODE_AUDIO_EXAMPLE) += decode_audio
|
||||
DOC_EXAMPLES-$(CONFIG_DECODE_VIDEO_EXAMPLE) += decode_video
|
||||
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
|
||||
DOC_EXAMPLES-$(CONFIG_ENCODE_AUDIO_EXAMPLE) += encode_audio
|
||||
DOC_EXAMPLES-$(CONFIG_ENCODE_VIDEO_EXAMPLE) += encode_video
|
||||
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
|
||||
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
|
||||
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
|
||||
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
|
||||
DOC_EXAMPLES-$(CONFIG_HTTP_MULTICLIENT_EXAMPLE) += http_multiclient
|
||||
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
|
||||
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
|
||||
DOC_EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
|
||||
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
|
||||
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
|
||||
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
|
||||
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
|
||||
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
|
||||
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)
|
||||
|
||||
DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
ALL_DOC_EXAMPLES := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
ALL_DOC_EXAMPLES_G := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
|
||||
PROGS += $(DOC_EXAMPLES)
|
||||
|
||||
all-$(CONFIG_DOC): doc
|
||||
|
||||
doc: documentation
|
||||
@@ -44,6 +70,8 @@ doc: documentation
|
||||
apidoc: doc/doxy/html
|
||||
documentation: $(DOCS)
|
||||
|
||||
examples: $(DOC_EXAMPLES)
|
||||
|
||||
TEXIDEP = perl $(SRC_PATH)/doc/texidep.pl $(SRC_PATH) $< $@ >$(@:%=%.d)
|
||||
|
||||
doc/%.txt: TAG = TXT
|
||||
@@ -96,9 +124,11 @@ doc/%.3: doc/%.pod $(GENTEXI)
|
||||
$(M)pod2man --section=3 --center=" " --release=" " --date=" " $< > $@
|
||||
|
||||
$(DOCS) doc/doxy/html: | doc/
|
||||
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
|
||||
OBJDIRS += doc/examples
|
||||
|
||||
DOXY_INPUT = $(INSTHEADERS)
|
||||
DOXY_INPUT_DEPS = $(addprefix $(SRC_PATH)/, $(DOXY_INPUT)) ffbuild/config.mak
|
||||
DOXY_INPUT = $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c)
|
||||
DOXY_INPUT_DEPS = $(addprefix $(SRC_PATH)/, $(DOXY_INPUT)) config.mak
|
||||
|
||||
doc/doxy/html: TAG = DOXY
|
||||
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT_DEPS)
|
||||
@@ -144,7 +174,11 @@ clean:: docclean
|
||||
distclean:: docclean
|
||||
$(RM) doc/config.texi
|
||||
|
||||
docclean::
|
||||
examplesclean:
|
||||
$(RM) $(ALL_DOC_EXAMPLES) $(ALL_DOC_EXAMPLES_G)
|
||||
$(RM) $(CLEANSUFFIXES:%=doc/examples/%)
|
||||
|
||||
docclean: examplesclean
|
||||
$(RM) $(CLEANSUFFIXES:%=doc/%)
|
||||
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 doc/avoptions_*.texi
|
||||
$(RM) -r doc/doxy/html
|
||||
|
||||
@@ -50,22 +50,21 @@ DTS-HD.
|
||||
|
||||
Add extradata to the beginning of the filtered packets.
|
||||
|
||||
@table @option
|
||||
@item freq
|
||||
The additional argument specifies which packets should be filtered.
|
||||
It accepts the values:
|
||||
@table @samp
|
||||
@item a
|
||||
add extradata to all key packets, but only if @var{local_header} is
|
||||
set in the @option{flags2} codec context field
|
||||
|
||||
@item k
|
||||
@item keyframe
|
||||
add extradata to all key packets
|
||||
|
||||
@item e
|
||||
@item all
|
||||
add extradata to all packets
|
||||
@end table
|
||||
@end table
|
||||
|
||||
If not specified it is assumed @samp{e}.
|
||||
If not specified it is assumed @samp{k}.
|
||||
|
||||
For example the following @command{ffmpeg} command forces a global
|
||||
header (thus disabling individual packet headers) in the H.264 packets
|
||||
@@ -75,10 +74,6 @@ the header stored in extradata to the key packets:
|
||||
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
|
||||
@end example
|
||||
|
||||
@section eac3_core
|
||||
|
||||
Extract the core from a E-AC-3 stream, dropping extra channels.
|
||||
|
||||
@section extract_extradata
|
||||
|
||||
Extract the in-band extradata.
|
||||
@@ -97,126 +92,6 @@ When this option is enabled, the long-term headers are removed from the
|
||||
bitstream after extraction.
|
||||
@end table
|
||||
|
||||
@section filter_units
|
||||
|
||||
Remove units with types in or not in a given set from the stream.
|
||||
|
||||
@table @option
|
||||
@item pass_types
|
||||
List of unit types or ranges of unit types to pass through while removing
|
||||
all others. This is specified as a '|'-separated list of unit type values
|
||||
or ranges of values with '-'.
|
||||
|
||||
@item remove_types
|
||||
Identical to @option{pass_types}, except the units in the given set
|
||||
are removed and all others passed through.
|
||||
@end table
|
||||
|
||||
Extradata is unchanged by this transformation, but note that if the stream
|
||||
contains inline parameter sets then the output may be unusable if they are
|
||||
removed.
|
||||
|
||||
For example, to remove all non-VCL NAL units from an H.264 stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=pass_types=1-5' OUTPUT
|
||||
@end example
|
||||
|
||||
To remove all AUDs, SEI and filler from an H.265 stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=35|38-40' OUTPUT
|
||||
@end example
|
||||
|
||||
@section hapqa_extract
|
||||
|
||||
Extract Rgb or Alpha part of an HAPQA file, without recompression, in order to create an HAPQ or an HAPAlphaOnly file.
|
||||
|
||||
@table @option
|
||||
@item texture
|
||||
Specifies the texture to keep.
|
||||
|
||||
@table @option
|
||||
@item color
|
||||
@item alpha
|
||||
@end table
|
||||
|
||||
@end table
|
||||
|
||||
Convert HAPQA to HAPQ
|
||||
@example
|
||||
ffmpeg -i hapqa_inputfile.mov -c copy -bsf:v hapqa_extract=texture=color -tag:v HapY -metadata:s:v:0 encoder="HAPQ" hapq_file.mov
|
||||
@end example
|
||||
|
||||
Convert HAPQA to HAPAlphaOnly
|
||||
@example
|
||||
ffmpeg -i hapqa_inputfile.mov -c copy -bsf:v hapqa_extract=texture=alpha -tag:v HapA -metadata:s:v:0 encoder="HAPAlpha Only" hapalphaonly_file.mov
|
||||
@end example
|
||||
|
||||
@section h264_metadata
|
||||
|
||||
Modify metadata embedded in an H.264 stream.
|
||||
|
||||
@table @option
|
||||
@item aud
|
||||
Insert or remove AUD NAL units in all access units of the stream.
|
||||
|
||||
@table @samp
|
||||
@item insert
|
||||
@item remove
|
||||
@end table
|
||||
|
||||
@item sample_aspect_ratio
|
||||
Set the sample aspect ratio of the stream in the VUI parameters.
|
||||
|
||||
@item video_format
|
||||
@item video_full_range_flag
|
||||
Set the video format in the stream (see H.264 section E.2.1 and
|
||||
table E-2).
|
||||
|
||||
@item colour_primaries
|
||||
@item transfer_characteristics
|
||||
@item matrix_coefficients
|
||||
Set the colour description in the stream (see H.264 section E.2.1
|
||||
and tables E-3, E-4 and E-5).
|
||||
|
||||
@item chroma_sample_loc_type
|
||||
Set the chroma sample location in the stream (see H.264 section
|
||||
E.2.1 and figure E-1).
|
||||
|
||||
@item tick_rate
|
||||
Set the tick rate (num_units_in_tick / time_scale) in the VUI
|
||||
parameters. This is the smallest time unit representable in the
|
||||
stream, and in many cases represents the field rate of the stream
|
||||
(double the frame rate).
|
||||
@item fixed_frame_rate_flag
|
||||
Set whether the stream has fixed framerate - typically this indicates
|
||||
that the framerate is exactly half the tick rate, but the exact
|
||||
meaning is dependent on interlacing and the picture structure (see
|
||||
H.264 section E.2.1 and table E-6).
|
||||
|
||||
@item crop_left
|
||||
@item crop_right
|
||||
@item crop_top
|
||||
@item crop_bottom
|
||||
Set the frame cropping offsets in the SPS. These values will replace
|
||||
the current ones if the stream is already cropped.
|
||||
|
||||
These fields are set in pixels. Note that some sizes may not be
|
||||
representable if the chroma is subsampled or the stream is interlaced
|
||||
(see H.264 section 7.4.2.1.1).
|
||||
|
||||
@item sei_user_data
|
||||
Insert a string as SEI unregistered user data. The argument must
|
||||
be of the form @emph{UUID+string}, where the UUID is as hex digits
|
||||
possibly separated by hyphens, and the string can be anything.
|
||||
|
||||
For example, @samp{086f3693-b7b3-4f2c-9653-21492feee5b8+hello} will
|
||||
insert the string ``hello'' associated with the given UUID.
|
||||
|
||||
@item delete_filler
|
||||
Deletes both filler NAL units and filler SEI messages.
|
||||
|
||||
@end table
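As with the other metadata filters, the options are passed inline on the bitstream-filter specification. For instance, a hypothetical invocation inserting AUDs and declaring a 4:3 sample aspect ratio while copying the video stream could look like this (option values as described above):
@example
ffmpeg -i INPUT.mp4 -c:v copy -bsf:v h264_metadata=aud=insert:sample_aspect_ratio=4/3 OUTPUT.mp4
@end example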
|
||||
|
||||
@section h264_mp4toannexb
|
||||
|
||||
Convert an H.264 bitstream from length prefixed mode to start code
|
||||
@@ -236,69 +111,6 @@ ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
|
||||
Please note that this filter is auto-inserted for MPEG-TS (muxer
|
||||
@code{mpegts}) and raw H.264 (muxer @code{h264}) output formats.
|
||||
|
||||
@section h264_redundant_pps
|
||||
|
||||
This applies a specific fixup to some Blu-ray streams which contain
|
||||
redundant PPSs modifying irrelevant parameters of the stream which
|
||||
confuse other transformations which require correct extradata.
|
||||
|
||||
A new single global PPS is created, and all of the redundant PPSs
|
||||
within the stream are removed.
|
||||
|
||||
@section hevc_metadata
|
||||
|
||||
Modify metadata embedded in an HEVC stream.
|
||||
|
||||
@table @option
|
||||
@item aud
|
||||
Insert or remove AUD NAL units in all access units of the stream.
|
||||
|
||||
@table @samp
|
||||
@item insert
|
||||
@item remove
|
||||
@end table
|
||||
|
||||
@item sample_aspect_ratio
|
||||
Set the sample aspect ratio in the stream in the VUI parameters.
|
||||
|
||||
@item video_format
|
||||
@item video_full_range_flag
|
||||
Set the video format in the stream (see H.265 section E.3.1 and
|
||||
table E.2).
|
||||
|
||||
@item colour_primaries
|
||||
@item transfer_characteristics
|
||||
@item matrix_coefficients
|
||||
Set the colour description in the stream (see H.265 section E.3.1
|
||||
and tables E.3, E.4 and E.5).
|
||||
|
||||
@item chroma_sample_loc_type
|
||||
Set the chroma sample location in the stream (see H.265 section
|
||||
E.3.1 and figure E.1).
|
||||
|
||||
@item tick_rate
|
||||
Set the tick rate in the VPS and VUI parameters (num_units_in_tick /
|
||||
time_scale). Combined with @option{num_ticks_poc_diff_one}, this can
|
||||
set a constant framerate in the stream. Note that it is likely to be
|
||||
overridden by container parameters when the stream is in a container.
|
||||
|
||||
@item num_ticks_poc_diff_one
|
||||
Set poc_proportional_to_timing_flag in VPS and VUI and use this value
|
||||
to set num_ticks_poc_diff_one_minus1 (see H.265 sections 7.4.3.1 and
|
||||
E.3.1). Ignored if @option{tick_rate} is not also set.
|
||||
|
||||
@item crop_left
|
||||
@item crop_right
|
||||
@item crop_top
|
||||
@item crop_bottom
|
||||
Set the conformance window cropping offsets in the SPS. These values
|
||||
will replace the current ones if the stream is already cropped.
|
||||
|
||||
These fields are set in pixels. Note that some sizes may not be
|
||||
representable if the chroma is subsampled (H.265 section 7.4.3.2.1).
|
||||
|
||||
@end table
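A hypothetical example, setting an eight-pixel conformance window crop on the right and bottom while copying the stream:
@example
ffmpeg -i INPUT.mp4 -c:v copy -bsf:v hevc_metadata=crop_right=8:crop_bottom=8 OUTPUT.mp4
@end example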
|
||||
|
||||
@section hevc_mp4toannexb
|
||||
|
||||
Convert an HEVC/H.265 bitstream from length prefixed mode to start code
|
||||
@@ -386,42 +198,6 @@ See also the @ref{text2movsub} filter.
|
||||
|
||||
Decompress non-standard compressed MP3 audio headers.
|
||||
|
||||
@section mpeg2_metadata
|
||||
|
||||
Modify metadata embedded in an MPEG-2 stream.
|
||||
|
||||
@table @option
|
||||
@item display_aspect_ratio
|
||||
Set the display aspect ratio in the stream.
|
||||
|
||||
The following fixed values are supported:
|
||||
@table @option
|
||||
@item 4/3
|
||||
@item 16/9
|
||||
@item 221/100
|
||||
@end table
|
||||
Any other value will result in square pixels being signalled instead
|
||||
(see H.262 section 6.3.3 and table 6-3).
|
||||
|
||||
@item frame_rate
|
||||
Set the frame rate in the stream. This is constructed from a table
|
||||
of known values combined with a small multiplier and divisor - if
|
||||
the supplied value is not exactly representable, the nearest
|
||||
representable value will be used instead (see H.262 section 6.3.3
|
||||
and table 6-4).
|
||||
|
||||
@item video_format
|
||||
Set the video format in the stream (see H.262 section 6.3.6 and
|
||||
table 6-6).
|
||||
|
||||
@item colour_primaries
|
||||
@item transfer_characteristics
|
||||
@item matrix_coefficients
|
||||
Set the colour description in the stream (see H.262 section 6.3.6
|
||||
and tables 6-7, 6-8 and 6-9).
|
||||
|
||||
@end table
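For example, the following hypothetical command rewrites the display aspect ratio to 16:9 without re-encoding:
@example
ffmpeg -i INPUT.m2v -c:v copy -bsf:v mpeg2_metadata=display_aspect_ratio=16/9 OUTPUT.m2v
@end example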
|
||||
|
||||
@section mpeg4_unpack_bframes
|
||||
|
||||
Unpack DivX-style packed B-frames.
|
||||
@@ -444,30 +220,19 @@ ffmpeg -i INPUT.avi -codec copy -bsf:v mpeg4_unpack_bframes OUTPUT.avi
|
||||
|
||||
@section noise
|
||||
|
||||
Damages the contents of packets or simply drops them without damaging the
|
||||
container. Can be used for fuzzing or testing error resilience/concealment.
|
||||
Damages the contents of packets without damaging the container. Can be
|
||||
used for fuzzing or testing error resilience/concealment.
|
||||
|
||||
Parameters:
|
||||
@table @option
|
||||
@item amount
|
||||
A numeral string, whose value is related to how often output bytes will
|
||||
be modified. Therefore, values below or equal to 0 are forbidden, and
|
||||
the lower the value, the more frequently bytes will be modified, with 1 meaning
|
||||
every byte is modified.
|
||||
@item dropamount
|
||||
A numeral string, whose value is related to how often packets will be dropped.
|
||||
Therefore, values below or equal to 0 are forbidden, and the lower the value,
the more frequently packets will be dropped, with 1 meaning every packet is dropped.
|
||||
@end table
|
||||
|
||||
The following example applies the modification to every byte but does not drop
|
||||
any packets.
|
||||
@example
|
||||
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
|
||||
@end example
|
||||
|
||||
@section null
|
||||
This bitstream filter passes the packets through unchanged.
|
||||
applies the modification to every byte.
|
||||
|
||||
@section remove_extra
|
||||
|
||||
@@ -499,27 +264,10 @@ codec) with metadata headers.
|
||||
|
||||
See also the @ref{mov2textsub} filter.
|
||||
|
||||
@section trace_headers
|
||||
|
||||
Log trace output containing all syntax elements in the coded stream
|
||||
headers (everything above the level of individual coded blocks).
|
||||
This can be useful for debugging low-level stream issues.
|
||||
|
||||
Supports H.264, H.265 and MPEG-2.
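For example, to log the header syntax of a stream while discarding the output, something along these lines should work:
@example
ffmpeg -i INPUT -c:v copy -bsf:v trace_headers -f null -
@end example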
|
||||
|
||||
@section vp9_superframe
|
||||
|
||||
Merge VP9 invisible (alt-ref) frames back into VP9 superframes. This
|
||||
fixes merging of split/segmented VP9 streams where the alt-ref frame
|
||||
was split from its visible counterpart.
|
||||
|
||||
@section vp9_superframe_split
|
||||
|
||||
Split VP9 superframes into single frames.
|
||||
|
||||
@section vp9_raw_reorder
|
||||
|
||||
Given a VP9 stream with correct timestamps but possibly out of order,
|
||||
insert additional show-existing-frame packets to correct the ordering.
|
||||
|
||||
@c man end BITSTREAM FILTERS
|
||||
|
||||
@@ -45,9 +45,6 @@ libswscale/swscale-test
|
||||
config
|
||||
Reconfigure the project with the current configuration.
|
||||
|
||||
tools/target_dec_<decoder>_fuzzer
|
||||
Build fuzzer to fuzz the specified decoder.
|
||||
|
||||
|
||||
Useful standard make commands:
|
||||
make -t <target>
|
||||
|
||||
@@ -44,6 +44,12 @@ Use 1/4 pel motion compensation.
|
||||
Use loop filter.
|
||||
@item qscale
|
||||
Use fixed qscale.
|
||||
@item gmc
|
||||
Use gmc.
|
||||
@item mv0
|
||||
Always try a mb with mv=<0,0>.
|
||||
@item input_preserved
|
||||
|
||||
@item pass1
|
||||
Use internal 2pass ratecontrol in first pass mode.
|
||||
@item pass2
|
||||
@@ -56,6 +62,8 @@ Do not draw edges.
|
||||
Set error[?] variables during encoding.
|
||||
@item truncated
|
||||
|
||||
@item naq
|
||||
Normalize adaptive quantization.
|
||||
@item ildct
|
||||
Use interlaced DCT.
|
||||
@item low_delay
|
||||
@@ -467,6 +475,8 @@ rate control
|
||||
macroblock (MB) type
|
||||
@item qp
|
||||
per-block quantization parameter (QP)
|
||||
@item mv
|
||||
motion vector
|
||||
@item dct_coeff
|
||||
|
||||
@item green_metadata
|
||||
@@ -476,12 +486,18 @@ display complexity metadata for the upcoming frame, GoP or for a given duration.
|
||||
|
||||
@item startcode
|
||||
|
||||
@item pts
|
||||
|
||||
@item er
|
||||
error recognition
|
||||
@item mmco
|
||||
memory management control operations (H.264)
|
||||
@item bugs
|
||||
|
||||
@item vis_qp
|
||||
visualize quantization parameter (QP), lower QP are tinted greener
|
||||
@item vis_mb_type
|
||||
visualize block types
|
||||
@item buffers
|
||||
picture buffer allocations
|
||||
@item thread_ops
|
||||
@@ -490,6 +506,21 @@ threading operations
|
||||
skip motion compensation
|
||||
@end table
|
||||
|
||||
@item vismv @var{integer} (@emph{decoding,video})
|
||||
Visualize motion vectors (MVs).
|
||||
|
||||
This option is deprecated, see the codecview filter instead.
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item pf
|
||||
forward predicted MVs of P-frames
|
||||
@item bf
|
||||
forward predicted MVs of B-frames
|
||||
@item bb
|
||||
backward predicted MVs of B-frames
|
||||
@end table
|
||||
|
||||
@item cmp @var{integer} (@emph{encoding,video})
|
||||
Set full pel me compare function.
|
||||
|
||||
@@ -726,6 +757,8 @@ Set context model.
|
||||
|
||||
@item slice_flags @var{integer}
|
||||
|
||||
@item xvmc_acceleration @var{integer}
|
||||
|
||||
@item mbd @var{integer} (@emph{encoding,video})
|
||||
Set macroblock decision algorithm (high quality mode).
|
||||
|
||||
@@ -1225,7 +1258,7 @@ Interlaced video, top coded first, bottom displayed first
|
||||
Interlaced video, bottom coded first, top displayed first
|
||||
@end table
|
||||
|
||||
@item skip_alpha @var{bool} (@emph{decoding,video})
|
||||
@item skip_alpha @var{integer} (@emph{decoding,video})
|
||||
Set to 1 to disable processing alpha (transparency). This works like the
|
||||
@samp{gray} flag in the @option{flags} option which skips chroma information
|
||||
instead of alpha. Default is 0.
|
||||
@@ -1246,16 +1279,6 @@ ffprobe -dump_separator "
|
||||
Maximum number of pixels per image. This value can be used to avoid out of
|
||||
memory failures due to large images.
|
||||
|
||||
@item apply_cropping @var{bool} (@emph{decoding,video})
|
||||
Enable cropping if cropping parameters are multiples of the required
|
||||
alignment for the left and top parameters. If the alignment is not met the
|
||||
cropping will be partially applied to maintain alignment.
|
||||
Default is 1 (enabled).
|
||||
Note: The required alignment depends on if @code{AV_CODEC_FLAG_UNALIGNED} is set and the
|
||||
CPU. @code{AV_CODEC_FLAG_UNALIGNED} cannot be changed from the command line. Also hardware
|
||||
decoders will not apply left/top Cropping.
|
||||
|
||||
|
||||
@end table
|
||||
|
||||
@c man end CODEC OPTIONS
|
||||
|
||||
@@ -25,6 +25,13 @@ enabled decoders.
|
||||
A description of some of the currently available video decoders
|
||||
follows.
|
||||
|
||||
@section hevc
|
||||
|
||||
HEVC / H.265 decoder.
|
||||
|
||||
Note: the @option{skip_loop_filter} option has effect only at level
|
||||
@code{all}.
|
||||
|
||||
@section rawvideo
|
||||
|
||||
Raw video decoder.
|
||||
@@ -102,7 +109,7 @@ correctly by using lavc's old buggy lpc logic for decoding.
|
||||
|
||||
@section ffwavesynth
|
||||
|
||||
Internal wave synthesizer.
|
||||
Internal wave synthetizer.
|
||||
|
||||
This decoder generates wave patterns according to predefined sequences. Its
|
||||
use is purely internal and the format of the data it accepts is not publicly
|
||||
@@ -268,7 +275,7 @@ Y offset of generated bitmaps, default is 0.
|
||||
Chops leading and trailing spaces and removes empty lines from the generated
|
||||
text. This option is useful for teletext based subtitles where empty spaces may
|
||||
be present at the start or at the end of the lines or empty lines may be
|
||||
present between the subtitle lines because of double-sized teletext characters.
|
||||
present between the subtitle lines because of double-sized teletext charactes.
|
||||
Default value is 1.
|
||||
@item txt_duration
|
||||
Sets the display duration of the decoded teletext pages or subtitles in
|
||||
|
||||
@@ -244,16 +244,6 @@ file subdir/file-2.wav
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@section dash
|
||||
|
||||
Dynamic Adaptive Streaming over HTTP demuxer.
|
||||
|
||||
This demuxer presents all AVStreams found in the manifest.
|
||||
By setting the discard flags on AVStreams the caller can decide
|
||||
which streams to actually receive.
|
||||
Each stream mirrors the @code{id} and @code{bandwidth} properties from the
|
||||
@code{<Representation>} as metadata keys named "id" and "variant_bitrate" respectively.
|
||||
|
||||
@section flv, live_flv
|
||||
|
||||
Adobe Flash Video Format demuxer.
|
||||
@@ -326,14 +316,6 @@ segment index to start live streams at (negative values are from the end).
|
||||
@item max_reload
|
||||
Maximum number of times a insufficient list is attempted to be reloaded.
|
||||
Default value is 1000.
|
||||
|
||||
@item http_persistent
|
||||
Use persistent HTTP connections. Applicable only for HTTP streams.
|
||||
Enabled by default.
|
||||
|
||||
@item http_multiple
|
||||
Use multiple HTTP connections for downloading HTTP segments.
|
||||
Enabled by default for HTTP/1.1 servers.
|
||||
@end table
|
||||
|
||||
@section image2
|
||||
|
||||
@@ -10,7 +10,9 @@
|
||||
|
||||
@contents
|
||||
|
||||
@chapter Notes for external developers
|
||||
@chapter Developers Guide
|
||||
|
||||
@section Notes for external developers
|
||||
|
||||
This document is mostly useful for internal FFmpeg developers.
|
||||
External developers who need to use the API in their application should
|
||||
@@ -28,13 +30,15 @@ For more detailed legal information about the use of FFmpeg in
|
||||
external programs read the @file{LICENSE} file in the source tree and
|
||||
consult @url{https://ffmpeg.org/legal.html}.
|
||||
|
||||
@chapter Contributing
|
||||
@section Contributing
|
||||
|
||||
There are 2 ways by which code gets into FFmpeg:
|
||||
There are 3 ways by which code gets into FFmpeg.
|
||||
@itemize @bullet
|
||||
@item Submitting patches to the ffmpeg-devel mailing list.
|
||||
@item Submitting patches to the main developer mailing list.
|
||||
See @ref{Submitting patches} for details.
|
||||
@item Directly committing changes to the main tree.
|
||||
@item Committing changes to a git clone, for example on github.com or
|
||||
gitorious.org. And asking us to merge these changes.
|
||||
@end itemize
|
||||
|
||||
Whichever way, changes should be reviewed by the maintainer of the code
|
||||
@@ -43,9 +47,9 @@ The developer making the commit and the author are responsible for their changes
|
||||
and should try to fix issues their commit causes.
|
||||
|
||||
@anchor{Coding Rules}
|
||||
@chapter Coding Rules
|
||||
@section Coding Rules
|
||||
|
||||
@section Code formatting conventions
|
||||
@subsection Code formatting conventions
|
||||
|
||||
There are the following guidelines regarding the indentation in files:
|
||||
|
||||
@@ -70,7 +74,7 @@ The presentation is one inspired by 'indent -i4 -kr -nut'.
|
||||
The main priority in FFmpeg is simplicity and small code size in order to
|
||||
minimize the bug count.
|
||||
|
||||
@section Comments
|
||||
@subsection Comments
|
||||
Use the JavaDoc/Doxygen format (see examples below) so that code documentation
|
||||
can be generated automatically. All nontrivial functions should have a comment
|
||||
above them explaining what the function does, even if it is just one sentence.
|
||||
@@ -110,7 +114,7 @@ int myfunc(int my_parameter)
|
||||
...
|
||||
@end example
|
||||
|
||||
@section C language features
|
||||
@subsection C language features
|
||||
|
||||
FFmpeg is programmed in the ISO C90 language with a few additional
|
||||
features from ISO C99, namely:
|
||||
@@ -156,7 +160,7 @@ mixing statements and declarations;
|
||||
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @})}).
|
||||
@end itemize
|
||||
|
||||
@section Naming conventions
|
||||
@subsection Naming conventions
|
||||
All names should be composed with underscores (_), not CamelCase. For example,
|
||||
@samp{avfilter_get_video_buffer} is an acceptable function name and
|
||||
@samp{AVFilterGetVideo} is not. The exception from this are type names, like
|
||||
@@ -180,7 +184,7 @@ e.g. @samp{ff_w64_demuxer}.
|
||||
@item
|
||||
For variables and functions visible outside of file scope, used internally
|
||||
across multiple libraries, use @code{avpriv_} as prefix, for example,
|
||||
@samp{avpriv_report_missing_feature}.
|
||||
@samp{avpriv_aac_parse_header}.
|
||||
|
||||
@item
|
||||
Each library has its own prefix for public symbols, in addition to the
|
||||
@@ -200,7 +204,7 @@ letter as they are reserved by the C standard. Names starting with @code{_}
|
||||
are reserved at the file level and may not be used for externally visible
|
||||
symbols. If in doubt, just avoid names starting with @code{_} altogether.
|
||||
|
||||
@section Miscellaneous conventions
|
||||
@subsection Miscellaneous conventions
|
||||
|
||||
@itemize @bullet
|
||||
@item
|
||||
@@ -212,7 +216,7 @@ Casts should be used only when necessary. Unneeded parentheses
|
||||
should also be avoided if they don't make the code easier to understand.
|
||||
@end itemize
|
||||
|
||||
@section Editor configuration
|
||||
@subsection Editor configuration
|
||||
In order to configure Vim to follow FFmpeg formatting conventions, paste
|
||||
the following snippet into your @file{.vimrc}:
|
||||
@example
|
||||
@@ -245,9 +249,9 @@ For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
|
||||
(setq c-default-style "ffmpeg")
|
||||
@end lisp
|
||||
|
||||
@chapter Development Policy
|
||||
@section Development Policy
|
||||
|
||||
@section Patches/Committing
|
||||
@subsection Patches/Committing
|
||||
@subheading Licenses for patches must be compatible with FFmpeg.
|
||||
Contributions should be licensed under the
|
||||
@uref{http://www.gnu.org/licenses/lgpl-2.1.html, LGPL 2.1},
|
||||
@@ -346,7 +350,7 @@ time-frame (12h for build failures and security fixes, 3 days small changes,
|
||||
1 week for big patches) then commit your patch if you think it is OK.
|
||||
Also note, the maintainer can simply ask for more time to review!
|
||||
|
||||
@section Code
|
||||
@subsection Code
|
||||
@subheading API/ABI changes should be discussed before they are made.
|
||||
Do not change behavior of the programs (renaming options etc) or public
|
||||
API or ABI without first discussing it on the ffmpeg-devel mailing list.
|
||||
@@ -377,29 +381,12 @@ Never write to unallocated memory, never write over the end of arrays,
|
||||
always check values read from some untrusted source before using them
|
||||
as array index or other risky things.
|
||||
|
||||
@section Documentation/Other
|
||||
@subheading Subscribe to the ffmpeg-devel mailing list.
|
||||
It is important to be subscribed to the
|
||||
@uref{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-devel, ffmpeg-devel}
|
||||
mailing list. Almost any non-trivial patch is to be sent there for review.
|
||||
Other developers may have comments about your contribution. We expect you to read
those comments and to improve your contribution if requested. (N.B. Experienced committers
|
||||
have other channels, and may sometimes skip review for trivial fixes.) Also,
|
||||
discussion here about bug fixes and FFmpeg improvements by other developers may
|
||||
be helpful information for you. Finally, by being a list subscriber, your
|
||||
contribution will be posted immediately to the list, without the moderation
|
||||
hold which messages from non-subscribers experience.
|
||||
|
||||
However, it is more important to the project that we receive your patch than
|
||||
that you be subscribed to the ffmpeg-devel list. If you have a patch, and don't
|
||||
want to subscribe and discuss the patch, then please do send it to the list
|
||||
anyway.
|
||||
|
||||
@subsection Documentation/Other
|
||||
@subheading Subscribe to the ffmpeg-cvslog mailing list.
|
||||
Diffs of all commits are sent to the
|
||||
@uref{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-cvslog, ffmpeg-cvslog}
|
||||
mailing list. Some developers read this list to review all code base changes
|
||||
from all sources. Subscribing to this list is not mandatory.
|
||||
It is important to do this as the diffs of all commits are sent there and
|
||||
reviewed by all the other developers. Bugs and possible improvements or
|
||||
general questions regarding commits are discussed there. We expect you to
|
||||
react if problems with your code are uncovered.
|
||||
|
||||
@subheading Keep the documentation up to date.
|
||||
Update the documentation if you change behavior or add features. If you are
|
||||
@@ -419,7 +406,7 @@ finding a new maintainer and also don't forget to update the @file{MAINTAINERS}
|
||||
|
||||
We think our rules are not too hard. If you have comments, contact us.
|
||||
|
||||
@chapter Code of conduct
|
||||
@section Code of conduct
|
||||
|
||||
Be friendly and respectful towards others and third parties.
|
||||
Treat others the way you yourself want to be treated.
|
||||
@@ -449,7 +436,7 @@ Finally, keep in mind the immortal words of Bill and Ted,
|
||||
"Be excellent to each other."
|
||||
|
||||
@anchor{Submitting patches}
|
||||
@chapter Submitting patches
|
||||
@section Submitting patches
|
||||
|
||||
First, read the @ref{Coding Rules} above if you did not yet, in particular
|
||||
the rules regarding patch submission.
|
||||
@@ -498,7 +485,7 @@ Give us a few days to react. But if some time passes without reaction,
|
||||
send a reminder by email. Your patch should eventually be dealt with.
|
||||
|
||||
|
||||
@chapter New codecs or formats checklist
|
||||
@section New codecs or formats checklist
|
||||
|
||||
@enumerate
|
||||
@item
|
||||
@@ -550,7 +537,7 @@ Did you make sure it compiles standalone, i.e. with
|
||||
@end enumerate
|
||||
|
||||
|
||||
@chapter Patch submission checklist
|
||||
@section Patch submission checklist
|
||||
|
||||
@enumerate
|
||||
@item
|
||||
@@ -560,9 +547,9 @@ Does @code{make fate} pass with the patch applied?
|
||||
Was the patch generated with git format-patch or send-email?
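
For instance (an illustrative invocation, assuming the patch is your latest
commit):

@example
git format-patch -1 -o patches/
git send-email --to ffmpeg-devel@@ffmpeg.org patches/*.patch
@end example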
@item
|
||||
Did you sign-off your patch? (@code{git commit -s})
See @uref{https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/Documentation/process/submitting-patches.rst, Sign your work} for the meaning
of @dfn{sign-off}.
|
||||
|
||||
@item
|
||||
Did you provide a clear git commit log message?
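
A purely illustrative example of the usual style (area prefix, one-line
summary, then a short explanatory body) might be:

@example
avfilter/vf_scale: fix a leak on error

The temporary frame was not freed when the rescale failed.
@end example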
@@ -663,7 +650,7 @@ Test your code with valgrind and/or Address Sanitizer to ensure it's free
|
||||
of leaks, out of array accesses, etc.
|
||||
@end enumerate
|
||||
|
||||
@chapter Patch review process
|
||||
@section Patch review process
|
||||
|
||||
All patches posted to ffmpeg-devel will be reviewed, unless they contain a
|
||||
clear note that the patch is not for the git master branch.
|
||||
@@ -694,7 +681,7 @@ to be reviewed, please consider helping to review other patches, that is a great
|
||||
way to get everyone's patches reviewed sooner.
|
||||
|
||||
@anchor{Regression tests}
|
||||
@chapter Regression tests
|
||||
@section Regression tests
|
||||
|
||||
Before submitting a patch (or committing to the repository), you should at least
|
||||
test that you did not break anything.
|
||||
@@ -705,7 +692,7 @@ Running 'make fate' accomplishes this, please see @url{fate.html} for details.
|
||||
this case, the reference results of the regression tests shall be modified
|
||||
accordingly].
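
A minimal run, assuming the FATE samples have already been synced into a
local @file{fate-suite/} directory, might look like:

@example
make fate-rsync SAMPLES=fate-suite/
make fate SAMPLES=fate-suite/
@end example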
@section Adding files to the fate-suite dataset
|
||||
@subsection Adding files to the fate-suite dataset
|
||||
|
||||
When there is no muxer or encoder available to generate test media for a
|
||||
specific test, the media has to be included in the fate-suite.
|
||||
@@ -716,7 +703,7 @@ Once you have a working fate test and fate sample, provide in the commit
|
||||
message or introductory message for the patch series that you post to
|
||||
the ffmpeg-devel mailing list, a direct link to download the sample media.
|
||||
|
||||
@section Visualizing Test Coverage
|
||||
@subsection Visualizing Test Coverage
|
||||
|
||||
The FFmpeg build system allows visualizing the test coverage in an easy
|
||||
manner with the coverage tools @code{gcov}/@code{lcov}. This involves
|
||||
@@ -743,7 +730,7 @@ You can use the command @code{make lcov-reset} to reset the coverage
|
||||
measurements. You will need to rerun @code{make lcov} after running a
|
||||
new test.
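
As a rough sketch (exact options can differ on your setup), a coverage run
could be:

@example
./configure --toolchain=gcov
make && make fate
make lcov    # the HTML report is expected to appear under lcov/
@end example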
@section Using Valgrind
|
||||
@subsection Using Valgrind
|
||||
|
||||
The configure script provides a shortcut for using valgrind to spot bugs
|
||||
related to memory handling. Just add the option
|
||||
@@ -757,7 +744,7 @@ In case you need finer control over how valgrind is invoked, use the
|
||||
your configure line instead.
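
One possible form of such a configure line (the exact toolchain name and
valgrind flags are assumptions that may need adjusting) is:

@example
./configure --toolchain=valgrind-memcheck
# or, for finer control over how valgrind is invoked:
./configure --target-exec='valgrind --leak-check=full'
@end example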
@anchor{Release process}
|
||||
@chapter Release process
|
||||
@section Release process
|
||||
|
||||
FFmpeg maintains a set of @strong{release branches}, which are the
|
||||
recommended deliverable for system integrators and distributors (such as
|
||||
@@ -789,7 +776,7 @@ adjustments to the symbol versioning file. Please discuss such changes
|
||||
on the @strong{ffmpeg-devel} mailing list in time to allow forward planning.
|
||||
|
||||
@anchor{Criteria for Point Releases}
|
||||
@section Criteria for Point Releases
|
||||
@subsection Criteria for Point Releases
|
||||
|
||||
Changes that match the following criteria are valid candidates for
|
||||
inclusion into a point release:
|
||||
@@ -813,7 +800,7 @@ point releases of the same release branch.
|
||||
The order for checking the rules is (1 OR 2 OR 3) AND 4.
|
||||
|
||||
|
||||
@section Release Checklist
|
||||
@subsection Release Checklist
|
||||
|
||||
The release process involves the following steps:
|
||||
|
||||
|
||||
@@ -64,6 +64,7 @@ to find an optimal combination by adding or subtracting a specific value from
|
||||
all quantizers and adjusting some individual quantizer a little. Will tune
|
||||
itself based on whether @option{aac_is}, @option{aac_ms} and @option{aac_pns}
|
||||
are enabled.
|
||||
This is the default choice for a coder.
|
||||
|
||||
@item anmr
|
||||
Average noise to mask ratio (ANMR) trellis-based solution.
|
||||
@@ -76,10 +77,10 @@ Not currently recommended.
|
||||
@item fast
|
||||
Constant quantizer method.
|
||||
|
||||
Uses a cheaper version of the twoloop algorithm that doesn't try to do as many
|
||||
clever adjustments. Worse with low bitrates (less than 64kbps), but is better
|
||||
and much faster at higher bitrates.
|
||||
This is the default choice for a coder
|
||||
This method sets a constant quantizer for all bands. This is the fastest of all
|
||||
the methods and has no rate control or support for @option{aac_is} or
|
||||
@option{aac_pns}.
|
||||
Not recommended.
|
||||
|
||||
@end table
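
As an illustrative invocation, one of the coders above can be selected with
the @option{aac_coder} option:

@example
ffmpeg -i input.wav -c:a aac -b:a 128k -aac_coder twoloop output.m4a
@end example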
@@ -91,12 +92,12 @@ using the value "enable", which is mainly useful for debugging or disabled using
|
||||
|
||||
@item aac_is
|
||||
Sets intensity stereo coding tool usage. By default, it's enabled and will
|
||||
automatically toggle IS for similar pairs of stereo bands if it's beneficial.
|
||||
Can be disabled for debugging by setting the value to "disable".
|
||||
|
||||
@item aac_pns
|
||||
Uses perceptual noise substitution to replace low entropy high frequency bands
|
||||
with imperceptible white noise during the decoding process. By default, it's
|
||||
enabled, but can be disabled for debugging purposes by using "disable".
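
For debugging, both tools can be switched off in one command line
(illustrative values):

@example
ffmpeg -i input.wav -c:a aac -aac_is disable -aac_pns disable output.m4a
@end example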
@item aac_tns
|
||||
@@ -598,7 +599,7 @@ Channel mode
|
||||
@item auto
|
||||
The mode is chosen automatically for each frame
|
||||
@item indep
|
||||
Channels are independently coded
|
||||
@item left_side
|
||||
@item right_side
|
||||
@item mid_side
|
||||
@@ -981,11 +982,6 @@ Other values include 0 for mono and stereo, 1 for surround sound with masking
|
||||
and LFE bandwidth optimizations, and 255 for independent streams with an
|
||||
unspecified channel layout.
|
||||
|
||||
@item apply_phase_inv (N.A.) (requires libopus >= 1.2)
|
||||
If set to 0, disables the use of phase inversion for intensity stereo,
|
||||
improving the quality of mono downmixes, but slightly reducing normal stereo
|
||||
quality. The default is 1 (phase inversion enabled).
|
||||
|
||||
@end table
|
||||
|
||||
@anchor{libshine}
|
||||
@@ -1670,7 +1666,7 @@ option to 2.
|
||||
Enable frame parallel decodability features.
|
||||
@item aq-mode
|
||||
Set adaptive quantization mode (0: off (default), 1: variance, 2: complexity, 3:
cyclic refresh, 4: equator360).
|
||||
@item colorspace @emph{color-space}
|
||||
Set input color space. The VP9 bitstream supports signaling the following
|
||||
colorspaces:
|
||||
@@ -1683,15 +1679,6 @@ colorspaces:
|
||||
@item @samp{smpte240m} @emph{smpte240}
|
||||
@item @samp{bt2020_ncl} @emph{bt2020}
|
||||
@end table
|
||||
@item row-mt @var{boolean}
|
||||
Enable row based multi-threading.
|
||||
@item tune-content
|
||||
Set content type: default (0), screen (1), film (2).
|
||||
@item corpus-complexity
|
||||
Corpus VBR mode is a variant of standard VBR where the complexity distribution
|
||||
midpoint is passed in rather than calculated for a specific clip or chunk.
|
||||
|
||||
The valid range is [0, 10000]. 0 (default) uses standard VBR.
|
||||
@end table
|
||||
|
||||
@end table
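
A sketch combining some of the options above (bitrate and content type are
only examples):

@example
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 2M -row-mt 1 -tune-content screen output.webm
@end example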
@@ -1804,7 +1791,7 @@ the documentation of the undocumented generic options, see
|
||||
@ref{codec-options,,the Codec Options chapter}.
|
||||
|
||||
To get a more accurate and extensive documentation of the libx264
|
||||
options, invoke the command @command{x264 --fullhelp} or consult
|
||||
the libx264 documentation.
|
||||
|
||||
@table @option
|
||||
@@ -2117,7 +2104,7 @@ is kept undocumented for some reason.
|
||||
|
||||
For example to specify libx264 encoding options with @command{ffmpeg}:
|
||||
@example
|
||||
ffmpeg -i foo.mpg -c:v libx264 -x264opts keyint=123:min-keyint=20 -an out.mkv
|
||||
@end example
|
||||
|
||||
@item a53cc @var{boolean}
|
||||
@@ -2159,12 +2146,6 @@ Set the x265 preset.
|
||||
@item tune
|
||||
Set the x265 tune parameter.
|
||||
|
||||
@item profile
|
||||
Set profile restrictions.
|
||||
|
||||
@item crf
|
||||
Set the quality for constant quality mode.
|
||||
|
||||
@item forced-idr
|
||||
Normally, when forcing an I-frame type, the encoder can select any type
|
||||
of I-frame. This option forces it to choose an IDR-frame.
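
An illustrative constant-quality encode using these options might be:

@example
ffmpeg -i input.mp4 -c:v libx265 -preset slow -crf 26 output.mp4
@end example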
@@ -2365,11 +2346,6 @@ Never write it.
|
||||
@itemx always
|
||||
Always write it.
|
||||
@end table
|
||||
@item video_format @var{integer}
|
||||
Specifies the video_format written into the sequence display extension
|
||||
indicating the source of the video pictures. The default is @samp{unspecified},
|
||||
can be @samp{component}, @samp{pal}, @samp{ntsc}, @samp{secam} or @samp{mac}.
|
||||
For maximum compatibility, use @samp{component}.
|
||||
@end table
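
For instance (values are illustrative only), a PAL-flagged MPEG-2 encode
could be requested with:

@example
ffmpeg -i input.mov -c:v mpeg2video -video_format pal output.mpg
@end example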
@section png
|
||||
@@ -2403,7 +2379,6 @@ Select the ProRes profile to encode
|
||||
@item standard
|
||||
@item hq
|
||||
@item 4444
|
||||
@item 4444xq
|
||||
@end table
|
||||
|
||||
@item quant_mat @var{integer}
|
||||
@@ -2550,117 +2525,6 @@ encoder use CAVLC instead of CABAC.
|
||||
dia size for the iterative motion estimation
|
||||
@end table
|
||||
|
||||
@section VAAPI encoders
|
||||
|
||||
Wrappers for hardware encoders accessible via VAAPI.
|
||||
|
||||
These encoders only accept input in VAAPI hardware surfaces. If you have input
|
||||
in software frames, use the @option{hwupload} filter to upload them to the GPU.
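
As a minimal sketch (the DRM device path is an assumption that depends on
your system):

@example
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4
@end example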
The following standard libavcodec options are used:
|
||||
@itemize
|
||||
@item
|
||||
@option{g} / @option{gop_size}
|
||||
@item
|
||||
@option{bf} / @option{max_b_frames}
|
||||
@item
|
||||
@option{profile}
|
||||
@item
|
||||
@option{level}
|
||||
@item
|
||||
@option{b} / @option{bit_rate}
|
||||
@item
|
||||
@option{maxrate} / @option{rc_max_rate}
|
||||
@item
|
||||
@option{bufsize} / @option{rc_buffer_size}
|
||||
@item
|
||||
@option{rc_init_occupancy} / @option{rc_initial_buffer_occupancy}
|
||||
@item
|
||||
@option{compression_level}
|
||||
|
||||
Speed / quality tradeoff: higher values are faster / worse quality.
|
||||
@item
|
||||
@option{q} / @option{global_quality}
|
||||
|
||||
Size / quality tradeoff: higher values are smaller / worse quality.
|
||||
@item
|
||||
@option{qmin}
|
||||
(only: @option{qmax} is not supported)
|
||||
@item
|
||||
@option{i_qfactor} / @option{i_quant_factor}
|
||||
@item
|
||||
@option{i_qoffset} / @option{i_quant_offset}
|
||||
@item
|
||||
@option{b_qfactor} / @option{b_quant_factor}
|
||||
@item
|
||||
@option{b_qoffset} / @option{b_quant_offset}
|
||||
@end itemize
|
||||
|
||||
@table @option
|
||||
|
||||
@item h264_vaapi
|
||||
@option{profile} sets the value of @emph{profile_idc} and the @emph{constraint_set*_flag}s.
|
||||
@option{level} sets the value of @emph{level_idc}.
|
||||
|
||||
@table @option
|
||||
@item low_power
|
||||
Use low-power encoding mode.
|
||||
@item coder
|
||||
Set entropy encoder (default is @emph{cabac}). Possible values:
|
||||
|
||||
@table @samp
|
||||
@item ac
|
||||
@item cabac
|
||||
Use CABAC.
|
||||
|
||||
@item vlc
|
||||
@item cavlc
|
||||
Use CAVLC.
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@item hevc_vaapi
|
||||
@option{profile} and @option{level} set the values of
|
||||
@emph{general_profile_idc} and @emph{general_level_idc} respectively.
|
||||
|
||||
@item mjpeg_vaapi
|
||||
Always encodes using the standard quantisation and Huffman tables -
|
||||
@option{global_quality} scales the standard quantisation table (range 1-100).
|
||||
|
||||
@item mpeg2_vaapi
|
||||
@option{profile} and @option{level} set the value of @emph{profile_and_level_indication}.
|
||||
|
||||
No rate control is supported.
|
||||
|
||||
@item vp8_vaapi
|
||||
B-frames are not supported.
|
||||
|
||||
@option{global_quality} sets the @emph{q_idx} used for non-key frames (range 0-127).
|
||||
|
||||
@table @option
|
||||
@item loop_filter_level
|
||||
@item loop_filter_sharpness
|
||||
Manually set the loop filter parameters.
|
||||
@end table
|
||||
|
||||
@item vp9_vaapi
|
||||
@option{global_quality} sets the @emph{q_idx} used for P-frames (range 0-255).
|
||||
|
||||
@table @option
|
||||
@item loop_filter_level
|
||||
@item loop_filter_sharpness
|
||||
Manually set the loop filter parameters.
|
||||
@end table
|
||||
|
||||
B-frames are supported, but the output stream is always in encode order rather than display
|
||||
order. If B-frames are enabled, it may be necessary to use the @option{vp9_raw_reorder}
|
||||
bitstream filter to modify the output stream to display frames in the correct order.
|
||||
|
||||
Only normal frames are produced - the @option{vp9_superframe} bitstream filter may be
|
||||
required to produce a stream usable with all decoders.
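
For example (illustrative only; enabling B-frames would also require setting
@option{bf}):

@example
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
       -vf 'format=nv12,hwupload' -c:v vp9_vaapi \
       -bsf:v vp9_raw_reorder,vp9_superframe output.webm
@end example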
@end table
|
||||
|
||||
@section vc2
|
||||
|
||||
SMPTE VC-2 (previously BBC Dirac Pro). This codec was primarily aimed at
|
||||
|
||||
doc/examples/.gitignore
@@ -10,7 +10,6 @@
|
||||
/filtering_audio
|
||||
/filtering_video
|
||||
/http_multiclient
|
||||
/hw_decode
|
||||
/metadata
|
||||
/muxing
|
||||
/pc-uninstalled
|
||||
|
||||
@@ -1,64 +1,49 @@
|
||||
EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
|
||||
EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
|
||||
EXAMPLES-$(CONFIG_DECODE_AUDIO_EXAMPLE) += decode_audio
|
||||
EXAMPLES-$(CONFIG_DECODE_VIDEO_EXAMPLE) += decode_video
|
||||
EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
|
||||
EXAMPLES-$(CONFIG_ENCODE_AUDIO_EXAMPLE) += encode_audio
|
||||
EXAMPLES-$(CONFIG_ENCODE_VIDEO_EXAMPLE) += encode_video
|
||||
EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
|
||||
EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
|
||||
EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
|
||||
EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
|
||||
EXAMPLES-$(CONFIG_HTTP_MULTICLIENT_EXAMPLE) += http_multiclient
|
||||
EXAMPLES-$(CONFIG_HW_DECODE_EXAMPLE) += hw_decode
|
||||
EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
|
||||
EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
|
||||
EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
|
||||
EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
|
||||
EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
|
||||
EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
|
||||
EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
|
||||
EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
|
||||
EXAMPLES-$(CONFIG_VAAPI_ENCODE_EXAMPLE) += vaapi_encode
|
||||
EXAMPLES-$(CONFIG_VAAPI_TRANSCODE_EXAMPLE) += vaapi_transcode
|
||||
# use pkg-config for getting CFLAGS and LDLIBS
|
||||
FFMPEG_LIBS= libavdevice \
|
||||
libavformat \
|
||||
libavfilter \
|
||||
libavcodec \
|
||||
libswresample \
|
||||
libswscale \
|
||||
libavutil \
|
||||
|
||||
EXAMPLES := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
EXAMPLES_G := $(EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
|
||||
ALL_EXAMPLES := $(EXAMPLES) $(EXAMPLES-:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
ALL_EXAMPLES_G := $(EXAMPLES_G) $(EXAMPLES-:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
|
||||
PROGS += $(EXAMPLES)
|
||||
CFLAGS += -Wall -g
|
||||
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
|
||||
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
|
||||
|
||||
EXAMPLE_MAKEFILE := $(SRC_PATH)/doc/examples/Makefile
|
||||
EXAMPLES_FILES := $(wildcard $(SRC_PATH)/doc/examples/*.c) $(SRC_PATH)/doc/examples/README $(EXAMPLE_MAKEFILE)
|
||||
EXAMPLES= avio_dir_cmd \
|
||||
avio_reading \
|
||||
decode_audio \
|
||||
decode_video \
|
||||
demuxing_decoding \
|
||||
encode_audio \
|
||||
encode_video \
|
||||
extract_mvs \
|
||||
filtering_video \
|
||||
filtering_audio \
|
||||
http_multiclient \
|
||||
metadata \
|
||||
muxing \
|
||||
remuxing \
|
||||
resampling_audio \
|
||||
scaling_video \
|
||||
transcode_aac \
|
||||
transcoding \
|
||||
|
||||
$(foreach P,$(EXAMPLES),$(eval OBJS-$(P:%$(PROGSSUF)$(EXESUF)=%) = $(P:%$(PROGSSUF)$(EXESUF)=%).o))
|
||||
$(EXAMPLES_G): %$(PROGSSUF)_g$(EXESUF): %.o
|
||||
OBJS=$(addsuffix .o,$(EXAMPLES))
|
||||
|
||||
examples: $(EXAMPLES)
|
||||
# the following examples make explicit use of the math library
|
||||
avcodec: LDLIBS += -lm
|
||||
encode_audio: LDLIBS += -lm
|
||||
muxing: LDLIBS += -lm
|
||||
resampling_audio: LDLIBS += -lm
|
||||
|
||||
$(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.o): | doc/examples
|
||||
OBJDIRS += doc/examples
|
||||
.phony: all clean-test clean
|
||||
|
||||
DOXY_INPUT += $(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.c)
|
||||
all: $(OBJS) $(EXAMPLES)
|
||||
|
||||
install: install-examples
|
||||
clean-test:
|
||||
$(RM) test*.pgm test.h264 test.mp2 test.sw test.mpg
|
||||
|
||||
install-examples: $(EXAMPLES_FILES)
|
||||
$(Q)mkdir -p "$(DATADIR)/examples"
|
||||
$(INSTALL) -m 644 $(EXAMPLES_FILES) "$(DATADIR)/examples"
|
||||
$(INSTALL) -m 644 $(EXAMPLE_MAKEFILE:%=%.example) "$(DATADIR)/examples/Makefile"
|
||||
|
||||
uninstall: uninstall-examples
|
||||
|
||||
uninstall-examples:
|
||||
$(RM) -r "$(DATADIR)/examples"
|
||||
|
||||
examplesclean:
|
||||
$(RM) $(ALL_EXAMPLES) $(ALL_EXAMPLES_G)
|
||||
$(RM) $(CLEANSUFFIXES:%=doc/examples/%)
|
||||
|
||||
docclean:: examplesclean
|
||||
|
||||
-include $(wildcard $(EXAMPLES:%$(PROGSSUF)$(EXESUF)=%.d))
|
||||
|
||||
.PHONY: examples
|
||||
clean: clean-test
|
||||
$(RM) $(EXAMPLES) $(OBJS)
|
||||
|
||||
@@ -1,50 +0,0 @@
|
||||
# use pkg-config for getting CFLAGS and LDLIBS
|
||||
FFMPEG_LIBS= libavdevice \
|
||||
libavformat \
|
||||
libavfilter \
|
||||
libavcodec \
|
||||
libswresample \
|
||||
libswscale \
|
||||
libavutil \
|
||||
|
||||
CFLAGS += -Wall -g
|
||||
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
|
||||
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
|
||||
|
||||
EXAMPLES= avio_dir_cmd \
|
||||
avio_reading \
|
||||
decode_audio \
|
||||
decode_video \
|
||||
demuxing_decoding \
|
||||
encode_audio \
|
||||
encode_video \
|
||||
extract_mvs \
|
||||
filtering_video \
|
||||
filtering_audio \
|
||||
http_multiclient \
|
||||
hw_decode \
|
||||
metadata \
|
||||
muxing \
|
||||
remuxing \
|
||||
resampling_audio \
|
||||
scaling_video \
|
||||
transcode_aac \
|
||||
transcoding \
|
||||
|
||||
OBJS=$(addsuffix .o,$(EXAMPLES))
|
||||
|
||||
# the following examples make explicit use of the math library
|
||||
avcodec: LDLIBS += -lm
|
||||
encode_audio: LDLIBS += -lm
|
||||
muxing: LDLIBS += -lm
|
||||
resampling_audio: LDLIBS += -lm
|
||||
|
||||
.phony: all clean-test clean
|
||||
|
||||
all: $(OBJS) $(EXAMPLES)
|
||||
|
||||
clean-test:
|
||||
$(RM) test*.pgm test.h264 test.mp2 test.sw test.mpg
|
||||
|
||||
clean: clean-test
|
||||
$(RM) $(EXAMPLES) $(OBJS)
|
||||
@@ -143,6 +143,8 @@ int main(int argc, char *argv[])
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* register codecs and formats and other lavf/lavc components*/
|
||||
av_register_all();
|
||||
avformat_network_init();
|
||||
|
||||
op = argv[1];
|
||||
|
||||
@@ -44,8 +44,6 @@ static int read_packet(void *opaque, uint8_t *buf, int buf_size)
|
||||
struct buffer_data *bd = (struct buffer_data *)opaque;
|
||||
buf_size = FFMIN(buf_size, bd->size);
|
||||
|
||||
if (!buf_size)
|
||||
return AVERROR_EOF;
|
||||
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
|
||||
|
||||
/* copy internal buffer data to buf */
|
||||
@@ -74,6 +72,9 @@ int main(int argc, char *argv[])
|
||||
}
|
||||
input_filename = argv[1];
|
||||
|
||||
/* register codecs and formats and other lavf/lavc components*/
|
||||
av_register_all();
|
||||
|
||||
/* slurp file content into buffer */
|
||||
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
|
||||
if (ret < 0)
|
||||
|
||||
@@ -39,52 +39,15 @@
|
||||
#define AUDIO_INBUF_SIZE 20480
|
||||
#define AUDIO_REFILL_THRESH 4096
|
||||
|
||||
static void decode(AVCodecContext *dec_ctx, AVPacket *pkt, AVFrame *frame,
|
||||
FILE *outfile)
|
||||
{
|
||||
int i, ch;
|
||||
int ret, data_size;
|
||||
|
||||
/* send the packet with the compressed data to the decoder */
|
||||
ret = avcodec_send_packet(dec_ctx, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error submitting the packet to the decoder\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* read all the output frames (in general there may be any number of them) */
|
||||
while (ret >= 0) {
|
||||
ret = avcodec_receive_frame(dec_ctx, frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
return;
|
||||
else if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding\n");
|
||||
exit(1);
|
||||
}
|
||||
data_size = av_get_bytes_per_sample(dec_ctx->sample_fmt);
|
||||
if (data_size < 0) {
|
||||
/* This should not occur, checking just for paranoia */
|
||||
fprintf(stderr, "Failed to calculate data size\n");
|
||||
exit(1);
|
||||
}
|
||||
for (i = 0; i < frame->nb_samples; i++)
|
||||
for (ch = 0; ch < dec_ctx->channels; ch++)
|
||||
fwrite(frame->data[ch] + data_size*i, 1, data_size, outfile);
|
||||
}
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
const char *outfilename, *filename;
|
||||
const AVCodec *codec;
|
||||
AVCodecContext *c= NULL;
|
||||
AVCodecParserContext *parser = NULL;
|
||||
int len, ret;
|
||||
int len;
|
||||
FILE *f, *outfile;
|
||||
uint8_t inbuf[AUDIO_INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
|
||||
uint8_t *data;
|
||||
size_t data_size;
|
||||
AVPacket *pkt;
|
||||
AVPacket avpkt;
|
||||
AVFrame *decoded_frame = NULL;
|
||||
|
||||
if (argc <= 2) {
|
||||
@@ -94,7 +57,10 @@ int main(int argc, char **argv)
|
||||
filename = argv[1];
|
||||
outfilename = argv[2];
|
||||
|
||||
pkt = av_packet_alloc();
|
||||
/* register all the codecs */
|
||||
avcodec_register_all();
|
||||
|
||||
av_init_packet(&avpkt);
|
||||
|
||||
/* find the MPEG audio decoder */
|
||||
codec = avcodec_find_decoder(AV_CODEC_ID_MP2);
|
||||
@@ -103,12 +69,6 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
parser = av_parser_init(codec->id);
|
||||
if (!parser) {
|
||||
fprintf(stderr, "Parser not found\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
c = avcodec_alloc_context3(codec);
|
||||
if (!c) {
|
||||
fprintf(stderr, "Could not allocate audio codec context\n");
|
||||
@@ -133,10 +93,13 @@ int main(int argc, char **argv)
|
||||
}
|
||||
|
||||
/* decode until eof */
|
||||
data = inbuf;
|
||||
data_size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
|
||||
avpkt.data = inbuf;
|
||||
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
|
||||
|
||||
while (avpkt.size > 0) {
|
||||
int i, ch;
|
||||
int got_frame = 0;
|
||||
|
||||
while (data_size > 0) {
|
||||
if (!decoded_frame) {
|
||||
if (!(decoded_frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Could not allocate audio frame\n");
|
||||
@@ -144,41 +107,46 @@ int main(int argc, char **argv)
|
||||
}
|
||||
}
|
||||
|
||||
ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
|
||||
data, data_size,
|
||||
AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while parsing\n");
|
||||
len = avcodec_decode_audio4(c, decoded_frame, &got_frame, &avpkt);
|
||||
if (len < 0) {
|
||||
fprintf(stderr, "Error while decoding\n");
|
||||
exit(1);
|
||||
}
|
||||
data += ret;
|
||||
data_size -= ret;
|
||||
|
||||
if (pkt->size)
|
||||
decode(c, pkt, decoded_frame, outfile);
|
||||
|
||||
if (data_size < AUDIO_REFILL_THRESH) {
|
||||
memmove(inbuf, data, data_size);
|
||||
data = inbuf;
|
||||
len = fread(data + data_size, 1,
|
||||
AUDIO_INBUF_SIZE - data_size, f);
|
||||
if (got_frame) {
|
||||
/* if a frame has been decoded, output it */
|
||||
int data_size = av_get_bytes_per_sample(c->sample_fmt);
|
||||
if (data_size < 0) {
|
||||
/* This should not occur, checking just for paranoia */
|
||||
fprintf(stderr, "Failed to calculate data size\n");
|
||||
exit(1);
|
||||
}
|
||||
for (i=0; i<decoded_frame->nb_samples; i++)
|
||||
for (ch=0; ch<c->channels; ch++)
|
||||
fwrite(decoded_frame->data[ch] + data_size*i, 1, data_size, outfile);
|
||||
}
|
||||
avpkt.size -= len;
|
||||
avpkt.data += len;
|
||||
avpkt.dts =
|
||||
avpkt.pts = AV_NOPTS_VALUE;
|
||||
if (avpkt.size < AUDIO_REFILL_THRESH) {
|
||||
/* Refill the input buffer, to avoid trying to decode
|
||||
* incomplete frames. Instead of this, one could also use
|
||||
* a parser, or use a proper container format through
|
||||
* libavformat. */
|
||||
memmove(inbuf, avpkt.data, avpkt.size);
|
||||
avpkt.data = inbuf;
|
||||
len = fread(avpkt.data + avpkt.size, 1,
|
||||
AUDIO_INBUF_SIZE - avpkt.size, f);
|
||||
if (len > 0)
|
||||
data_size += len;
|
||||
avpkt.size += len;
|
||||
}
|
||||
}
|
||||
|
||||
/* flush the decoder */
|
||||
pkt->data = NULL;
|
||||
pkt->size = 0;
|
||||
decode(c, pkt, decoded_frame, outfile);
|
||||
|
||||
fclose(outfile);
|
||||
fclose(f);
|
||||
|
||||
avcodec_free_context(&c);
|
||||
av_parser_close(parser);
|
||||
av_frame_free(&decoded_frame);
|
||||
av_packet_free(&pkt);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -48,51 +48,44 @@ static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
|
||||
fclose(f);
|
||||
}
|
||||
|
||||
static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt,
|
||||
const char *filename)
|
||||
static int decode_write_frame(const char *outfilename, AVCodecContext *avctx,
|
||||
AVFrame *frame, int *frame_count, AVPacket *pkt, int last)
|
||||
{
|
||||
int len, got_frame;
|
||||
char buf[1024];
|
||||
int ret;
|
||||
|
||||
ret = avcodec_send_packet(dec_ctx, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error sending a packet for decoding\n");
|
||||
exit(1);
|
||||
len = avcodec_decode_video2(avctx, frame, &got_frame, pkt);
|
||||
if (len < 0) {
|
||||
fprintf(stderr, "Error while decoding frame %d\n", *frame_count);
|
||||
return len;
|
||||
}
|
||||
|
||||
while (ret >= 0) {
|
||||
ret = avcodec_receive_frame(dec_ctx, frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
return;
|
||||
else if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
printf("saving frame %3d\n", dec_ctx->frame_number);
|
||||
if (got_frame) {
|
||||
printf("Saving %sframe %3d\n", last ? "last " : "", *frame_count);
|
||||
fflush(stdout);
|
||||
|
||||
/* the picture is allocated by the decoder. no need to
|
||||
free it */
|
||||
snprintf(buf, sizeof(buf), "%s-%d", filename, dec_ctx->frame_number);
|
||||
/* the picture is allocated by the decoder, no need to free it */
|
||||
snprintf(buf, sizeof(buf), "%s-%d", outfilename, *frame_count);
|
||||
pgm_save(frame->data[0], frame->linesize[0],
|
||||
frame->width, frame->height, buf);
|
||||
(*frame_count)++;
|
||||
}
|
||||
if (pkt->data) {
|
||||
pkt->size -= len;
|
||||
pkt->data += len;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
const char *filename, *outfilename;
|
||||
const AVCodec *codec;
|
||||
AVCodecParserContext *parser;
|
||||
AVCodecContext *c= NULL;
|
||||
int frame_count;
|
||||
FILE *f;
|
||||
AVFrame *frame;
|
||||
uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
|
||||
uint8_t *data;
|
||||
size_t data_size;
|
||||
int ret;
|
||||
AVPacket *pkt;
|
||||
AVPacket avpkt;
|
||||
|
||||
if (argc <= 2) {
|
||||
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
@@ -101,9 +94,9 @@ int main(int argc, char **argv)
|
||||
filename = argv[1];
|
||||
outfilename = argv[2];
|
||||
|
||||
pkt = av_packet_alloc();
|
||||
if (!pkt)
|
||||
exit(1);
|
||||
avcodec_register_all();
|
||||
|
||||
av_init_packet(&avpkt);
|
||||
|
||||
/* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
|
||||
memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);
|
||||
@@ -115,18 +108,15 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
parser = av_parser_init(codec->id);
|
||||
if (!parser) {
|
||||
fprintf(stderr, "parser not found\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
c = avcodec_alloc_context3(codec);
|
||||
if (!c) {
|
||||
fprintf(stderr, "Could not allocate video codec context\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (codec->capabilities & AV_CODEC_CAP_TRUNCATED)
|
||||
c->flags |= AV_CODEC_FLAG_TRUNCATED; // we do not send complete frames
|
||||
|
||||
/* For some codecs, such as msmpeg4 and mpeg4, width and height
|
||||
MUST be initialized there because this information is not
|
||||
available in the bitstream. */
|
||||
@@ -149,38 +139,44 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
while (!feof(f)) {
|
||||
/* read raw data from the input file */
|
||||
data_size = fread(inbuf, 1, INBUF_SIZE, f);
|
||||
if (!data_size)
|
||||
frame_count = 0;
|
||||
for (;;) {
|
||||
avpkt.size = fread(inbuf, 1, INBUF_SIZE, f);
|
||||
if (avpkt.size == 0)
|
||||
break;
|
||||
|
||||
/* use the parser to split the data into frames */
|
||||
data = inbuf;
|
||||
while (data_size > 0) {
|
||||
ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
|
||||
data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while parsing\n");
|
||||
exit(1);
|
||||
}
|
||||
data += ret;
|
||||
data_size -= ret;
|
||||
/* NOTE1: some codecs are stream based (mpegvideo, mpegaudio)
|
||||
and this is the only method to use them because you cannot
|
||||
know the compressed data size before analysing it.
|
||||
|
||||
if (pkt->size)
|
||||
decode(c, frame, pkt, outfilename);
|
||||
}
|
||||
BUT some other codecs (msmpeg4, mpeg4) are inherently frame
|
||||
based, so you must call them with all the data for one
|
||||
frame exactly. You must also initialize 'width' and
|
||||
'height' before initializing them. */
|
||||
|
||||
/* NOTE2: some codecs allow the raw parameters (frame size,
|
||||
sample rate) to be changed at any frame. We handle this, so
|
||||
you should also take care of it */
|
||||
|
||||
/* here, we use a stream based decoder (mpeg1video), so we
|
||||
feed decoder and see if it could decode a frame */
|
||||
avpkt.data = inbuf;
|
||||
while (avpkt.size > 0)
|
||||
if (decode_write_frame(outfilename, c, frame, &frame_count, &avpkt, 0) < 0)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* flush the decoder */
|
||||
decode(c, frame, NULL, outfilename);
|
||||
/* Some codecs, such as MPEG, transmit the I- and P-frame with a
|
||||
latency of one frame. You must do the following to have a
|
||||
chance to get the last frame of the video. */
|
||||
avpkt.data = NULL;
|
||||
avpkt.size = 0;
|
||||
decode_write_frame(outfilename, c, frame, &frame_count, &avpkt, 1);
|
||||
|
||||
fclose(f);
|
||||
|
||||
av_parser_close(parser);
|
||||
avcodec_free_context(&c);
|
||||
av_frame_free(&frame);
|
||||
av_packet_free(&pkt);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -252,6 +252,9 @@ int main (int argc, char **argv)
|
||||
video_dst_filename = argv[2];
|
||||
audio_dst_filename = argv[3];
|
||||
|
||||
/* register all formats and codecs */
|
||||
av_register_all();
|
||||
|
||||
/* open input file, and allocate format context */
|
||||
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
|
||||
fprintf(stderr, "Could not open source file %s\n", src_filename);
|
||||
|
||||
@@ -92,42 +92,14 @@ static int select_channel_layout(const AVCodec *codec)
|
||||
return best_ch_layout;
|
||||
}
|
||||
|
||||
static void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt,
|
||||
FILE *output)
|
||||
{
|
||||
int ret;
|
||||
|
||||
/* send the frame for encoding */
|
||||
ret = avcodec_send_frame(ctx, frame);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error sending the frame to the encoder\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* read all the available output packets (in general there may be any
|
||||
* number of them) */
|
||||
while (ret >= 0) {
|
||||
ret = avcodec_receive_packet(ctx, pkt);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
return;
|
||||
else if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
fwrite(pkt->data, 1, pkt->size, output);
|
||||
av_packet_unref(pkt);
|
||||
}
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
const char *filename;
|
||||
const AVCodec *codec;
|
||||
AVCodecContext *c= NULL;
|
||||
AVFrame *frame;
|
||||
AVPacket *pkt;
|
||||
int i, j, k, ret;
|
||||
AVPacket pkt;
|
||||
int i, j, k, ret, got_output;
|
||||
FILE *f;
|
||||
uint16_t *samples;
|
||||
float t, tincr;
|
||||
@@ -138,6 +110,9 @@ int main(int argc, char **argv)
|
||||
}
|
||||
filename = argv[1];
|
||||
|
||||
/* register all the codecs */
|
||||
avcodec_register_all();
|
||||
|
||||
/* find the MP2 encoder */
|
||||
codec = avcodec_find_encoder(AV_CODEC_ID_MP2);
|
||||
if (!codec) {
|
||||
@@ -179,13 +154,6 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* packet for holding encoded output */
|
||||
pkt = av_packet_alloc();
|
||||
if (!pkt) {
|
||||
fprintf(stderr, "could not allocate the packet\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* frame containing input raw audio */
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
@@ -208,6 +176,10 @@ int main(int argc, char **argv)
|
||||
t = 0;
|
||||
tincr = 2 * M_PI * 440.0 / c->sample_rate;
|
||||
for (i = 0; i < 200; i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
|
||||
/* make sure the frame is writable -- makes a copy if the encoder
|
||||
* kept a reference internally */
|
||||
ret = av_frame_make_writable(frame);
|
||||
@@ -222,16 +194,34 @@ int main(int argc, char **argv)
|
||||
samples[2*j + k] = samples[2*j];
|
||||
t += tincr;
|
||||
}
|
||||
encode(c, frame, pkt, f);
|
||||
/* encode the samples */
|
||||
ret = avcodec_encode_audio2(c, &pkt, frame, &got_output);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
if (got_output) {
|
||||
fwrite(pkt.data, 1, pkt.size, f);
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
}
|
||||
|
||||
/* flush the encoder */
|
||||
encode(c, NULL, pkt, f);
|
||||
/* get the delayed frames */
|
||||
for (got_output = 1; got_output; i++) {
|
||||
ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (got_output) {
|
||||
fwrite(pkt.data, 1, pkt.size, f);
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
}
|
||||
fclose(f);
|
||||
|
||||
av_frame_free(&frame);
|
||||
av_packet_free(&pkt);
|
||||
avcodec_free_context(&c);
|
||||
|
||||
return 0;
|
||||
|
||||
@@ -36,45 +36,15 @@
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/imgutils.h>
|
||||
|
||||
static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,
|
||||
FILE *outfile)
|
||||
{
|
||||
int ret;
|
||||
|
||||
/* send the frame to the encoder */
|
||||
if (frame)
|
||||
printf("Send frame %3"PRId64"\n", frame->pts);
|
||||
|
||||
ret = avcodec_send_frame(enc_ctx, frame);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error sending a frame for encoding\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
while (ret >= 0) {
|
||||
ret = avcodec_receive_packet(enc_ctx, pkt);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
return;
|
||||
else if (ret < 0) {
|
||||
fprintf(stderr, "Error during encoding\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
|
||||
fwrite(pkt->data, 1, pkt->size, outfile);
|
||||
av_packet_unref(pkt);
|
||||
}
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
const char *filename, *codec_name;
|
||||
const AVCodec *codec;
|
||||
AVCodecContext *c= NULL;
|
||||
int i, ret, x, y;
|
||||
int i, ret, x, y, got_output;
|
||||
FILE *f;
|
||||
AVFrame *frame;
|
||||
AVPacket *pkt;
|
||||
AVPacket pkt;
|
||||
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
|
||||
|
||||
if (argc <= 2) {
|
||||
@@ -84,10 +54,12 @@ int main(int argc, char **argv)
|
||||
filename = argv[1];
|
||||
codec_name = argv[2];
|
||||
|
||||
avcodec_register_all();
|
||||
|
||||
/* find the mpeg1video encoder */
|
||||
codec = avcodec_find_encoder_by_name(codec_name);
|
||||
if (!codec) {
|
||||
fprintf(stderr, "Codec '%s' not found\n", codec_name);
|
||||
fprintf(stderr, "Codec not found\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
@@ -97,10 +69,6 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
pkt = av_packet_alloc();
|
||||
if (!pkt)
|
||||
exit(1);
|
||||
|
||||
/* put sample parameters */
|
||||
c->bit_rate = 400000;
|
||||
/* resolution must be a multiple of two */
|
||||
@@ -124,9 +92,8 @@ int main(int argc, char **argv)
|
||||
av_opt_set(c->priv_data, "preset", "slow", 0);
|
||||
|
||||
/* open it */
|
||||
ret = avcodec_open2(c, codec, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret));
|
||||
if (avcodec_open2(c, codec, NULL) < 0) {
|
||||
fprintf(stderr, "Could not open codec\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
@@ -153,6 +120,10 @@ int main(int argc, char **argv)
|
||||
|
||||
/* encode 1 second of video */
|
||||
for (i = 0; i < 25; i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
|
||||
fflush(stdout);
|
||||
|
||||
/* make sure the frame data is writable */
|
||||
@@ -179,11 +150,35 @@ int main(int argc, char **argv)
|
||||
frame->pts = i;
|
||||
|
||||
/* encode the image */
|
||||
encode(c, frame, pkt, f);
|
||||
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (got_output) {
|
||||
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
|
||||
fwrite(pkt.data, 1, pkt.size, f);
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
}
|
||||
|
||||
/* flush the encoder */
|
||||
encode(c, NULL, pkt, f);
|
||||
/* get the delayed frames */
|
||||
for (got_output = 1; got_output; i++) {
|
||||
fflush(stdout);
|
||||
|
||||
ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (got_output) {
|
||||
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
|
||||
fwrite(pkt.data, 1, pkt.size, f);
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
}
|
||||
|
||||
/* add sequence end code to have a real MPEG file */
|
||||
fwrite(endcode, 1, sizeof(endcode), f);
|
||||
@@ -191,7 +186,6 @@ int main(int argc, char **argv)
|
||||
|
||||
avcodec_free_context(&c);
|
||||
av_frame_free(&frame);
|
||||
av_packet_free(&pkt);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -31,26 +31,23 @@ static const char *src_filename = NULL;
|
||||
|
||||
static int video_stream_idx = -1;
|
||||
static AVFrame *frame = NULL;
|
||||
static AVPacket pkt;
|
||||
static int video_frame_count = 0;
|
||||
|
||||
static int decode_packet(const AVPacket *pkt)
|
||||
static int decode_packet(int *got_frame, int cached)
|
||||
{
|
||||
int ret = avcodec_send_packet(video_dec_ctx, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while sending a packet to the decoder: %s\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
int decoded = pkt.size;
|
||||
|
||||
while (ret >= 0) {
|
||||
ret = avcodec_receive_frame(video_dec_ctx, frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
|
||||
break;
|
||||
} else if (ret < 0) {
|
||||
fprintf(stderr, "Error while receiving a frame from the decoder: %s\n", av_err2str(ret));
|
||||
*got_frame = 0;
|
||||
|
||||
if (pkt.stream_index == video_stream_idx) {
|
||||
int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (ret >= 0) {
|
||||
if (*got_frame) {
|
||||
int i;
|
||||
AVFrameSideData *sd;
|
||||
|
||||
@@ -61,16 +58,15 @@ static int decode_packet(const AVPacket *pkt)
|
||||
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
|
||||
const AVMotionVector *mv = &mvs[i];
|
||||
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
|
||||
video_frame_count, mv->source,
|
||||
mv->w, mv->h, mv->src_x, mv->src_y,
|
||||
mv->dst_x, mv->dst_y, mv->flags);
|
||||
video_frame_count, mv->source,
|
||||
mv->w, mv->h, mv->src_x, mv->src_y,
|
||||
mv->dst_x, mv->dst_y, mv->flags);
|
||||
}
|
||||
}
|
||||
av_frame_unref(frame);
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
return decoded;
|
||||
}
|
||||
|
||||
static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
|
||||
@@ -120,8 +116,7 @@ static int open_codec_context(AVFormatContext *fmt_ctx, enum AVMediaType type)
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret = 0;
|
||||
AVPacket pkt = { 0 };
|
||||
int ret = 0, got_frame;
|
||||
|
||||
if (argc != 2) {
|
||||
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
|
||||
@@ -129,6 +124,8 @@ int main(int argc, char **argv)
|
||||
}
|
||||
src_filename = argv[1];
|
||||
|
||||
av_register_all();
|
||||
|
||||
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
|
||||
fprintf(stderr, "Could not open source file %s\n", src_filename);
|
||||
exit(1);
|
||||
@@ -158,17 +155,30 @@ int main(int argc, char **argv)
|
||||
|
||||
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
|
||||
|
||||
/* initialize packet, set data to NULL, let the demuxer fill it */
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
|
||||
/* read frames from the file */
|
||||
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
|
||||
if (pkt.stream_index == video_stream_idx)
|
||||
ret = decode_packet(&pkt);
|
||||
av_packet_unref(&pkt);
|
||||
if (ret < 0)
|
||||
break;
|
||||
AVPacket orig_pkt = pkt;
|
||||
do {
|
||||
ret = decode_packet(&got_frame, 0);
|
||||
if (ret < 0)
|
||||
break;
|
||||
pkt.data += ret;
|
||||
pkt.size -= ret;
|
||||
} while (pkt.size > 0);
|
||||
av_packet_unref(&orig_pkt);
|
||||
}
|
||||
|
||||
/* flush cached frames */
|
||||
decode_packet(NULL);
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
do {
|
||||
decode_packet(&got_frame, 1);
|
||||
} while (got_frame);
|
||||
|
||||
end:
|
||||
avcodec_free_context(&video_dec_ctx);
|
||||
|
||||
@@ -64,13 +64,13 @@ static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
|
||||
{
|
||||
AVFilterGraph *filter_graph;
|
||||
AVFilterContext *abuffer_ctx;
|
||||
const AVFilter *abuffer;
|
||||
AVFilter *abuffer;
|
||||
AVFilterContext *volume_ctx;
|
||||
const AVFilter *volume;
|
||||
AVFilter *volume;
|
||||
AVFilterContext *aformat_ctx;
|
||||
const AVFilter *aformat;
|
||||
AVFilter *aformat;
|
||||
AVFilterContext *abuffersink_ctx;
|
||||
const AVFilter *abuffersink;
|
||||
AVFilter *abuffersink;
|
||||
|
||||
AVDictionary *options_dict = NULL;
|
||||
uint8_t options_str[1024];
|
||||
@@ -289,6 +289,8 @@ int main(int argc, char *argv[])
|
||||
return 1;
|
||||
}
|
||||
|
||||
avfilter_register_all();
|
||||
|
||||
/* Allocate the frame we will be using to store the data. */
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
|
||||
@@ -32,6 +32,7 @@
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavfilter/avfiltergraph.h>
|
||||
#include <libavfilter/buffersink.h>
|
||||
#include <libavfilter/buffersrc.h>
|
||||
#include <libavutil/opt.h>
|
||||
@@ -89,8 +90,8 @@ static int init_filters(const char *filters_descr)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
const AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
|
||||
const AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
|
||||
AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
|
||||
AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
|
||||
@@ -200,7 +201,7 @@ end:
|
||||
|
||||
static void print_frame(const AVFrame *frame)
|
||||
{
|
||||
const int n = frame->nb_samples * av_get_channel_layout_nb_channels(frame->channel_layout);
|
||||
const int n = frame->nb_samples * av_get_channel_layout_nb_channels(av_frame_get_channel_layout(frame));
|
||||
const uint16_t *p = (uint16_t*)frame->data[0];
|
||||
const uint16_t *p_end = p + n;
|
||||
|
||||
@@ -228,6 +229,9 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
if ((ret = open_input_file(argv[1])) < 0)
|
||||
goto end;
|
||||
if ((ret = init_filters(filter_descr)) < 0)
|
||||
|
||||
@@ -32,6 +32,7 @@
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavfilter/avfiltergraph.h>
|
||||
#include <libavfilter/buffersink.h>
|
||||
#include <libavfilter/buffersrc.h>
|
||||
#include <libavutil/opt.h>
|
||||
@@ -92,8 +93,8 @@ static int init_filters(const char *filters_descr)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
const AVFilter *buffersrc = avfilter_get_by_name("buffer");
|
||||
const AVFilter *buffersink = avfilter_get_by_name("buffersink");
|
||||
AVFilter *buffersrc = avfilter_get_by_name("buffer");
|
||||
AVFilter *buffersink = avfilter_get_by_name("buffersink");
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
|
||||
@@ -222,6 +223,9 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
if ((ret = open_input_file(argv[1])) < 0)
|
||||
goto end;
|
||||
if ((ret = init_filters(filter_descr)) < 0)
|
||||
@@ -249,7 +253,7 @@ int main(int argc, char **argv)
|
||||
}
|
||||
|
||||
if (ret >= 0) {
|
||||
frame->pts = frame->best_effort_timestamp;
|
||||
frame->pts = av_frame_get_best_effort_timestamp(frame);
|
||||
|
||||
/* push the decoded frame into the filtergraph */
|
||||
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
|
||||
|
||||
@@ -114,6 +114,7 @@ int main(int argc, char **argv)
|
||||
in_uri = argv[1];
|
||||
out_uri = argv[2];
|
||||
|
||||
av_register_all();
|
||||
avformat_network_init();
|
||||
|
||||
if ((ret = av_dict_set(&options, "listen", "2", 0)) < 0) {
|
||||
|
||||
@@ -1,251 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 Jun Zhao
|
||||
* Copyright (c) 2017 Kaixuan Liu
|
||||
*
|
||||
* HW Acceleration API (video decoding) decode sample
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* HW-Accelerated decoding example.
|
||||
*
|
||||
* @example hw_decode.c
|
||||
* This example shows how to do HW-accelerated decoding with output
|
||||
* frames from the HW video surfaces.
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavutil/pixdesc.h>
|
||||
#include <libavutil/hwcontext.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/avassert.h>
|
||||
#include <libavutil/imgutils.h>
|
||||
|
||||
static AVBufferRef *hw_device_ctx = NULL;
|
||||
static enum AVPixelFormat hw_pix_fmt;
|
||||
static FILE *output_file = NULL;
|
||||
|
||||
static int hw_decoder_init(AVCodecContext *ctx, const enum AVHWDeviceType type)
|
||||
{
|
||||
int err = 0;
|
||||
|
||||
if ((err = av_hwdevice_ctx_create(&hw_device_ctx, type,
|
||||
NULL, NULL, 0)) < 0) {
|
||||
fprintf(stderr, "Failed to create specified HW device.\n");
|
||||
return err;
|
||||
}
|
||||
ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
|
||||
const enum AVPixelFormat *pix_fmts)
|
||||
{
|
||||
const enum AVPixelFormat *p;
|
||||
|
||||
for (p = pix_fmts; *p != -1; p++) {
|
||||
if (*p == hw_pix_fmt)
|
||||
return *p;
|
||||
}
|
||||
|
||||
fprintf(stderr, "Failed to get HW surface format.\n");
|
||||
return AV_PIX_FMT_NONE;
|
||||
}
|
||||
|
||||
static int decode_write(AVCodecContext *avctx, AVPacket *packet)
|
||||
{
|
||||
AVFrame *frame = NULL, *sw_frame = NULL;
|
||||
AVFrame *tmp_frame = NULL;
|
||||
uint8_t *buffer = NULL;
|
||||
int size;
|
||||
int ret = 0;
|
||||
|
||||
ret = avcodec_send_packet(avctx, packet);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
while (1) {
|
||||
if (!(frame = av_frame_alloc()) || !(sw_frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Can not alloc frame\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto fail;
|
||||
}
|
||||
|
||||
ret = avcodec_receive_frame(avctx, frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
|
||||
av_frame_free(&frame);
|
||||
av_frame_free(&sw_frame);
|
||||
return 0;
|
||||
} else if (ret < 0) {
|
||||
fprintf(stderr, "Error while decoding\n");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (frame->format == hw_pix_fmt) {
|
||||
/* retrieve data from GPU to CPU */
|
||||
if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {
|
||||
fprintf(stderr, "Error transferring the data to system memory\n");
|
||||
goto fail;
|
||||
}
|
||||
tmp_frame = sw_frame;
|
||||
} else
|
||||
tmp_frame = frame;
|
||||
|
||||
size = av_image_get_buffer_size(tmp_frame->format, tmp_frame->width,
|
||||
tmp_frame->height, 1);
|
||||
buffer = av_malloc(size);
|
||||
if (!buffer) {
|
||||
fprintf(stderr, "Can not alloc buffer\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto fail;
|
||||
}
|
||||
ret = av_image_copy_to_buffer(buffer, size,
|
||||
(const uint8_t * const *)tmp_frame->data,
|
||||
(const int *)tmp_frame->linesize, tmp_frame->format,
|
||||
tmp_frame->width, tmp_frame->height, 1);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Can not copy image to buffer\n");
|
||||
goto fail;
        }

        if ((ret = fwrite(buffer, 1, size, output_file)) < 0) {
            fprintf(stderr, "Failed to dump raw data.\n");
            goto fail;
        }

    fail:
        av_frame_free(&frame);
        av_frame_free(&sw_frame);
        av_freep(&buffer);
        if (ret < 0)
            return ret;
    }
}

int main(int argc, char *argv[])
{
    AVFormatContext *input_ctx = NULL;
    int video_stream, ret;
    AVStream *video = NULL;
    AVCodecContext *decoder_ctx = NULL;
    AVCodec *decoder = NULL;
    AVPacket packet;
    enum AVHWDeviceType type;
    int i;

    if (argc < 4) {
        fprintf(stderr, "Usage: %s <device type> <input file> <output file>\n", argv[0]);
        return -1;
    }

    type = av_hwdevice_find_type_by_name(argv[1]);
    if (type == AV_HWDEVICE_TYPE_NONE) {
        fprintf(stderr, "Device type %s is not supported.\n", argv[1]);
        fprintf(stderr, "Available device types:");
        while((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE)
            fprintf(stderr, " %s", av_hwdevice_get_type_name(type));
        fprintf(stderr, "\n");
        return -1;
    }

    /* open the input file */
    if (avformat_open_input(&input_ctx, argv[2], NULL, NULL) != 0) {
        fprintf(stderr, "Cannot open input file '%s'\n", argv[2]);
        return -1;
    }

    if (avformat_find_stream_info(input_ctx, NULL) < 0) {
        fprintf(stderr, "Cannot find input stream information.\n");
        return -1;
    }

    /* find the video stream information */
    ret = av_find_best_stream(input_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
    if (ret < 0) {
        fprintf(stderr, "Cannot find a video stream in the input file\n");
        return -1;
    }
    video_stream = ret;

    for (i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(decoder, i);
        if (!config) {
            fprintf(stderr, "Decoder %s does not support device type %s.\n",
                    decoder->name, av_hwdevice_get_type_name(type));
            return -1;
        }
        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
            config->device_type == type) {
            hw_pix_fmt = config->pix_fmt;
            break;
        }
    }

    if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
        return AVERROR(ENOMEM);

    video = input_ctx->streams[video_stream];
    if (avcodec_parameters_to_context(decoder_ctx, video->codecpar) < 0)
        return -1;

    decoder_ctx->get_format = get_hw_format;
    av_opt_set_int(decoder_ctx, "refcounted_frames", 1, 0);

    if (hw_decoder_init(decoder_ctx, type) < 0)
        return -1;

    if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0) {
        fprintf(stderr, "Failed to open codec for stream #%u\n", video_stream);
        return -1;
    }

    /* open the file to dump raw data */
    output_file = fopen(argv[3], "w+");

    /* actual decoding and dump the raw data */
    while (ret >= 0) {
        if ((ret = av_read_frame(input_ctx, &packet)) < 0)
            break;

        if (video_stream == packet.stream_index)
            ret = decode_write(decoder_ctx, &packet);

        av_packet_unref(&packet);
    }

    /* flush the decoder */
    packet.data = NULL;
    packet.size = 0;
    ret = decode_write(decoder_ctx, &packet);
    av_packet_unref(&packet);

    if (output_file)
        fclose(output_file);
    avcodec_free_context(&decoder_ctx);
    avformat_close_input(&input_ctx);
    av_buffer_unref(&hw_device_ctx);

    return 0;
}
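The main() above relies on two helpers defined earlier in hw_decode.c that this excerpt does not repeat: hw_decoder_init(), which creates the hardware device and attaches it to the decoder context, and get_hw_format(), the get_format callback that selects the pixel format stored in hw_pix_fmt. A minimal sketch of what they do, assuming the global hw_device_ctx and hw_pix_fmt used above (see the file itself for the exact code):

static int hw_decoder_init(AVCodecContext *ctx, const enum AVHWDeviceType type)
{
    int err;

    /* hw_device_ctx is the global AVBufferRef that main() unreferences at the end */
    if ((err = av_hwdevice_ctx_create(&hw_device_ctx, type, NULL, NULL, 0)) < 0) {
        fprintf(stderr, "Failed to create specified HW device.\n");
        return err;
    }
    ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
    return 0;
}

static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
                                        const enum AVPixelFormat *pix_fmts)
{
    const enum AVPixelFormat *p;

    /* hw_pix_fmt was filled in from the matching AVCodecHWConfig in main() */
    for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == hw_pix_fmt)
            return *p;

    fprintf(stderr, "Failed to get HW surface format.\n");
    return AV_PIX_FMT_NONE;
}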
@@ -44,6 +44,7 @@ int main (int argc, char **argv)
        return 1;
    }

    av_register_all();
    if ((ret = avformat_open_input(&fmt_ctx, argv[1], NULL, NULL)))
        return ret;
@@ -488,9 +488,9 @@ static AVFrame *get_video_frame(OutputStream *ost)
            }
        }
        fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
        sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
                  ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
                  ost->frame->linesize);
        sws_scale(ost->sws_ctx,
                  (const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
                  0, c->height, ost->frame->data, ost->frame->linesize);
    } else {
        fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
    }
@@ -564,6 +564,9 @@ int main(int argc, char **argv)
    AVDictionary *opt = NULL;
    int i;

    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc < 2) {
        printf("usage: %s output_file\n"
               "API example program to output a media file with libavformat.\n"
@@ -26,55 +26,185 @@
 *
 * @example qsvdec.c
 * This example shows how to do QSV-accelerated H.264 decoding with output
 * frames in the GPU video surfaces.
 * frames in the VA-API video surfaces.
 */

#include "config.h"

#include <stdio.h>

#include <mfx/mfxvideo.h>

#include <va/va.h>
#include <va/va_x11.h>
#include <X11/Xlib.h>

#include "libavformat/avformat.h"
#include "libavformat/avio.h"

#include "libavcodec/avcodec.h"
#include "libavcodec/qsv.h"

#include "libavutil/buffer.h"
#include "libavutil/error.h"
#include "libavutil/hwcontext.h"
#include "libavutil/hwcontext_qsv.h"
#include "libavutil/mem.h"

typedef struct DecodeContext {
    AVBufferRef *hw_device_ref;
    mfxSession mfx_session;
    VADisplay va_dpy;

    VASurfaceID *surfaces;
    mfxMemId *surface_ids;
    int *surface_used;
    int nb_surfaces;

    mfxFrameInfo frame_info;
} DecodeContext;

static mfxStatus frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
                             mfxFrameAllocResponse *resp)
{
    DecodeContext *decode = pthis;
    int err, i;

    if (decode->surfaces) {
        fprintf(stderr, "Multiple allocation requests.\n");
        return MFX_ERR_MEMORY_ALLOC;
    }
    if (!(req->Type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)) {
        fprintf(stderr, "Unsupported surface type: %d\n", req->Type);
        return MFX_ERR_UNSUPPORTED;
    }
    if (req->Info.BitDepthLuma != 8 || req->Info.BitDepthChroma != 8 ||
        req->Info.Shift || req->Info.FourCC != MFX_FOURCC_NV12 ||
        req->Info.ChromaFormat != MFX_CHROMAFORMAT_YUV420) {
        fprintf(stderr, "Unsupported surface properties.\n");
        return MFX_ERR_UNSUPPORTED;
    }

    decode->surfaces     = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surfaces));
    decode->surface_ids  = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surface_ids));
    decode->surface_used = av_mallocz_array(req->NumFrameSuggested, sizeof(*decode->surface_used));
    if (!decode->surfaces || !decode->surface_ids || !decode->surface_used)
        goto fail;

    err = vaCreateSurfaces(decode->va_dpy, VA_RT_FORMAT_YUV420,
                           req->Info.Width, req->Info.Height,
                           decode->surfaces, req->NumFrameSuggested,
                           NULL, 0);
    if (err != VA_STATUS_SUCCESS) {
        fprintf(stderr, "Error allocating VA surfaces\n");
        goto fail;
    }
    decode->nb_surfaces = req->NumFrameSuggested;

    for (i = 0; i < decode->nb_surfaces; i++)
        decode->surface_ids[i] = &decode->surfaces[i];

    resp->mids           = decode->surface_ids;
    resp->NumFrameActual = decode->nb_surfaces;

    decode->frame_info = req->Info;

    return MFX_ERR_NONE;
fail:
    av_freep(&decode->surfaces);
    av_freep(&decode->surface_ids);
    av_freep(&decode->surface_used);

    return MFX_ERR_MEMORY_ALLOC;
}

static mfxStatus frame_free(mfxHDL pthis, mfxFrameAllocResponse *resp)
{
    return MFX_ERR_NONE;
}

static mfxStatus frame_lock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
    return MFX_ERR_UNSUPPORTED;
}

static mfxStatus frame_unlock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
    return MFX_ERR_UNSUPPORTED;
}

static mfxStatus frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
{
    *hdl = mid;
    return MFX_ERR_NONE;
}

static void free_surfaces(DecodeContext *decode)
{
    if (decode->surfaces)
        vaDestroySurfaces(decode->va_dpy, decode->surfaces, decode->nb_surfaces);
    av_freep(&decode->surfaces);
    av_freep(&decode->surface_ids);
    av_freep(&decode->surface_used);
    decode->nb_surfaces = 0;
}

static void free_buffer(void *opaque, uint8_t *data)
{
    int *used = opaque;
    *used = 0;
    av_freep(&data);
}

static int get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
{
    DecodeContext *decode = avctx->opaque;

    mfxFrameSurface1 *surf;
    AVBufferRef *surf_buf;
    int idx;

    for (idx = 0; idx < decode->nb_surfaces; idx++) {
        if (!decode->surface_used[idx])
            break;
    }
    if (idx == decode->nb_surfaces) {
        fprintf(stderr, "No free surfaces\n");
        return AVERROR(ENOMEM);
    }

    surf = av_mallocz(sizeof(*surf));
    if (!surf)
        return AVERROR(ENOMEM);
    surf_buf = av_buffer_create((uint8_t*)surf, sizeof(*surf), free_buffer,
                                &decode->surface_used[idx], AV_BUFFER_FLAG_READONLY);
    if (!surf_buf) {
        av_freep(&surf);
        return AVERROR(ENOMEM);
    }

    surf->Info       = decode->frame_info;
    surf->Data.MemId = &decode->surfaces[idx];

    frame->buf[0]  = surf_buf;
    frame->data[3] = (uint8_t*)surf;

    decode->surface_used[idx] = 1;

    return 0;
}

static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
    while (*pix_fmts != AV_PIX_FMT_NONE) {
        if (*pix_fmts == AV_PIX_FMT_QSV) {
            DecodeContext *decode = avctx->opaque;
            AVHWFramesContext *frames_ctx;
            AVQSVFramesContext *frames_hwctx;
            int ret;
            if (!avctx->hwaccel_context) {
                DecodeContext *decode = avctx->opaque;
                AVQSVContext *qsv = av_qsv_alloc_context();
                if (!qsv)
                    return AV_PIX_FMT_NONE;

            /* create a pool of surfaces to be used by the decoder */
            avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
            if (!avctx->hw_frames_ctx)
                return AV_PIX_FMT_NONE;
            frames_ctx   = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
            frames_hwctx = frames_ctx->hwctx;
                qsv->session   = decode->mfx_session;
                qsv->iopattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;

            frames_ctx->format            = AV_PIX_FMT_QSV;
            frames_ctx->sw_format         = avctx->sw_pix_fmt;
            frames_ctx->width             = FFALIGN(avctx->coded_width,  32);
            frames_ctx->height            = FFALIGN(avctx->coded_height, 32);
            frames_ctx->initial_pool_size = 32;

            frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;

            ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
            if (ret < 0)
                return AV_PIX_FMT_NONE;
                avctx->hwaccel_context = qsv;
            }

            return AV_PIX_FMT_QSV;
        }
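None of the callbacks above run by themselves: frame_alloc()/frame_lock()/frame_unlock()/frame_get_hdl()/frame_free() have to be registered as the session's mfxFrameAllocator, and get_buffer()/get_format() have to be installed on the decoder context. Both hookups appear in the main() shown further down in this diff; condensed, the wiring looks roughly like this (decode is the DecodeContext instance, decoder_ctx the AVCodecContext):

    /* sketch of the setup done in main(); see the full hunks below */
    mfxFrameAllocator frame_allocator = {
        .pthis  = &decode,
        .Alloc  = frame_alloc,
        .Lock   = frame_lock,
        .Unlock = frame_unlock,
        .GetHDL = frame_get_hdl,
        .Free   = frame_free,
    };

    /* hand the VA display and the allocator to libmfx */
    MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
    MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);

    /* point the decoder at the custom callbacks */
    decoder_ctx->opaque      = &decode;
    decoder_ctx->get_buffer2 = get_buffer;
    decoder_ctx->get_format  = get_format;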
@@ -88,47 +218,86 @@ static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
}

static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
                         AVFrame *frame, AVFrame *sw_frame,
                         AVPacket *pkt, AVIOContext *output_ctx)
                         AVFrame *frame, AVPacket *pkt,
                         AVIOContext *output_ctx)
{
    int ret = 0;
    int got_frame = 1;

    ret = avcodec_send_packet(decoder_ctx, pkt);
    if (ret < 0) {
        fprintf(stderr, "Error during decoding\n");
        return ret;
    }

    while (ret >= 0) {
        int i, j;

        ret = avcodec_receive_frame(decoder_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;
        else if (ret < 0) {
    while (pkt->size > 0 || (!pkt->data && got_frame)) {
        ret = avcodec_decode_video2(decoder_ctx, frame, &got_frame, pkt);
        if (ret < 0) {
            fprintf(stderr, "Error during decoding\n");
            return ret;
        }

        pkt->data += ret;
        pkt->size -= ret;

        /* A real program would do something useful with the decoded frame here.
         * We just retrieve the raw data and write it to a file, which is rather
         * useless but pedagogic. */
        ret = av_hwframe_transfer_data(sw_frame, frame, 0);
        if (ret < 0) {
            fprintf(stderr, "Error transferring the data to system memory\n");
            goto fail;
        }
        if (got_frame) {
            mfxFrameSurface1 *surf = (mfxFrameSurface1*)frame->data[3];
            VASurfaceID surface = *(VASurfaceID*)surf->Data.MemId;

            for (i = 0; i < FF_ARRAY_ELEMS(sw_frame->data) && sw_frame->data[i]; i++)
                for (j = 0; j < (sw_frame->height >> (i > 0)); j++)
                    avio_write(output_ctx, sw_frame->data[i] + j * sw_frame->linesize[i], sw_frame->width);
            VAImageFormat img_fmt = {
                .fourcc         = VA_FOURCC_NV12,
                .byte_order     = VA_LSB_FIRST,
                .bits_per_pixel = 8,
                .depth          = 8,
            };

            VAImage img;

            VAStatus err;
            uint8_t *data;
            int i, j;

            img.buf      = VA_INVALID_ID;
            img.image_id = VA_INVALID_ID;

            err = vaCreateImage(decode->va_dpy, &img_fmt,
                                frame->width, frame->height, &img);
            if (err != VA_STATUS_SUCCESS) {
                fprintf(stderr, "Error creating an image: %s\n",
                        vaErrorStr(err));
                ret = AVERROR_UNKNOWN;
                goto fail;
            }

            err = vaGetImage(decode->va_dpy, surface, 0, 0,
                             frame->width, frame->height,
                             img.image_id);
            if (err != VA_STATUS_SUCCESS) {
                fprintf(stderr, "Error getting an image: %s\n",
                        vaErrorStr(err));
                ret = AVERROR_UNKNOWN;
                goto fail;
            }

            err = vaMapBuffer(decode->va_dpy, img.buf, (void**)&data);
            if (err != VA_STATUS_SUCCESS) {
                fprintf(stderr, "Error mapping the image buffer: %s\n",
                        vaErrorStr(err));
                ret = AVERROR_UNKNOWN;
                goto fail;
            }

            for (i = 0; i < img.num_planes; i++)
                for (j = 0; j < (img.height >> (i > 0)); j++)
                    avio_write(output_ctx, data + img.offsets[i] + j * img.pitches[i], img.width);

fail:
        av_frame_unref(sw_frame);
        av_frame_unref(frame);
            if (img.buf != VA_INVALID_ID)
                vaUnmapBuffer(decode->va_dpy, img.buf);
            if (img.image_id != VA_INVALID_ID)
                vaDestroyImage(decode->va_dpy, img.image_id);
            av_frame_unref(frame);

        if (ret < 0)
            return ret;
            if (ret < 0)
                return ret;
        }
    }

    return 0;
@@ -142,13 +311,30 @@ int main(int argc, char **argv)
    const AVCodec *decoder;

    AVPacket pkt = { 0 };
    AVFrame *frame = NULL, *sw_frame = NULL;
    AVFrame *frame = NULL;

    DecodeContext decode = { NULL };

    Display *dpy = NULL;
    int va_ver_major, va_ver_minor;

    mfxIMPL mfx_impl = MFX_IMPL_AUTO_ANY;
    mfxVersion mfx_ver = { { 1, 1 } };

    mfxFrameAllocator frame_allocator = {
        .pthis  = &decode,
        .Alloc  = frame_alloc,
        .Lock   = frame_lock,
        .Unlock = frame_unlock,
        .GetHDL = frame_get_hdl,
        .Free   = frame_free,
    };

    AVIOContext *output_ctx = NULL;

    int ret, i;
    int ret, i, err;

    av_register_all();

    if (argc < 3) {
        fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
@@ -176,13 +362,34 @@ int main(int argc, char **argv)
        goto finish;
    }

    /* open the hardware device */
    ret = av_hwdevice_ctx_create(&decode.hw_device_ref, AV_HWDEVICE_TYPE_QSV,
                                 "auto", NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Cannot open the hardware device\n");
    /* initialize VA-API */
    dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "Cannot open the X display\n");
        goto finish;
    }
    decode.va_dpy = vaGetDisplay(dpy);
    if (!decode.va_dpy) {
        fprintf(stderr, "Cannot open the VA display\n");
        goto finish;
    }

    err = vaInitialize(decode.va_dpy, &va_ver_major, &va_ver_minor);
    if (err != VA_STATUS_SUCCESS) {
        fprintf(stderr, "Cannot initialize VA: %s\n", vaErrorStr(err));
        goto finish;
    }
    fprintf(stderr, "Initialized VA v%d.%d\n", va_ver_major, va_ver_minor);

    /* initialize an MFX session */
    err = MFXInit(mfx_impl, &mfx_ver, &decode.mfx_session);
    if (err != MFX_ERR_NONE) {
        fprintf(stderr, "Error initializing an MFX session\n");
        goto finish;
    }

    MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
    MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);

    /* initialize the decoder */
    decoder = avcodec_find_decoder_by_name("h264_qsv");
@@ -208,8 +415,10 @@ int main(int argc, char **argv)
               video_st->codecpar->extradata_size);
        decoder_ctx->extradata_size = video_st->codecpar->extradata_size;
    }
    decoder_ctx->refcounted_frames = 1;

    decoder_ctx->opaque      = &decode;
    decoder_ctx->get_buffer2 = get_buffer;
    decoder_ctx->get_format  = get_format;

    ret = avcodec_open2(decoder_ctx, NULL, NULL);
@@ -225,9 +434,8 @@ int main(int argc, char **argv)
        goto finish;
    }

    frame = av_frame_alloc();
    sw_frame = av_frame_alloc();
    if (!frame || !sw_frame) {
    frame = av_frame_alloc();
    if (!frame) {
        ret = AVERROR(ENOMEM);
        goto finish;
    }
@@ -239,7 +447,7 @@ int main(int argc, char **argv)
            break;

        if (pkt.stream_index == video_st->index)
            ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
            ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);

        av_packet_unref(&pkt);
    }
@@ -247,7 +455,7 @@ int main(int argc, char **argv)

    /* flush the decoder */
    pkt.data = NULL;
    pkt.size = 0;
    ret = decode_packet(&decode, decoder_ctx, frame, sw_frame, &pkt, output_ctx);
    ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);

finish:
    if (ret < 0) {
@@ -259,11 +467,19 @@ finish:
    avformat_close_input(&input_ctx);

    av_frame_free(&frame);
    av_frame_free(&sw_frame);

    if (decoder_ctx)
        av_freep(&decoder_ctx->hwaccel_context);
    avcodec_free_context(&decoder_ctx);

    av_buffer_unref(&decode.hw_device_ref);
    free_surfaces(&decode);

    if (decode.mfx_session)
        MFXClose(decode.mfx_session);
    if (decode.va_dpy)
        vaTerminate(decode.va_dpy);
    if (dpy)
        XCloseDisplay(dpy);

    avio_close(output_ctx);

@@ -65,6 +65,8 @@ int main(int argc, char **argv)
    in_filename  = argv[1];
    out_filename = argv[2];

    av_register_all();

    if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
        fprintf(stderr, "Could not open input file '%s'", in_filename);
        goto end;
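The hunks above add an av_register_all() call because these examples target older libavformat releases, where (de)muxers and codecs had to be registered before use; since FFmpeg 4.0 registration happens automatically and the call is a deprecated no-op. As a minimal sketch of the same open/inspect/close sequence on a current release, with the probe() helper name and error handling being illustrative only:

#include <libavformat/avformat.h>

/* Open a file, read its stream info and close it again; no registration call
 * is needed on FFmpeg 4.0 and later. Returns 0 on success or a negative AVERROR code. */
static int probe(const char *filename)
{
    AVFormatContext *fmt_ctx = NULL;
    int ret;

    if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0)
        return ret;

    ret = avformat_find_stream_info(fmt_ctx, NULL);
    avformat_close_input(&fmt_ctx);
    return ret < 0 ? ret : 0;
}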
@@ -1,6 +1,4 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2018 Andreas Unterweger
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
@@ -10,7 +8,7 @@
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
@@ -20,11 +18,10 @@
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Simple audio converter
|
||||
* simple audio converter
|
||||
*
|
||||
* @example transcode_aac.c
|
||||
* Convert an input audio file to AAC in an MP4 container using FFmpeg.
|
||||
* Formats other than MP4 are supported based on the output file extension.
|
||||
* @author Andreas Unterweger (dustsigns@gmail.com)
|
||||
*/
|
||||
|
||||
@@ -43,18 +40,12 @@
|
||||
|
||||
#include "libswresample/swresample.h"
|
||||
|
||||
/* The output bit rate in bit/s */
|
||||
/** The output bit rate in kbit/s */
|
||||
#define OUTPUT_BIT_RATE 96000
|
||||
/* The number of output channels */
|
||||
/** The number of output channels */
|
||||
#define OUTPUT_CHANNELS 2
|
||||
|
||||
/**
|
||||
* Open an input file and the required decoder.
|
||||
* @param filename File to be opened
|
||||
* @param[out] input_format_context Format context of opened file
|
||||
* @param[out] input_codec_context Codec context of opened file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Open an input file and the required decoder. */
|
||||
static int open_input_file(const char *filename,
|
||||
AVFormatContext **input_format_context,
|
||||
AVCodecContext **input_codec_context)
|
||||
@@ -63,7 +54,7 @@ static int open_input_file(const char *filename,
|
||||
AVCodec *input_codec;
|
||||
int error;
|
||||
|
||||
/* Open the input file to read from it. */
|
||||
/** Open the input file to read from it. */
|
||||
if ((error = avformat_open_input(input_format_context, filename, NULL,
|
||||
NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
|
||||
@@ -72,7 +63,7 @@ static int open_input_file(const char *filename,
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Get information on the input file (number of streams etc.). */
|
||||
/** Get information on the input file (number of streams etc.). */
|
||||
if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open find stream info (error '%s')\n",
|
||||
av_err2str(error));
|
||||
@@ -80,7 +71,7 @@ static int open_input_file(const char *filename,
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Make sure that there is only one stream in the input file. */
|
||||
/** Make sure that there is only one stream in the input file. */
|
||||
if ((*input_format_context)->nb_streams != 1) {
|
||||
fprintf(stderr, "Expected one audio input stream, but found %d\n",
|
||||
(*input_format_context)->nb_streams);
|
||||
@@ -88,14 +79,14 @@ static int open_input_file(const char *filename,
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/* Find a decoder for the audio stream. */
|
||||
/** Find a decoder for the audio stream. */
|
||||
if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codecpar->codec_id))) {
|
||||
fprintf(stderr, "Could not find input codec\n");
|
||||
avformat_close_input(input_format_context);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/* Allocate a new decoding context. */
|
||||
/** allocate a new decoding context */
|
||||
avctx = avcodec_alloc_context3(input_codec);
|
||||
if (!avctx) {
|
||||
fprintf(stderr, "Could not allocate a decoding context\n");
|
||||
@@ -103,7 +94,7 @@ static int open_input_file(const char *filename,
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Initialize the stream parameters with demuxer information. */
|
||||
/** initialize the stream parameters with demuxer information */
|
||||
error = avcodec_parameters_to_context(avctx, (*input_format_context)->streams[0]->codecpar);
|
||||
if (error < 0) {
|
||||
avformat_close_input(input_format_context);
|
||||
@@ -111,7 +102,7 @@ static int open_input_file(const char *filename,
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Open the decoder for the audio stream to use it later. */
|
||||
/** Open the decoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2(avctx, input_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input codec (error '%s')\n",
|
||||
av_err2str(error));
|
||||
@@ -120,7 +111,7 @@ static int open_input_file(const char *filename,
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Save the decoder context for easier access later. */
|
||||
/** Save the decoder context for easier access later. */
|
||||
*input_codec_context = avctx;
|
||||
|
||||
return 0;
|
||||
@@ -130,11 +121,6 @@ static int open_input_file(const char *filename,
|
||||
* Open an output file and the required encoder.
|
||||
* Also set some basic encoder parameters.
|
||||
* Some of these parameters are based on the input file's parameters.
|
||||
* @param filename File to be opened
|
||||
* @param input_codec_context Codec context of input file
|
||||
* @param[out] output_format_context Format context of output file
|
||||
* @param[out] output_codec_context Codec context of output file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int open_output_file(const char *filename,
|
||||
AVCodecContext *input_codec_context,
|
||||
@@ -147,7 +133,7 @@ static int open_output_file(const char *filename,
|
||||
AVCodec *output_codec = NULL;
|
||||
int error;
|
||||
|
||||
/* Open the output file to write to it. */
|
||||
/** Open the output file to write to it. */
|
||||
if ((error = avio_open(&output_io_context, filename,
|
||||
AVIO_FLAG_WRITE)) < 0) {
|
||||
fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
|
||||
@@ -155,35 +141,32 @@ static int open_output_file(const char *filename,
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Create a new format context for the output container format. */
|
||||
/** Create a new format context for the output container format. */
|
||||
if (!(*output_format_context = avformat_alloc_context())) {
|
||||
fprintf(stderr, "Could not allocate output format context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Associate the output file (pointer) with the container format context. */
|
||||
/** Associate the output file (pointer) with the container format context. */
|
||||
(*output_format_context)->pb = output_io_context;
|
||||
|
||||
/* Guess the desired container format based on the file extension. */
|
||||
/** Guess the desired container format based on the file extension. */
|
||||
if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
|
||||
NULL))) {
|
||||
fprintf(stderr, "Could not find output file format\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
if (!((*output_format_context)->url = av_strdup(filename))) {
|
||||
fprintf(stderr, "Could not allocate url.\n");
|
||||
error = AVERROR(ENOMEM);
|
||||
goto cleanup;
|
||||
}
|
||||
av_strlcpy((*output_format_context)->filename, filename,
|
||||
sizeof((*output_format_context)->filename));
|
||||
|
||||
/* Find the encoder to be used by its name. */
|
||||
/** Find the encoder to be used by its name. */
|
||||
if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
|
||||
fprintf(stderr, "Could not find an AAC encoder.\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/* Create a new audio stream in the output file container. */
|
||||
/** Create a new audio stream in the output file container. */
|
||||
if (!(stream = avformat_new_stream(*output_format_context, NULL))) {
|
||||
fprintf(stderr, "Could not create new stream\n");
|
||||
error = AVERROR(ENOMEM);
|
||||
@@ -197,27 +180,31 @@ static int open_output_file(const char *filename,
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/* Set the basic encoder parameters.
|
||||
* The input file's sample rate is used to avoid a sample rate conversion. */
|
||||
/**
|
||||
* Set the basic encoder parameters.
|
||||
* The input file's sample rate is used to avoid a sample rate conversion.
|
||||
*/
|
||||
avctx->channels = OUTPUT_CHANNELS;
|
||||
avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
|
||||
avctx->sample_rate = input_codec_context->sample_rate;
|
||||
avctx->sample_fmt = output_codec->sample_fmts[0];
|
||||
avctx->bit_rate = OUTPUT_BIT_RATE;
|
||||
|
||||
/* Allow the use of the experimental AAC encoder. */
|
||||
/** Allow the use of the experimental AAC encoder */
|
||||
avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
|
||||
|
||||
/* Set the sample rate for the container. */
|
||||
/** Set the sample rate for the container. */
|
||||
stream->time_base.den = input_codec_context->sample_rate;
|
||||
stream->time_base.num = 1;
|
||||
|
||||
/* Some container formats (like MP4) require global headers to be present.
|
||||
* Mark the encoder so that it behaves accordingly. */
|
||||
/**
|
||||
* Some container formats (like MP4) require global headers to be present
|
||||
* Mark the encoder so that it behaves accordingly.
|
||||
*/
|
||||
if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
avctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
/* Open the encoder for the audio stream to use it later. */
|
||||
/** Open the encoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2(avctx, output_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open output codec (error '%s')\n",
|
||||
av_err2str(error));
|
||||
@@ -230,7 +217,7 @@ static int open_output_file(const char *filename,
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/* Save the encoder context for easier access later. */
|
||||
/** Save the encoder context for easier access later. */
|
||||
*output_codec_context = avctx;
|
||||
|
||||
return 0;
|
||||
@@ -243,23 +230,16 @@ cleanup:
|
||||
return error < 0 ? error : AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize one data packet for reading or writing.
|
||||
* @param packet Packet to be initialized
|
||||
*/
|
||||
/** Initialize one data packet for reading or writing. */
|
||||
static void init_packet(AVPacket *packet)
|
||||
{
|
||||
av_init_packet(packet);
|
||||
/* Set the packet data and size so that it is recognized as being empty. */
|
||||
/** Set the packet data and size so that it is recognized as being empty. */
|
||||
packet->data = NULL;
|
||||
packet->size = 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize one audio frame for reading from the input file.
|
||||
* @param[out] frame Frame to be initialized
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Initialize one audio frame for reading from the input file */
|
||||
static int init_input_frame(AVFrame **frame)
|
||||
{
|
||||
if (!(*frame = av_frame_alloc())) {
|
||||
@@ -273,10 +253,6 @@ static int init_input_frame(AVFrame **frame)
|
||||
* Initialize the audio resampler based on the input and output codec settings.
|
||||
* If the input and output sample formats differ, a conversion is required
|
||||
* libswresample takes care of this, but requires initialization.
|
||||
* @param input_codec_context Codec context of the input file
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @param[out] resample_context Resample context for the required conversion
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int init_resampler(AVCodecContext *input_codec_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
@@ -284,7 +260,7 @@ static int init_resampler(AVCodecContext *input_codec_context,
|
||||
{
|
||||
int error;
|
||||
|
||||
/*
|
||||
/**
|
||||
* Create a resampler context for the conversion.
|
||||
* Set the conversion parameters.
|
||||
* Default channel layouts based on the number of channels
|
||||
@@ -303,14 +279,14 @@ static int init_resampler(AVCodecContext *input_codec_context,
|
||||
fprintf(stderr, "Could not allocate resample context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
/*
|
||||
/**
|
||||
* Perform a sanity check so that the number of converted samples is
|
||||
* not greater than the number of samples to be converted.
|
||||
* If the sample rates differ, this case has to be handled differently
|
||||
*/
|
||||
av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);
|
||||
|
||||
/* Open the resampler with the specified parameters. */
|
||||
/** Open the resampler with the specified parameters. */
|
||||
if ((error = swr_init(*resample_context)) < 0) {
|
||||
fprintf(stderr, "Could not open resample context\n");
|
||||
swr_free(resample_context);
|
||||
@@ -319,15 +295,10 @@ static int init_resampler(AVCodecContext *input_codec_context,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize a FIFO buffer for the audio samples to be encoded.
|
||||
* @param[out] fifo Sample buffer
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Initialize a FIFO buffer for the audio samples to be encoded. */
|
||||
static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
|
||||
{
|
||||
/* Create the FIFO buffer based on the specified output sample format. */
|
||||
/** Create the FIFO buffer based on the specified output sample format. */
|
||||
if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
|
||||
output_codec_context->channels, 1))) {
|
||||
fprintf(stderr, "Could not allocate FIFO\n");
|
||||
@@ -336,11 +307,7 @@ static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write the header of the output file container.
|
||||
* @param output_format_context Format context of the output file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Write the header of the output file container. */
|
||||
static int write_output_file_header(AVFormatContext *output_format_context)
|
||||
{
|
||||
int error;
|
||||
@@ -352,32 +319,20 @@ static int write_output_file_header(AVFormatContext *output_format_context)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Decode one audio frame from the input file.
|
||||
* @param frame Audio frame to be decoded
|
||||
* @param input_format_context Format context of the input file
|
||||
* @param input_codec_context Codec context of the input file
|
||||
* @param[out] data_present Indicates whether data has been decoded
|
||||
* @param[out] finished Indicates whether the end of file has
|
||||
* been reached and all data has been
|
||||
* decoded. If this flag is false, there
|
||||
* is more data to be decoded, i.e., this
|
||||
* function has to be called again.
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Decode one audio frame from the input file. */
|
||||
static int decode_audio_frame(AVFrame *frame,
|
||||
AVFormatContext *input_format_context,
|
||||
AVCodecContext *input_codec_context,
|
||||
int *data_present, int *finished)
|
||||
{
|
||||
/* Packet used for temporary storage. */
|
||||
/** Packet used for temporary storage. */
|
||||
AVPacket input_packet;
|
||||
int error;
|
||||
init_packet(&input_packet);
|
||||
|
||||
/* Read one audio frame from the input file into a temporary packet. */
|
||||
/** Read one audio frame from the input file into a temporary packet. */
|
||||
if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
|
||||
/* If we are at the end of the file, flush the decoder below. */
|
||||
/** If we are at the end of the file, flush the decoder below. */
|
||||
if (error == AVERROR_EOF)
|
||||
*finished = 1;
|
||||
else {
|
||||
@@ -387,52 +342,34 @@ static int decode_audio_frame(AVFrame *frame,
|
||||
}
|
||||
}
|
||||
|
||||
/* Send the audio frame stored in the temporary packet to the decoder.
|
||||
* The input audio stream decoder is used to do this. */
|
||||
if ((error = avcodec_send_packet(input_codec_context, &input_packet)) < 0) {
|
||||
fprintf(stderr, "Could not send packet for decoding (error '%s')\n",
|
||||
/**
|
||||
* Decode the audio frame stored in the temporary packet.
|
||||
* The input audio stream decoder is used to do this.
|
||||
* If we are at the end of the file, pass an empty packet to the decoder
|
||||
* to flush it.
|
||||
*/
|
||||
if ((error = avcodec_decode_audio4(input_codec_context, frame,
|
||||
data_present, &input_packet)) < 0) {
|
||||
fprintf(stderr, "Could not decode frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
av_packet_unref(&input_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Receive one frame from the decoder. */
|
||||
error = avcodec_receive_frame(input_codec_context, frame);
|
||||
/* If the decoder asks for more data to be able to decode a frame,
|
||||
* return indicating that no data is present. */
|
||||
if (error == AVERROR(EAGAIN)) {
|
||||
error = 0;
|
||||
goto cleanup;
|
||||
/* If the end of the input file is reached, stop decoding. */
|
||||
} else if (error == AVERROR_EOF) {
|
||||
*finished = 1;
|
||||
error = 0;
|
||||
goto cleanup;
|
||||
} else if (error < 0) {
|
||||
fprintf(stderr, "Could not decode frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
goto cleanup;
|
||||
/* Default case: Return decoded data. */
|
||||
} else {
|
||||
*data_present = 1;
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
cleanup:
|
||||
/**
|
||||
* If the decoder has not been flushed completely, we are not finished,
|
||||
* so that this function has to be called again.
|
||||
*/
|
||||
if (*finished && *data_present)
|
||||
*finished = 0;
|
||||
av_packet_unref(&input_packet);
|
||||
return error;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize a temporary storage for the specified number of audio samples.
|
||||
* The conversion requires temporary storage due to the different format.
|
||||
* The number of audio samples to be allocated is specified in frame_size.
|
||||
* @param[out] converted_input_samples Array of converted samples. The
|
||||
* dimensions are reference, channel
|
||||
* (for multi-channel audio), sample.
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @param frame_size Number of samples to be converted in
|
||||
* each round
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int init_converted_samples(uint8_t ***converted_input_samples,
|
||||
AVCodecContext *output_codec_context,
|
||||
@@ -440,7 +377,8 @@ static int init_converted_samples(uint8_t ***converted_input_samples,
|
||||
{
|
||||
int error;
|
||||
|
||||
/* Allocate as many pointers as there are audio channels.
|
||||
/**
|
||||
* Allocate as many pointers as there are audio channels.
|
||||
* Each pointer will later point to the audio samples of the corresponding
|
||||
* channels (although it may be NULL for interleaved formats).
|
||||
*/
|
||||
@@ -450,8 +388,10 @@ static int init_converted_samples(uint8_t ***converted_input_samples,
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Allocate memory for the samples of all channels in one consecutive
|
||||
* block for convenience. */
|
||||
/**
|
||||
* Allocate memory for the samples of all channels in one consecutive
|
||||
* block for convenience.
|
||||
*/
|
||||
if ((error = av_samples_alloc(*converted_input_samples, NULL,
|
||||
output_codec_context->channels,
|
||||
frame_size,
|
||||
@@ -468,15 +408,8 @@ static int init_converted_samples(uint8_t ***converted_input_samples,
|
||||
|
||||
/**
|
||||
* Convert the input audio samples into the output sample format.
|
||||
* The conversion happens on a per-frame basis, the size of which is
|
||||
* specified by frame_size.
|
||||
* @param input_data Samples to be decoded. The dimensions are
|
||||
* channel (for multi-channel audio), sample.
|
||||
* @param[out] converted_data Converted samples. The dimensions are channel
|
||||
* (for multi-channel audio), sample.
|
||||
* @param frame_size Number of samples to be converted
|
||||
* @param resample_context Resample context for the conversion
|
||||
* @return Error code (0 if successful)
|
||||
* The conversion happens on a per-frame basis, the size of which is specified
|
||||
* by frame_size.
|
||||
*/
|
||||
static int convert_samples(const uint8_t **input_data,
|
||||
uint8_t **converted_data, const int frame_size,
|
||||
@@ -484,7 +417,7 @@ static int convert_samples(const uint8_t **input_data,
|
||||
{
|
||||
int error;
|
||||
|
||||
/* Convert the samples using the resampler. */
|
||||
/** Convert the samples using the resampler. */
|
||||
if ((error = swr_convert(resample_context,
|
||||
converted_data, frame_size,
|
||||
input_data , frame_size)) < 0) {
|
||||
@@ -496,28 +429,23 @@ static int convert_samples(const uint8_t **input_data,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Add converted input audio samples to the FIFO buffer for later processing.
|
||||
* @param fifo Buffer to add the samples to
|
||||
* @param converted_input_samples Samples to be added. The dimensions are channel
|
||||
* (for multi-channel audio), sample.
|
||||
* @param frame_size Number of samples to be converted
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Add converted input audio samples to the FIFO buffer for later processing. */
|
||||
static int add_samples_to_fifo(AVAudioFifo *fifo,
|
||||
uint8_t **converted_input_samples,
|
||||
const int frame_size)
|
||||
{
|
||||
int error;
|
||||
|
||||
/* Make the FIFO as large as it needs to be to hold both,
|
||||
* the old and the new samples. */
|
||||
/**
|
||||
* Make the FIFO as large as it needs to be to hold both,
|
||||
* the old and the new samples.
|
||||
*/
|
||||
if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
|
||||
fprintf(stderr, "Could not reallocate FIFO\n");
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Store the new samples in the FIFO buffer. */
|
||||
/** Store the new samples in the FIFO buffer. */
|
||||
if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
|
||||
frame_size) < frame_size) {
|
||||
fprintf(stderr, "Could not write data to FIFO\n");
|
||||
@@ -527,20 +455,8 @@ static int add_samples_to_fifo(AVAudioFifo *fifo,
|
||||
}
|
||||
|
||||
/**
|
||||
* Read one audio frame from the input file, decode, convert and store
|
||||
* Read one audio frame from the input file, decodes, converts and stores
|
||||
* it in the FIFO buffer.
|
||||
* @param fifo Buffer used for temporary storage
|
||||
* @param input_format_context Format context of the input file
|
||||
* @param input_codec_context Codec context of the input file
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @param resampler_context Resample context for the conversion
|
||||
* @param[out] finished Indicates whether the end of file has
|
||||
* been reached and all data has been
|
||||
* decoded. If this flag is false,
|
||||
* there is more data to be decoded,
|
||||
* i.e., this function has to be called
|
||||
* again.
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int read_decode_convert_and_store(AVAudioFifo *fifo,
|
||||
AVFormatContext *input_format_context,
|
||||
@@ -549,41 +465,45 @@ static int read_decode_convert_and_store(AVAudioFifo *fifo,
|
||||
SwrContext *resampler_context,
|
||||
int *finished)
|
||||
{
|
||||
/* Temporary storage of the input samples of the frame read from the file. */
|
||||
/** Temporary storage of the input samples of the frame read from the file. */
|
||||
AVFrame *input_frame = NULL;
|
||||
/* Temporary storage for the converted input samples. */
|
||||
/** Temporary storage for the converted input samples. */
|
||||
uint8_t **converted_input_samples = NULL;
|
||||
int data_present = 0;
|
||||
int data_present;
|
||||
int ret = AVERROR_EXIT;
|
||||
|
||||
/* Initialize temporary storage for one input frame. */
|
||||
/** Initialize temporary storage for one input frame. */
|
||||
if (init_input_frame(&input_frame))
|
||||
goto cleanup;
|
||||
/* Decode one frame worth of audio samples. */
|
||||
/** Decode one frame worth of audio samples. */
|
||||
if (decode_audio_frame(input_frame, input_format_context,
|
||||
input_codec_context, &data_present, finished))
|
||||
goto cleanup;
|
||||
/* If we are at the end of the file and there are no more samples
|
||||
/**
|
||||
* If we are at the end of the file and there are no more samples
|
||||
* in the decoder which are delayed, we are actually finished.
|
||||
* This must not be treated as an error. */
|
||||
if (*finished) {
|
||||
* This must not be treated as an error.
|
||||
*/
|
||||
if (*finished && !data_present) {
|
||||
ret = 0;
|
||||
goto cleanup;
|
||||
}
|
||||
/* If there is decoded data, convert and store it. */
|
||||
/** If there is decoded data, convert and store it */
|
||||
if (data_present) {
|
||||
/* Initialize the temporary storage for the converted input samples. */
|
||||
/** Initialize the temporary storage for the converted input samples. */
|
||||
if (init_converted_samples(&converted_input_samples, output_codec_context,
|
||||
input_frame->nb_samples))
|
||||
goto cleanup;
|
||||
|
||||
/* Convert the input samples to the desired output sample format.
|
||||
* This requires a temporary storage provided by converted_input_samples. */
|
||||
/**
|
||||
* Convert the input samples to the desired output sample format.
|
||||
* This requires a temporary storage provided by converted_input_samples.
|
||||
*/
|
||||
if (convert_samples((const uint8_t**)input_frame->extended_data, converted_input_samples,
|
||||
input_frame->nb_samples, resampler_context))
|
||||
goto cleanup;
|
||||
|
||||
/* Add the converted input samples to the FIFO buffer for later processing. */
|
||||
/** Add the converted input samples to the FIFO buffer for later processing. */
|
||||
if (add_samples_to_fifo(fifo, converted_input_samples,
|
||||
input_frame->nb_samples))
|
||||
goto cleanup;
|
||||
@@ -604,10 +524,6 @@ cleanup:
|
||||
/**
|
||||
* Initialize one input frame for writing to the output file.
|
||||
* The frame will be exactly frame_size samples large.
|
||||
* @param[out] frame Frame to be initialized
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @param frame_size Size of the frame
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int init_output_frame(AVFrame **frame,
|
||||
AVCodecContext *output_codec_context,
|
||||
@@ -615,24 +531,28 @@ static int init_output_frame(AVFrame **frame,
|
||||
{
|
||||
int error;
|
||||
|
||||
/* Create a new frame to store the audio samples. */
|
||||
/** Create a new frame to store the audio samples. */
|
||||
if (!(*frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Could not allocate output frame\n");
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/* Set the frame's parameters, especially its size and format.
|
||||
/**
|
||||
* Set the frame's parameters, especially its size and format.
|
||||
* av_frame_get_buffer needs this to allocate memory for the
|
||||
* audio samples of the frame.
|
||||
* Default channel layouts based on the number of channels
|
||||
* are assumed for simplicity. */
|
||||
* are assumed for simplicity.
|
||||
*/
|
||||
(*frame)->nb_samples = frame_size;
|
||||
(*frame)->channel_layout = output_codec_context->channel_layout;
|
||||
(*frame)->format = output_codec_context->sample_fmt;
|
||||
(*frame)->sample_rate = output_codec_context->sample_rate;
|
||||
|
||||
/* Allocate the samples of the created frame. This call will make
|
||||
* sure that the audio frame can hold as many samples as specified. */
|
||||
/**
|
||||
* Allocate the samples of the created frame. This call will make
|
||||
* sure that the audio frame can hold as many samples as specified.
|
||||
*/
|
||||
if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
|
||||
fprintf(stderr, "Could not allocate output frame samples (error '%s')\n",
|
||||
av_err2str(error));
|
||||
@@ -643,114 +563,87 @@ static int init_output_frame(AVFrame **frame,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Global timestamp for the audio frames. */
|
||||
/** Global timestamp for the audio frames */
|
||||
static int64_t pts = 0;
|
||||
|
||||
/**
|
||||
* Encode one frame worth of audio to the output file.
|
||||
* @param frame Samples to be encoded
|
||||
* @param output_format_context Format context of the output file
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @param[out] data_present Indicates whether data has been
|
||||
* encoded
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Encode one frame worth of audio to the output file. */
|
||||
static int encode_audio_frame(AVFrame *frame,
|
||||
AVFormatContext *output_format_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
int *data_present)
|
||||
{
|
||||
/* Packet used for temporary storage. */
|
||||
/** Packet used for temporary storage. */
|
||||
AVPacket output_packet;
|
||||
int error;
|
||||
init_packet(&output_packet);
|
||||
|
||||
/* Set a timestamp based on the sample rate for the container. */
|
||||
/** Set a timestamp based on the sample rate for the container. */
|
||||
if (frame) {
|
||||
frame->pts = pts;
|
||||
pts += frame->nb_samples;
|
||||
}
|
||||
|
||||
/* Send the audio frame stored in the temporary packet to the encoder.
|
||||
* The output audio stream encoder is used to do this. */
|
||||
error = avcodec_send_frame(output_codec_context, frame);
|
||||
/* The encoder signals that it has nothing more to encode. */
|
||||
if (error == AVERROR_EOF) {
|
||||
error = 0;
|
||||
goto cleanup;
|
||||
} else if (error < 0) {
|
||||
fprintf(stderr, "Could not send packet for encoding (error '%s')\n",
|
||||
/**
|
||||
* Encode the audio frame and store it in the temporary packet.
|
||||
* The output audio stream encoder is used to do this.
|
||||
*/
|
||||
if ((error = avcodec_encode_audio2(output_codec_context, &output_packet,
|
||||
frame, data_present)) < 0) {
|
||||
fprintf(stderr, "Could not encode frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
av_packet_unref(&output_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
/* Receive one encoded frame from the encoder. */
|
||||
error = avcodec_receive_packet(output_codec_context, &output_packet);
|
||||
/* If the encoder asks for more data to be able to provide an
|
||||
* encoded frame, return indicating that no data is present. */
|
||||
if (error == AVERROR(EAGAIN)) {
|
||||
error = 0;
|
||||
goto cleanup;
|
||||
/* If the last frame has been encoded, stop encoding. */
|
||||
} else if (error == AVERROR_EOF) {
|
||||
error = 0;
|
||||
goto cleanup;
|
||||
} else if (error < 0) {
|
||||
fprintf(stderr, "Could not encode frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
goto cleanup;
|
||||
/* Default case: Return encoded data. */
|
||||
} else {
|
||||
*data_present = 1;
|
||||
/** Write one audio frame from the temporary packet to the output file. */
|
||||
if (*data_present) {
|
||||
if ((error = av_write_frame(output_format_context, &output_packet)) < 0) {
|
||||
fprintf(stderr, "Could not write frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
av_packet_unref(&output_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
av_packet_unref(&output_packet);
|
||||
}
|
||||
|
||||
/* Write one audio frame from the temporary packet to the output file. */
|
||||
if (*data_present &&
|
||||
(error = av_write_frame(output_format_context, &output_packet)) < 0) {
|
||||
fprintf(stderr, "Could not write frame (error '%s')\n",
|
||||
av_err2str(error));
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
cleanup:
|
||||
av_packet_unref(&output_packet);
|
||||
return error;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Load one audio frame from the FIFO buffer, encode and write it to the
|
||||
* output file.
|
||||
* @param fifo Buffer used for temporary storage
|
||||
* @param output_format_context Format context of the output file
|
||||
* @param output_codec_context Codec context of the output file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
static int load_encode_and_write(AVAudioFifo *fifo,
|
||||
AVFormatContext *output_format_context,
|
||||
AVCodecContext *output_codec_context)
|
||||
{
|
||||
/* Temporary storage of the output samples of the frame written to the file. */
|
||||
/** Temporary storage of the output samples of the frame written to the file. */
|
||||
AVFrame *output_frame;
|
||||
/* Use the maximum number of possible samples per frame.
|
||||
/**
|
||||
* Use the maximum number of possible samples per frame.
|
||||
* If there is less than the maximum possible frame size in the FIFO
|
||||
* buffer use this number. Otherwise, use the maximum possible frame size. */
|
||||
* buffer use this number. Otherwise, use the maximum possible frame size
|
||||
*/
|
||||
const int frame_size = FFMIN(av_audio_fifo_size(fifo),
|
||||
output_codec_context->frame_size);
|
||||
int data_written;
|
||||
|
||||
/* Initialize temporary storage for one output frame. */
|
||||
/** Initialize temporary storage for one output frame. */
|
||||
if (init_output_frame(&output_frame, output_codec_context, frame_size))
|
||||
return AVERROR_EXIT;
|
||||
|
||||
/* Read as many samples from the FIFO buffer as required to fill the frame.
|
||||
* The samples are stored in the frame temporarily. */
|
||||
/**
|
||||
* Read as many samples from the FIFO buffer as required to fill the frame.
|
||||
* The samples are stored in the frame temporarily.
|
||||
*/
|
||||
if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
|
||||
fprintf(stderr, "Could not read data from FIFO\n");
|
||||
av_frame_free(&output_frame);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/* Encode one frame worth of audio samples. */
|
||||
/** Encode one frame worth of audio samples. */
|
||||
if (encode_audio_frame(output_frame, output_format_context,
|
||||
output_codec_context, &data_written)) {
|
||||
av_frame_free(&output_frame);
|
||||
@@ -760,11 +653,7 @@ static int load_encode_and_write(AVAudioFifo *fifo,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write the trailer of the output file container.
|
||||
* @param output_format_context Format context of the output file
|
||||
* @return Error code (0 if successful)
|
||||
*/
|
||||
/** Write the trailer of the output file container. */
|
||||
static int write_output_file_trailer(AVFormatContext *output_format_context)
|
||||
{
|
||||
int error;
|
||||
@@ -776,6 +665,7 @@ static int write_output_file_trailer(AVFormatContext *output_format_context)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Convert an audio file to an AAC file in an MP4 container. */
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
|
||||
@@ -784,75 +674,90 @@ int main(int argc, char **argv)
|
||||
AVAudioFifo *fifo = NULL;
|
||||
int ret = AVERROR_EXIT;
|
||||
|
||||
if (argc != 3) {
|
||||
if (argc < 3) {
|
||||
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* Open the input file for reading. */
|
||||
/** Register all codecs and formats so that they can be used. */
|
||||
av_register_all();
|
||||
/** Open the input file for reading. */
|
||||
if (open_input_file(argv[1], &input_format_context,
|
||||
&input_codec_context))
|
||||
goto cleanup;
|
||||
/* Open the output file for writing. */
|
||||
/** Open the output file for writing. */
|
||||
if (open_output_file(argv[2], input_codec_context,
|
||||
&output_format_context, &output_codec_context))
|
||||
goto cleanup;
|
||||
/* Initialize the resampler to be able to convert audio sample formats. */
|
||||
/** Initialize the resampler to be able to convert audio sample formats. */
|
||||
if (init_resampler(input_codec_context, output_codec_context,
|
||||
&resample_context))
|
||||
goto cleanup;
|
||||
/* Initialize the FIFO buffer to store audio samples to be encoded. */
|
||||
/** Initialize the FIFO buffer to store audio samples to be encoded. */
|
||||
if (init_fifo(&fifo, output_codec_context))
|
||||
goto cleanup;
|
||||
/* Write the header of the output file container. */
|
||||
/** Write the header of the output file container. */
|
||||
if (write_output_file_header(output_format_context))
|
||||
goto cleanup;
|
||||
|
||||
/* Loop as long as we have input samples to read or output samples
|
||||
* to write; abort as soon as we have neither. */
|
||||
/**
|
||||
* Loop as long as we have input samples to read or output samples
|
||||
* to write; abort as soon as we have neither.
|
||||
*/
|
||||
while (1) {
|
||||
/* Use the encoder's desired frame size for processing. */
|
||||
/** Use the encoder's desired frame size for processing. */
|
||||
const int output_frame_size = output_codec_context->frame_size;
|
||||
int finished = 0;
|
||||
|
||||
/* Make sure that there is one frame worth of samples in the FIFO
|
||||
/**
|
||||
* Make sure that there is one frame worth of samples in the FIFO
|
||||
* buffer so that the encoder can do its work.
|
||||
* Since the decoder's and the encoder's frame size may differ, we
|
||||
* need to FIFO buffer to store as many frames worth of input samples
|
||||
* that they make up at least one frame worth of output samples. */
|
||||
* that they make up at least one frame worth of output samples.
|
||||
*/
|
||||
while (av_audio_fifo_size(fifo) < output_frame_size) {
|
||||
/* Decode one frame worth of audio samples, convert it to the
|
||||
* output sample format and put it into the FIFO buffer. */
|
||||
/**
|
||||
* Decode one frame worth of audio samples, convert it to the
|
||||
* output sample format and put it into the FIFO buffer.
|
||||
*/
|
||||
if (read_decode_convert_and_store(fifo, input_format_context,
|
||||
input_codec_context,
|
||||
output_codec_context,
|
||||
resample_context, &finished))
|
||||
goto cleanup;
|
||||
|
||||
/* If we are at the end of the input file, we continue
|
||||
* encoding the remaining audio samples to the output file. */
|
||||
/**
|
||||
* If we are at the end of the input file, we continue
|
||||
* encoding the remaining audio samples to the output file.
|
||||
*/
|
||||
if (finished)
|
||||
break;
|
||||
}
|
||||
|
||||
/* If we have enough samples for the encoder, we encode them.
|
||||
/**
|
||||
* If we have enough samples for the encoder, we encode them.
|
||||
* At the end of the file, we pass the remaining samples to
|
||||
* the encoder. */
|
||||
* the encoder.
|
||||
*/
|
||||
while (av_audio_fifo_size(fifo) >= output_frame_size ||
|
||||
(finished && av_audio_fifo_size(fifo) > 0))
|
||||
/* Take one frame worth of audio samples from the FIFO buffer,
|
||||
* encode it and write it to the output file. */
|
||||
/**
|
||||
* Take one frame worth of audio samples from the FIFO buffer,
|
||||
* encode it and write it to the output file.
|
||||
*/
|
||||
if (load_encode_and_write(fifo, output_format_context,
|
||||
output_codec_context))
|
||||
goto cleanup;
|
||||
|
||||
/* If we are at the end of the input file and have encoded
|
||||
* all remaining samples, we can exit this loop and finish. */
|
||||
/**
|
||||
* If we are at the end of the input file and have encoded
|
||||
* all remaining samples, we can exit this loop and finish.
|
||||
*/
|
||||
if (finished) {
|
||||
int data_written;
|
||||
/* Flush the encoder as it may have delayed frames. */
|
||||
/** Flush the encoder as it may have delayed frames. */
|
||||
do {
|
||||
data_written = 0;
|
||||
if (encode_audio_frame(NULL, output_format_context,
|
||||
output_codec_context, &data_written))
|
||||
goto cleanup;
|
||||
@@ -861,7 +766,7 @@ int main(int argc, char **argv)
|
||||
}
|
||||
}
|
||||
|
||||
/* Write the trailer of the output file container. */
|
||||
/** Write the trailer of the output file container. */
|
||||
if (write_output_file_trailer(output_format_context))
|
||||
goto cleanup;
|
||||
ret = 0;
|
||||
|
||||
@@ -30,6 +30,7 @@

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
@@ -227,8 +228,8 @@ static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
{
    char args[512];
    int ret = 0;
    const AVFilter *buffersrc = NULL;
    const AVFilter *buffersink = NULL;
    AVFilter *buffersrc = NULL;
    AVFilter *buffersink = NULL;
    AVFilterContext *buffersrc_ctx = NULL;
    AVFilterContext *buffersink_ctx = NULL;
    AVFilterInOut *outputs = avfilter_inout_alloc();
@@ -517,6 +518,9 @@ int main(int argc, char **argv)
        return 1;
    }

    av_register_all();
    avfilter_register_all();

    if ((ret = open_input_file(argv[1])) < 0)
        goto end;
    if ((ret = open_output_file(argv[2])) < 0)
@@ -554,7 +558,7 @@ int main(int argc, char **argv)
        }

        if (got_frame) {
            frame->pts = frame->best_effort_timestamp;
            frame->pts = av_frame_get_best_effort_timestamp(frame);
            ret = filter_encode_write_frame(frame, stream_index);
            av_frame_free(&frame);
            if (ret < 0)
@@ -1,222 +0,0 @@
|
||||
/*
|
||||
* Video Acceleration API (video encoding) encode sample
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Intel VAAPI-accelerated encoding example.
|
||||
*
|
||||
* @example vaapi_encode.c
|
||||
* This example shows how to do VAAPI-accelerated encoding. At the moment it only
* supports NV12 raw input; usage: vaapi_encode 1920 1080 input.yuv output.h264
|
||||
*
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
#include <string.h>
|
||||
#include <errno.h>
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavutil/pixdesc.h>
|
||||
#include <libavutil/hwcontext.h>
|
||||
|
||||
static int width, height;
|
||||
static AVBufferRef *hw_device_ctx = NULL;
|
||||
|
||||
static int set_hwframe_ctx(AVCodecContext *ctx, AVBufferRef *hw_device_ctx)
|
||||
{
|
||||
AVBufferRef *hw_frames_ref;
|
||||
AVHWFramesContext *frames_ctx = NULL;
|
||||
int err = 0;
|
||||
|
||||
if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
|
||||
fprintf(stderr, "Failed to create VAAPI frame context.\n");
|
||||
return -1;
|
||||
}
|
||||
frames_ctx = (AVHWFramesContext *)(hw_frames_ref->data);
|
||||
frames_ctx->format = AV_PIX_FMT_VAAPI;
|
||||
frames_ctx->sw_format = AV_PIX_FMT_NV12;
|
||||
frames_ctx->width = width;
|
||||
frames_ctx->height = height;
|
||||
frames_ctx->initial_pool_size = 20;
|
||||
if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
|
||||
fprintf(stderr, "Failed to initialize VAAPI frame context."
|
||||
"Error code: %s\n",av_err2str(err));
|
||||
av_buffer_unref(&hw_frames_ref);
|
||||
return err;
|
||||
}
|
||||
ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
|
||||
if (!ctx->hw_frames_ctx)
|
||||
err = AVERROR(ENOMEM);
|
||||
|
||||
av_buffer_unref(&hw_frames_ref);
|
||||
return err;
|
||||
}
|
||||
|
||||
static int encode_write(AVCodecContext *avctx, AVFrame *frame, FILE *fout)
|
||||
{
|
||||
int ret = 0;
|
||||
AVPacket enc_pkt;
|
||||
|
||||
av_init_packet(&enc_pkt);
|
||||
enc_pkt.data = NULL;
|
||||
enc_pkt.size = 0;
|
||||
|
||||
if ((ret = avcodec_send_frame(avctx, frame)) < 0) {
|
||||
fprintf(stderr, "Error code: %s\n", av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
while (1) {
|
||||
ret = avcodec_receive_packet(avctx, &enc_pkt);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
enc_pkt.stream_index = 0;
|
||||
ret = fwrite(enc_pkt.data, enc_pkt.size, 1, fout);
|
||||
av_packet_unref(&enc_pkt);
|
||||
}
|
||||
|
||||
end:
|
||||
ret = ((ret == AVERROR(EAGAIN)) ? 0 : -1);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
int size, err;
|
||||
FILE *fin = NULL, *fout = NULL;
|
||||
AVFrame *sw_frame = NULL, *hw_frame = NULL;
|
||||
AVCodecContext *avctx = NULL;
|
||||
AVCodec *codec = NULL;
|
||||
const char *enc_name = "h264_vaapi";
|
||||
|
||||
if (argc < 5) {
|
||||
fprintf(stderr, "Usage: %s <width> <height> <input file> <output file>\n", argv[0]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
width = atoi(argv[1]);
|
||||
height = atoi(argv[2]);
|
||||
size = width * height;
|
||||
|
||||
if (!(fin = fopen(argv[3], "r"))) {
|
||||
fprintf(stderr, "Fail to open input file : %s\n", strerror(errno));
|
||||
return -1;
|
||||
}
|
||||
if (!(fout = fopen(argv[4], "w+b"))) {
|
||||
fprintf(stderr, "Fail to open output file : %s\n", strerror(errno));
|
||||
err = -1;
|
||||
goto close;
|
||||
}
|
||||
|
||||
err = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI,
|
||||
NULL, NULL, 0);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Failed to create a VAAPI device. Error code: %s\n", av_err2str(err));
|
||||
goto close;
|
||||
}
|
||||
|
||||
if (!(codec = avcodec_find_encoder_by_name(enc_name))) {
|
||||
fprintf(stderr, "Could not find encoder.\n");
|
||||
err = -1;
|
||||
goto close;
|
||||
}
|
||||
|
||||
if (!(avctx = avcodec_alloc_context3(codec))) {
|
||||
err = AVERROR(ENOMEM);
|
||||
goto close;
|
||||
}
|
||||
|
||||
avctx->width = width;
|
||||
avctx->height = height;
|
||||
avctx->time_base = (AVRational){1, 25};
|
||||
avctx->framerate = (AVRational){25, 1};
|
||||
avctx->sample_aspect_ratio = (AVRational){1, 1};
|
||||
avctx->pix_fmt = AV_PIX_FMT_VAAPI;
|
||||
|
||||
/* set hw_frames_ctx for encoder's AVCodecContext */
|
||||
if ((err = set_hwframe_ctx(avctx, hw_device_ctx)) < 0) {
|
||||
fprintf(stderr, "Failed to set hwframe context.\n");
|
||||
goto close;
|
||||
}
|
||||
|
||||
if ((err = avcodec_open2(avctx, codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Cannot open video encoder codec. Error code: %s\n", av_err2str(err));
|
||||
goto close;
|
||||
}
|
||||
|
||||
while (1) {
|
||||
if (!(sw_frame = av_frame_alloc())) {
|
||||
err = AVERROR(ENOMEM);
|
||||
goto close;
|
||||
}
|
||||
/* read data into the software frame, then transfer it to the hardware frame */
|
||||
sw_frame->width = width;
|
||||
sw_frame->height = height;
|
||||
sw_frame->format = AV_PIX_FMT_NV12;
|
||||
if ((err = av_frame_get_buffer(sw_frame, 32)) < 0)
|
||||
goto close;
|
||||
if ((err = fread((uint8_t*)(sw_frame->data[0]), size, 1, fin)) <= 0)
|
||||
break;
|
||||
if ((err = fread((uint8_t*)(sw_frame->data[1]), size/2, 1, fin)) <= 0)
|
||||
break;
|
||||
|
||||
if (!(hw_frame = av_frame_alloc())) {
|
||||
err = AVERROR(ENOMEM);
|
||||
goto close;
|
||||
}
|
||||
if ((err = av_hwframe_get_buffer(avctx->hw_frames_ctx, hw_frame, 0)) < 0) {
|
||||
fprintf(stderr, "Error code: %s.\n", av_err2str(err));
|
||||
goto close;
|
||||
}
|
||||
if (!hw_frame->hw_frames_ctx) {
|
||||
err = AVERROR(ENOMEM);
|
||||
goto close;
|
||||
}
|
||||
if ((err = av_hwframe_transfer_data(hw_frame, sw_frame, 0)) < 0) {
|
||||
fprintf(stderr, "Error while transferring frame data to surface."
|
||||
"Error code: %s.\n", av_err2str(err));
|
||||
goto close;
|
||||
}
|
||||
|
||||
if ((err = (encode_write(avctx, hw_frame, fout))) < 0) {
|
||||
fprintf(stderr, "Failed to encode.\n");
|
||||
goto close;
|
||||
}
|
||||
av_frame_free(&hw_frame);
|
||||
av_frame_free(&sw_frame);
|
||||
}
|
||||
|
||||
/* flush encoder */
|
||||
err = encode_write(avctx, NULL, fout);
|
||||
if (err == AVERROR_EOF)
|
||||
err = 0;
|
||||
|
||||
close:
|
||||
if (fin)
|
||||
fclose(fin);
|
||||
if (fout)
|
||||
fclose(fout);
|
||||
av_frame_free(&sw_frame);
|
||||
av_frame_free(&hw_frame);
|
||||
avcodec_free_context(&avctx);
|
||||
av_buffer_unref(&hw_device_ctx);
|
||||
|
||||
return err;
|
||||
}
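One caveat about the sample above, offered as a hedged note rather than a statement about the intended behaviour: the fread() calls into sw_frame->data[0] and data[1] assume each plane's linesize equals the frame width, which av_frame_get_buffer() does not guarantee once rows are padded for alignment. A minimal row-by-row read that tolerates padded linesizes could look like the hypothetical helper below (assumed name, not part of the sample):

/* Hypothetical sketch: read one raw NV12 frame, copying row by row so padded
 * linesizes are handled correctly. Assumes width/height/format are set and
 * av_frame_get_buffer() has already allocated the planes. */
static int read_nv12_frame(FILE *fin, AVFrame *frame)
{
    int y;

    /* Luma plane: height rows of width bytes each. */
    for (y = 0; y < frame->height; y++)
        if (fread(frame->data[0] + y * frame->linesize[0], 1,
                  frame->width, fin) != (size_t)frame->width)
            return AVERROR_EOF;

    /* Interleaved chroma plane: height/2 rows of width bytes each. */
    for (y = 0; y < frame->height / 2; y++)
        if (fread(frame->data[1] + y * frame->linesize[1], 1,
                  frame->width, fin) != (size_t)frame->width)
            return AVERROR_EOF;

    return 0;
}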
|
||||
@@ -1,304 +0,0 @@
|
||||
/*
|
||||
* Video Acceleration API (video transcoding) transcode sample
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Intel VAAPI-accelerated transcoding example.
|
||||
*
|
||||
* @example vaapi_transcode.c
|
||||
* This example shows how to do VAAPI-accelerated transcoding.
|
||||
* Usage: vaapi_transcode input_stream codec output_stream
|
||||
* e.g.: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
*       - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
#include <errno.h>
|
||||
|
||||
#include <libavutil/hwcontext.h>
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
|
||||
static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
|
||||
static AVBufferRef *hw_device_ctx = NULL;
|
||||
static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
|
||||
static int video_stream = -1;
|
||||
static AVStream *ost;
|
||||
static int initialized = 0;
|
||||
|
||||
static enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,
|
||||
const enum AVPixelFormat *pix_fmts)
|
||||
{
|
||||
const enum AVPixelFormat *p;
|
||||
|
||||
for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
|
||||
if (*p == AV_PIX_FMT_VAAPI)
|
||||
return *p;
|
||||
}
|
||||
|
||||
fprintf(stderr, "Unable to decode this file using VA-API.\n");
|
||||
return AV_PIX_FMT_NONE;
|
||||
}
|
||||
|
||||
static int open_input_file(const char *filename)
|
||||
{
|
||||
int ret;
|
||||
AVCodec *decoder = NULL;
|
||||
AVStream *video = NULL;
|
||||
|
||||
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
|
||||
fprintf(stderr, "Cannot open input file '%s', Error code: %s\n",
|
||||
filename, av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
|
||||
fprintf(stderr, "Cannot find input stream information. Error code: %s\n",
|
||||
av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Cannot find a video stream in the input file. "
|
||||
"Error code: %s\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
video_stream = ret;
|
||||
|
||||
if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
|
||||
return AVERROR(ENOMEM);
|
||||
|
||||
video = ifmt_ctx->streams[video_stream];
|
||||
if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
|
||||
fprintf(stderr, "avcodec_parameters_to_context error. Error code: %s\n",
|
||||
av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
|
||||
if (!decoder_ctx->hw_device_ctx) {
|
||||
fprintf(stderr, "A hardware device reference create failed.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
decoder_ctx->get_format = get_vaapi_format;
|
||||
|
||||
if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
|
||||
fprintf(stderr, "Failed to open codec for decoding. Error code: %s\n",
|
||||
av_err2str(ret));
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int encode_write(AVFrame *frame)
|
||||
{
|
||||
int ret = 0;
|
||||
AVPacket enc_pkt;
|
||||
|
||||
av_init_packet(&enc_pkt);
|
||||
enc_pkt.data = NULL;
|
||||
enc_pkt.size = 0;
|
||||
|
||||
if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
|
||||
fprintf(stderr, "Error during encoding. Error code: %s\n", av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
while (1) {
|
||||
ret = avcodec_receive_packet(encoder_ctx, &enc_pkt);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
enc_pkt.stream_index = 0;
|
||||
av_packet_rescale_ts(&enc_pkt, ifmt_ctx->streams[video_stream]->time_base,
|
||||
ofmt_ctx->streams[0]->time_base);
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error during writing data to output file. "
|
||||
"Error code: %s\n", av_err2str(ret));
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
end:
|
||||
if (ret == AVERROR_EOF)
|
||||
return 0;
|
||||
ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int dec_enc(AVPacket *pkt, AVCodec *enc_codec)
|
||||
{
|
||||
AVFrame *frame;
|
||||
int ret = 0;
|
||||
|
||||
ret = avcodec_send_packet(decoder_ctx, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding. Error code: %s\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
while (ret >= 0) {
|
||||
if (!(frame = av_frame_alloc()))
|
||||
return AVERROR(ENOMEM);
|
||||
|
||||
ret = avcodec_receive_frame(decoder_ctx, frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
|
||||
av_frame_free(&frame);
|
||||
return 0;
|
||||
} else if (ret < 0) {
|
||||
fprintf(stderr, "Error while decoding. Error code: %s\n", av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (!initialized) {
|
||||
/* We need to ref the decoder's hw_frames_ctx to initialize the encoder's codec;
we can only obtain it after the first frame has been decoded. */
|
||||
encoder_ctx->hw_frames_ctx = av_buffer_ref(decoder_ctx->hw_frames_ctx);
|
||||
if (!encoder_ctx->hw_frames_ctx) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto fail;
|
||||
}
|
||||
/* Set AVCodecContext parameters for the encoder; here we keep them the
* same as the decoder's.
* XXX: the sample cannot currently handle a resolution change.
*/
|
||||
encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
|
||||
encoder_ctx->pix_fmt = AV_PIX_FMT_VAAPI;
|
||||
encoder_ctx->width = decoder_ctx->width;
|
||||
encoder_ctx->height = decoder_ctx->height;
|
||||
|
||||
if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Failed to open encode codec. Error code: %s\n",
|
||||
av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
|
||||
fprintf(stderr, "Failed to allocate stream for output format.\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto fail;
|
||||
}
|
||||
|
||||
ost->time_base = encoder_ctx->time_base;
|
||||
ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Failed to copy the stream parameters. "
|
||||
"Error code: %s\n", av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* write the stream header */
|
||||
if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
|
||||
fprintf(stderr, "Error while writing stream header. "
|
||||
"Error code: %s\n", av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
|
||||
initialized = 1;
|
||||
}
|
||||
|
||||
if ((ret = encode_write(frame)) < 0)
|
||||
fprintf(stderr, "Error during encoding and writing.\n");
|
||||
|
||||
fail:
|
||||
av_frame_free(&frame);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret = 0;
|
||||
AVPacket dec_pkt;
|
||||
AVCodec *enc_codec;
|
||||
|
||||
if (argc != 4) {
|
||||
fprintf(stderr, "Usage: %s <input file> <encode codec> <output file>\n"
|
||||
"The output format is guessed according to the file extension.\n"
|
||||
"\n", argv[0]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Failed to create a VAAPI device. Error code: %s\n", av_err2str(ret));
|
||||
return -1;
|
||||
}
|
||||
|
||||
if ((ret = open_input_file(argv[1])) < 0)
|
||||
goto end;
|
||||
|
||||
if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
|
||||
fprintf(stderr, "Could not find encoder '%s'\n", argv[2]);
|
||||
ret = -1;
|
||||
goto end;
|
||||
}
|
||||
|
||||
if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
|
||||
fprintf(stderr, "Failed to deduce output format from file extension. Error code: "
|
||||
"%s\n", av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Cannot open output file. "
|
||||
"Error code: %s\n", av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* read all packets, transcoding only the video stream */
|
||||
while (ret >= 0) {
|
||||
if ((ret = av_read_frame(ifmt_ctx, &dec_pkt)) < 0)
|
||||
break;
|
||||
|
||||
if (video_stream == dec_pkt.stream_index)
|
||||
ret = dec_enc(&dec_pkt, enc_codec);
|
||||
|
||||
av_packet_unref(&dec_pkt);
|
||||
}
|
||||
|
||||
/* flush decoder */
|
||||
dec_pkt.data = NULL;
|
||||
dec_pkt.size = 0;
|
||||
ret = dec_enc(&dec_pkt, enc_codec);
|
||||
av_packet_unref(&dec_pkt);
|
||||
|
||||
/* flush encoder */
|
||||
ret = encode_write(NULL);
|
||||
|
||||
/* write the trailer for output stream */
|
||||
av_write_trailer(ofmt_ctx);
|
||||
|
||||
end:
|
||||
avformat_close_input(&ifmt_ctx);
|
||||
avformat_close_input(&ofmt_ctx);
|
||||
avcodec_free_context(&decoder_ctx);
|
||||
avcodec_free_context(&encoder_ctx);
|
||||
av_buffer_unref(&hw_device_ctx);
|
||||
return ret;
|
||||
}
|
||||
doc/faq.texi (73 changed lines)
@@ -385,7 +385,7 @@ mkfifo intermediate2.mpg
|
||||
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
|
||||
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
|
||||
cat intermediate1.mpg intermediate2.mpg |\
|
||||
ffmpeg -f mpeg -i - -c:v mpeg4 -c:a libmp3lame output.avi
|
||||
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
|
||||
@end example
|
||||
|
||||
@subsection Concatenating using raw audio and video
|
||||
@@ -407,13 +407,13 @@ mkfifo temp2.a
|
||||
mkfifo temp2.v
|
||||
mkfifo all.a
|
||||
mkfifo all.v
|
||||
ffmpeg -i input1.flv -vn -f u16le -c:a pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
|
||||
ffmpeg -i input2.flv -vn -f u16le -c:a pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
|
||||
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
|
||||
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
|
||||
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
|
||||
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
|
||||
cat temp1.a temp2.a > all.a &
|
||||
cat temp1.v temp2.v > all.v &
|
||||
ffmpeg -f u16le -c:a pcm_s16le -ac 2 -ar 44100 -i all.a \
|
||||
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
|
||||
-f yuv4mpegpipe -i all.v \
|
||||
-y output.flv
|
||||
rm temp[12].[av] all.[av]
|
||||
@@ -501,71 +501,6 @@ ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
|
||||
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
|
||||
@end example
|
||||
|
||||
@anchor{background task}
|
||||
@section How do I run ffmpeg as a background task?
|
||||
|
||||
ffmpeg normally checks the console input, for entries like "q" to stop
|
||||
and "?" to give help, while performing operations. ffmpeg does not have a way of
|
||||
detecting when it is running as a background task.
|
||||
When it checks the console input, that can cause the process running ffmpeg
|
||||
in the background to suspend.
|
||||
|
||||
To prevent those input checks, allowing ffmpeg to run as a background task,
|
||||
use the @url{ffmpeg.html#stdin-option, @code{-nostdin} option}
|
||||
in the ffmpeg invocation. This is effective whether you run ffmpeg in a shell
|
||||
or invoke ffmpeg in its own process via an operating system API.
|
||||
|
||||
As an alternative, when you are running ffmpeg in a shell, you can redirect
|
||||
standard input to @code{/dev/null} (on Linux and Mac OS)
|
||||
or @code{NUL} (on Windows). You can do this redirect either
|
||||
on the ffmpeg invocation, or from a shell script which calls ffmpeg.
|
||||
|
||||
For example:
|
||||
|
||||
@example
|
||||
ffmpeg -nostdin -i INPUT OUTPUT
|
||||
@end example
|
||||
|
||||
or (on Linux, Mac OS, and other UNIX-like shells):
|
||||
|
||||
@example
|
||||
ffmpeg -i INPUT OUTPUT </dev/null
|
||||
@end example
|
||||
|
||||
or (on Windows):
|
||||
|
||||
@example
|
||||
ffmpeg -i INPUT OUTPUT <NUL
|
||||
@end example
|
||||
|
||||
@section How do I prevent ffmpeg from suspending with a message like @emph{suspended (tty output)}?
|
||||
|
||||
If you run ffmpeg in the background, you may find that its process suspends.
|
||||
There may be a message like @emph{suspended (tty output)}. The question is how
|
||||
to prevent the process from being suspended.
|
||||
|
||||
For example:
|
||||
|
||||
@example
|
||||
% ffmpeg -i INPUT OUTPUT &> ~/tmp/log.txt &
|
||||
[1] 93352
|
||||
%
|
||||
[1] + suspended (tty output) ffmpeg -i INPUT OUTPUT &>
|
||||
@end example
|
||||
|
||||
The message "tty output" notwithstanding, the problem here is that
|
||||
ffmpeg normally checks the console input when it runs. The operating system
|
||||
detects this, and suspends the process until you can bring it to the
|
||||
foreground and attend to it.
|
||||
|
||||
The solution is to use the right techniques to tell ffmpeg not to consult
|
||||
console input. You can use the
|
||||
@url{ffmpeg.html#stdin-option, @code{-nostdin} option},
|
||||
or redirect standard input with @code{< /dev/null}.
|
||||
See FAQ
|
||||
@ref{background task, @emph{How do I run ffmpeg as a background task?}}
|
||||
for details.
|
||||
|
||||
@chapter Development
|
||||
|
||||
@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
|
||||
|
||||
@@ -147,26 +147,6 @@ process.
|
||||
The only thing left is to automate the execution of the fate.sh script and
|
||||
the synchronisation of the samples directory.
|
||||
|
||||
@chapter Uploading new samples to the fate suite
|
||||
|
||||
This is for developers who have an account on the fate suite server.
|
||||
If you upload new samples, please make sure they are as small as possible:
disk space on each client, network bandwidth and so on all benefit from smaller test cases.
Also keep in mind that older checkouts still use the existing sample files; in
practice this means you should generally not replace, remove or overwrite files,
as doing so would likely break older checkouts or releases.
|
||||
|
||||
@example
|
||||
#First update your local samples copy:
|
||||
rsync -vauL --chmod=Dg+s,Duo+x,ug+rw,o+r,o-w,+X fate-suite.ffmpeg.org:/home/samples/fate-suite/ ~/fate-suite
|
||||
|
||||
#Then do a dry run checking what would be uploaded:
|
||||
rsync -vanL --no-g --chmod=Dg+s,Duo+x,ug+rw,o+r,o-w,+X ~/fate-suite/ fate-suite.ffmpeg.org:/home/samples/fate-suite
|
||||
|
||||
#Upload the files:
|
||||
rsync -vaL --no-g --chmod=Dg+s,Duo+x,ug+rw,o+r,o-w,+X ~/fate-suite/ fate-suite.ffmpeg.org:/home/samples/fate-suite
|
||||
@end example
|
||||
|
||||
|
||||
@chapter FATE makefile targets and variables
|
||||
|
||||
|
||||
@@ -6,7 +6,6 @@ workdir= # directory in which to do all the work
|
||||
#fate_recv="ssh -T fate@fate.ffmpeg.org" # command to submit report
|
||||
comment= # optional description
|
||||
build_only= # set to "yes" for a compile-only instance that skips tests
|
||||
ignore_tests=
|
||||
|
||||
# the following are optional and map to configure options
|
||||
arch=
|
||||
@@ -27,7 +26,5 @@ extra_conf= # extra configure options not covered above
|
||||
|
||||
#make= # name of GNU make if not 'make'
|
||||
makeopts= # extra options passed to 'make'
|
||||
#makeopts_fate= # extra options passed to 'make' when running tests,
|
||||
# defaulting to makeopts above if this is not set
|
||||
#tar= # command to create a tar archive from its arguments on stdout,
|
||||
# defaults to 'tar c'
|
||||
|
||||
@@ -26,12 +26,12 @@ bitstream level modifications without performing decoding.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavcodec.html,libavcodec}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavcodec(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavcodec(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ the libavcodec library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavcodec.html,libavcodec}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavcodec(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavcodec(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ libavdevice library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavdevice.html,libavdevice}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavdevice(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavdevice(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ libavfilter library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavfilter.html,libavfilter}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavfilter(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavfilter(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ provided by the libavformat library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavformat.html,libavformat}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavformat(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavformat(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ libavformat library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavformat.html,libavformat}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavformat(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavformat(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -25,12 +25,12 @@ and convert audio format and packing layout.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libswresample.html,libswresample}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libswresample(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libswresample(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -24,12 +24,12 @@ image rescaling and pixel format conversion.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libswscale.html,libswscale}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libswscale(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libswscale(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@@ -23,12 +23,12 @@ by the libavutil library.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{libavutil.html,libavutil}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), libavutil(3)
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1), libavutil(3)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
doc/ffmpeg.texi (213 changed lines)
@@ -289,8 +289,8 @@ see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1)
|
||||
|
||||
-to and -t are mutually exclusive and -t has priority.
|
||||
|
||||
@item -to @var{position} (@emph{input/output})
|
||||
Stop writing the output or reading the input at @var{position}.
|
||||
@item -to @var{position} (@emph{output})
|
||||
Stop writing the output at @var{position}.
|
||||
@var{position} must be a time duration specification,
|
||||
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
@@ -413,10 +413,6 @@ they do not conflict with the standard, as in:
|
||||
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
|
||||
@end example
|
||||
|
||||
@item -dn (@emph{output})
|
||||
Disable data recording. For full manual control see the @code{-map}
|
||||
option.
|
||||
|
||||
@item -dframes @var{number} (@emph{output})
|
||||
Set the number of data frames to output. This is an obsolete alias for
|
||||
@code{-frames:d}, which you should use instead.
|
||||
@@ -474,7 +470,6 @@ the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
|
||||
consists of only alphanumeric characters. The last key of a sequence of
|
||||
progress information is always "progress".
|
||||
|
||||
@anchor{stdin option}
|
||||
@item -stdin
|
||||
Enable interaction on standard input. On by default unless standard input is
|
||||
used as an input. To explicitly disable interaction you need to specify
|
||||
@@ -575,8 +570,7 @@ stored at container level, but not the aspect ratio stored in encoded
|
||||
frames, if it exists.
|
||||
|
||||
@item -vn (@emph{output})
|
||||
Disable video recording. For full manual control see the @code{-map}
|
||||
option.
|
||||
Disable video recording.
|
||||
|
||||
@item -vcodec @var{codec} (@emph{output})
|
||||
Set the video codec. This is an alias for @code{-codec:v}.
|
||||
@@ -721,104 +715,6 @@ would be more efficient.
|
||||
When doing stream copy, copy also non-key frames found at the
|
||||
beginning.
|
||||
|
||||
@item -init_hw_device @var{type}[=@var{name}][:@var{device}[,@var{key=value}...]]
|
||||
Initialise a new hardware device of type @var{type} called @var{name}, using the
|
||||
given device parameters.
|
||||
If no name is specified it will receive a default name of the form "@var{type}%d".
|
||||
|
||||
The meaning of @var{device} and the following arguments depends on the
|
||||
device type:
|
||||
@table @option
|
||||
|
||||
@item cuda
|
||||
@var{device} is the number of the CUDA device.
|
||||
|
||||
@item dxva2
|
||||
@var{device} is the number of the Direct3D 9 display adapter.
|
||||
|
||||
@item vaapi
|
||||
@var{device} is either an X11 display name or a DRM render node.
|
||||
If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY})
|
||||
and then the first DRM render node (@emph{/dev/dri/renderD128}).
|
||||
|
||||
@item vdpau
|
||||
@var{device} is an X11 display name.
|
||||
If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY}).
|
||||
|
||||
@item qsv
|
||||
@var{device} selects a value in @samp{MFX_IMPL_*}. Allowed values are:
|
||||
@table @option
|
||||
@item auto
|
||||
@item sw
|
||||
@item hw
|
||||
@item auto_any
|
||||
@item hw_any
|
||||
@item hw2
|
||||
@item hw3
|
||||
@item hw4
|
||||
@end table
|
||||
If not specified, @samp{auto_any} is used.
|
||||
(Note that it may be easier to achieve the desired result for QSV by creating the
|
||||
platform-appropriate subdevice (@samp{dxva2} or @samp{vaapi}) and then deriving a
|
||||
QSV device from that.)
|
||||
|
||||
@item opencl
|
||||
@var{device} selects the platform and device as @emph{platform_index.device_index}.
|
||||
|
||||
The set of devices can also be filtered using the key-value pairs to find only
|
||||
devices matching particular platform or device strings.
|
||||
|
||||
The strings usable as filters are:
|
||||
@table @option
|
||||
@item platform_profile
|
||||
@item platform_version
|
||||
@item platform_name
|
||||
@item platform_vendor
|
||||
@item platform_extensions
|
||||
@item device_name
|
||||
@item device_vendor
|
||||
@item driver_version
|
||||
@item device_version
|
||||
@item device_profile
|
||||
@item device_extensions
|
||||
@item device_type
|
||||
@end table
|
||||
|
||||
The indices and filters must together uniquely select a device.
|
||||
|
||||
Examples:
|
||||
@table @emph
|
||||
@item -init_hw_device opencl:0.1
|
||||
Choose the second device on the first platform.
|
||||
|
||||
@item -init_hw_device opencl:,device_name=Foo9000
|
||||
Choose the device with a name containing the string @emph{Foo9000}.
|
||||
|
||||
@item -init_hw_device opencl:1,device_type=gpu,device_extensions=cl_khr_fp16
|
||||
Choose the GPU device on the second platform supporting the @emph{cl_khr_fp16}
|
||||
extension.
|
||||
@end table
|
||||
|
||||
@end table
|
||||
|
||||
@item -init_hw_device @var{type}[=@var{name}]@@@var{source}
|
||||
Initialise a new hardware device of type @var{type} called @var{name},
|
||||
deriving it from the existing device with the name @var{source}.
|
||||
|
||||
@item -init_hw_device list
|
||||
List all hardware device types supported in this build of ffmpeg.
|
||||
|
||||
@item -filter_hw_device @var{name}
|
||||
Pass the hardware device called @var{name} to all filters in any filter graph.
|
||||
This can be used to set the device to upload to with the @code{hwupload} filter,
|
||||
or the device to map to with the @code{hwmap} filter. Other filters may also
|
||||
make use of this parameter when they require a hardware device. Note that this
|
||||
is typically only required when the input is not already in hardware frames -
|
||||
when it is, filters will derive the device they require from the context of the
|
||||
frames they receive as input.
|
||||
|
||||
This is a global setting, so all filters will receive the same device.
|
||||
|
||||
@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
|
||||
Use hardware acceleration to decode the matching stream(s). The allowed values
|
||||
of @var{hwaccel} are:
|
||||
@@ -829,15 +725,15 @@ Do not use any hardware acceleration (the default).
|
||||
@item auto
|
||||
Automatically select the hardware acceleration method.
|
||||
|
||||
@item vda
|
||||
Use Apple VDA hardware acceleration.
|
||||
|
||||
@item vdpau
|
||||
Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
|
||||
|
||||
@item dxva2
|
||||
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
|
||||
|
||||
@item vaapi
|
||||
Use VAAPI (Video Acceleration API) hardware acceleration.
|
||||
|
||||
@item qsv
|
||||
Use the Intel QuickSync Video acceleration for video transcoding.
|
||||
|
||||
@@ -861,11 +757,33 @@ useful for testing.
|
||||
@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
|
||||
Select a device to use for hardware acceleration.
|
||||
|
||||
This option only makes sense when the @option{-hwaccel} option is also specified.
|
||||
It can either refer to an existing device created with @option{-init_hw_device}
|
||||
by name, or it can create a new device as if
|
||||
@samp{-init_hw_device} @var{type}:@var{hwaccel_device}
|
||||
were called immediately before.
|
||||
This option only makes sense when the @option{-hwaccel} option is also
|
||||
specified. Its exact meaning depends on the specific hardware acceleration
|
||||
method chosen.
|
||||
|
||||
@table @option
|
||||
@item vdpau
|
||||
For VDPAU, this option specifies the X11 display/screen to use. If this option
|
||||
is not specified, the value of the @var{DISPLAY} environment variable is used
|
||||
|
||||
@item dxva2
|
||||
For DXVA2, this option should contain the number of the display adapter to use.
|
||||
If this option is not specified, the default adapter is used.
|
||||
|
||||
@item qsv
|
||||
For QSV, this option corresponds to the values of MFX_IMPL_* . Allowed values
|
||||
are:
|
||||
@table @option
|
||||
@item auto
|
||||
@item sw
|
||||
@item hw
|
||||
@item auto_any
|
||||
@item hw_any
|
||||
@item hw2
|
||||
@item hw3
|
||||
@item hw4
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@item -hwaccels
|
||||
List all hardware acceleration methods supported in this build of ffmpeg.
|
||||
@@ -891,8 +809,7 @@ default to the number of input audio channels. For input streams
|
||||
this option only makes sense for audio grabbing devices and raw demuxers
|
||||
and is mapped to the corresponding demuxer options.
|
||||
@item -an (@emph{output})
|
||||
Disable audio recording. For full manual control see the @code{-map}
|
||||
option.
|
||||
Disable audio recording.
|
||||
@item -acodec @var{codec} (@emph{input/output})
|
||||
Set the audio codec. This is an alias for @code{-codec:a}.
|
||||
@item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
|
||||
@@ -927,8 +844,7 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
|
||||
@item -scodec @var{codec} (@emph{input/output})
|
||||
Set the subtitle codec. This is an alias for @code{-codec:s}.
|
||||
@item -sn (@emph{output})
|
||||
Disable subtitle recording. For full manual control see the @code{-map}
|
||||
option.
|
||||
Disable subtitle recording.
|
||||
@item -sbsf @var{bitstream_filter}
|
||||
Deprecated, see -bsf
|
||||
@end table
|
||||
@@ -977,7 +893,7 @@ It disables matching streams from already created mappings.
|
||||
A trailing @code{?} after the stream index will allow the map to be
|
||||
optional: if the map matches no streams the map will be ignored instead
|
||||
of failing. Note the map will still fail if an invalid input file index
|
||||
is used; such as if the map refers to a non-existent input.
|
||||
is used; such as if the map refers to a non-existant input.
|
||||
|
||||
An alternative @var{[linklabel]} form will map outputs from complex filter
|
||||
graphs (see the @option{-filter_complex} option) to the output file.
|
||||
@@ -1038,7 +954,7 @@ such streams is attempted.
|
||||
Allow input streams with unknown type to be copied instead of failing if copying
|
||||
such streams is attempted.
|
||||
|
||||
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][?][:@var{output_file_id}.@var{stream_specifier}]
|
||||
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
|
||||
Map an audio channel from a given input to an output. If
|
||||
@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
|
||||
be mapped on all the audio streams.
|
||||
@@ -1047,10 +963,6 @@ Using "-1" instead of
|
||||
@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
|
||||
channel.
|
||||
|
||||
A trailing @code{?} will allow the map_channel to be
|
||||
optional: if the map_channel matches no channel the map_channel will be ignored instead
|
||||
of failing.
|
||||
|
||||
For example, assuming @var{INPUT} is a stereo audio file, you can switch the
|
||||
two audio channels with the following command:
|
||||
@example
|
||||
@@ -1098,13 +1010,6 @@ video stream), you can use the following command:
|
||||
ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
|
||||
@end example
|
||||
|
||||
To map the first two audio channels from the first input, and using the
|
||||
trailing @code{?}, ignore the audio channel mapping if the first input is
|
||||
mono instead of stereo:
|
||||
@example
|
||||
ffmpeg -i INPUT -map_channel 0.0.0 -map_channel 0.0.1? OUTPUT
|
||||
@end example
|
||||
|
||||
@item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
|
||||
Set metadata information of the next output file from @var{infile}. Note that
|
||||
those are file indices (zero-based), not filenames.
|
||||
@@ -1174,6 +1079,10 @@ loss).
|
||||
By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
|
||||
This option will slow down the reading of the input(s) to the native frame rate
|
||||
of the input(s). It is useful for real-time output (e.g. live streaming).
|
||||
@item -loop_input
|
||||
Loop over the input stream. Currently it works only for image
|
||||
streams. This option is used for automatic FFserver testing.
|
||||
This option is deprecated, use -loop 1.
|
||||
@item -loop_output @var{number_of_times}
|
||||
Repeatedly loop output for formats that support looping such as animated GIF
|
||||
(0 will loop the output infinitely).
|
||||
@@ -1267,32 +1176,6 @@ Try to make the choice automatically, in order to generate a sane output.
|
||||
|
||||
Default value is -1.
|
||||
|
||||
@item -enc_time_base[:@var{stream_specifier}] @var{timebase} (@emph{output,per-stream})
|
||||
Set the encoder timebase. @var{timebase} is a floating point number,
|
||||
and can assume one of the following values:
|
||||
|
||||
@table @option
|
||||
@item 0
|
||||
Assign a default value according to the media type.
|
||||
|
||||
For video - use 1/framerate, for audio - use 1/samplerate.
|
||||
|
||||
@item -1
|
||||
Use the input stream timebase when possible.
|
||||
|
||||
If an input stream is not available, the default timebase will be used.
|
||||
|
||||
@item >0
|
||||
Use the provided number as the timebase.
|
||||
|
||||
This field can be provided as a ratio of two integers (e.g. 1:24, 1:48000)
|
||||
or as a floating point number (e.g. 0.04166, 2.0833e-5)
|
||||
@end table
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item -bitexact (@emph{input/output})
|
||||
Enable bitexact mode for (de)muxer and (de/en)coder
|
||||
@item -shortest (@emph{output})
|
||||
Finish encoding when the shortest input stream ends.
|
||||
@item -dts_delta_threshold
|
||||
@@ -1415,6 +1298,16 @@ file or device. With low latency / high rate live streams, packets may be
|
||||
discarded if they are not read in a timely manner; raising this value can
|
||||
avoid it.
|
||||
|
||||
@item -override_ffserver (@emph{global})
|
||||
Overrides the input specifications from @command{ffserver}. Using this
|
||||
option you can map any input stream to @command{ffserver} and control
|
||||
many aspects of the encoding from @command{ffmpeg}. Without this
|
||||
option @command{ffmpeg} will transmit to @command{ffserver} what is
|
||||
requested by @command{ffserver}.
|
||||
|
||||
The option is intended for cases where features are needed that cannot be
|
||||
specified to @command{ffserver} but can be to @command{ffmpeg}.
|
||||
|
||||
@item -sdp_file @var{file} (@emph{global})
|
||||
Print sdp information for an output stream to @var{file}.
|
||||
This allows dumping sdp information when at least one output isn't an
|
||||
@@ -1769,7 +1662,7 @@ ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
|
||||
@ifset config-not-all
|
||||
@url{ffmpeg-all.html,ffmpeg-all},
|
||||
@end ifset
|
||||
@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@@ -1788,7 +1681,7 @@ ffmpeg(1),
|
||||
@ifset config-not-all
|
||||
ffmpeg-all(1),
|
||||
@end ifset
|
||||
ffplay(1), ffprobe(1),
|
||||
ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
|
||||
|
||||
@@ -291,7 +291,7 @@ Toggle full screen.
|
||||
@ifset config-not-all
|
||||
@url{ffplay-all.html,ffmpeg-all},
|
||||
@end ifset
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@@ -310,7 +310,7 @@ ffplay(1),
|
||||
@ifset config-not-all
|
||||
ffplay-all(1),
|
||||
@end ifset
|
||||
ffmpeg(1), ffprobe(1),
|
||||
ffmpeg(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
|
||||
|
||||
@@ -471,7 +471,7 @@ Perform no escaping.
|
||||
@end table
|
||||
|
||||
@item print_section, p
|
||||
Print the section name at the beginning of each line if the value is
|
||||
Print the section name at the begin of each line if the value is
|
||||
@code{1}, disable it with value set to @code{0}. Default value is
|
||||
@code{1}.
|
||||
|
||||
@@ -653,7 +653,7 @@ DV, GXF and AVI timecodes are available in format metadata
|
||||
@ifset config-not-all
|
||||
@url{ffprobe-all.html,ffprobe-all},
|
||||
@end ifset
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@@ -672,7 +672,7 @@ ffprobe(1),
|
||||
@ifset config-not-all
|
||||
ffprobe-all(1),
|
||||
@end ifset
|
||||
ffmpeg(1), ffplay(1),
|
||||
ffmpeg(1), ffplay(1), ffserver(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
|
||||
|
||||
@@ -120,11 +120,6 @@
|
||||
<xsd:attribute name="interlaced_frame" type="xsd:int" />
|
||||
<xsd:attribute name="top_field_first" type="xsd:int" />
|
||||
<xsd:attribute name="repeat_pict" type="xsd:int" />
|
||||
<xsd:attribute name="color_range" type="xsd:string"/>
|
||||
<xsd:attribute name="color_space" type="xsd:string"/>
|
||||
<xsd:attribute name="color_primaries" type="xsd:string"/>
|
||||
<xsd:attribute name="color_transfer" type="xsd:string"/>
|
||||
<xsd:attribute name="chroma_location" type="xsd:string"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="logsType">
|
||||
|
||||
doc/ffserver.conf (new file, 372 lines)
@@ -0,0 +1,372 @@
|
||||
# Port on which the server is listening. You must select a different
|
||||
# port from your standard HTTP web server if it is running on the same
|
||||
# computer.
|
||||
HTTPPort 8090
|
||||
|
||||
# Address on which the server is bound. Only useful if you have
|
||||
# several network interfaces.
|
||||
HTTPBindAddress 0.0.0.0
|
||||
|
||||
# Number of simultaneous HTTP connections that can be handled. It has
|
||||
# to be defined *before* the MaxClients parameter, since it defines the
|
||||
# MaxClients maximum limit.
|
||||
MaxHTTPConnections 2000
|
||||
|
||||
# Number of simultaneous requests that can be handled. Since FFServer
|
||||
# is very fast, it is more likely that you will want to leave this high
|
||||
# and use MaxBandwidth, below.
|
||||
MaxClients 1000
|
||||
|
||||
# This is the maximum amount of kbit/sec that you are prepared to
|
||||
# consume when streaming to clients.
|
||||
MaxBandwidth 1000
|
||||
|
||||
# Access log file (uses standard Apache log file format)
|
||||
# '-' is the standard output.
|
||||
CustomLog -
|
||||
|
||||
##################################################################
|
||||
# Definition of the live feeds. Each live feed contains one video
|
||||
# and/or audio sequence coming from an ffmpeg encoder or another
|
||||
# ffserver. This sequence may be encoded simultaneously with several
|
||||
# codecs at several resolutions.
|
||||
|
||||
<Feed feed1.ffm>
|
||||
|
||||
# You must use 'ffmpeg' to send a live feed to ffserver. In this
|
||||
# example, you can type:
|
||||
#
|
||||
# ffmpeg http://localhost:8090/feed1.ffm
|
||||
|
||||
# ffserver can also do time shifting. It means that it can stream any
|
||||
# previously recorded live stream. The request should contain:
|
||||
# "http://xxxx?date=[YYYY-MM-DDT][[HH:]MM:]SS[.m...]".You must specify
|
||||
# a path where the feed is stored on disk. You also specify the
|
||||
# maximum size of the feed, where zero means unlimited. Default:
|
||||
# File=/tmp/feed_name.ffm FileMaxSize=5M
|
||||
File /tmp/feed1.ffm
|
||||
FileMaxSize 200K
|
||||
|
||||
# You could specify
|
||||
# ReadOnlyFile /saved/specialvideo.ffm
|
||||
# This marks the file as readonly and it will not be deleted or updated.
|
||||
|
||||
# Specify launch in order to start ffmpeg automatically.
|
||||
# First ffmpeg must be defined with an appropriate path if needed,
|
||||
# after that options can follow, but avoid adding the http:// field
|
||||
#Launch ffmpeg
|
||||
|
||||
# Only allow connections from localhost to the feed.
|
||||
ACL allow 127.0.0.1
|
||||
|
||||
</Feed>
|
||||
|
||||
|
||||
##################################################################
|
||||
# Now you can define each stream which will be generated from the
|
||||
# original audio and video stream. Each format has a filename (here
|
||||
# 'test1.mpg'). FFServer will send this stream when answering a
|
||||
# request containing this filename.
|
||||
|
||||
<Stream test1.mpg>
|
||||
|
||||
# coming from live feed 'feed1'
|
||||
Feed feed1.ffm
|
||||
|
||||
# Format of the stream : you can choose among:
|
||||
# mpeg : MPEG-1 multiplexed video and audio
|
||||
# mpegvideo : only MPEG-1 video
|
||||
# mp2 : MPEG-2 audio (use AudioCodec to select layer 2 and 3 codec)
|
||||
# ogg : Ogg format (Vorbis audio codec)
|
||||
# rm : RealNetworks-compatible stream. Multiplexed audio and video.
|
||||
# ra : RealNetworks-compatible stream. Audio only.
|
||||
# mpjpeg : Multipart JPEG (works with Netscape without any plugin)
|
||||
# jpeg : Generate a single JPEG image.
|
||||
# mjpeg : Generate a M-JPEG stream.
|
||||
# asf : ASF compatible streaming (Windows Media Player format).
|
||||
# swf : Macromedia Flash compatible stream
|
||||
# avi : AVI format (MPEG-4 video, MPEG audio sound)
|
||||
Format mpeg
|
||||
|
||||
# Bitrate for the audio stream. Codecs usually support only a few
|
||||
# different bitrates.
|
||||
AudioBitRate 32
|
||||
|
||||
# Number of audio channels: 1 = mono, 2 = stereo
|
||||
AudioChannels 1
|
||||
|
||||
# Sampling frequency for audio. When using low bitrates, you should
|
||||
# lower this frequency to 22050 or 11025. The supported frequencies
|
||||
# depend on the selected audio codec.
|
||||
AudioSampleRate 44100
|
||||
|
||||
# Bitrate for the video stream
|
||||
VideoBitRate 64
|
||||
|
||||
# Ratecontrol buffer size
|
||||
VideoBufferSize 40
|
||||
|
||||
# Number of frames per second
|
||||
VideoFrameRate 3
|
||||
|
||||
# Size of the video frame: WxH (default: 160x128)
|
||||
# The following abbreviations are defined: sqcif, qcif, cif, 4cif, qqvga,
|
||||
# qvga, vga, svga, xga, uxga, qxga, sxga, qsxga, hsxga, wvga, wxga, wsxga,
|
||||
# wuxga, woxga, wqsxga, wquxga, whsxga, whuxga, cga, ega, hd480, hd720,
|
||||
# hd1080
|
||||
VideoSize 160x128
|
||||
|
||||
# Transmit only intra frames (useful for low bitrates, but kills frame rate).
|
||||
#VideoIntraOnly
|
||||
|
||||
# If non-intra only, an intra frame is transmitted every VideoGopSize
|
||||
# frames. Video synchronization can only begin at an intra frame.
|
||||
VideoGopSize 12
|
||||
|
||||
# More MPEG-4 parameters
|
||||
# VideoHighQuality
|
||||
# Video4MotionVector
|
||||
|
||||
# Choose your codecs:
|
||||
#AudioCodec mp2
|
||||
#VideoCodec mpeg1video
|
||||
|
||||
# Suppress audio
|
||||
#NoAudio
|
||||
|
||||
# Suppress video
|
||||
#NoVideo
|
||||
|
||||
#VideoQMin 3
|
||||
#VideoQMax 31
|
||||
|
||||
# Set this to the number of seconds backwards in time to start. Note that
|
||||
# most players will buffer 5-10 seconds of video, and also you need to allow
|
||||
# for a keyframe to appear in the data stream.
|
||||
#Preroll 15
|
||||
|
||||
# ACL:
|
||||
|
||||
# You can allow ranges of addresses (or single addresses)
|
||||
#ACL ALLOW <first address> <last address>
|
||||
|
||||
# You can deny ranges of addresses (or single addresses)
|
||||
#ACL DENY <first address> <last address>
|
||||
|
||||
# You can repeat the ACL allow/deny as often as you like. It is on a per
|
||||
# stream basis. The first match defines the action. If there are no matches,
|
||||
# then the default is the inverse of the last ACL statement.
|
||||
#
|
||||
# Thus 'ACL allow localhost' only allows access from localhost.
|
||||
# 'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
|
||||
# allow everybody else.
|
||||
|
||||
</Stream>
|
||||
|
||||
|
||||
##################################################################
|
||||
# Example streams
|
||||
|
||||
|
||||
# Multipart JPEG
|
||||
|
||||
#<Stream test.mjpg>
|
||||
#Feed feed1.ffm
|
||||
#Format mpjpeg
|
||||
#VideoFrameRate 2
|
||||
#VideoIntraOnly
|
||||
#NoAudio
|
||||
#Strict -1
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Single JPEG
|
||||
|
||||
#<Stream test.jpg>
|
||||
#Feed feed1.ffm
|
||||
#Format jpeg
|
||||
#VideoFrameRate 2
|
||||
#VideoIntraOnly
|
||||
##VideoSize 352x240
|
||||
#NoAudio
|
||||
#Strict -1
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Flash
|
||||
|
||||
#<Stream test.swf>
|
||||
#Feed feed1.ffm
|
||||
#Format swf
|
||||
#VideoFrameRate 2
|
||||
#VideoIntraOnly
|
||||
#NoAudio
|
||||
#</Stream>
|
||||
|
||||
|
||||
# ASF compatible
|
||||
|
||||
<Stream test.asf>
|
||||
Feed feed1.ffm
|
||||
Format asf
|
||||
VideoFrameRate 15
|
||||
VideoSize 352x240
|
||||
VideoBitRate 256
|
||||
VideoBufferSize 40
|
||||
VideoGopSize 30
|
||||
AudioBitRate 64
|
||||
StartSendOnKey
|
||||
</Stream>
|
||||
|
||||
|
||||
# MP3 audio
|
||||
|
||||
#<Stream test.mp3>
|
||||
#Feed feed1.ffm
|
||||
#Format mp2
|
||||
#AudioCodec mp3
|
||||
#AudioBitRate 64
|
||||
#AudioChannels 1
|
||||
#AudioSampleRate 44100
|
||||
#NoVideo
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Ogg Vorbis audio
|
||||
|
||||
#<Stream test.ogg>
|
||||
#Feed feed1.ffm
|
||||
#Metadata title "Stream title"
|
||||
#AudioBitRate 64
|
||||
#AudioChannels 2
|
||||
#AudioSampleRate 44100
|
||||
#NoVideo
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Real with audio only at 32 kbits
|
||||
|
||||
#<Stream test.ra>
|
||||
#Feed feed1.ffm
|
||||
#Format rm
|
||||
#AudioBitRate 32
|
||||
#NoVideo
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Real with audio and video at 64 kbits
|
||||
|
||||
#<Stream test.rm>
|
||||
#Feed feed1.ffm
|
||||
#Format rm
|
||||
#AudioBitRate 32
|
||||
#VideoBitRate 128
|
||||
#VideoFrameRate 25
|
||||
#VideoGopSize 25
|
||||
#</Stream>
|
||||
|
||||
|
||||
##################################################################
|
||||
# A stream coming from a file: you only need to set the input
|
||||
# filename and optionally a new format. Supported conversions:
|
||||
# AVI -> ASF
|
||||
|
||||
#<Stream file.rm>
|
||||
#File "/usr/local/httpd/htdocs/tlive.rm"
|
||||
#NoAudio
|
||||
#</Stream>
|
||||
|
||||
#<Stream file.asf>
|
||||
#File "/usr/local/httpd/htdocs/test.asf"
|
||||
#NoAudio
|
||||
#Metadata author "Me"
|
||||
#Metadata copyright "Super MegaCorp"
|
||||
#Metadata title "Test stream from disk"
|
||||
#Metadata comment "Test comment"
|
||||
#</Stream>
|
||||
|
||||
|
||||
##################################################################
|
||||
# RTSP examples
|
||||
#
|
||||
# You can access this stream with the RTSP URL:
|
||||
# rtsp://localhost:5454/test1-rtsp.mpg
|
||||
#
|
||||
# A non-standard RTSP redirector is also created. Its URL is:
|
||||
# http://localhost:8090/test1-rtsp.rtsp
|
||||
|
||||
#<Stream test1-rtsp.mpg>
|
||||
#Format rtp
|
||||
#File "/usr/local/httpd/htdocs/test1.mpg"
|
||||
#</Stream>
|
||||
|
||||
|
||||
# Transcode an incoming live feed to another live feed,
|
||||
# using libx264 and video presets
|
||||
|
||||
#<Stream live.h264>
|
||||
#Format rtp
|
||||
#Feed feed1.ffm
|
||||
#VideoCodec libx264
|
||||
#VideoFrameRate 24
|
||||
#VideoBitRate 100
|
||||
#VideoSize 480x272
|
||||
#AVPresetVideo default
|
||||
#AVPresetVideo baseline
|
||||
#AVOptionVideo flags +global_header
|
||||
#
|
||||
#AudioCodec aac
|
||||
#AudioBitRate 32
|
||||
#AudioChannels 2
|
||||
#AudioSampleRate 22050
|
||||
#AVOptionAudio flags +global_header
|
||||
#</Stream>
|
||||
|
||||
##################################################################
|
||||
# SDP/multicast examples
|
||||
#
|
||||
# If you want to send your stream in multicast, you must set the
|
||||
# multicast address with MulticastAddress. The port and the TTL can
|
||||
# also be set.
|
||||
#
|
||||
# An SDP file is automatically generated by ffserver by adding the
|
||||
# 'sdp' extension to the stream name (here
|
||||
# http://localhost:8090/test1-sdp.sdp). You should usually give this
|
||||
# file to your player to play the stream.
|
||||
#
|
||||
# The 'NoLoop' option can be used to avoid looping when the stream is
|
||||
# terminated.
|
||||
|
||||
#<Stream test1-sdp.mpg>
|
||||
#Format rtp
|
||||
#File "/usr/local/httpd/htdocs/test1.mpg"
|
||||
#MulticastAddress 224.124.0.1
|
||||
#MulticastPort 5000
|
||||
#MulticastTTL 16
|
||||
#NoLoop
|
||||
#</Stream>
|
||||
|
||||
|
||||
##################################################################
|
||||
# Special streams
|
||||
|
||||
# Server status
|
||||
|
||||
<Stream stat.html>
|
||||
Format status
|
||||
|
||||
# Only allow local people to get the status
|
||||
ACL allow localhost
|
||||
ACL allow 192.168.0.0 192.168.255.255
|
||||
|
||||
#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
|
||||
</Stream>
|
||||
|
||||
|
||||
# Redirect index.html to the appropriate site
|
||||
|
||||
<Redirect index.html>
|
||||
URL http://www.ffmpeg.org/
|
||||
</Redirect>
|
||||
923
doc/ffserver.texi
Normal file
@@ -0,0 +1,923 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffserver Documentation
|
||||
@titlepage
|
||||
@center @titlefont{ffserver Documentation}
|
||||
@end titlepage
|
||||
|
||||
@top
|
||||
|
||||
@contents
|
||||
|
||||
@chapter Synopsis
|
||||
|
||||
ffserver [@var{options}]
|
||||
|
||||
@chapter Description
|
||||
@c man begin DESCRIPTION
|
||||
|
||||
@command{ffserver} is a streaming server for both audio and video.
|
||||
It supports several live feeds, streaming from files and time shifting
|
||||
on live feeds. You can seek to positions in the past on each live
|
||||
feed, provided you specify a big enough feed storage.
|
||||
|
||||
@command{ffserver} is configured through a configuration file, which
|
||||
is read at startup. If not explicitly specified, it will read from
|
||||
@file{/etc/ffserver.conf}.
|
||||
|
||||
@command{ffserver} receives prerecorded files or FFM streams from some
|
||||
@command{ffmpeg} instance as input, then streams them over
|
||||
RTP/RTSP/HTTP.
|
||||
|
||||
An @command{ffserver} instance will listen on some port as specified
|
||||
in the configuration file. You can launch one or more instances of
|
||||
@command{ffmpeg} and send one or more FFM streams to the port where
|
||||
ffserver is expecting to receive them. Alternately, you can make
|
||||
@command{ffserver} launch such @command{ffmpeg} instances at startup.
|
||||
|
||||
Input streams are called feeds, and each one is specified by a
|
||||
@code{<Feed>} section in the configuration file.
|
||||
|
||||
For each feed you can have different output streams in various
|
||||
formats, each one specified by a @code{<Stream>} section in the
|
||||
configuration file.
|
||||
|
||||
@chapter Detailed description
|
||||
|
||||
@command{ffserver} works by forwarding streams encoded by
|
||||
@command{ffmpeg}, or pre-recorded streams which are read from disk.
|
||||
|
||||
Precisely, @command{ffserver} acts as an HTTP server, accepting POST
|
||||
requests from @command{ffmpeg} to acquire the stream to publish, and
|
||||
serving RTSP clients or HTTP clients GET requests with the stream
|
||||
media content.
|
||||
|
||||
A feed is an @ref{FFM} stream created by @command{ffmpeg}, and sent to
|
||||
a port where @command{ffserver} is listening.
|
||||
|
||||
Each feed is identified by a unique name, corresponding to the name
|
||||
of the resource published on @command{ffserver}, and is configured by
|
||||
a dedicated @code{Feed} section in the configuration file.
|
||||
|
||||
The feed publish URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
|
||||
@end example
|
||||
|
||||
where @var{ffserver_ip_address} is the IP address of the machine where
|
||||
@command{ffserver} is installed, @var{http_port} is the port number of
|
||||
the HTTP server (configured through the @option{HTTPPort} option), and
|
||||
@var{feed_name} is the name of the corresponding feed defined in the
|
||||
configuration file.
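
For example, assuming the default HTTP port 8090 and a feed named
@file{feed1.ffm} as in the sample configuration, the publish URL would be:
@example
http://localhost:8090/feed1.ffm
@end example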
|
||||
|
||||
Each feed is associated to a file which is stored on disk. This stored
|
||||
file is used to send pre-recorded data to a player as fast as
|
||||
possible when new content is added in real-time to the stream.
|
||||
|
||||
A "live-stream" or "stream" is a resource published by
|
||||
@command{ffserver}, and made accessible through the HTTP protocol to
|
||||
clients.
|
||||
|
||||
A stream can be connected to a feed, or to a file. In the first case,
|
||||
the published stream is forwarded from the corresponding feed
|
||||
generated by a running instance of @command{ffmpeg}, in the second
|
||||
case the stream is read from a pre-recorded file.
|
||||
|
||||
Each stream is identified by a unique name, corresponding to the name
|
||||
of the resource served by @command{ffserver}, and is configured by
|
||||
a dedicated @code{Stream} section in the configuration file.
|
||||
|
||||
The stream access HTTP URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
The stream access RTSP URL is given by:
|
||||
@example
|
||||
rtsp://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
@var{stream_name} is the name of the corresponding stream defined in
|
||||
the configuration file. @var{options} is a list of options specified
|
||||
after the URL which affects how the stream is served by
|
||||
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
|
||||
and RTSP ports configured with the options @var{HTTPPort} and
|
||||
@var{RTSPPort} respectively.
|
||||
|
||||
In case the stream is associated to a feed, the encoding parameters
|
||||
must be configured in the stream configuration. They are sent to
|
||||
@command{ffmpeg} when setting up the encoding. This allows
|
||||
@command{ffserver} to define the encoding parameters used by
|
||||
the @command{ffmpeg} encoders.
|
||||
|
||||
The @command{ffmpeg} @option{override_ffserver} commandline option
|
||||
allows one to override the encoding parameters set by the server.
|
||||
|
||||
Multiple streams can be connected to the same feed.
|
||||
|
||||
For example, you can have a situation described by the following
|
||||
graph:
|
||||
|
||||
@verbatim
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 1 -----| feed 1 |-----| stream 1 |
|
||||
\ |_________|\ |__________|
|
||||
\ \
|
||||
\ \ __________
|
||||
\ \ | |
|
||||
\ \| stream 2 |
|
||||
\ |__________|
|
||||
\
|
||||
\ _________ __________
|
||||
\ | | | |
|
||||
\| feed 2 |-----| stream 3 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 2 -----| feed 3 |-----| stream 4 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
| file 1 |-----| stream 5 |
|
||||
|_________| |__________|
|
||||
|
||||
@end verbatim
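
As a rough sketch of the "two streams from one feed" case in the graph
above (the stream names and formats here are only illustrative), the
configuration could look like:
@example
<Feed feed1.ffm>
File /tmp/feed1.ffm
</Feed>

<Stream stream1.asf>
Feed feed1.ffm
Format asf
</Stream>

<Stream stream2.swf>
Feed feed1.ffm
Format swf
</Stream>
@end example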
|
||||
|
||||
@anchor{FFM}
|
||||
@section FFM, FFM2 formats
|
||||
|
||||
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
|
||||
video and audio streams and encoding options, and can store a moving time segment
|
||||
of an infinite movie or a whole movie.
|
||||
|
||||
FFM is version specific, and there is limited compatibility of FFM files
|
||||
generated by one version of ffmpeg/ffserver and another version of
|
||||
ffmpeg/ffserver. It may work but it is not guaranteed to work.
|
||||
|
||||
FFM2 is extensible while maintaining compatibility and should work between
|
||||
differing versions of tools. FFM2 is the default.
|
||||
|
||||
@section Status stream
|
||||
|
||||
@command{ffserver} supports an HTTP interface which exposes the
|
||||
current status of the server.
|
||||
|
||||
Simply point your browser to the address of the special status stream
|
||||
specified in the configuration file.
|
||||
|
||||
For example if you have:
|
||||
@example
|
||||
<Stream status.html>
|
||||
Format status
|
||||
|
||||
# Only allow local people to get the status
|
||||
ACL allow localhost
|
||||
ACL allow 192.168.0.0 192.168.255.255
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
then the server will post a page with the status information when
|
||||
the special stream @file{status.html} is requested.
|
||||
|
||||
@section How do I make it work?
|
||||
|
||||
As a simple test, just run the following two command lines where INPUTFILE
|
||||
is some file which you can decode with ffmpeg:
|
||||
|
||||
@example
|
||||
ffserver -f doc/ffserver.conf &
|
||||
ffmpeg -i INPUTFILE http://localhost:8090/feed1.ffm
|
||||
@end example
|
||||
|
||||
At this point you should be able to go to your Windows machine and fire up
|
||||
Windows Media Player (WMP). Go to Open URL and enter
|
||||
|
||||
@example
|
||||
http://<linuxbox>:8090/test.asf
|
||||
@end example
|
||||
|
||||
You should (after a short delay) see video and hear audio.
|
||||
|
||||
WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
|
||||
transfer the entire file before starting to play.
|
||||
The same is true of AVI files.
|
||||
|
||||
You should edit the @file{ffserver.conf} file to suit your needs (in
|
||||
terms of frame rates etc). Then install @command{ffserver} and
|
||||
@command{ffmpeg}, write a script to start them up, and off you go.
|
||||
|
||||
@section What else can it do?
|
||||
|
||||
You can replay video from .ffm files that was recorded earlier.
|
||||
However, there are a number of caveats, including the fact that the
|
||||
ffserver parameters must match the original parameters used to record the
|
||||
file. If they do not, then ffserver deletes the file before recording into it.
|
||||
(Now that I write this, it seems broken).
|
||||
|
||||
You can fiddle with many of the codec choices and encoding parameters, and
|
||||
there are a bunch more parameters that you cannot control. Post a message
|
||||
to the mailing list if there are some 'must have' parameters. Look in
|
||||
ffserver.conf for a list of the currently available controls.
|
||||
|
||||
It will automatically generate the ASX or RAM files that are often used
|
||||
in browsers. These files are actually redirections to the underlying ASF
|
||||
or RM file. The reason for this is that the browser often fetches the
|
||||
entire file before starting up the external viewer. The redirection files
|
||||
are very small and can be transferred quickly. [The stream itself is
|
||||
often 'infinite' and thus the browser tries to download it and never
|
||||
finishes.]
|
||||
|
||||
@section Tips
|
||||
|
||||
* When you connect to a live stream, most players (WMP, RA, etc) want to
|
||||
buffer a certain number of seconds of material so that they can display the
|
||||
signal continuously. However, ffserver (by default) starts sending data
|
||||
in realtime. This means that there is a pause of a few seconds while the
|
||||
buffering is being done by the player. The good news is that this can be
|
||||
cured by adding a '?buffer=5' to the end of the URL. This means that the
|
||||
stream should start 5 seconds in the past -- and so the first 5 seconds
|
||||
of the stream are sent as fast as the network will allow. It will then
|
||||
slow down to real time. This noticeably improves the startup experience.
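
For example, assuming the sample @file{test.asf} stream on the default
port, such a URL would look like:
@example
http://localhost:8090/test.asf?buffer=5
@end example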
|
||||
|
||||
You can also add a 'Preroll 15' statement into the ffserver.conf that will
|
||||
add the 15 second prebuffering on all requests that do not otherwise
|
||||
specify a time. In addition, ffserver will skip frames until a key_frame
|
||||
is found. This further reduces the startup delay by not transferring data
|
||||
that will be discarded.
|
||||
|
||||
@section Why does the ?buffer / Preroll stop working after a time?
|
||||
|
||||
It turns out that (on my machine at least) the number of frames successfully
|
||||
grabbed is marginally less than the number that ought to be grabbed. This
|
||||
means that the timestamp in the encoded data stream gets behind realtime.
|
||||
This means that if you say 'Preroll 10', then when the stream gets 10
|
||||
or more seconds behind, there is no Preroll left.
|
||||
|
||||
Fixing this requires a change in the internals of how timestamps are
|
||||
handled.
|
||||
|
||||
@section Does the @code{?date=} stuff work?
|
||||
|
||||
Yes (subject to the limitation outlined above). Also note that whenever you
|
||||
start ffserver, it deletes the ffm file (if any parameters have changed),
|
||||
thus wiping out what you had recorded before.
|
||||
|
||||
The format of the @code{?date=xxxxxx} is fairly flexible. You should use one
|
||||
of the following formats (the 'T' is literal):
|
||||
|
||||
@example
|
||||
* YYYY-MM-DDTHH:MM:SS (localtime)
|
||||
* YYYY-MM-DDTHH:MM:SSZ (UTC)
|
||||
@end example
|
||||
|
||||
You can omit the YYYY-MM-DD, and then it refers to the current day. However
|
||||
note that @samp{?date=16:00:00} refers to 16:00 on the current day -- this
|
||||
may be in the future and so is unlikely to be useful.
|
||||
|
||||
You use this by adding the ?date= to the end of the URL for the stream.
|
||||
For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
|
||||
@c man end
|
||||
|
||||
@chapter Options
|
||||
@c man begin OPTIONS
|
||||
|
||||
@include fftools-common-opts.texi
|
||||
|
||||
@section Main options
|
||||
|
||||
@table @option
|
||||
@item -f @var{configfile}
|
||||
Read configuration file @file{configfile}. If not specified it will
|
||||
read by default from @file{/etc/ffserver.conf}.
|
||||
|
||||
@item -n
|
||||
Enable no-launch mode. This option disables all the @code{Launch}
|
||||
directives within the various @code{<Feed>} sections. Since
|
||||
@command{ffserver} will not launch any @command{ffmpeg} instances, you
|
||||
will have to launch them manually.
|
||||
|
||||
@item -d
|
||||
Enable debug mode. This option increases log verbosity, and directs
|
||||
log messages to stdout. When specified, the @option{CustomLog} option
|
||||
is ignored.
|
||||
@end table
|
||||
|
||||
@chapter Configuration file syntax
|
||||
|
||||
@command{ffserver} reads a configuration file containing global
|
||||
options and settings for each stream and feed.
|
||||
|
||||
The configuration file consists of global options and dedicated
|
||||
sections, which must be introduced by "<@var{SECTION_NAME}
|
||||
@var{ARGS}>" on a separate line and must be terminated by a line in
|
||||
the form "</@var{SECTION_NAME}>". @var{ARGS} is optional.
|
||||
|
||||
Currently the following sections are recognized: @samp{Feed},
|
||||
@samp{Stream}, @samp{Redirect}.
|
||||
|
||||
A line starting with @code{#} is ignored and treated as a comment.
|
||||
|
||||
Names of options and sections are case-insensitive.
|
||||
|
||||
@section ACL syntax
|
||||
An ACL (Access Control List) specifies the addresses which are allowed
|
||||
to access a given stream, or to write a given feed.
|
||||
|
||||
It accepts the following forms:
|
||||
@itemize
|
||||
@item
|
||||
Allow/deny access to @var{address}.
|
||||
@example
|
||||
ACL ALLOW <address>
|
||||
ACL DENY <address>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Allow/deny access to ranges of addresses from @var{first_address} to
|
||||
@var{last_address}.
|
||||
@example
|
||||
ACL ALLOW <first_address> <last_address>
|
||||
ACL DENY <first_address> <last_address>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
You can repeat the ACL allow/deny as often as you like. It is on a per
|
||||
stream basis. The first match defines the action. If there are no matches,
|
||||
then the default is the inverse of the last ACL statement.
|
||||
|
||||
Thus 'ACL allow localhost' only allows access from localhost.
|
||||
'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
|
||||
allow everybody else.
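
For example, to allow only the local host and the local network
(addresses taken from the sample configuration) and deny everybody else:
@example
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
@end example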
|
||||
|
||||
@section Global options
|
||||
@table @option
|
||||
@item HTTPPort @var{port_number}
|
||||
@item Port @var{port_number}
|
||||
@item RTSPPort @var{port_number}
|
||||
|
||||
@var{HTTPPort} sets the HTTP server listening TCP port number,
|
||||
@var{RTSPPort} sets the RTSP server listening TCP port number.
|
||||
|
||||
@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
|
||||
|
||||
You must select a different port from your standard HTTP web server if
|
||||
it is running on the same computer.
|
||||
|
||||
If not specified, no corresponding server will be created.
|
||||
|
||||
@item HTTPBindAddress @var{ip_address}
|
||||
@item BindAddress @var{ip_address}
|
||||
@item RTSPBindAddress @var{ip_address}
|
||||
Set address on which the HTTP/RTSP server is bound. Only useful if you
|
||||
have several network interfaces.
|
||||
|
||||
@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
|
||||
deprecated.
|
||||
|
||||
@item MaxHTTPConnections @var{n}
|
||||
Set number of simultaneous HTTP connections that can be handled. It
|
||||
has to be defined @emph{before} the @option{MaxClients} parameter,
|
||||
since it defines the @option{MaxClients} maximum limit.
|
||||
|
||||
Default value is 2000.
|
||||
|
||||
@item MaxClients @var{n}
|
||||
Set number of simultaneous requests that can be handled. Since
|
||||
@command{ffserver} is very fast, it is more likely that you will want
|
||||
to leave this high and use @option{MaxBandwidth}.
|
||||
|
||||
Default value is 5.
|
||||
|
||||
@item MaxBandwidth @var{kbps}
|
||||
Set the maximum amount of kbit/sec that you are prepared to consume
|
||||
when streaming to clients.
|
||||
|
||||
Default value is 1000.
|
||||
|
||||
@item CustomLog @var{filename}
|
||||
Set access log file (uses standard Apache log file format). '-' is the
|
||||
standard output.
|
||||
|
||||
If not specified @command{ffserver} will produce no log.
|
||||
|
||||
In case the commandline option @option{-d} is specified this option is
|
||||
ignored, and the log is written to standard output.
|
||||
|
||||
@item NoDaemon
|
||||
Set no-daemon mode. This option is deprecated and currently ignored,
since @command{ffserver} now always works in no-daemon mode.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for all streams or not.
Each stream may overwrite this setting for itself. Default is @var{UseDefaults}.
If multiple definitions are given, the latest occurrence overrides the previous ones.
|
||||
@end table
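
Putting these together, a minimal global section might look like the
following sketch; the port and bind address are just examples, the
remaining values are the documented defaults:
@example
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 5
MaxBandwidth 1000
CustomLog -
@end example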
|
||||
|
||||
@section Feed section
|
||||
|
||||
A Feed section defines a feed provided to @command{ffserver}.
|
||||
|
||||
Each live feed contains one video and/or audio sequence coming from an
|
||||
@command{ffmpeg} encoder or another @command{ffserver}. This sequence
|
||||
may be encoded simultaneously with several codecs at several
|
||||
resolutions.
|
||||
|
||||
A feed instance specification is introduced by a line in the form:
|
||||
@example
|
||||
<Feed FEED_FILENAME>
|
||||
@end example
|
||||
|
||||
where @var{FEED_FILENAME} specifies the unique name of the FFM stream.
|
||||
|
||||
The following options are recognized within a Feed section.
|
||||
|
||||
@table @option
|
||||
@item File @var{filename}
|
||||
@item ReadOnlyFile @var{filename}
|
||||
Set the path where the feed file is stored on disk.
|
||||
|
||||
If not specified, @file{/tmp/FEED.ffm} is assumed, where
|
||||
@var{FEED} is the feed name.
|
||||
|
||||
If @option{ReadOnlyFile} is used the file is marked as read-only and
|
||||
it will not be deleted or updated.
|
||||
|
||||
@item Truncate
|
||||
Truncate the feed file, rather than appending to it. By default
|
||||
@command{ffserver} will append data to the file, until the maximum
|
||||
file size value is reached (see @option{FileMaxSize} option).
|
||||
|
||||
@item FileMaxSize @var{size}
|
||||
Set maximum size of the feed file in bytes. 0 means unlimited. The
|
||||
postfixes @code{K} (2^10), @code{M} (2^20), and @code{G} (2^30) are
|
||||
recognized.
|
||||
|
||||
Default value is 5M.
|
||||
|
||||
@item Launch @var{args}
|
||||
Launch an @command{ffmpeg} command when creating @command{ffserver}.
|
||||
|
||||
@var{args} must be a sequence of arguments to be provided to an
@command{ffmpeg} instance. The first provided argument is ignored, and
it is replaced by a path with the same dirname as the @command{ffserver}
instance, followed by the remaining arguments and terminated with a
path corresponding to the feed.
|
||||
|
||||
When the launched process exits, @command{ffserver} will launch
|
||||
another program instance.
|
||||
|
||||
In case you need a more complex @command{ffmpeg} configuration,
|
||||
e.g. if you need to generate multiple FFM feeds with a single
|
||||
@command{ffmpeg} instance, you should launch @command{ffmpeg} by hand.
|
||||
|
||||
This option is ignored in case the commandline option @option{-n} is
|
||||
specified.
|
||||
|
||||
@item ACL @var{spec}
|
||||
Specify the list of IP addresses which are allowed or denied to write
|
||||
the feed. Multiple ACL options can be specified.
|
||||
@end table
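
A complete Feed section combining these options might look like the
following sketch; the input URL passed to @option{Launch} is only a
placeholder:
@example
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200M
Launch ffmpeg -i rtsp://example.com/live
ACL allow 127.0.0.1
</Feed>
@end example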
|
||||
|
||||
@section Stream section
|
||||
|
||||
A Stream section defines a stream provided by @command{ffserver}, and
|
||||
identified by a single name.
|
||||
|
||||
The stream is sent when answering a request containing the stream
|
||||
name.
|
||||
|
||||
A stream section must be introduced by the line:
|
||||
@example
|
||||
<Stream STREAM_NAME>
|
||||
@end example
|
||||
|
||||
where @var{STREAM_NAME} specifies the unique name of the stream.
|
||||
|
||||
The following options are recognized within a Stream section.
|
||||
|
||||
Encoding options are marked with the @emph{encoding} tag; they are
used to set the encoding parameters and are mapped to libavcodec
encoding options. Not all encoding options are supported; in
|
||||
particular it is not possible to set encoder private options. In order
|
||||
to override the encoding options specified by @command{ffserver}, you
|
||||
can use the @command{ffmpeg} @option{override_ffserver} commandline
|
||||
option.
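
A sketch of such an invocation might look like the following; the codec
and bitrate are arbitrary examples:
@example
ffmpeg -i INPUTFILE -override_ffserver \
  -vcodec libx264 -b:v 500k http://localhost:8090/feed1.ffm
@end example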
|
||||
|
||||
Only one of the @option{Feed} and @option{File} options should be set.
|
||||
|
||||
@table @option
|
||||
@item Feed @var{feed_name}
|
||||
Set the input feed. @var{feed_name} must correspond to an existing
|
||||
feed defined in a @code{Feed} section.
|
||||
|
||||
When this option is set, encoding options are used to setup the
|
||||
encoding operated by the remote @command{ffmpeg} process.
|
||||
|
||||
@item File @var{filename}
|
||||
Set the filename of the pre-recorded input file to stream.
|
||||
|
||||
When this option is set, encoding options are ignored and the input
|
||||
file content is re-streamed as is.
|
||||
|
||||
@item Format @var{format_name}
|
||||
Set the format of the output stream.
|
||||
|
||||
Must be the name of a format recognized by FFmpeg. If set to
|
||||
@samp{status}, it is treated as a status stream.
|
||||
|
||||
@item InputFormat @var{format_name}
|
||||
Set input format. If not specified, it is automatically guessed.
|
||||
|
||||
@item Preroll @var{n}
|
||||
Set this to the number of seconds backwards in time to start. Note that
|
||||
most players will buffer 5-10 seconds of video, and also you need to allow
|
||||
for a keyframe to appear in the data stream.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item StartSendOnKey
|
||||
Do not send stream until it gets the first key frame. By default
|
||||
@command{ffserver} will send data immediately.
|
||||
|
||||
@item MaxTime @var{n}
|
||||
Set the number of seconds to run. This value sets the maximum duration
|
||||
of the stream a client will be able to receive.
|
||||
|
||||
A value of 0 means that no limit is set on the stream duration.
|
||||
|
||||
@item ACL @var{spec}
|
||||
Set ACL for the stream.
|
||||
|
||||
@item DynamicACL @var{spec}
|
||||
|
||||
@item RTSPOption @var{option}
|
||||
|
||||
@item MulticastAddress @var{address}
Set the destination address for sending the stream in multicast.

@item MulticastPort @var{port}
Set the port used for the multicast stream.

@item MulticastTTL @var{integer}
Set the TTL (time to live) of the multicast packets.

@item NoLoop
Avoid looping when the stream is terminated.
|
||||
|
||||
@item FaviconURL @var{url}
|
||||
Set favicon (favourite icon) for the server status page. It is ignored
|
||||
for regular streams.
|
||||
|
||||
@item Author @var{value}
|
||||
@item Comment @var{value}
|
||||
@item Copyright @var{value}
|
||||
@item Title @var{value}
|
||||
Set metadata corresponding to the option. All these options are
|
||||
deprecated in favor of @option{Metadata}.
|
||||
|
||||
@item Metadata @var{key} @var{value}
|
||||
Set metadata value on the output stream.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for the stream or not.
|
||||
Default is @var{UseDefaults} unless disabled globally.
|
||||
|
||||
@item NoAudio
|
||||
@item NoVideo
|
||||
Suppress audio/video.
|
||||
|
||||
@item AudioCodec @var{codec_name} (@emph{encoding,audio})
|
||||
Set audio codec.
|
||||
|
||||
@item AudioBitRate @var{rate} (@emph{encoding,audio})
|
||||
Set bitrate for the audio stream in kbits per second.
|
||||
|
||||
@item AudioChannels @var{n} (@emph{encoding,audio})
|
||||
Set number of audio channels.
|
||||
|
||||
@item AudioSampleRate @var{n} (@emph{encoding,audio})
|
||||
Set sampling frequency for audio. When using low bitrates, you should
|
||||
lower this frequency to 22050 or 11025. The supported frequencies
|
||||
depend on the selected audio codec.
|
||||
|
||||
@item AVOptionAudio [@var{codec}:]@var{option} @var{value} (@emph{encoding,audio})
|
||||
Set generic or private option for audio stream.
|
||||
Private option must be prefixed with codec name or codec must be defined before.
|
||||
|
||||
@item AVPresetAudio @var{preset} (@emph{encoding,audio})
|
||||
Set preset for audio stream.
|
||||
|
||||
@item VideoCodec @var{codec_name} (@emph{encoding,video})
|
||||
Set video codec.
|
||||
|
||||
@item VideoBitRate @var{n} (@emph{encoding,video})
|
||||
Set bitrate for the video stream in kbits per second.
|
||||
|
||||
@item VideoBitRateRange @var{range} (@emph{encoding,video})
|
||||
Set video bitrate range.
|
||||
|
||||
A range must be specified in the form @var{minrate}-@var{maxrate}, and
|
||||
specifies the @option{minrate} and @option{maxrate} encoding options
|
||||
expressed in kbits per second.
|
||||
|
||||
@item VideoBitRateRangeTolerance @var{n} (@emph{encoding,video})
|
||||
Set video bitrate tolerance in kbits per second.
|
||||
|
||||
@item PixelFormat @var{pixel_format} (@emph{encoding,video})
|
||||
Set video pixel format.
|
||||
|
||||
@item Debug @var{integer} (@emph{encoding,video})
|
||||
Set video @option{debug} encoding option.
|
||||
|
||||
@item Strict @var{integer} (@emph{encoding,video})
|
||||
Set video @option{strict} encoding option.
|
||||
|
||||
@item VideoBufferSize @var{n} (@emph{encoding,video})
|
||||
Set ratecontrol buffer size, expressed in KB.
|
||||
|
||||
@item VideoFrameRate @var{n} (@emph{encoding,video})
|
||||
Set number of video frames per second.
|
||||
|
||||
@item VideoSize (@emph{encoding,video})
|
||||
Set size of the video frame, must be an abbreviation or in the form
|
||||
@var{W}x@var{H}. See @ref{video size syntax,,the Video size section
|
||||
in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
Default value is @code{160x128}.
|
||||
|
||||
@item VideoIntraOnly (@emph{encoding,video})
|
||||
Transmit only intra frames (useful for low bitrates, but kills frame rate).
|
||||
|
||||
@item VideoGopSize @var{n} (@emph{encoding,video})
|
||||
If non-intra only, an intra frame is transmitted every VideoGopSize
|
||||
frames. Video synchronization can only begin at an intra frame.
|
||||
|
||||
@item VideoTag @var{tag} (@emph{encoding,video})
|
||||
Set video tag.
|
||||
|
||||
@item VideoHighQuality (@emph{encoding,video})
|
||||
@item Video4MotionVector (@emph{encoding,video})
|
||||
|
||||
@item BitExact (@emph{encoding,video})
|
||||
Set bitexact encoding flag.
|
||||
|
||||
@item IdctSimple (@emph{encoding,video})
|
||||
Set simple IDCT algorithm.
|
||||
|
||||
@item Qscale @var{n} (@emph{encoding,video})
|
||||
Enable constant quality encoding, and set video qscale (quantization
|
||||
scale) value, expressed in @var{n} QP units.
|
||||
|
||||
@item VideoQMin @var{n} (@emph{encoding,video})
|
||||
@item VideoQMax @var{n} (@emph{encoding,video})
|
||||
Set video qmin/qmax.
|
||||
|
||||
@item VideoQDiff @var{integer} (@emph{encoding,video})
|
||||
Set video @option{qdiff} encoding option.
|
||||
|
||||
@item LumiMask @var{float} (@emph{encoding,video})
|
||||
@item DarkMask @var{float} (@emph{encoding,video})
|
||||
Set @option{lumi_mask}/@option{dark_mask} encoding options.
|
||||
|
||||
@item AVOptionVideo [@var{codec}:]@var{option} @var{value} (@emph{encoding,video})
|
||||
Set generic or private option for video stream.
|
||||
Private option must be prefixed with codec name or codec must be defined before.
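
For example, as in the sample configuration, a generic flag can be set
with:
@example
AVOptionVideo flags +global_header
@end example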
|
||||
|
||||
@item AVPresetVideo @var{preset} (@emph{encoding,video})
|
||||
Set preset for video stream.
|
||||
|
||||
@var{preset} must be the path of a preset file.
|
||||
@end table
|
||||
|
||||
@subsection Server status stream
|
||||
|
||||
A server status stream is a special stream which is used to show
|
||||
statistics about the @command{ffserver} operations.
|
||||
|
||||
It must be specified setting the option @option{Format} to
|
||||
@samp{status}.
|
||||
|
||||
@section Redirect section
|
||||
|
||||
A Redirect section specifies that requests for a given URL are to be
redirected to another page.
|
||||
|
||||
A redirect section must be introduced by the line:
|
||||
@example
|
||||
<Redirect NAME>
|
||||
@end example
|
||||
|
||||
where @var{NAME} is the name of the page which should be redirected.
|
||||
|
||||
It only accepts the option @option{URL}, which specifies the redirection
URL.
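
For example, mirroring the sample configuration, requests for
@file{index.html} can be redirected with:
@example
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
@end example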
|
||||
|
||||
@chapter Stream examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Multipart JPEG
|
||||
@example
|
||||
<Stream test.mjpg>
|
||||
Feed feed1.ffm
|
||||
Format mpjpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Single JPEG
|
||||
@example
|
||||
<Stream test.jpg>
|
||||
Feed feed1.ffm
|
||||
Format jpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
VideoSize 352x240
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Flash
|
||||
@example
|
||||
<Stream test.swf>
|
||||
Feed feed1.ffm
|
||||
Format swf
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
ASF compatible
|
||||
@example
|
||||
<Stream test.asf>
|
||||
Feed feed1.ffm
|
||||
Format asf
|
||||
VideoFrameRate 15
|
||||
VideoSize 352x240
|
||||
VideoBitRate 256
|
||||
VideoBufferSize 40
|
||||
VideoGopSize 30
|
||||
AudioBitRate 64
|
||||
StartSendOnKey
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
MP3 audio
|
||||
@example
|
||||
<Stream test.mp3>
|
||||
Feed feed1.ffm
|
||||
Format mp2
|
||||
AudioCodec mp3
|
||||
AudioBitRate 64
|
||||
AudioChannels 1
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Ogg Vorbis audio
|
||||
@example
|
||||
<Stream test.ogg>
|
||||
Feed feed1.ffm
|
||||
Metadata title "Stream title"
|
||||
AudioBitRate 64
|
||||
AudioChannels 2
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio only at 32 kbits
|
||||
@example
|
||||
<Stream test.ra>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio and video at 64 kbits
|
||||
@example
|
||||
<Stream test.rm>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
VideoBitRate 128
|
||||
VideoFrameRate 25
|
||||
VideoGopSize 25
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
For a stream coming from a file, you only need to set the input filename
and optionally a new format.
|
||||
|
||||
@example
|
||||
<Stream file.rm>
|
||||
File "/usr/local/httpd/htdocs/tlive.rm"
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@example
|
||||
<Stream file.asf>
|
||||
File "/usr/local/httpd/htdocs/test.asf"
|
||||
NoAudio
|
||||
Metadata author "Me"
|
||||
Metadata copyright "Super MegaCorp"
|
||||
Metadata title "Test stream from disk"
|
||||
Metadata comment "Test comment"
|
||||
</Stream>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@c man end
|
||||
|
||||
@include config.texi
|
||||
@ifset config-all
|
||||
@ifset config-avutil
|
||||
@include utils.texi
|
||||
@end ifset
|
||||
@ifset config-avcodec
|
||||
@include codecs.texi
|
||||
@include bitstream_filters.texi
|
||||
@end ifset
|
||||
@ifset config-avformat
|
||||
@include formats.texi
|
||||
@include protocols.texi
|
||||
@end ifset
|
||||
@ifset config-avdevice
|
||||
@include devices.texi
|
||||
@end ifset
|
||||
@ifset config-swresample
|
||||
@include resampler.texi
|
||||
@end ifset
|
||||
@ifset config-swscale
|
||||
@include scaler.texi
|
||||
@end ifset
|
||||
@ifset config-avfilter
|
||||
@include filters.texi
|
||||
@end ifset
|
||||
@end ifset
|
||||
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@ifset config-all
|
||||
@url{ffserver.html,ffserver},
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
@url{ffserver-all.html,ffserver-all},
|
||||
@end ifset
|
||||
the @file{doc/ffserver.conf} example,
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@url{ffmpeg-codecs.html,ffmpeg-codecs},
|
||||
@url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
|
||||
@url{ffmpeg-formats.html,ffmpeg-formats},
|
||||
@url{ffmpeg-devices.html,ffmpeg-devices},
|
||||
@url{ffmpeg-protocols.html,ffmpeg-protocols},
|
||||
@url{ffmpeg-filters.html,ffmpeg-filters}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
@ifset config-all
|
||||
ffserver(1),
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
ffserver-all(1),
|
||||
@end ifset
|
||||
the @file{doc/ffserver.conf} example, ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
|
||||
@end ifnothtml
|
||||
|
||||
@include authors.texi
|
||||
|
||||
@ignore
|
||||
|
||||
@setfilename ffserver
|
||||
@settitle ffserver video server
|
||||
|
||||
@end ignore
|
||||
|
||||
@bye
|
||||
@@ -42,20 +42,10 @@ streams, 'V' only matches video streams which are not attached pictures, video
|
||||
thumbnails or cover arts. If @var{stream_index} is given, then it matches
|
||||
stream number @var{stream_index} of this type. Otherwise, it matches all
|
||||
streams of this type.
|
||||
@item p:@var{program_id}[:@var{stream_index}] or p:@var{program_id}[:@var{stream_type}[:@var{stream_index}]] or
|
||||
p:@var{program_id}:m:@var{key}[:@var{value}]
|
||||
In first version, if @var{stream_index} is given, then it matches the stream with number @var{stream_index}
|
||||
@item p:@var{program_id}[:@var{stream_index}]
|
||||
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
|
||||
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
|
||||
program. In the second version, @var{stream_type} is one of following: 'v' for video, 'a' for audio, 's'
|
||||
for subtitle, 'd' for data. If @var{stream_index} is also given, then it matches
|
||||
stream number @var{stream_index} of this type in the program with the id @var{program_id}.
|
||||
Otherwise, if only @var{stream_type} is given, it matches all
|
||||
streams of this type in the program with the id @var{program_id}.
|
||||
In the third version matches streams in the program with the id @var{program_id} with the metadata
|
||||
tag @var{key} having the specified value. If
|
||||
@var{value} is not given, matches streams that contain the given tag with any
|
||||
value.
|
||||
|
||||
program.
|
||||
@item #@var{stream_id} or i:@var{stream_id}
|
||||
Match the stream by stream id (e.g. PID in MPEG-TS container).
|
||||
@item m:@var{key}[:@var{value}]
|
||||
@@ -163,7 +153,7 @@ Show channel names and standard channel layouts.
|
||||
Show recognized color names.
|
||||
|
||||
@item -sources @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
|
||||
Show autodetected sources of the input device.
|
||||
Show autodetected sources of the intput device.
|
||||
Some devices may provide system-dependent source names that cannot be autodetected.
|
||||
The returned list cannot be assumed to be always complete.
|
||||
@example
|
||||
@@ -178,24 +168,14 @@ The returned list cannot be assumed to be always complete.
|
||||
ffmpeg -sinks pulse,server=192.168.0.4
|
||||
@end example
|
||||
|
||||
@item -loglevel [@var{flags}+]@var{loglevel} | -v [@var{flags}+]@var{loglevel}
|
||||
Set logging level and flags used by the library.
|
||||
|
||||
The optional @var{flags} prefix can consist of the following values:
|
||||
@table @samp
|
||||
@item repeat
|
||||
Indicates that repeated log output should not be compressed to the first line
|
||||
and the "Last message repeated n times" line will be omitted.
|
||||
@item level
|
||||
Indicates that log output should add a @code{[level]} prefix to each message
|
||||
line. This can be used as an alternative to log coloring, e.g. when dumping the
|
||||
log to file.
|
||||
@end table
|
||||
Flags can also be used alone by adding a '+'/'-' prefix to set/reset a single
|
||||
flag without affecting other @var{flags} or changing @var{loglevel}. When
|
||||
setting both @var{flags} and @var{loglevel}, a '+' separator is expected
|
||||
between the last @var{flags} value and before @var{loglevel}.
|
||||
|
||||
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
|
||||
Set the logging level used by the library.
|
||||
Adding "repeat+" indicates that repeated log output should not be compressed
|
||||
to the first line and the "Last message repeated n times" line will be
|
||||
omitted. "repeat" can also be used alone.
|
||||
If "repeat" is used alone, and with no prior loglevel set, the default
|
||||
loglevel will be used. If multiple loglevel parameters are given, using
|
||||
'repeat' will not change the loglevel.
|
||||
@var{loglevel} is a string or a number containing one of the following values:
|
||||
@table @samp
|
||||
@item quiet, -8
|
||||
@@ -221,17 +201,6 @@ Show everything, including debugging information.
|
||||
@item trace, 56
|
||||
@end table
|
||||
|
||||
For example to enable repeated log output, add the @code{level} prefix, and set
|
||||
@var{loglevel} to @code{verbose}:
|
||||
@example
|
||||
ffmpeg -loglevel repeat+level+verbose -i input output
|
||||
@end example
|
||||
Another example that enables repeated log output without affecting current
|
||||
state of @code{level} prefix flag or @var{loglevel}:
|
||||
@example
|
||||
ffmpeg [...] -loglevel +repeat
|
||||
@end example
|
||||
|
||||
By default the program logs to stderr. If coloring is supported by the
|
||||
terminal, colors are used to mark errors and warnings. Log coloring
|
||||
can be disabled setting the environment variable
|
||||
@@ -346,6 +315,51 @@ Possible flags for this option are:
|
||||
@item k8
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@item -opencl_bench
|
||||
This option is used to benchmark all available OpenCL devices and print the
|
||||
results. This option is only available when FFmpeg has been compiled with
|
||||
@code{--enable-opencl}.
|
||||
|
||||
When FFmpeg is configured with @code{--enable-opencl}, the options for the
|
||||
global OpenCL context are set via @option{-opencl_options}. See the
|
||||
"OpenCL Options" section in the ffmpeg-utils manual for the complete list of
|
||||
supported options. Amongst others, these options include the ability to select
|
||||
a specific platform and device to run the OpenCL code on. By default, FFmpeg
|
||||
will run on the first device of the first platform. While the options for the
|
||||
global OpenCL context provide flexibility to the user in selecting the OpenCL
|
||||
device of their choice, most users would probably want to select the fastest
|
||||
OpenCL device for their system.
|
||||
|
||||
This option assists the selection of the most efficient configuration by
|
||||
identifying the appropriate device for the user's system. The built-in
|
||||
benchmark is run on all the OpenCL devices and the performance is measured for
|
||||
each device. The devices in the results list are sorted based on their
|
||||
performance with the fastest device listed first. The user can subsequently
|
||||
invoke @command{ffmpeg} using the device deemed most appropriate via
|
||||
@option{-opencl_options} to obtain the best performance for the OpenCL
|
||||
accelerated code.
|
||||
|
||||
Typical usage to use the fastest OpenCL device involve the following steps.
|
||||
|
||||
Run the command:
|
||||
@example
|
||||
ffmpeg -opencl_bench
|
||||
@end example
|
||||
Note down the platform ID (@var{pidx}) and device ID (@var{didx}) of the first
|
||||
i.e. fastest device in the list.
|
||||
Select the platform and device using the command:
|
||||
@example
|
||||
ffmpeg -opencl_options platform_idx=@var{pidx}:device_idx=@var{didx} ...
|
||||
@end example
|
||||
|
||||
@item -opencl_options options (@emph{global})
|
||||
Set OpenCL environment options. This option is only available when
|
||||
FFmpeg has been compiled with @code{--enable-opencl}.
|
||||
|
||||
@var{options} must be a list of @var{key}=@var{value} option pairs
|
||||
separated by ':'. See the ``OpenCL Options'' section in the
|
||||
ffmpeg-utils manual for the list of supported options.
|
||||
@end table
|
||||
|
||||
@section AVOptions
|
||||
|
||||
@@ -5,7 +5,7 @@ This document explains guidelines that should be observed (or ignored with
|
||||
good reason) when writing filters for libavfilter.
|
||||
|
||||
In this document, the word “frame” indicates either a video frame or a group
|
||||
of audio samples, as stored in an AVFrame structure.
|
||||
of audio samples, as stored in an AVFilterBuffer structure.
|
||||
|
||||
|
||||
Format negotiation
|
||||
@@ -35,31 +35,32 @@ Format negotiation
|
||||
to set the formats supported on another.
|
||||
|
||||
|
||||
Frame references ownership and permissions
|
||||
==========================================
|
||||
Buffer references ownership and permissions
|
||||
===========================================
|
||||
|
||||
Principle
|
||||
---------
|
||||
|
||||
Audio and video data are voluminous; the frame and frame reference
|
||||
Audio and video data are voluminous; the buffer and buffer reference
|
||||
mechanism is intended to avoid, as much as possible, expensive copies of
|
||||
that data while still allowing the filters to produce correct results.
|
||||
|
||||
The data is stored in buffers represented by AVFrame structures.
|
||||
Several references can point to the same frame buffer; the buffer is
|
||||
automatically deallocated once all corresponding references have been
|
||||
destroyed.
|
||||
The data is stored in buffers represented by AVFilterBuffer structures.
|
||||
They must not be accessed directly, but through references stored in
|
||||
AVFilterBufferRef structures. Several references can point to the
|
||||
same buffer; the buffer is automatically deallocated once all
|
||||
corresponding references have been destroyed.
|
||||
|
||||
The characteristics of the data (resolution, sample rate, etc.) are
|
||||
stored in the reference; different references for the same buffer can
|
||||
show different characteristics. In particular, a video reference can
|
||||
point to only a part of a video buffer.
|
||||
|
||||
A reference is usually obtained as input to the filter_frame method or
|
||||
requested using the ff_get_video_buffer or ff_get_audio_buffer
|
||||
functions. A new reference on an existing buffer can be created with
|
||||
av_frame_ref(). A reference is destroyed using
|
||||
the av_frame_free() function.
|
||||
A reference is usually obtained as input to the start_frame or
|
||||
filter_frame method or requested using the ff_get_video_buffer or
|
||||
ff_get_audio_buffer functions. A new reference on an existing buffer can
|
||||
be created with the avfilter_ref_buffer. A reference is destroyed using
|
||||
the avfilter_unref_bufferp function.
|
||||
|
||||
Reference ownership
|
||||
-------------------
|
||||
@@ -72,13 +73,17 @@ Frame references ownership and permissions
|
||||
|
||||
Here are the (fairly obvious) rules for reference ownership:
|
||||
|
||||
* A reference received by the filter_frame method belongs to the
|
||||
corresponding filter.
|
||||
* A reference received by the filter_frame method (or its start_frame
|
||||
deprecated version) belongs to the corresponding filter.
|
||||
|
||||
* A reference passed to ff_filter_frame is given away and must no longer
|
||||
be used.
|
||||
Special exception: for video references: the reference may be used
|
||||
internally for automatic copying and must not be destroyed before
|
||||
end_frame; it can be given away to ff_start_frame.
|
||||
|
||||
* A reference created with av_frame_ref() belongs to the code that
|
||||
* A reference passed to ff_filter_frame (or the deprecated
|
||||
ff_start_frame) is given away and must no longer be used.
|
||||
|
||||
* A reference created with avfilter_ref_buffer belongs to the code that
|
||||
created it.
|
||||
|
||||
* A reference obtained with ff_get_video_buffer or ff_get_audio_buffer
|
||||
@@ -90,32 +95,89 @@ Frame references ownership and permissions
|
||||
Link reference fields
|
||||
---------------------
|
||||
|
||||
The AVFilterLink structure has a few AVFrame fields.
|
||||
|
||||
partial_buf is used by libavfilter internally and must not be accessed
|
||||
by filters.
|
||||
|
||||
fifo contains frames queued in the filter's input. They belong to the
|
||||
framework until they are taken by the filter.
|
||||
The AVFilterLink structure has a few AVFilterBufferRef fields. The
|
||||
cur_buf and out_buf were used with the deprecated
|
||||
start_frame/draw_slice/end_frame API and should no longer be used.
|
||||
src_buf and partial_buf are used by libavfilter internally
|
||||
and must not be accessed by filters.
|
||||
|
||||
Reference permissions
|
||||
---------------------
|
||||
|
||||
Since the same frame data can be shared by several frames, modifying may
|
||||
have unintended consequences. A frame is considered writable if only one
|
||||
reference to it exists. The code owning that reference it then allowed
|
||||
to modify the data.
|
||||
The AVFilterBufferRef structure has a perms field that describes what
|
||||
the code that owns the reference is allowed to do to the buffer data.
|
||||
Different references for the same buffer can have different permissions.
|
||||
|
||||
A filter can check if a frame is writable by using the
|
||||
av_frame_is_writable() function.
|
||||
For video filters that implement the deprecated
|
||||
start_frame/draw_slice/end_frame API, the permissions only apply to the
|
||||
parts of the buffer that have already been covered by the draw_slice
|
||||
method.
|
||||
|
||||
A filter can ensure that a frame is writable at some point of the code
|
||||
by using the ff_inlink_make_frame_writable() function. It will duplicate
|
||||
the frame if needed.
|
||||
The value is a binary OR of the following constants:
|
||||
|
||||
A filter can ensure that the frame passed to the filter_frame() callback
|
||||
is writable by setting the needs_writable flag on the corresponding
|
||||
input pad. It does not apply to the activate() callback.
|
||||
* AV_PERM_READ: the owner can read the buffer data; this is essentially
|
||||
always true and is there for self-documentation.
|
||||
|
||||
* AV_PERM_WRITE: the owner can modify the buffer data.
|
||||
|
||||
* AV_PERM_PRESERVE: the owner can rely on the fact that the buffer data
|
||||
will not be modified by previous filters.
|
||||
|
||||
* AV_PERM_REUSE: the owner can output the buffer several times, without
|
||||
modifying the data in between.
|
||||
|
||||
* AV_PERM_REUSE2: the owner can output the buffer several times and
|
||||
modify the data in between (useless without the WRITE permissions).
|
||||
|
||||
* AV_PERM_ALIGN: the owner can access the data using fast operations
|
||||
that require data alignment.
|
||||
|
||||
The READ, WRITE and PRESERVE permissions are about sharing the same
|
||||
buffer between several filters to avoid expensive copies without them
|
||||
doing conflicting changes on the data.
|
||||
|
||||
The REUSE and REUSE2 permissions are about special memory for direct
|
||||
rendering. For example a buffer directly allocated in video memory must
|
||||
not modified once it is displayed on screen, or it will cause tearing;
|
||||
it will therefore not have the REUSE2 permission.
|
||||
|
||||
The ALIGN permission is about extracting part of the buffer, for
|
||||
copy-less padding or cropping for example.
|
||||
|
||||
|
||||
References received on input pads are guaranteed to have all the
|
||||
permissions stated in the min_perms field and none of the permissions
|
||||
stated in the rej_perms.
|
||||
|
||||
References obtained by ff_get_video_buffer and ff_get_audio_buffer are
|
||||
guaranteed to have at least all the permissions requested as argument.
|
||||
|
||||
References created by avfilter_ref_buffer have the same permissions as
|
||||
the original reference minus the ones explicitly masked; the mask is
|
||||
usually ~0 to keep the same permissions.
|
||||
|
||||
Filters should remove permissions on reference they give to output
|
||||
whenever necessary. It can be automatically done by setting the
|
||||
rej_perms field on the output pad.
|
||||
|
||||
Here are a few guidelines corresponding to common situations:
|
||||
|
||||
* Filters that modify and forward their frame (like drawtext) need the
|
||||
WRITE permission.
|
||||
|
||||
* Filters that read their input to produce a new frame on output (like
|
||||
scale) need the READ permission on input and must request a buffer
|
||||
with the WRITE permission.
|
||||
|
||||
* Filters that intend to keep a reference after the filtering process
|
||||
is finished (after filter_frame returns) must have the PRESERVE
|
||||
permission on it and remove the WRITE permission if they create a new
|
||||
reference to give it away.
|
||||
|
||||
* Filters that intend to modify a reference they have kept after the end
|
||||
of the filtering process need the REUSE2 permission and must remove
|
||||
the PRESERVE permission if they create a new reference to give it
|
||||
away.
|
||||
|
||||
|
||||
Frame scheduling
|
||||
@@ -127,100 +189,11 @@ Frame scheduling
|
||||
Simple filters that output one frame for each input frame should not have
|
||||
to worry about it.
|
||||
|
||||
There are two design for filters: one using the filter_frame() and
|
||||
request_frame() callbacks and the other using the activate() callback.
|
||||
|
||||
The design using filter_frame() and request_frame() is legacy, but it is
|
||||
suitable for filters that have a single input and process one frame at a
|
||||
time. New filters with several inputs, that treat several frames at a time
|
||||
or that require a special treatment at EOF should probably use the design
|
||||
using activate().
|
||||
|
||||
activate
|
||||
--------
|
||||
|
||||
This method is called when something must be done in a filter; the
|
||||
definition of that "something" depends on the semantic of the filter.
|
||||
|
||||
The callback must examine the status of the filter's links and proceed
|
||||
accordingly.
|
||||
|
||||
The status of output links is stored in the frame_wanted_out, status_in
|
||||
and status_out fields and tested by the ff_outlink_frame_wanted()
|
||||
function. If this function returns true, then the processing requires a
|
||||
frame on this link and the filter is expected to make efforts in that
|
||||
direction.
|
||||
|
||||
The status of input links is stored by the status_in, fifo and
|
||||
status_out fields; they must not be accessed directly. The fifo field
|
||||
contains the frames that are queued in the input for processing by the
|
||||
filter. The status_in and status_out fields contains the queued status
|
||||
(EOF or error) of the link; status_in is a status change that must be
|
||||
taken into account after all frames in fifo have been processed;
|
||||
status_out is the status that have been taken into account, it is final
|
||||
when it is not 0.
|
||||
|
||||
The typical task of an activate callback is to first check the backward
|
||||
status of output links, and if relevant forward it to the corresponding
|
||||
input. Then, if relevant, for each input link: test the availability of
|
||||
frames in fifo and process them; if no frame is available, test and
|
||||
acknowledge a change of status using ff_inlink_acknowledge_status(); and
|
||||
forward the result (frame or status change) to the corresponding input.
|
||||
If nothing is possible, test the status of outputs and forward it to the
|
||||
corresponding input(s). If still not possible, return FFERROR_NOT_READY.
|
||||
|
||||
If the filters stores internally one or a few frame for some input, it
|
||||
can consider them to be part of the FIFO and delay acknowledging a
|
||||
status change accordingly.
|
||||
|

Example code:

    ret = ff_outlink_get_status(outlink);
    if (ret) {
        ff_inlink_set_status(inlink, ret);
        return 0;
    }
    if (priv->next_frame) {
        /* use it */
        return 0;
    }
    ret = ff_inlink_consume_frame(inlink, &frame);
    if (ret < 0)
        return ret;
    if (ret) {
        /* use it */
        return 0;
    }
    ret = ff_inlink_acknowledge_status(inlink, &status, &pts);
    if (ret) {
        /* flush */
        ff_outlink_set_status(outlink, status, pts);
        return 0;
    }
    if (ff_outlink_frame_wanted(outlink)) {
        ff_inlink_request_frame(inlink);
        return 0;
    }
    return FFERROR_NOT_READY;

The exact code depends on how similar the /* use it */ blocks are and
how related they are to the /* flush */ block, and needs to apply these
operations to the correct inlink or outlink if there are several.

Macros are available to factor that when no extra processing is needed:

    FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);
    FF_FILTER_FORWARD_STATUS_BACK_ALL(outlink, filter);
    FF_FILTER_FORWARD_STATUS(inlink, outlink);
    FF_FILTER_FORWARD_STATUS_ALL(inlink, filter);
    FF_FILTER_FORWARD_WANTED(outlink, inlink);
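
As a hedged illustration (not code from the source tree), a simple filter
with one input and one output whose activate() needs no extra processing
could combine these macros with the inlink helpers roughly as follows; the
function body is a sketch, and the macro argument order should be checked
against libavfilter/filters.h:

    static int activate(AVFilterContext *ctx)
    {
        AVFilterLink *inlink  = ctx->inputs[0];
        AVFilterLink *outlink = ctx->outputs[0];
        AVFrame *frame;
        int ret;

        /* forward EOF/error reported on the output back to the input */
        FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

        /* if a frame is queued on the input, consume and forward it */
        ret = ff_inlink_consume_frame(inlink, &frame);
        if (ret < 0)
            return ret;
        if (ret)
            return ff_filter_frame(outlink, frame);

        /* no frame available: forward a pending input status change to the
         * output, or propagate the output's request for a frame upstream */
        FF_FILTER_FORWARD_STATUS(inlink, outlink);
        FF_FILTER_FORWARD_WANTED(outlink, inlink);

        return FFERROR_NOT_READY;
    }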

filter_frame
------------

For filters that do not use the activate() callback, this method is
called when a frame is pushed to the filter's input. It can be called at
any time except in a reentrant way.
This method is called when a frame is pushed to the filter's input. It
can be called at any time except in a reentrant way.

If the input frame is enough to produce output, then the filter should
push the output frames on the output link immediately.
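
For illustration only (a sketch assuming a single input and a single
output, not code from the tree), such a filter_frame callback can be as
small as the following; ff_filter_frame() is the helper that pushes a
frame on a link:

    static int filter_frame(AVFilterLink *inlink, AVFrame *in)
    {
        AVFilterContext *ctx  = inlink->dst;
        AVFilterLink *outlink = ctx->outputs[0];

        /* ... process the frame data, here in place ... */

        /* push the resulting frame on the output link immediately */
        return ff_filter_frame(outlink, in);
    }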

@@ -249,10 +222,9 @@ Frame scheduling

request_frame
-------------

For filters that do not use the activate() callback, this method is
called when a frame is wanted on an output.
This method is called when a frame is wanted on an output.

For a source, it should directly call filter_frame on the corresponding
For an input, it should directly call filter_frame on the corresponding
output.

For a filter, if there are queued frames already ready, one of these

@@ -282,7 +254,16 @@ Frame scheduling
    }
    return 0;

Note that, except for filters that can have queued frames and sources,
request_frame does not push frames: it requests them to its input, and
as a reaction, the filter_frame method possibly will be called and do
the work.
Note that, except for filters that can have queued frames, request_frame
does not push frames: it requests them to its input, and as a reaction,
the filter_frame method possibly will be called and do the work.

Legacy API
==========

Until libavfilter 3.23, the filter_frame method was split:

- for video filters, it was made of start_frame, draw_slice (that could be
called several times on distinct parts of the frame) and end_frame;

- for audio filters, it was called filter_samples.
doc/filters.texi (2575 lines changed): file diff suppressed because it is too large.
@@ -182,10 +182,9 @@ Default is 0.
Correct single timestamp overflows if set to 1. Default is 1.

@item flush_packets @var{integer} (@emph{output})
Flush the underlying I/O stream after each packet. Default is -1 (auto), which
means that the underlying protocol will decide, 1 enables it, and has the
effect of reducing the latency, 0 disables it and may increase IO throughput in
some cases.
Flush the underlying I/O stream after each packet. Default 1 enables it, and
has the effect of reducing the latency; 0 disables it and may slightly
increase performance in some cases.
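
As an illustrative, unverified invocation, per-packet flushing can be turned
off when writing an MPEG-TS stream over UDP (the destination address is only
an example):
@example
ffmpeg -re -i input.mp4 -c copy -f mpegts -flush_packets 0 udp://127.0.0.1:1234
@end example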

@item output_ts_offset @var{offset} (@emph{output})
Set the output time offset.
@@ -17,14 +17,6 @@ for more formats. None of them are used by default, their use has to be
explicitly requested by passing the appropriate flags to
@command{./configure}.

@section Alliance for Open Media libaom

FFmpeg can make use of the libaom library for AV1 decoding.

Go to @url{http://aomedia.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libaom} to configure to
enable it.
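
As a hedged sketch of the build and a decoding test (the decoder name
@code{libaom-av1} is an assumption to confirm with @code{ffmpeg -decoders}):
@example
./configure --enable-libaom
make && make install
ffmpeg -c:v libaom-av1 -i input.ivf output.y4m
@end example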
||||
|
||||
@section OpenJPEG
|
||||
|
||||
FFmpeg can use the OpenJPEG libraries for encoding/decoding J2K videos. Go to
|
||||
@@ -93,24 +85,6 @@ Go to @url{http://www.twolame.org/} and follow the
|
||||
instructions for installing the library.
|
||||
Then pass @code{--enable-libtwolame} to configure to enable it.
|
||||
|
||||
@section libcodec2 / codec2 general
|
||||
|
||||
FFmpeg can make use of libcodec2 for codec2 encoding and decoding.
|
||||
There is currently no native decoder, so libcodec2 must be used for decoding.
|
||||
|
||||
Go to @url{http://freedv.org/}, download "Codec 2 source archive".
|
||||
Build and install using CMake. Debian users can install the libcodec2-dev package instead.
|
||||
Once libcodec2 is installed you can pass @code{--enable-libcodec2} to configure to enable it.
|
||||
|
||||
The easiest way to use codec2 is with .c2 files, since they contain the mode information required for decoding.
|
||||
To encode such a file, use a .c2 file extension and give the libcodec2 encoder the -mode option:
|
||||
@code{ffmpeg -i input.wav -mode 700C output.c2}.
|
||||
Playback is as simple as @code{ffplay output.c2}.
|
||||
For a list of supported modes, run @code{ffmpeg -h encoder=libcodec2}.
|
||||
Raw codec2 files are also supported.
|
||||
To make sense of them the mode in use needs to be specified as a format option:
|
||||
@code{ffmpeg -f codec2raw -mode 1300 -i input.raw output.wav}.
|
||||
|
||||
@section libvpx
|
||||
|
||||
FFmpeg can make use of the libvpx library for VP8/VP9 encoding.
|
||||
@@ -127,14 +101,6 @@ Go to @url{http://www.wavpack.com/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libwavpack} to configure to
|
||||
enable it.
|
||||
|
||||
@section libxavs
|
||||
|
||||
FFmpeg can make use of the libxavs library for Xavs encoding.
|
||||
|
||||
Go to @url{http://xavs.sf.net/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libxavs} to configure to
|
||||
enable it.
|
||||
|
||||
@section OpenH264
|
||||
|
||||
FFmpeg can make use of the OpenH264 library for H.264 encoding and decoding.
|
||||
@@ -251,18 +217,6 @@ The dispatcher is open source and can be downloaded from
|
||||
with the @code{--enable-libmfx} option and @code{pkg-config} needs to be able to
|
||||
locate the dispatcher's @code{.pc} files.
|
||||
|
||||
@section AMD VCE

FFmpeg can use the AMD Advanced Media Framework library for accelerated H.264
and HEVC encoding on VCE enabled hardware under Windows.

To enable support you must obtain the AMF framework header files from
@url{https://github.com/GPUOpen-LibrariesAndSDKs/AMF.git}.

Create an @code{AMF/} directory in the system include path.
Copy the contents of @code{AMF/amf/public/include/} into that directory.
Then configure FFmpeg with @code{--enable-amf}.
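
A hedged usage sketch once AMF support is built in (the encoder name
@code{h264_amf} should be confirmed with @code{ffmpeg -encoders}):
@example
ffmpeg -i input.mp4 -c:v h264_amf -b:v 5M output.mp4
@end example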
||||
|
||||
|
||||
@chapter Supported File Formats, Codecs or Features
|
||||
|
||||
@@ -328,10 +282,6 @@ library:
|
||||
@item BRSTM @tab @tab X
|
||||
@tab Audio format used on the Nintendo Wii.
|
||||
@item BWF @tab X @tab X
|
||||
@item codec2 (raw) @tab X @tab X
|
||||
@tab Must be given -mode format option to decode correctly.
|
||||
@item codec2 (.c2 files) @tab X @tab X
|
||||
@tab Contains header with version and mode info, simplifying playback.
|
||||
@item CRI ADX @tab X @tab X
|
||||
@tab Audio-only format used in console video games.
|
||||
@item Discworld II BMV @tab @tab X
|
||||
@@ -383,7 +333,6 @@ library:
|
||||
@item FunCom ISS @tab @tab X
|
||||
@tab Audio format used in various games from FunCom like The Longest Journey.
|
||||
@item G.723.1 @tab X @tab X
|
||||
@item G.726 @tab @tab X @tab Both left- and right-justified.
|
||||
@item G.729 BIT @tab X @tab X
|
||||
@item G.729 raw @tab @tab X
|
||||
@item GENH @tab @tab X
|
||||
@@ -407,7 +356,6 @@ library:
|
||||
@item Interplay MVE @tab @tab X
|
||||
@tab Format used in various Interplay computer games.
|
||||
@item Iterated Systems ClearVideo @tab @tab X
|
||||
@tab I-frames only
|
||||
@item IV8 @tab @tab X
|
||||
@tab A format generated by IndigoVision 8000 video server.
|
||||
@item IVF (On2) @tab X @tab X
|
||||
@@ -467,7 +415,6 @@ library:
|
||||
@item NC camera feed @tab @tab X
|
||||
@tab NC (AVIP NC4600) camera streams
|
||||
@item NIST SPeech HEader REsources @tab @tab X
|
||||
@item Computerized Speech Lab NSP @tab @tab X
|
||||
@item NTT TwinVQ (VQF) @tab @tab X
|
||||
@tab Nippon Telegraph and Telephone Corporation TwinVQ.
|
||||
@item Nullsoft Streaming Video @tab @tab X
|
||||
@@ -482,10 +429,6 @@ library:
|
||||
@item QCP @tab @tab X
|
||||
@item raw ADTS (AAC) @tab X @tab X
|
||||
@item raw AC-3 @tab X @tab X
|
||||
@item raw AMR-NB @tab @tab X
|
||||
@item raw AMR-WB @tab @tab X
|
||||
@item raw aptX @tab X @tab X
|
||||
@item raw aptX HD @tab X @tab X
|
||||
@item raw Chinese AVS video @tab X @tab X
|
||||
@item raw CRI ADX @tab X @tab X
|
||||
@item raw Dirac @tab X @tab X
|
||||
@@ -509,7 +452,6 @@ library:
|
||||
@item raw NULL @tab X @tab
|
||||
@item raw video @tab X @tab X
|
||||
@item raw id RoQ @tab X @tab
|
||||
@item raw SBC @tab X @tab X
|
||||
@item raw Shorten @tab @tab X
|
||||
@item raw TAK @tab @tab X
|
||||
@item raw TrueHD @tab X @tab X
|
||||
@@ -559,7 +501,7 @@ library:
|
||||
@item SAP @tab X @tab X
|
||||
@item SBG @tab @tab X
|
||||
@item SDP @tab @tab X
|
||||
@item Sega FILM/CPK @tab X @tab X
|
||||
@item Sega FILM/CPK @tab @tab X
|
||||
@tab Used in many Sega Saturn console games.
|
||||
@item Silicon Graphics Movie @tab @tab X
|
||||
@item Sierra SOL @tab @tab X
|
||||
@@ -570,7 +512,6 @@ library:
|
||||
@tab Multimedia format used by many games.
|
||||
@item SMJPEG @tab X @tab X
|
||||
@tab Used in certain Loki game ports.
|
||||
@item SMPTE 337M encapsulation @tab @tab X
|
||||
@item Smush @tab @tab X
|
||||
@tab Multimedia format used in some LucasArts games.
|
||||
@item Sony OpenMG (OMA) @tab X @tab X
|
||||
@@ -579,7 +520,7 @@ library:
|
||||
@item Sony Wave64 (W64) @tab X @tab X
|
||||
@item SoX native format @tab X @tab X
|
||||
@item SUN AU format @tab X @tab X
|
||||
@item SUP raw PGS subtitles @tab X @tab X
|
||||
@item SUP raw PGS subtitles @tab @tab X
|
||||
@item SVAG @tab @tab X
|
||||
@tab Audio format used in Konami PS2 games.
|
||||
@item TDSC @tab @tab X
|
||||
@@ -642,8 +583,6 @@ following image formats are supported:
|
||||
@tab Digital Picture Exchange
|
||||
@item EXR @tab @tab X
|
||||
@tab OpenEXR
|
||||
@item FITS @tab X @tab X
|
||||
@tab Flexible Image Transport System
|
||||
@item JPEG @tab X @tab X
|
||||
@tab Progressive JPEG is not supported.
|
||||
@item JPEG 2000 @tab X @tab X
|
||||
@@ -727,8 +666,6 @@ following image formats are supported:
|
||||
@item Autodesk Animator Flic video @tab @tab X
|
||||
@item Autodesk RLE @tab @tab X
|
||||
@tab fourcc: AASC
|
||||
@item AV1 @tab @tab E
|
||||
@tab Supported through external library libaom
|
||||
@item Avid 1:1 10-bit RGB Packer @tab X @tab X
|
||||
@tab fourcc: AVrp
|
||||
@item AVS (Audio Video Standard) video @tab @tab X
|
||||
@@ -766,7 +703,7 @@ following image formats are supported:
|
||||
@item DFA @tab @tab X
|
||||
@tab Codec used in Chronomaster game.
|
||||
@item Dirac @tab E @tab X
|
||||
@tab supported though the native vc2 (Dirac Pro) encoder
|
||||
@tab supported through external library libschroedinger
|
||||
@item Deluxe Paint Animation @tab @tab X
|
||||
@item DNxHD @tab X @tab X
|
||||
@tab aka SMPTE VC3
|
||||
@@ -843,8 +780,7 @@ following image formats are supported:
|
||||
@item LucasArts SANM/Smush @tab @tab X
|
||||
@tab Used in LucasArts games / SMUSH animations.
|
||||
@item lossless MJPEG @tab X @tab X
|
||||
@item MagicYUV Video @tab X @tab X
|
||||
@item Mandsoft Screen Capture Codec @tab @tab X
|
||||
@item MagicYUV Video @tab @tab X
|
||||
@item Microsoft ATC Screen @tab @tab X
|
||||
@tab Also known as Microsoft Screen 3.
|
||||
@item Microsoft Expression Encoder Screen @tab @tab X
|
||||
@@ -912,7 +848,6 @@ following image formats are supported:
|
||||
@tab used in some games by Entertainment Software Partners
|
||||
@item ScreenPressor @tab @tab X
|
||||
@item Screenpresso @tab @tab X
|
||||
@item Screen Recorder Gold Codec @tab @tab X
|
||||
@item Sierra VMD video @tab @tab X
|
||||
@tab Used in Sierra VMD files.
|
||||
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
|
||||
@@ -1041,10 +976,6 @@ following image formats are supported:
|
||||
@item Amazing Studio PAF Audio @tab @tab X
|
||||
@item Apple lossless audio @tab X @tab X
|
||||
@tab QuickTime fourcc 'alac'
|
||||
@item aptX @tab X @tab X
|
||||
@tab Used in Bluetooth A2DP
|
||||
@item aptX HD @tab X @tab X
|
||||
@tab Used in Bluetooth A2DP
|
||||
@item ATRAC1 @tab @tab X
|
||||
@item ATRAC3 @tab @tab X
|
||||
@item ATRAC3+ @tab @tab X
|
||||
@@ -1052,8 +983,6 @@ following image formats are supported:
|
||||
@tab Used in Bink and Smacker files in many games.
|
||||
@item CELT @tab @tab E
|
||||
@tab decoding supported through external library libcelt
|
||||
@item codec2 @tab E @tab E
|
||||
@tab en/decoding supported through external library libcodec2
|
||||
@item Delphine Software International CIN audio @tab @tab X
|
||||
@tab Codec used in Delphine Software International games.
|
||||
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
|
||||
@@ -1062,7 +991,6 @@ following image formats are supported:
|
||||
@tab All versions except 5.1 are supported.
|
||||
@item DCA (DTS Coherent Acoustics) @tab X @tab X
|
||||
@tab supported extensions: XCh, XXCH, X96, XBR, XLL, LBR (partially)
|
||||
@item Dolby E @tab @tab X
|
||||
@item DPCM id RoQ @tab X @tab X
|
||||
@tab Used in Quake III, Jedi Knight 2 and other computer games.
|
||||
@item DPCM Interplay @tab @tab X
|
||||
@@ -1152,8 +1080,6 @@ following image formats are supported:
|
||||
@tab Real low bitrate AC-3 codec
|
||||
@item RealAudio Lossless @tab @tab X
|
||||
@item RealAudio SIPR / ACELP.NET @tab @tab X
|
||||
@item SBC (low-complexity subband codec) @tab X @tab X
|
||||
@tab Used in Bluetooth A2DP
|
||||
@item Shorten @tab @tab X
|
||||
@item Sierra VMD audio @tab @tab X
|
||||
@tab Used in Sierra VMD files.
|
||||
|
||||
doc/indevs.texi (289 lines changed)
@@ -63,51 +63,12 @@ Set the number of channels. Default is 2.
|
||||
|
||||
@end table
|
||||
|
||||
@section android_camera
|
||||
|
||||
Android camera input device.
|
||||
|
||||
This input device uses the Android Camera2 NDK API which is
|
||||
available on devices with API level 24+. The availability of
|
||||
android_camera is autodetected during configuration.
|
||||
|
||||
This device allows capturing from all cameras on an Android device,
|
||||
which are integrated into the Camera2 NDK API.
|
||||
|
||||
The available cameras are enumerated internally and can be selected
|
||||
with the @var{camera_index} parameter. The input file string is
|
||||
discarded.
|
||||
|
||||
Generally the back facing camera has index 0 while the front facing
|
||||
camera has index 1.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item video_size
|
||||
Set the video size given as a string such as 640x480 or hd720.
|
||||
Falls back to the first available configuration reported by
|
||||
Android if requested video size is not available or by default.
|
||||
|
||||
@item framerate
|
||||
Set the video framerate.
|
||||
Falls back to the first available configuration reported by
|
||||
Android if requested framerate is not available or by default (-1).
|
||||
|
||||
@item camera_index
|
||||
Set the index of the camera to use. Default is 0.
|
||||
|
||||
@item input_queue_size
|
||||
Set the maximum number of frames to buffer. Default is 5.
|
||||
|
||||
@end table
|
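
An illustrative capture command under the options above (the string after
@option{-i} is discarded, so any placeholder works):
@example
ffmpeg -f android_camera -camera_index 1 -video_size hd720 -framerate 30 -i discarded -t 10 out.mp4
@end example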
||||
|
||||
@section avfoundation
|
||||
|
||||
AVFoundation input device.
|
||||
|
||||
AVFoundation is the currently recommended framework by Apple for streamgrabbing on OSX >= 10.7 as well as on iOS.
|
||||
The older QTKit framework has been marked deprecated since OSX version 10.7.
|
||||
|
||||
The input filename has to be given in the following syntax:
|
||||
@example
|
||||
@@ -254,9 +215,8 @@ need to configure with the appropriate @code{--extra-cflags}
|
||||
and @code{--extra-ldflags}.
|
||||
On Windows, you need to run the IDL files through @command{widl}.
|
||||
|
||||
DeckLink is very picky about the formats it supports. Pixel format of the
|
||||
input can be set with @option{raw_format}.
|
||||
Framerate and video size must be determined for your device with
|
||||
DeckLink is very picky about the formats it supports. Pixel format is
|
||||
uyvy422 or v210, framerate and video size must be determined for your device with
|
||||
@command{-list_formats 1}. Audio sample rate is always 48 kHz and the number
|
||||
of channels can be 2, 8 or 16. Note that all audio channels are bundled in one single
|
||||
audio track.
|
||||
@@ -278,45 +238,20 @@ This sets the input video format to the format given by the FourCC. To see
|
||||
the supported values of your device(s) use @option{list_formats}.
|
||||
Note that there is a FourCC @option{'pal '} that can also be used
|
||||
as @option{pal} (3 letters).
|
||||
Default behavior is autodetection of the input video format, if the hardware
|
||||
supports it.
|
||||
|
||||
@item bm_v210
|
||||
This is a deprecated option, you can use @option{raw_format} instead.
|
||||
If set to @samp{1}, video is captured in 10 bit v210 instead
|
||||
of uyvy422. Not all Blackmagic devices support this option.
|
||||
|
||||
@item raw_format
|
||||
Set the pixel format of the captured video.
|
||||
Available values are:
|
||||
@table @samp
|
||||
@item uyvy422
|
||||
|
||||
@item yuv422p10
|
||||
|
||||
@item argb
|
||||
|
||||
@item bgra
|
||||
|
||||
@item rgb10
|
||||
|
||||
@end table
|
||||
|
||||
@item teletext_lines
|
||||
If set to nonzero, an additional teletext stream will be captured from the
|
||||
vertical ancillary data. Both SD PAL (576i) and HD (1080i or 1080p)
|
||||
sources are supported. In case of HD sources, OP47 packets are decoded.
|
||||
|
||||
This option is a bitmask of the SD PAL VBI lines captured, specifically lines 6
|
||||
to 22, and lines 318 to 335. Line 6 is the LSB in the mask. Selected lines
|
||||
which do not contain teletext information will be ignored. You can use the
|
||||
special @option{all} constant to select all possible lines, or
|
||||
@option{standard} to skip lines 6, 318 and 319, which are not compatible with
|
||||
all receivers.
|
||||
|
||||
For SD sources, ffmpeg needs to be compiled with @code{--enable-libzvbi}. For
|
||||
HD sources, on older (pre-4K) DeckLink card models you have to capture in 10
|
||||
bit mode.
|
||||
vertical ancillary data. This option is a bitmask of the VBI lines checked,
|
||||
specifically lines 6 to 22, and lines 318 to 335. Line 6 is the LSB in the mask.
|
||||
Selected lines which do not contain teletext information will be ignored. You
|
||||
can use the special @option{all} constant to select all possible lines, or
|
||||
@option{standard} to skip lines 6, 318 and 319, which are not compatible with all
|
||||
receivers. Capturing teletext only works for SD PAL sources in 8 bit mode.
|
||||
To use this option, ffmpeg needs to be compiled with @code{--enable-libzvbi}.
|
||||
|
||||
@item channels
|
||||
Defines number of audio channels to capture. Must be @samp{2}, @samp{8} or @samp{16}.
|
||||
@@ -338,32 +273,16 @@ Sets the audio input source. Must be @samp{unset}, @samp{embedded},
|
||||
|
||||
@item video_pts
|
||||
Sets the video packet timestamp source. Must be @samp{video}, @samp{audio},
|
||||
@samp{reference}, @samp{wallclock} or @samp{abs_wallclock}.
|
||||
Defaults to @samp{video}.
|
||||
@samp{reference} or @samp{wallclock}. Defaults to @samp{video}.
|
||||
|
||||
@item audio_pts
|
||||
Sets the audio packet timestamp source. Must be @samp{video}, @samp{audio},
|
||||
@samp{reference}, @samp{wallclock} or @samp{abs_wallclock}.
|
||||
Defaults to @samp{audio}.
|
||||
@samp{reference} or @samp{wallclock}. Defaults to @samp{audio}.
|
||||
|
||||
@item draw_bars
|
||||
If set to @samp{true}, color bars are drawn in the event of a signal loss.
|
||||
Defaults to @samp{true}.
|
||||
|
||||
@item queue_size
|
||||
Sets maximum input buffer size in bytes. If the buffering reaches this value,
|
||||
incoming frames will be dropped.
|
||||
Defaults to @samp{1073741824}.
|
||||
|
||||
@item audio_depth
|
||||
Sets the audio sample bit depth. Must be @samp{16} or @samp{32}.
|
||||
Defaults to @samp{16}.
|
||||
|
||||
@item decklink_copyts
|
||||
If set to @option{true}, timestamps are forwarded as they are without removing
|
||||
the initial offset.
|
||||
Defaults to @option{false}.
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
@@ -385,129 +304,19 @@ ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'
|
||||
@item
|
||||
Capture video clip at 1080i50:
|
||||
@example
|
||||
ffmpeg -format_code Hi50 -f decklink -i 'Intensity Pro' -c:a copy -c:v copy output.avi
|
||||
ffmpeg -format_code Hi50 -f decklink -i 'Intensity Pro' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 1080i50 10 bit:
|
||||
@example
|
||||
ffmpeg -bm_v210 1 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
|
||||
ffmpeg -bm_v210 1 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 1080i50 with 16 audio channels:
|
||||
@example
|
||||
ffmpeg -channels 16 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
@section kmsgrab
|
||||
|
||||
KMS video input device.
|
||||
|
||||
Captures the KMS scanout framebuffer associated with a specified CRTC or plane as a
|
||||
DRM object that can be passed to other hardware functions.
|
||||
|
||||
Requires either DRM master or CAP_SYS_ADMIN to run.
|
||||
|
||||
If you don't understand what all of that means, you probably don't want this. Look at
|
||||
@option{x11grab} instead.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item device
|
||||
DRM device to capture on. Defaults to @option{/dev/dri/card0}.
|
||||
|
||||
@item format
|
||||
Pixel format of the framebuffer. Defaults to @option{bgr0}.
|
||||
|
||||
@item format_modifier
|
||||
Format modifier to signal on output frames. This is necessary to import correctly into
|
||||
some APIs, but can't be autodetected. See the libdrm documentation for possible values.
|
||||
|
||||
@item crtc_id
|
||||
KMS CRTC ID to define the capture source. The first active plane on the given CRTC
|
||||
will be used.
|
||||
|
||||
@item plane_id
|
||||
KMS plane ID to define the capture source. Defaults to the first active plane found if
|
||||
neither @option{crtc_id} nor @option{plane_id} are specified.
|
||||
|
||||
@item framerate
|
||||
Framerate to capture at. This is not synchronised to any page flipping or framebuffer
|
||||
changes - it just defines the interval at which the framebuffer is sampled. Sampling
|
||||
faster than the framebuffer update rate will generate independent frames with the same
|
||||
content. Defaults to @code{30}.
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
|
||||
@item
|
||||
Capture from the first active plane, download the result to normal frames and encode.
|
||||
This will only work if the framebuffer is both linear and mappable - if not, the result
|
||||
may be scrambled or fail to download.
|
||||
@example
|
||||
ffmpeg -f kmsgrab -i - -vf 'hwdownload,format=bgr0' output.mp4
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture from CRTC ID 42 at 60fps, map the result to VAAPI, convert to NV12 and encode as H.264.
|
||||
@example
|
||||
ffmpeg -crtc_id 42 -framerate 60 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi output.mp4
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
@section libndi_newtek
|
||||
|
||||
The libndi_newtek input device provides capture capabilities for using NDI (Network
|
||||
Device Interface, standard created by NewTek).
|
||||
|
||||
Input filename is an NDI source name that can be found by sending -find_sources 1
on the command line - it has no specific syntax but is human-readable formatted.
|
||||
|
||||
To enable this input device, you need the NDI SDK and you
|
||||
need to configure with the appropriate @code{--extra-cflags}
|
||||
and @code{--extra-ldflags}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item find_sources
|
||||
If set to @option{true}, print a list of found/available NDI sources and exit.
|
||||
Defaults to @option{false}.
|
||||
|
||||
@item wait_sources
|
||||
Override time to wait until the number of online sources have changed.
|
||||
Defaults to @option{0.5}.
|
||||
|
||||
@item allow_video_fields
|
||||
When this flag is @option{false}, all video that you receive will be progressive.
|
||||
Defaults to @option{true}.
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
|
||||
@item
|
||||
List input devices:
|
||||
@example
|
||||
ffmpeg -f libndi_newtek -find_sources 1 -i dummy
|
||||
@end example
|
||||
|
||||
@item
|
||||
Restream to NDI:
|
||||
@example
|
||||
ffmpeg -f libndi_newtek -i "DEV-5.INTERNAL.M1STEREO.TV (NDI_SOURCE_NAME_1)" -f libndi_newtek -y NDI_SOURCE_NAME_2
|
||||
ffmpeg -channels 16 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
@@ -712,6 +521,31 @@ $ ffmpeg -f dshow -show_video_device_dialog true -crossbar_video_input_pin_numbe
|
||||
|
||||
@end itemize
|
||||
|
||||
@section dv1394
|
||||
|
||||
Linux DV 1394 input device.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item framerate
|
||||
Set the frame rate. Default is 25.
|
||||
|
||||
@item standard
|
||||
|
||||
Available values are:
|
||||
@table @samp
|
||||
@item pal
|
||||
|
||||
@item ntsc
|
||||
|
||||
@end table
|
||||
|
||||
Default value is @code{ntsc}.
|
||||
|
||||
@end table
|
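
An illustrative example (the device path is an assumption; on Linux it is
typically @file{/dev/dv1394/0}):
@example
ffplay -f dv1394 -standard pal -i /dev/dv1394/0
@end example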
||||
|
||||
@section fbdev
|
||||
|
||||
Linux framebuffer input device.
|
||||
@@ -1248,6 +1082,49 @@ Record a stream from default device:
|
||||
ffmpeg -f pulse -i default /tmp/pulse.wav
|
||||
@end example
|
||||
|
||||
@section qtkit
|
||||
|
||||
QTKit input device.
|
||||
|
||||
The filename passed as input is parsed to contain either a device name or index.
|
||||
The device index can also be given by using -video_device_index.
|
||||
A given device index will override any given device name.
|
||||
If the desired device consists of numbers only, use -video_device_index to identify it.
|
||||
The default device will be chosen if an empty string or the device name "default" is given.
|
||||
The available devices can be enumerated by using -list_devices.
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -i "0" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -video_device_index 0 -i "" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -i "default" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -list_devices true -i ""
|
||||
@end example
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item frame_rate
|
||||
Set frame rate. Default is 30.
|
||||
|
||||
@item list_devices
|
||||
If set to @code{true}, print a list of devices and exit. Default is
|
||||
@code{false}.
|
||||
|
||||
@item video_device_index
|
||||
Select the video device by index for devices with the same name (starts at 0).
|
||||
|
||||
@end table
|
||||
|
||||
@section sndio
|
||||
|
||||
sndio input device.
|
||||
|
||||
@@ -193,6 +193,9 @@ ffplay
|
||||
ffprobe
|
||||
issues in or related to ffprobe.c
|
||||
|
||||
ffserver
|
||||
issues in or related to ffserver.c
|
||||
|
||||
postproc
|
||||
issues in libpostproc/*
|
||||
|
||||
|
||||
@@ -94,20 +94,19 @@ Stuff that didn't reach the codebase:
|
||||
- a853388d2 hevc: change the stride of the MC buffer to be in bytes instead of elements
|
||||
- 0cef06df0 checkasm: add HEVC MC tests
|
||||
- e7078e842 hevcdsp: add x86 SIMD for MC
|
||||
- 7993ec19a hevc: Add hevc_get_pixel_4/8/12/16/24/32/48/64
|
||||
- new bitstream reader (see http://ffmpeg.org/pipermail/ffmpeg-devel/2017-April/209609.html)
|
||||
- use av_cpu_max_align() instead of hardcoding alignment requirements (see https://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/215834.html)
|
||||
- f44ec22e0 lavc: use av_cpu_max_align() instead of hardcoding alignment requirements
|
||||
- 4de220d2e frame: allow align=0 (meaning automatic) for av_frame_get_buffer()
|
||||
- Support recovery from an already present HLS playlist (see 16cb06bb30)
|
||||
- Remove all output devices (see 8e7e042d41, 8d3db95f20, 6ce13070bd, d46cd24986 and https://ffmpeg.org/pipermail/ffmpeg-devel/2017-September/216904.html)
|
||||
- VAAPI VP8 decode hwaccel (currently under review: http://ffmpeg.org/pipermail/ffmpeg-devel/2017-February/thread.html#207348)
|
||||
- Removal of the custom atomic API (5cc0057f49, see http://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209003.html)
|
||||
- Use the new bitstream filter for extracting extradata (8e2ea69135 and 096a8effa3, see https://ffmpeg.org/pipermail/ffmpeg-devel/2017-March/209068.html)
|
||||
- Read aac_adtstoasc extradata updates from packet side data on Matroska once mov and the bsf in question are fixed (See 13a211e632 and 5ef1959080)
|
||||
|
||||
Collateral damage that needs work locally:
|
||||
------------------------------------------
|
||||
|
||||
- Merge proresdec2.c and proresdec_lgpl.c
|
||||
- Merge proresenc_anatoliy.c and proresenc_kostya.c
|
||||
- Remove ADVANCED_PARSER in libavcodec/hevc_parser.c
|
||||
- Fix MIPS AC3 downmix
|
||||
- hlsenc encryption support may need some adjustment (see edc43c571d)
|
||||
|
||||
Extra changes needed to be aligned with Libav:
|
||||
----------------------------------------------
|
||||
|
||||
@@ -26,13 +26,13 @@ implementing robust and fast codecs as well as for experimentation.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-codecs.html,ffmpeg-codecs}, @url{ffmpeg-bitstream-filters.html,bitstream-filters},
|
||||
@url{libavutil.html,libavutil}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1),
|
||||
libavutil(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -23,13 +23,13 @@ VfW, DShow, and ALSA.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-devices.html,ffmpeg-devices},
|
||||
@url{libavutil.html,libavutil}, @url{libavcodec.html,libavcodec}, @url{libavformat.html,libavformat}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-devices(1),
|
||||
libavutil(3), libavcodec(3), libavformat(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -21,14 +21,14 @@ framework containing several filters, sources and sinks.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-filters.html,ffmpeg-filters},
|
||||
@url{libavutil.html,libavutil}, @url{libswscale.html,libswscale}, @url{libswresample.html,libswresample},
|
||||
@url{libavcodec.html,libavcodec}, @url{libavformat.html,libavformat}, @url{libavdevice.html,libavdevice}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-filters(1),
|
||||
libavutil(3), libswscale(3), libswresample(3), libavcodec(3), libavformat(3), libavdevice(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -26,13 +26,13 @@ resource.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-formats.html,ffmpeg-formats}, @url{ffmpeg-protocols.html,ffmpeg-protocols},
|
||||
@url{libavutil.html,libavutil}, @url{libavcodec.html,libavcodec}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-formats(1), ffmpeg-protocols(1),
|
||||
libavutil(3), libavcodec(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -42,12 +42,12 @@ It should avoid useless features that almost no one needs.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-utils(1)
|
||||
@end ifnothtml
|
||||
|
||||
|
||||
@@ -48,13 +48,13 @@ enabled through dedicated options.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@url{libavutil.html,libavutil}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-resampler(1),
|
||||
libavutil(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -41,13 +41,13 @@ colorspaces differ.
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{libavutil.html,libavutil}
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg(1), ffplay(1), ffprobe(1), ffserver(1),
|
||||
ffmpeg-scaler(1),
|
||||
libavutil(3)
|
||||
@end ifnothtml
|
||||
|
||||
@@ -1,365 +0,0 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Mailing List FAQ
|
||||
@titlepage
|
||||
@center @titlefont{FFmpeg Mailing List FAQ}
|
||||
@end titlepage
|
||||
|
||||
@top
|
||||
|
||||
@contents
|
||||
|
||||
@chapter General Questions
|
||||
|
||||
@section What is a mailing list?
|
||||
|
||||
A mailing list is not much different than emailing someone, but the
|
||||
main difference is that your message is received by everyone who
|
||||
subscribes to the list. It is somewhat like a forum but in email form.
|
||||
|
||||
See the @url{https://lists.ffmpeg.org/pipermail/ffmpeg-user/, ffmpeg-user archives}
|
||||
for examples.
|
||||
|
||||
@section What type of questions can I ask?
|
||||
|
||||
@itemize
|
||||
@item
|
||||
@url{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-user/, ffmpeg-user}:
|
||||
For questions involving unscripted usage or compilation of the FFmpeg
|
||||
command-line tools (@command{ffmpeg}, @command{ffprobe}, @command{ffplay}).
|
||||
|
||||
@item
|
||||
@url{https://lists.ffmpeg.org/mailman/listinfo/libav-user/, libav-user}:
|
||||
For questions involving the FFmpeg libav* libraries (libavcodec,
|
||||
libavformat, libavfilter, etc).
|
||||
|
||||
@item
|
||||
@url{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-devel/, ffmpeg-devel}:
|
||||
For discussions involving the development of FFmpeg and for submitting
|
||||
patches. User questions should be asked at ffmpeg-user or libav-user.
|
||||
@end itemize
|
||||
|
||||
To report a bug see @url{https://ffmpeg.org/bugreports.html}.
|
||||
|
||||
We cannot provide help for scripts and/or third-party tools.
|
||||
|
||||
@anchor{How do I ask a question or send a message to a mailing list?}
|
||||
@section How do I ask a question or send a message to a mailing list?
|
||||
|
||||
All you have to do is send an email:
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Email @email{ffmpeg-user@@ffmpeg.org} to send a message to the
|
||||
ffmpeg-user mailing list.
|
||||
|
||||
@item
|
||||
Email @email{libav-user@@ffmpeg.org} to send a message to the
|
||||
libav-user mailing list.
|
||||
@end itemize
|
||||
|
||||
If you are not subscribed to the mailing list then your question must be
|
||||
manually approved. Approval may take several days, but the wait is
|
||||
usually less. If you want the message to be sent with no delay then you
|
||||
must subscribe first. See @ref{How do I subscribe?}
|
||||
|
||||
Please do not send a message, subscribe, and re-send the message: this
|
||||
results in duplicates, causes more work for the admins, and may lower
|
||||
your chance at getting an answer. However, you may do so if you first
|
||||
@ref{How do I delete my message in the moderation queue?, delete your original message from the moderation queue}.
|
||||
|
||||
@chapter Subscribing / Unsubscribing
|
||||
|
||||
@section What does subscribing do?
|
||||
|
||||
Subscribing allows two things:
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Your messages will show up in the mailing list without waiting in the
|
||||
moderation queue and needing to be manually approved by a mailing list
|
||||
admin.
|
||||
|
||||
@item
|
||||
You will receive all messages to the mailing list including replies to
|
||||
your messages. Non-subscribed users do not receive any messages.
|
||||
@end itemize
|
||||
|
||||
@section Do I need to subscribe?
|
||||
|
||||
No. You can still send a message to the mailing list without
|
||||
subscribing. See @ref{How do I ask a question or send a message to a mailing list?}
|
||||
|
||||
However, your message will need to be manually approved by a mailing
|
||||
list admin, and you will not receive any mailing list messages or
|
||||
replies.
|
||||
|
||||
You can ask to be CCd in your message, but replying users will
|
||||
sometimes forget to do so.
|
||||
|
||||
You may also view and reply to messages via the @ref{Where are the archives?, archives}.
|
||||
|
||||
@anchor{How do I subscribe?}
|
||||
@section How do I subscribe?
|
||||
|
||||
Email @email{ffmpeg-user-request@@ffmpeg.org} with the subject
|
||||
@emph{subscribe}.
|
||||
|
||||
Or visit the @url{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-user/, ffmpeg-user mailing list info page}
|
||||
and refer to the @emph{Subscribing to ffmpeg-user} section.
|
||||
|
||||
The process is the same for the other mailing lists.
|
||||
|
||||
@section How do I unsubscribe?
|
||||
|
||||
Email @email{ffmpeg-user-request@@ffmpeg.org} with subject @emph{unsubscribe}.
|
||||
|
||||
Or visit the @url{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-user/, ffmpeg-user mailing list info page},
|
||||
scroll to bottom of page, enter your email address in the box, and click
|
||||
the @emph{Unsubscribe or edit options} button.
|
||||
|
||||
The process is the same for the other mailing lists.
|
||||
|
||||
Please avoid asking a mailing list admin to unsubscribe you unless you
|
||||
are absolutely unable to do so by yourself. See @ref{Who do I contact if I have a problem with the mailing list?}
|
||||
|
||||
@chapter Moderation Queue
|
||||
@anchor{Why is my message awaiting moderator approval?}
|
||||
@section Why is my message awaiting moderator approval?
|
||||
|
||||
Some messages are automatically held in the @emph{moderation queue} and
|
||||
must be manually approved by a mailing list admin.
|
||||
|
||||
These are:
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Messages from users who are @strong{not} subscribed.
|
||||
|
||||
@item
|
||||
Messages that exceed the @ref{What is the message size limit?, message size limit}.
|
||||
|
||||
@item
|
||||
Messages from users whose accounts have been set with the @emph{moderation flag}
|
||||
(very rarely occurs, but may if a user repeatedly ignores the rules
|
||||
or is abusive towards others).
|
||||
@end itemize
|
||||
|
||||
@section How long does it take for my message in the moderation queue to be approved?
|
||||
|
||||
The queue is usually checked once or twice a day, but on occasion
|
||||
several days may pass before someone checks the queue.
|
||||
|
||||
@anchor{How do I delete my message in the moderation queue?}
|
||||
@section How do I delete my message in the moderation queue?
|
||||
|
||||
You should have received an email with the subject @emph{Your message to ffmpeg-user awaits moderator approval}.
|
||||
A link is in the message that will allow you to delete your message
|
||||
unless a mailing list admin already approved or rejected it.
|
||||
|
||||
@chapter Archives
|
||||
|
||||
@anchor{Where are the archives?}
|
||||
@section Where are the archives?
|
||||
|
||||
See the @emph{Archives} section on the @url{https://ffmpeg.org/contact.html, FFmpeg Contact}
|
||||
page for links to all FFmpeg mailing list archives.
|
||||
|
||||
Note that the archives are split by month. Discussions that span
|
||||
several months will be split into separate months in the archives.
|
||||
|
||||
@section How do I reply to a message in the archives?
|
||||
|
||||
Click the email link at the top of the message just under the subject
|
||||
title. The link will provide the proper headers to keep the message
|
||||
within the thread.
|
||||
|
||||
@section How do I search the archives?
|
||||
|
||||
Perform a site search using your favorite search engine. Example:
|
||||
|
||||
@t{site:lists.ffmpeg.org/pipermail/ffmpeg-user/ "search term"}
|
||||
|
||||
@chapter Other
|
||||
|
||||
@section Is there an alternative to the mailing list?
|
||||
|
||||
You can ask for help in the official @t{#ffmpeg} IRC channel on Freenode.
|
||||
|
||||
Some users prefer the third-party Nabble interface which presents the
|
||||
mailing lists in a typical forum layout.
|
||||
|
||||
There are also numerous third-party help sites such as Super User and
|
||||
r/ffmpeg on reddit.
|
||||
|
||||
@anchor{What is top-posting?}
|
||||
@section What is top-posting?
|
||||
|
||||
See @url{https://en.wikipedia.org/wiki/Posting_style#Top-posting}.
|
||||
|
||||
Instead, use trimmed interleaved/inline replies (@url{https://lists.ffmpeg.org/pipermail/ffmpeg-user/2017-April/035849.html, example}).
|
||||
|
||||
@anchor{What is the message size limit?}
|
||||
@section What is the message size limit?
|
||||
|
||||
The message size limit is 500 kilobytes for the user lists and 1000
|
||||
kilobytes for ffmpeg-devel. Please provide links to larger files instead
|
||||
of attaching them.
|
||||
|
||||
@section Where can I upload sample files?
|
||||
|
||||
Anywhere that is not too annoying for us to use.
|
||||
|
||||
Google Drive and Dropbox are acceptable if you need a file host, and
|
||||
0x0.st is good for files under 256 MiB.
|
||||
|
||||
Small, short samples are preferred if possible.
|
||||
|
||||
@section Will I receive spam if I send and/or subscribe to a mailing list?
|
||||
|
||||
Highly unlikely.
|
||||
|
||||
@itemize
|
||||
@item
|
||||
The list of subscribed users is not public.
|
||||
|
||||
@item
|
||||
Email addresses in the archives are obfuscated.
|
||||
|
||||
@item
|
||||
Several unique test email accounts were utilized and none have yet
|
||||
received any spam.
|
||||
@end itemize
|
||||
|
||||
However, you may see a spam in the mailing lists on rare occasions:
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Spam in the moderation queue may be accidentally approved due to human
|
||||
error.
|
||||
|
||||
@item
|
||||
There have been a few messages from subscribed users who had their own
|
||||
email addresses hacked and spam messages from (or appearing to be from)
|
||||
the hacked account were sent to their contacts (a mailing list being a
|
||||
contact in these cases).
|
||||
|
||||
@item
|
||||
If you are subscribed to the bug tracker mailing list (ffmpeg-trac) you
|
||||
may see the occasional spam as a false bug report, but we take measures
|
||||
to try to prevent this.
|
||||
@end itemize
|
||||
|
||||
@section How do I filter mailing list messages?
|
||||
|
||||
Use the @emph{List-Id}. For example, the ffmpeg-user mailing list is
|
||||
@t{ffmpeg-user.ffmpeg.org}. You can view the List-Id in the raw message
|
||||
or headers.
|
||||
|
||||
You can then filter the mailing list messages to their own folder.
|
||||
|
||||
@chapter Rules and Etiquette
|
||||
|
||||
@section What are the rules and the proper etiquette?
|
||||
|
||||
There may seem to be many things to remember, but we want to help and
|
||||
following these guidelines will allow you to get answers more quickly
|
||||
and help avoid getting ignored.
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Always show your actual, unscripted @command{ffmpeg} command and the
|
||||
complete, uncut console output from your command.
|
||||
|
||||
@item
|
||||
Use the most simple and minimal command that still shows the issue you
|
||||
are encountering.
|
||||
|
||||
@item
|
||||
Provide all necessary information so others can attempt to duplicate
|
||||
your issue. This includes the actual command, complete uncut console
|
||||
output, and any inputs that are required to duplicate the issue.
|
||||
|
||||
@item
|
||||
Use the latest @command{ffmpeg} build you can get. See the @url{https://ffmpeg.org/download.html, FFmpeg Download}
|
||||
page for links to recent builds for Linux, macOS, and Windows. Or
|
||||
compile from the current git master branch.
|
||||
|
||||
@item
|
||||
Avoid @url{https://en.wikipedia.org/wiki/Posting_style#Top-posting, top-posting}.
|
||||
Also see @ref{What is top-posting?}
|
||||
|
||||
@item
|
||||
Avoid hijacking threads. Thread hijacking is replying to a message and
|
||||
changing the subject line to something unrelated to the original thread.
|
||||
Most email clients will still show the renamed message under the
|
||||
original thread. This can be confusing and these types of messages are
|
||||
often ignored.
|
||||
|
||||
@item
|
||||
Do not send screenshots. Copy and paste console text instead of making
|
||||
screenshots of the text.
|
||||
|
||||
@item
|
||||
Avoid sending email disclaimers and legalese if possible as this is a
|
||||
public list.
|
||||
|
||||
@item
|
||||
Avoid using the @code{-loglevel debug}, @code{-loglevel quiet}, and
|
||||
@command{-hide_banner} options unless requested to do so.
|
||||
|
||||
@item
|
||||
If you attach files avoid compressing small files. Uncompressed is
|
||||
preferred.
|
||||
|
||||
@item
|
||||
Please do not send HTML-only messages. The mailing list will ignore the
|
||||
HTML component of your message. Most mail clients will automatically
|
||||
include a text component: this is what the mailing list will use.
|
||||
|
||||
@item
|
||||
Configuring your mail client to break lines after 70 or so characters is
|
||||
recommended.
|
||||
|
||||
@item
|
||||
Avoid sending the same message to multiple mailing lists.
|
||||
|
||||
@item
|
||||
Please follow our @url{https://ffmpeg.org/developer.html#Code-of-conduct, Code of Conduct}.
|
||||
@end itemize
|
||||
|
||||
@chapter Help
|
||||
|
||||
@section Why am I not receiving any messages?
|
||||
|
||||
Some email providers have blacklists or spam filters that block or mark
|
||||
the mailing list messages as false positives. Unfortunately, the user is
|
||||
often not aware of this, and it is often out of their control.
|
||||
|
||||
When possible we attempt to notify the provider to be removed from the
|
||||
blacklists or filters.
|
||||
|
||||
@section Why are my sent messages not showing up?
|
||||
|
||||
Excluding @ref{Why is my message awaiting moderator approval?, messages that are held in the moderation queue}
|
||||
there are a few other reasons why your messages may fail to appear:
|
||||
|
||||
@itemize
|
||||
@item
|
||||
HTML-only messages are ignored by the mailing lists. Most mail clients
|
||||
automatically include a text component alongside HTML email: this is what
|
||||
the mailing list will use. If it does not then consider your client to be
|
||||
broken, because sending a text component along with the HTML component to
|
||||
form a multi-part message is recommended by email standards.
|
||||
|
||||
@item
|
||||
Check your spam folder.
|
||||
@end itemize
|
||||
|
||||
@anchor{Who do I contact if I have a problem with the mailing list?}
|
||||
@section Who do I contact if I have a problem with the mailing list?
|
||||
|
||||
Send a message to @email{ffmpeg-user-owner@@ffmpeg.org}.
|
||||
|
||||
@bye
|
||||
Some files were not shown because too many files have changed in this diff.