There is no need to scan for NULL if we inject it ourselves.
Fixes: warning: 'strncat' specified bound 10 equals source length [-Wstringop-overflow=]
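A minimal sketch of the idea (buffer names and sizes are hypothetical):
copy the known-length data and terminate it ourselves, so nothing has to
scan the source for a terminator:

#include <string.h>

static void copy_fixed(char dst[11], const char src[10])
{
    memcpy(dst, src, 10);  // src need not be NUL-terminated
    dst[10] = '\0';        // inject the terminator ourselves
}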
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
Currently, the thread loop of ffmpeg_filter essentially works like this:
while (1) {
    frame, idx = get_from_decoder();
    err = send_to_filter_graph(frame);
    if (err) { // i.e. EOF
        close_input(idx);
        continue;
    }
    while (filtered_frame = get_filtered_frame())
        send_to_encoder(filtered_frame);
}
The exact details are not 100% correct since the actual control flow is a bit
more complicated as a result of the scheduler, but this is the general flow.
Notably, this leaves open the possibility of a no-longer-needed input staying
permanently open if the filter graph starts producing frames indefinitely
(during the second loop) *after* it finishes reading from an input, e.g. in a
filter graph like -af atrim,apad.
This patch avoids the issue by querying the status of all filter graph inputs
after each round of reading output frames, and explicitly closing any that
were closed downstream. As a result, information about the filtergraph being
closed can now propagate back upstream, even if the filter is no longer
requesting any input frames (i.e. input_idx == fg->nb_inputs).
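A rough sketch of that closing pass; the names are modeled on
ffmpeg_filter.c, but input_status() and close_input() are hypothetical
stand-ins for the actual status query and EOF propagation:

for (unsigned i = 0; i < fg->nb_inputs; i++) {
    if (fgt->eof_in[i])
        continue;                    // this input is already closed
    if (input_status(fg, i) < 0) {   // closed downstream?
        fgt->eof_in[i] = 1;
        close_input(fgt, i);         // propagate EOF back upstream
    }
}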
Fixes: https://trac.ffmpeg.org/ticket/11061
See-Also: https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/20457#issuecomment-6208
Ensure that the fragment closing the header (`</div>\"]`) is written even
when `mermaid_print_section_header` is called only once, which happens when
the filtergraph has no inputs.
When the furthest-behind stream is being fed by a demuxer that is also
feeding packets to a choked filter graph, we need to unchoke that filter
graph to prevent the demuxer from getting stuck trying to write packets to
it.
This situation can also apply recursively: if the demuxer is also writing
to a filtergraph that is reading from a choked demuxer, there is a
similar deadlock.
Solve all such deadlocks by just brute-force recursively unchoking all
nodes that can somehow prevent this demuxer from writing packets. This
should normally not result in any change in behavior, unless audio/video
streams are badly desynchronized, in which case it may result in extra
memory usage from the too-far-ahead stream buffering packets inside the
muxer. (But this is, of course, preferable to a deadlock.)
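A minimal sketch of that traversal; SchNode, nb_blockers and blockers are
hypothetical, as the real scheduler types differ:

#include <stdint.h>

typedef struct SchNode {
    unsigned         idx;
    int              choked;
    unsigned         nb_blockers;
    struct SchNode **blockers;   // nodes this node is waiting on
} SchNode;

static void unchoke_all_blockers(SchNode *node, uint8_t *visited)
{
    if (visited[node->idx])
        return;               // already handled; also breaks cycles
    visited[node->idx] = 1;
    node->choked = 0;         // let this node make progress again
    for (unsigned i = 0; i < node->nb_blockers; i++)
        unchoke_all_blockers(node->blockers[i], visited);
}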
Fixes: https://code.ffmpeg.org/FFmpeg/FFmpeg/issues/20611
The graph string is either freed or attached to the filtergraph, so it's best
not to leave the caller with a dangling pointer.
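A sketch of the ownership handoff; the function shape is hypothetical,
loosely modeled on fg_create():

#include "libavutil/error.h"
#include "libavutil/mem.h"

typedef struct FilterGraph { char *graph_desc; } FilterGraph; // trimmed

int fg_create(FilterGraph **pfg, char **graph_desc)
{
    FilterGraph *fg = av_mallocz(sizeof(*fg));
    if (!fg)
        return AVERROR(ENOMEM);
    fg->graph_desc = *graph_desc;  // the filtergraph owns the string now
    *graph_desc    = NULL;         // caller keeps no dangling pointer
    *pfg = fg;
    return 0;
}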
Signed-off-by: James Almer <jamrial@gmail.com>
For the purpose of merging streams in a stream group, a filtergraph can't be
created only once we know it will be used; it has to exist in advance.
Therefore, allow filtergraphs to be removed from the scheduler after being
added.
Signed-off-by: James Almer <jamrial@gmail.com>
Several formats are being designed where more than one independent video or
audio stream within a container is part of what should be a single combined
output. This is the case for HEIF (defining a grid in which the decoded
output of several separate video streams is placed to generate a single
output image) and IAMF (defining audio streams whose channels are present in
separate coded streams).
AVStreamGroup was designed and implemented in libavformat to convey this
information, but the actual merging is left to the caller.
This change allows the FFmpeg CLI to take said information, parse it, and
create filtergraphs to merge the streams, making the combined output usable
automatically further along in the process.
Signed-off-by: James Almer <jamrial@gmail.com>
This keeps global and per-frame side data clearly separated, and actually
propagates the former as it comes out of the buffersink filter.
Signed-off-by: James Almer <jamrial@gmail.com>
Certain parameters, like calculated framerate, are unavailable when connecting
the output of a filtergraph to the input of another.
This fixes command lines like
ffmpeg -lavfi "testsrc=rate=1:duration=1[out0]" -filter_complex "[out0]null[out1]" -map [out1] -y out.png
Signed-off-by: James Almer <jamrial@gmail.com>
The function name 'file_read' is too generic and conflicts with a function
of the same name in the NuttX kernel. Since NuttX links kernel and userspace
into a single binary, this causes a symbol collision when building FFmpeg tools.
Signed-off-by: caifan3 <caifan3@xiaomi.com>
Add "license" as a long-form command line option alongside the existing
"L" short option for showing license information. This maintains
consistent option naming patterns with other commands that provide both
short and long forms (help/?/help, etc.) and improves command line
usability by providing more descriptive option names.
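A sketch of the corresponding options-table entries; the OptionDef layout
and flags vary between fftools versions, so take this as illustrative only:

static const OptionDef options[] = {
    /* ... */
    { "L",       OPT_TYPE_FUNC, OPT_EXIT, { .func_arg = show_license },
      "show license" },
    { "license", OPT_TYPE_FUNC, OPT_EXIT, { .func_arg = show_license },
      "show license" },
    /* ... */
};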
This allows upstream filters to observe EOF on their corresponding outputs
and terminate early, which is particularly useful for decoders and demuxers
that may then gracefully release their resources.
Prevents "leaking" memory for previously used, but now unused, filter inputs.
This is currently done by sch_demux_send() (via demux_stream_send_to_dst()),
sch_enc_send() (via enc_send_to_dst()), and sch_dec_send() (via
dec_send_to_dst()), but not by sch_filter_send().
Implement the same queue-closing logic there. The main benefit here is that
this will allow them to mark downstream inputs as send-done (in addition
to received-done), which is useful for a following commit.
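A sketch of the same pattern, modeled on the existing *_send_to_dst()
helpers; dst_queue() and mark_send_done() are hypothetical stand-ins for
the actual queue lookup and bookkeeping:

static int filter_send_to_dst(Scheduler *sch, SchedulerNode dst,
                              AVFrame *frame)
{
    int ret = tq_send(dst_queue(sch, dst), dst.idx, frame);
    if (ret < 0) {
        // the receiver is gone: close our side of the queue and mark
        // the destination as send-done in addition to received-done
        tq_send_finish(dst_queue(sch, dst), dst.idx);
        mark_send_done(sch, dst);
    }
    return ret;
}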
Cut off a choked demuxer's output codec/filter queues, effectively preventing
them from processing packets while the demuxer is choked. This keeps
downstream nodes from piling up extra input that the demuxer shouldn't
currently be sending.
The main benefit of this is avoiding the queuing of excess packets that
shouldn't be decoded yet, reducing memory consumption for idle inputs by
preventing them from being read earlier than needed.
Currently, when a demuxer thread is choked, it will avoid queuing more
packets, but any packets already present on the thread queue will still be
processed.
This can be quite wasteful if the choke is due to e.g. the decoder not being
needed yet, such as in a filter graph involving concatenation-style filters.
Adding the ability to propagate the choke status to the thread queue directly
allows downstream decoders and filter graphs to avoid unnecessary work and
buffering.
Reduces the effective latency between scheduler updates and changes in the
thread workload.
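A minimal sketch of the mechanism, assuming a hypothetical choked flag on
the queue (the real ThreadQueue in fftools is opaque and its API differs):

#include <stdatomic.h>

typedef struct ChokableQueue {
    atomic_int choked;
    /* ... storage, mutex, condition variables ... */
} ChokableQueue;

// scheduler side: propagate the choke decision to the queue itself
static void queue_set_choked(ChokableQueue *q, int choked)
{
    atomic_store(&q->choked, choked);
}

// consumer side: leave already-queued packets untouched while choked
static int queue_try_receive(ChokableQueue *q, void **item)
{
    if (atomic_load(&q->choked))
        return -1;  // back off; retry once the scheduler unchokes us
    /* ... pop one queued item into *item ... */
    return 0;
}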
Currently, while the filter graph is first being created, the scheduler
continues demuxing frames on the last input that happened to be active before
the filter graph was complete.
This can lead to an excess number of decoded frames "piling" up on this input,
regardless of whether or not it will actually be requested by the configured
filter graph. Suspending the filter graph during this initialization phase
reduces the amount of wasted memory.
This field just saves (typically) a single pointer indirection, and IMO makes
the logic and graph relations unnecessarily complicated. I am also considering
adding choking logic to decoders and encoders, which this field would get in
the way of.
Apart from the unchoking logic in unchoke_for_input(), the only other place
that uses this field is the (cold) function check_acyclic(), which can be
served just as well with a simple function to do the graph traversal there.
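Such a traversal can be a plain DFS with a recursion stack; a sketch, where
Scheduler, nb_deps() and dep() are stand-ins for the real types and
accessors:

#include <stdint.h>

static int has_cycle(const Scheduler *sch, unsigned idx,
                     uint8_t *visited, uint8_t *on_stack)
{
    if (on_stack[idx])
        return 1;              // back edge: the graph is cyclic
    if (visited[idx])
        return 0;              // subtree already proven acyclic
    visited[idx] = on_stack[idx] = 1;
    for (unsigned i = 0; i < nb_deps(sch, idx); i++)
        if (has_cycle(sch, dep(sch, idx, i), visited, on_stack))
            return 1;
    on_stack[idx] = 0;
    return 0;
}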
I tested this extensively under different conditions and could not come up
with any scenario where using a larger queue size was actually beneficial.
Moreover, having such a large default queue is very wasteful, especially for
larger frame sizes, and can in the worst case lead to an extra ~50% memory
footprint per input (with the default 16 threads), regardless of whether that
input is currently in use or not.
My methodology was to add logging in the event of a queue underrun/overrun,
and then observe the frequency of such events in practice, as well as the
impact on performance. I came up with an example filter graph involving
decoding, filtering and encoding, with several input files and various
changes to move the bottleneck around.
I found that, in all configurations I tested, with all thread counts and
bottlenecks, using a queue size of 2 frames yielded practically identical
performance to a queue size of 8 frames. I was only able to consistently
measure a slowdown when restricting the queue to a single frame, where the
underruns ended up making up almost 1.1% of frame events in the worst case.
A summary of my test log follows:
= Bottleneck in decoder =
ffmpeg -i A -i B -i C -filter_complex "concat=n=3" -f null -
== 16 threads ==
=== Queue statistics (dec -> filtergraph) ===
- 8 frames = 91355 underruns, 1 overrun
- 4 frames = 91381 underruns, 2 overruns
- 2 frames = 91326 underruns, 21 overruns
- 1 frame = 91284 underruns, 102 overruns
=== Time elapsed ===
- 8 frames = 14.37s
- 4 frames = 14.28s
- 2 frames = 14.27s
- 1 frame = 14.35s
== 1 thread ==
=== Queue statistics (dec -> filtergraph) ===
- 8 frames = 91801 underruns, 0 overruns
- 4 frames = 91929 underruns, 1 overrun
- 2 frames = 91854 underruns, 7 overruns
- 1 frame = 91745 underruns, 83 overruns
=== Time elapsed ===
- 8 frames = 39.51s
- 4 frames = 39.94s
- 2 frames = 39.91s
- 1 frame = 41.69s
= Bottleneck in filter graph =
ffmpeg -i A -i B -i C -filter_complex "concat=n=3,scale=3840x2160" -f null -
== 16 threads ==
=== Queue statistics (dec -> filtergraph) ===
- 8 frames = 277 underruns, 84673 overruns
- 4 frames = 640 underruns, 86523 overruns
- 2 frames = 850 underruns, 88751 overruns
- 1 frame = 1028 underruns, 89957 overruns
=== Time elapsed ===
- 8 frames = 26.35s
- 4 frames = 26.31s
- 2 frames = 26.38s
- 1 frame = 26.55s
== 1 thread ==
=== Queue statistics (dec -> filtergraph) ===
- 8 frames = 29746 underruns, 57033 overruns
- 4 frames = 29940 underruns, 58948 overruns
- 2 frames = 30160 underruns, 60185 overruns
- 1 frame = 30259 underruns, 61126 overruns
=== Time elapsed ===
- 8 frames = 52.08s
- 4 frames = 52.49s
- 2 frames = 52.25s
- 1 frame = 52.69s
= Bottleneck in encoder =
ffmpeg -i A -i B -i C -filter_complex "concat=n=3" -c:v libx264 -preset veryfast -f null -
== 1 thread ==
=== Queue statistics (filtergraph -> enc) ===
- 8 frames = 26763 underruns, 63535 overruns
- 4 frames = 26863 underruns, 63810 overruns
- 2 frames = 27243 underruns, 63839 overruns
- 1 frame = 27670 underruns, 63953 overruns
=== Time elapsed ===
- 8 frames = 89.45s
- 4 frames = 89.04s
- 2 frames = 89.24s
- 1 frame = 90.26s
The code in the decoder just cares about allocating enough extra hw frames
to cover the size of the queue, but there's no reason we actually *have* to
use this many. We can safely relax the assertion to a <= check.
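Sketched in plain C (the assert's actual operands in the decoder differ):

av_assert0(nb_used_hw_frames <= nb_allocated_hw_frames);  // was: ==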
This placement is required by the standard [[maybe_unused]] attribute, and
works the same for __attribute__((unused)).
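For illustration (variable names made up), the accepted placement:

[[maybe_unused]] static int trace_enabled;          // C23 standard form
__attribute__((unused)) static int trace_enabled2;  // GNU form, same spot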
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
Add a separator line to the show_filters() output to maintain
consistent formatting with other show functions like show_codecs,
show_formats_devices, etc. This provides uniform formatting across
all command-line output functions, for better readability and
parsing by external tools.
And add the missing goto fail. This should ensure the alpha mode is set and remove
bogus warnings printed by ffplay.
Signed-off-by: James Almer <jamrial@gmail.com>
Commit d119ae2fd8 removed the loop-breaking condition on
received_sigterm.
Thus, signals no longer gracefully shut down ffmpeg.
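The general shape of the restored pattern (a sketch, not the exact ffmpeg
main loop; transcode_step() is a stub):

#include <signal.h>

static volatile sig_atomic_t received_sigterm;

static void sigterm_handler(int sig)
{
    received_sigterm = sig;  // async-signal-safe: only set a flag
}

static int transcode_step(void) { return -1; }  // stub iteration

int main_loop(void)
{
    signal(SIGINT,  sigterm_handler);
    signal(SIGTERM, sigterm_handler);
    // the loop-breaking condition this change restores:
    while (!received_sigterm && transcode_step() >= 0)
        ;
    return 0;
}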
Fixes: #10834
Signed-off-by: Patrick Wang <mail6543210@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
We don't need to print the tags here because they're added as dict
elements to AVFrame->metadata and are printed elsewhere with ffprobe
-show_frames.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
instead of only AV-specific options. The previous code assumed that any
option in an `ffpreset` file other than the one selecting the codec is an
AVOption. This, for example, prevented the use of options defined in
`OptionDef[]`, like `-pix_fmt`, in preset files, requiring users to type
these out every time.
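For example, a preset file along these lines (contents hypothetical) is now
applied as a whole, rather than only its AVOption lines:

# mypreset.ffpreset
# AVOption: worked before
crf=23
# OptionDef option like -pix_fmt: works with this change
pix_fmt=yuv420p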
Closes: #1530
Signed-off-by: Andreas Hartmann <hartan@7x.de>
Fixes: signed integer overflow: 10 * 1952737655 cannot be represented in type 'int'
Fixes: PoC_avi_demux
Found-by: 2ourc3 (Salim LARGO)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>