ctx->options.async does not exist on DnnContext; the correct
field is ctx->async directly on the context struct.
Signed-off-by: younengxiao <steven.xiao@amd.com>
Why: the change is done to comply with lilv expectations of hosts.
Added a call to lilv_instance_activate() in the config_output function to abide by the lilv documentation, which states it must be called before lilv_instance_run():
"This MUST be called before calling lilv_instance_run()" - documentation source (https://github.com/lv2/lilv/blob/main/include/lilv/lilv.h)
Added a call to lilv_instance_deactivate() in the uninit function to abide by the LV2 documentation:
"If a host calls activate(), it MUST call deactivate() at some point in the future" - documentation source (https://gitlab.com/lv2/lv2/-/blob/main/include/lv2/core/lv2.h)
Added an instance_activated integer to the LV2Context struct to track whether the instance was activated, and only call lilv_instance_deactivate() if it was, to abide by the LV2 documentation:
"Hosts MUST NOT call deactivate() unless activate() was previously called." - documentation source (https://gitlab.com/lv2/lv2/-/blob/main/include/lv2/core/lv2.h)
Regarding the patcheck warning (possibly constant :instance_activated):
This is a false positive since the struct member is zero-initialized.
Fixes: trac issue #11661 (https://trac.ffmpeg.org/ticket/11661)
Reported-by: Dave Flater
Signed-off-by: Karl Mogensen <karlmogensen0@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If s->stop is set, the return value would be overwritten
before being checked. This bug was introduced in the switch
to AV_TX in 014ace8f98.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Added in e995cf1bcc,
yet this filter does not have any dsp function using MMX:
it only has generic x86 assembly, no SIMD at all,
so this emms_c() was always unnecessary.
Reviewed-by: Kacper Michajłow <kasper93@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Fixes: signed integer overflow: 536870944 * 16 cannot be represented in type 'int'
Fixes: #21587
Found-by: HAORAN FANG
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
There are two options that use a non-zero default value: async and batch_size of openvino. init_model_ov checks and sets batch_size to one when batch_size equals zero, so the only option affected by the missing default value is async. Now async works as expected.
This commit updates the deinterlace_d3d12 filter option names. Currently they follow the option names of "deinterlace_vaapi"; with this commit, they follow filters such as "yadif" and "w3fdif".
Sample command lines:
1. Software decode with hwupload:
ffmpeg -init_hw_device d3d12va=d3d12 -i interlaced.ts \
-vf "format=nv12,hwupload,deinterlace_d3d12=method=default,hwdownload,format=nv12" \
-c:v libx264 output.mp4
2. Full hardware pipeline:
ffmpeg -hwaccel d3d12va -hwaccel_output_format d3d12 -i interlaced.ts \
-vf "deinterlace_d3d12=method=custom:mode=field" \
-c:v h264_d3d12va output.mp4
Signed-off-by: younengxiao <steven.xiao@amd.com>
This patch implements the DNNAsyncExecModule for the LibTorch backend,
enabling non-blocking inference using the common infrastructure instead
of custom threading (th_async_module_submit) to align with the
TensorFlow and OpenVINO backends.
The implementation uses ff_dnn_start_inference_async which provides
unified async logic across all DNN backends, eliminating the need for
backend-specific threading code.
Verified with:
ffmpeg -f lavfi -i testsrc=duration=5:size=320x240:rate=30 -vf dnn_processing=dnn_backend=torch:model=model.pt -y output.mp4
Signed-off-by: Raja Rathour <imraja729@gmail.com>
Also deduplicate printing of JSON and summary output.
Reviewed-by: Kyle Swanson <k@ylo.ph>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Descriptor buffers were a neat attempt at organizing descriptors.
Simple, robust, reliable.
Unfortunately, driver support never caught on, and neither did validation
layer support.
Now they're being replaced by descriptor heaps, which promise to be
the future. We'll see how it goes.
fixup 80229c1
[scale_vulkan @ 0000028b1c2c1300] scale:31: error: 'texture' : no matching overloaded function found
scale:31: error: 'return' : cannot convert return value to function return type
Signed-off-by: nyanmisaka <nst799610810@gmail.com>
loudnorm provides stats output that's meant to be used for two-pass
normalization. These stats are often interleaved with ffmpeg's stream
descriptions and other output, making them difficult to parse and pass
to the second pass. The new stats_file option enables writing the stats
to a separate file, or to standard output, for simple parsing and other
programmatic usage.
Signed-off-by: Adam Jensen <adam@acj.sh>
Given we now align both dimensions when allocating buffers, don't reinitialize
the pool when dealing with dimension changes that will not affect the existing
pool size.
Signed-off-by: James Almer <jamrial@gmail.com>
Starting with PyTorch 2.6, the hooks API was redesigned so it no longer depends on the device type.
As part of this change, the XPU initialization function was renamed from initXPU() to init().
Add a version check to support both old and new LibTorch versions.
Signed-off-by: younengxiao <steven.xiao@amd.com>
When using `dnn_processing` filter with torch backend, FFmpeg hangs indefinitely because no inference is actually performed.
Resolve this by adding an "else" branch for the synchronous execution path.
Usage:
ffmpeg -i input.mp4 -vf scale=224:224,format=rgb24,dnn_processing=dnn_backend=torch:model=sr_model_torch.pt:device=cpu output.mp4
Return the data in an AVFrame instead. This is what several users
({find,cover}_rect*) want anyway. This also avoids accessing
AVFrame.format (an int) via an enum AVPixelFormat*.
*: This commit actually avoids two frame copies for find_rect:
av_frame_clone() contained an implicit alloc+copy.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also rename it to ff_pixfmt_is_in(). This is more type-safe;
in particular, it is required to support -fshort-enums.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>