To avoid pulling in the entire libavfilter when using the DSP functions
from checkasm.
The rest of the struct is not needed outside vf_idet.c and was moved there.
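As a rough sketch of the resulting split (treat the exact layout as
illustrative): the shared header keeps only the DSP entry point that
checkasm exercises, e.g.:

    /* vf_idet.h: all that checkasm needs to link against. */
    typedef int (*ff_idet_filter_func)(const uint8_t *a, const uint8_t *b,
                                       const uint8_t *c, int w);

    /* The remaining context fields (frame history, scores, options)
       stay private to vf_idet.c. */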
Depending on the threading backend the stdlib uses, creating a
mutex/condvar can be quite expensive.
So keep this object alive in the ctx, on which we synchronize via the
uninit mutex anyway.
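A minimal sketch of the idea, with hypothetical names (the actual
context layout differs):

    typedef struct FooContext {
        pthread_mutex_t uninit_mutex; /* already guards init/uninit */
        pthread_cond_t  done_cond;    /* now created once, kept alive */
        int             cond_inited;
    } FooContext;

    /* Under the uninit mutex, create the condvar lazily and keep it for
       the lifetime of the context instead of recreating it every time. */
    pthread_mutex_lock(&s->uninit_mutex);
    if (!s->cond_inited) {
        pthread_cond_init(&s->done_cond, NULL);
        s->cond_inited = 1;
    }
    pthread_mutex_unlock(&s->uninit_mutex);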
Apparently, using a normal frame pool in a multithreaded environment
leads to strange resource leaks on shutdown, which vanish when using a
free-threaded pool.
Add a new flag to the vf_colorspace filter which provides the user an
option to clamp the linear and delinear transfer characteristics LUT
values to the [0, 1] represented range. This helps constrain the
potential value range when converting between colorspaces.
Certain colors can end up out of gamut after the primaries rotation in
the conversion; the colorspace filter allows that via its extended
range. The added clamping just keeps the colors within the [0, 1]
range rather than using that extended range. I'm not enough of a
color scientist to say which is correct, but there are certain
situations where we would prefer to keep the colors in gamut.
The example I have is:
A solid color image of 8-bit YUV: Y=157, U=164, V=98.
Specify the input as:
Input range: MPEG
In color matrix: BT470BG
In color primaries: BT470M
In color transfer characteristics: Gamma 28
Output as:
Out color range: JPEG
Out color matrix: BT.709
Out color primaries: BT.709
Out color transfer characteristics: BT.709
During the calculation you get:
Input YUV: y=157, u=164, v=98
Post-yuv2rgb BT.470BG: r=0.456055, g=0.684152, b=0.928606
Post-apply gamma28 linear LUT: r=0.110979, g=0.345494, b=0.812709
Post-color rotation BT.470M to BT.709: r=-0.04161, g=0.384626, b=0.852400
Post-apply Rec.709 delinear LUT: r=-0.16382, g=0.615932, b=0.923793
Post-rgb2yuv Rec.709 matrix: y=120, u=190, v=25
Where with this change, the delinear LUT output would be clamped to 0,
so the result would be:
r=0.000000, g=0.612390, b=0.918807 and a final output of
y=129, u=185, v=46
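A minimal sketch of what the clamping amounts to when quantizing LUT
entries, using the av_clipd()/av_clip_int16() helpers from
libavutil/common.h (function name and scale factor here are
illustrative, not the filter's actual code):

    static int16_t quantize_lut_entry(double v)
    {
        v = av_clipd(v, 0.0, 1.0);                /* new: clamp in-gamut */
        return av_clip_int16(lrint(v * 28672.0)); /* illustrative scale */
    }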
As for the long and av_clip64, this was just because lrint returned a
long, so I left it as that and then used av_clip64 to the [0,1) range to
avoid overflow. But re-reading, it looks like av_clip_int16 would
downcast that long to int anyway, so the possibility of overflow already
existed there. I've put it back to int just to match the existing
behavior.
Combining C++ with our w32pthreads does not work out.
The C++ standard library headers might include pthread.h, which then
defines types that clash with our w32pthreads header.
To solve this, switch to using C++ native std::thread.
This is a newly added field, so there's no point in trying to keep backwards
compatibility with an older API - newer clients should just use the new
fields.
Instead of just 2 files, generalize this filter to support crossfading
arbitrarily many files. This makes the filter essentially operate similarly
to the `concat` filter, chaining multiple files one after another.
Aside from just adding more input pads, this requires rewriting the
activate function. Instead of a finite state machine, we keep track of the
currently active input index, and advance it only once the current input is
fully exhausted.
This results in arguably simpler logic overall.
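Roughly, the rewritten activate logic looks like this (helper names and
the s->cur_input/s->pts fields are hypothetical; the real code also has
to drive the crossfade overlap itself):

    static int activate(AVFilterContext *ctx)
    {
        CrossfadeContext *s = ctx->priv;

        /* Keep consuming from the currently active input. */
        if (!input_exhausted(ctx, s->cur_input))     /* hypothetical */
            return process_input(ctx, s->cur_input); /* hypothetical */

        /* Advance only once the current input is fully drained. */
        if (s->cur_input + 1 < ctx->nb_inputs) {
            s->cur_input++;
            return 0;
        }

        /* All inputs consumed: propagate EOF downstream. */
        ff_outlink_set_status(ctx->outputs[0], AVERROR_EOF, s->pts);
        return 0;
    }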
This behavior is currently completely broken, leading to an abrupt end of the
first audio stream. I want to generalize this filter to multiple inputs, but
input files shorter than the crossfade window will always pose a significant
problem.
I considered a few approaches for how to handle this more gracefully, but
most of them come with their own problems; in particular when a short input
is sandwiched between two longer ones; or when there is a sequence of short
inputs. In the end, it's simplest to just shorten the crossfade window.
I also considered (and tested) padding the input with silence, but this also
has its own aesthetic implications and strange edge cases.
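In code, the fix boils down to something like this (field names
hypothetical):

    /* Never crossfade over more samples than the shorter side can
       actually provide. */
    nb_xfade = FFMIN(s->nb_samples, samples_left_in_current_input);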
See-Also: https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/20388
There's no reason to use a completely separate graph just to process the
alpha plane in isolation - zimg supports native alpha handling as part of the
main image.
Fixes several issues with this filter when adding or removing alpha planes,
and also adds support for scaling premultiplied alpha (which reduces artefacts
near the image borders).
Fortunately, we only care if this flag is set - otherwise, this filter is
alpha mode agnostic (since it is purely scaling, etc).
That said, there is an argument to be made that we should prefer premul
alpha when scaling, because scaling in straight alpha will leak garbage
pixels; but I think that would be too backwards-incompatible a change to
be worth considering at this time.
The overwhelming majority of references to ff_draw_init2() just set the
colorspace properties from a filter link. This wrapper allows us to update
all such references automatically.
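A sketch of such a wrapper, assuming the ff_draw_init2() signature from
drawutils.h (the actual name and signature in the patch may differ):

    static inline int ff_draw_init_from_link(FFDrawContext *draw,
                                             const AVFilterLink *link,
                                             unsigned flags)
    {
        return ff_draw_init2(draw, link->format, link->colorspace,
                             link->color_range, flags);
    }

Call sites then pass just the link instead of spelling out the format,
colorspace and range individually.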
Conceptually, these are pretty simple to handle: they are basically
equivalent to the case of alpha being absent, since the only thing the
destination alpha plane is used for is unpremultiplying straight alpha.
For planar formats, the total number of cases doesn't change in principle,
since before this patch we had:
- alpha present, overlay straight
- alpha present, overlay premultiplied
- alpha absent, overlay straight
- alpha absent, overlay premultiplied
And now we have:
- main straight, overlay straight
- main straight, overlay premultiplied
- main premultiplied, overlay straight
- main premultiplied, overlay premultiplied
That said, we do gain some cases now that we used to (incorrectly) ignore,
like premultiplied yuva420p10.
Notably, we can skip defining separate functions for the case of main alpha
being absent, since that can be a single cheap branch inside the function
itself, on whether or not to also process the alpha plane. Otherwise, as long
as we treat "alpha absent" as "main premultiplied", the per-plane logic will
skip touching the nonexistent alpha plane.
The only format that actually needs extra cases is packed rgb, but that's only
two additional cases in total.
Also arguably simplifies the assignment logic massively.
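For reference, the per-plane logic reduces to the standard "over"
operator on premultiplied values (a sketch, not the filter's actual
code):

    /* dst is premultiplied (or alpha-less, which behaves identically).
       A straight overlay pixel gets premultiplied on the fly; a missing
       main alpha plane just means the alpha-plane store is skipped. */
    static float blend_over(float dst, float src, float src_a,
                            int src_premul)
    {
        float s = src_premul ? src : src * src_a;
        return s + dst * (1.0f - src_a);
    }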
Chooses the desired output alpha mode. Note that this depends on
an upstream version of libplacebo new enough to respect the corresponding
AVFrame field in pl_map_avframe_ex.
While vf_scale cannot directly convert between premultiplied and straight
alpha, the effective tagging can still change as a result of a change in
the pixel format (i.e. adding or removing an alpha channel).