Also fixes peak indicators *not* resetting when muting via the track header (they were only reset when using the Mixer's mute buttons)
Regression introduced in 25.08.0 compared to 25.04.3 and earlier
On my system (Arch Linux, Qt 6.9.1, KDE Frameworks 6.17.0) the Clip Properties Panel looks like this:

I assume it hasn't always looked like this, so maybe this is a Breeze or Qt bug?
I found a workaround by overriding the style sheet for QTabBar; this is how it looks then:

Another alternative would be to set DocumentMode to false; then these ugly gray overlays are not shown either. It would then look like this on my system:

I tested the 24.12 AppImage and got the same result, so it's probably not a recent change/bug in Breeze/Qt.
The docs about QTabBar's DocumentMode mention macOS, so maybe this is specific to Linux? 🤔 Can you check if you can reproduce this issue on Windows/macOS?
AFAIK it did not have any negative effects except spamming the logs with
```
QMetaObject::invokeMethod: No such method ClipMonitor_QMLTYPE_80::updatePoints(QVariant,QVariant)
```
This became apparent when using built-in effects, which add a disabled transform effect to all clips.
This bug was introduced in b8ffac30, but I don't remember or understand why I wanted to force this update even if the effect is disabled... This commit reverts that change and removes the hasRotation condition.
This bug happened when loading a project that uses subtitles and has subs on lower layers (not layer 0).
In this case all subs were visually shown at the top (layer 0) even though they should be shown on their lower layers.
* Replace the neon green filled square with a Save icon. The icon should hopefully make it more self-explanatory what this indicator is for. The neon green didn't fit in well with the rest of the theme's color palette and was a bit too distracting, I think
* Adjust the background color, which was a similar but slightly different color than the Layout Switcher's, and only show it when the icon is shown. Previously, when the indicator was not showing, it looked a bit off as there was a visually larger gap on the left side of the switcher compared to its right
SVT-AV1 code uses the ffmpeg option `crf` to set the rate factor for
VBR and constrained VBR mode, rather than `qscale`.
Preserve custom quality scale when editing presets.
BUG: 492708
* Fixes #2033
* Make position, Size, Rotation text overlays more readable by flipping the text on high rotation angles
* Keep rotation angles between -360 and + 360 degrees so it matches the range that's used for the rotation parameter in the Transform effect in the Effects Panel
* Adjust the resize corner and edge handles' cursor shape depending on the rotation angle so they point roughly in the correct direction (e.g. if rotated by 90 deg the top edge handle visually becomes the right edge handle)
* When resizing with rotation, keep the rectangle in its position by aligning/moving it after the resize so it lines up with its rotated position (e.g. at 45 deg rotation, when the user pulls the bottom-left corner we keep the rectangle fixed to its top-right corner)
* Enable antialiasing for drawing the red Rectangle so it looks smoother when rotated
* Fix circular updates Monitor -> cpp -> Monitor when resizing, moving or rotating via the Monitor. There was - I assume - a partial fix for this in the code (`updateEffectRect` called via Monitor/cpp), but it didn't work and was spamming `QMetaObject::invokeMethod: No such method QQuickItem_QML_317::updateEffectRect(QRect)`. Implemented a different approach where the QML side ignores updates from cpp while it is in a moving, resizing or rotating operation.
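The angle clamping from the second bullet can be sketched as follows (a hypothetical standalone helper, not the actual effect code):

```cpp
#include <cassert>
#include <cmath>

// Keep an accumulated rotation angle within (-360, 360) so it matches
// the range of the Transform effect's rotation parameter.
double normalizeRotation(double angle)
{
    // std::fmod keeps the sign of the dividend, so the result already
    // lies in (-360, 360); full turns collapse to 0.
    return std::fmod(angle, 360.0);
}
```

So an accumulated drag of 450 deg maps to 90 deg, and -450 deg to -90 deg, matching the Effects Panel range.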
We've been using the Link color instead of Highlight. For the default Breeze theme these are practically the same color, but for other non-bluish themes this looked out of place, as the Link color is blue in all tested themes while we use the Highlight color in all other places to highlight text/icons.
* Fix spin box non-neutral value not getting styled when initially set (it worked only when changed by the user, but not on initial project loading) -> extracted this logic into a custom class so updating the style after updating the value cannot be forgotten
* Fix not updating the neutral value of the spin box when switching to recording mode (we only updated the slider's neutral value position)
BUG: 465766
* the minimum height of the subtitle edit widget was reduced in bug 506899 by JBM.
* tested the 5 default layouts and they now fit a screen with a low resolution of 1280x720
BUG: 503985
Fixes adding results twice, which produced garbage transcription results. E.g. with the small-en-us model it would produce lots of 'the' subtitles, as reported in the linked bug report.
* fix: repaint background on initial palette change (the widget background was drawn using the default/light theme instead of the selected theme -> replaced our custom event (which is only fired on manual theme change) with Qt's default PaletteChange event)
* fix: use correct bottom position when drawing the bars (was off-by-one)
* change: make style consistent with color scopes (background, border and line colors)
* change: draw vertical lines at frequency label position
* change: use white instead of green for the bars. I think this could avoid confusion, as this green is also used for the first audio channel when drawing the audio thumbnails, so it could suggest that the Audio Spectrum shows only the first channel of the audio. I checked the fft filter in MLT and it uses the average of all channels, so it's probably better to avoid green (a channel color) here and go with a more neutral fill color.
* removed the custom greenish/reddish background color for audio tracks (it doesn't work well with most color themes, especially our default Breeze, as it's using blue accents)
* use the same background color for video and audio tracks (I think the difference in track headers is enough, so we don't need another visual clue here, and this reduces a potential distraction)
* moved the timeline focus indicator from a highlighted top border to a highlighted timecode in the timeline toolbar, similar to what we do when highlighting the Monitors. This reduces visual overload, as the previous highlight line is very close to the multiple other highlight lines from the widget tab groups above (at least if the user did not move their tabs from bottom to top positioning)
* changed the track settings / timeline toolbar settings icon to a good ol' hamburger menu icon. This is more consistent with menus in other widgets; the previously used settings icon also looks very similar to the Audio Mixer button
* fix: update the Monitor timecode and the MainWindow's ToolMessage after the user changes the color scheme and redraw them using the new colors
* fix: use palette text color for I/Q and HUD text (text was not visible on light themes)
* fix: use palette dark color for the circle border on light themes (the border was barely visible on light themes)
* fix: consider circle border pen width when drawing (circle was cut-off by 1px)
* change: enable antialiasing for drawing the HUD circle
* change: use palette highlight color for drawing the HUD (more consistent with other color scopes)
* fix: add a small border around the Waveform to guarantee good contrast regardless of the surrounding background color (this was a problem on light themes)
* fix: make axis line color less distracting / lower opacity (now uses same color as used in RGB parade)
* fix: prevent clipping bottom y value
* change: instead of drawing the current y value in the HUD scale below/above the top/bottom scale values to prevent clipping, hide the scale value and only show the current y value
* change: draw the current y value and horizontal line in highlight color instead of text color (I think this looks better when drawing the parade in black/white color mode only; same color as used in the RGB parade)
* change: use same margin between drawing area and scale on the right for both Waveform and RGB Parade
* fix: add a small border around the parade to guarantee good contrast regardless of the surrounding background color (this was a problem on light themes)
* fix: fill the area between individual color channels with this border color instead of the palette background color (on light themes this looks better, as there is too much contrast between the parade background and the palette background)
* fix: consider device pixel scaling when drawing axis (fixed bug on HiDPI displays where the axis did not fully extend to the parade right border)
* fix: make axis line color less distracting
* change: instead of drawing the current y value in the HUD scale below/above the top/bottom scale values to prevent clipping, hide the scale value and only show the current y value
* change: draw the current y value and horizontal line in highlight color instead of text color (I think this looks better when drawing the parade in black/white color mode only)
* change: use same color for gradient reference line as is used for drawing the axis lines but with higher opacity
---
Most of these issues are regressions caused by my previous change to use the system palette for the surrounding area, like the min/max values or the drop-down selection at the top of the widget. The HiDPI bug is ancient and also present in the latest release.
Before:
<img src="/uploads/57df01a3162714c1e720148375ccddbe/before_breeze_light.png" width=200>
<img src="/uploads/5b66098b110f25ac36db9820bec38d2d/before_breeze_dark.png" width=200>
After:
<img src="/uploads/33243b7c868dcf748e9cc35b043f06d0/after_breeze_light.png" width=200>
<img src="/uploads/70d63fd9d0b054b6c11a526ff7899895/after_breeze_dark.png" width=200>
* restyle audio mixer
* create new styles for audio levels
* share the same component for audio levels in the mixer and monitor toolbar
* refactor mixer and level widgets
* make channels readable on non-stereo projects (especially on HiDPI displays with fractional scaling, where they have been pretty much unreadable)
implements: #2008
implements: #2010
Unfortunately the script to add the version to the appstream files
relies on cmake and at the moment it grabs the version from imath, which
is included with OpenTimeLineIO, which is fetched by default.
GIT_SILENT
Gave OTIO export a try and noticed that the video tracks were reversed when opening the .otio file with the official `otioviewer` app (audio tracks are fine).
Found this [bug report](https://bugs.kde.org/show_bug.cgi?id=503692) which describes the same issue using Davinci Resolve.
The documentation https://opentimelineio.readthedocs.io/en/latest/tutorials/otio-timeline-structure.html mentions this:
> Rendering of the image tracks in a timeline is done in painter order. The layers in a stack are iterated from the bottom (the first entry in the stack) towards the top (the final entry in the stack)
So when we have the timeline V3, V2, V1, A1, A2, A3 in Kdenlive, it seems that the tracks should be added to the stack in this order: V1, V2, V3, A1, A2, A3. I did not find an answer whether video or audio should/must come in a specific order, but this works with `otioviewer`, so it is probably fine.
I don't have an application to test this with other than `otioviewer` (or know anything else about OTIO 😅), so if someone has Davinci Resolve or something else that can import .otio files, it would be nice if you could test the issue and this change.
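The resulting ordering rule can be sketched like this (hypothetical helper and names; the real import/export code works on MLT/OTIO objects, not strings):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// OTIO stacks are painted bottom-up, so Kdenlive's top-down video
// track order (V3, V2, V1) has to be reversed before appending the
// audio tracks.
std::vector<std::string> toOtioStackOrder(std::vector<std::string> videoTracks,
                                          const std::vector<std::string> &audioTracks)
{
    std::reverse(videoTracks.begin(), videoTracks.end());
    videoTracks.insert(videoTracks.end(), audioTracks.begin(), audioTracks.end());
    return videoTracks;
}
```

With the Kdenlive timeline above this yields V1, V2, V3, A1, A2, A3, matching what `otioviewer` renders correctly.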
We call drawComponent with the bins in `y` and `max` always set to 256.
std::max_element takes two iterators, where the first is inclusive and the last exclusive, so with `&y[max - 1]` we always excluded the last/highest bin.
When you pass in a white color clip, there is only data in the highest bin (index 255). If we skip it, maxBinSize will be 0, leading to infinity in the log function, which leads to funny results when trying to call setPixel at minus infinity.
Note 1: It's not a crash, but the app becomes unresponsive/blocked and you can see infinite log messages complaining about setPixel.
Note 2: There is probably another issue unrelated to the Histogram. I would have expected this to die on almost all color clips like #ff0000 or #ffff00. For yellow, for example, I can reproduce it, but not for red. For some reason all these red pixels are not 255 but 254...
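The off-by-one can be illustrated in isolation (simplified sketch; `drawComponent` itself does much more than finding the maximum):

```cpp
#include <algorithm>
#include <cassert>

// y holds the histogram bins, max the number of bins (256). The end
// iterator passed to std::max_element is exclusive, so &y[max] is
// correct; the old code passed &y[max - 1] and thereby always skipped
// the last/highest bin.
int maxBinSize(const int *y, int max)
{
    return *std::max_element(&y[0], &y[max]);
}
```

For a white clip all data sits in bin 255, so the old exclusive-of-255 range returned 0 and the subsequent log() blew up.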
* Fixes a rounding error when converting between cpp and QML offsets. In cpp we use a top-left integer offset, but in QML centered x/y floating-point coords. This accumulated into a misalignment of a few pixels. It also made drawing the overlays/grids slightly off.
* Fixes the zoombar compensation when setting QML offsets. This was hardcoded to 10 pixels, but on my system it is 14. This resulted in major misalignment at higher zoom levels, as the error is multiplied by the zoom factor.
The 2nd problem is most noticeable when using rotoscoping at a high zoom level, but it affected all tools. Notice that the yellow rectangle is off, as well as the red rotoscoping mask border.
misaligned:

fixed:

The 1st problem was also noticeable when not zoomed. See the slight offset of the red rectangle at the bottom.
misaligned:

fixed:

I was pulling my hair out trying to understand what `10 * m_zoom` is supposed to be until I understood that it is supposed to be the size of the opposite zoombar...
This is definitely a fix for
https://bugs.kde.org/show_bug.cgi?id=498337
and I think also for
https://bugs.kde.org/show_bug.cgi?id=461219
but I'm not exactly sure about the 2nd report or whether the reporter meant something different.
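The first rounding error can be illustrated with a minimal sketch (hypothetical helpers; the real code converts between effect rects and QML item coordinates):

```cpp
#include <cassert>

// cpp stores a top-left integer offset, QML a centered floating-point
// coordinate. Computing the center with integer division loses half a
// pixel for odd sizes; such errors accumulate across repeated
// conversions into a visible misalignment.
double centerTruncated(int topLeft, int size) { return topLeft + size / 2; }
double centerExact(int topLeft, int size) { return topLeft + size / 2.0; }
```

For a 5-pixel-wide rect at x = 0 the truncated center is 2.0 while the exact center is 2.5, so every round trip can drift by half a pixel.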
When zoomed (gain > 1.0) we did not plot additional pixels in the zoomed region, so those remained as the fill color, producing visible artifacts and making it hard to see anything on the scope.
Fixed by interpolating these additional in-between pixels so we get a smooth, continuous view without artifacts.
Using QImage's SmoothTransformation method here for a smooth/blurry look. With FastTransformation we'd be 2x faster, but compared to the overall time it takes to draw the vectorscope, the scaling time doesn't really matter.
Testing on a 1080p clip:
Before:
**101_000 us**

After:
**102_000 us**
This includes scaling with SmoothTransformation which itself took **700 us**.

Interpolation Method:
I went with SmoothTransformation but other options would be possible.
FastTransformation would be faster at around **300 us**. But given the time it takes to draw the unscaled Vectorscope, this gain is pretty much negligible.
This is how it would look using FastTransformation instead of Smooth:

This is what it looks like in Davinci Resolve: https://youtu.be/m1F9TJzfo1s?feature=shared&t=483
To me this looks blurry, so I guess they use bilinear/smooth interpolation.
@emohr was in favor (via Chat) of following Davinci here.
* previously thumbs were loaded sequentially, which wasn't the best experience. This change speeds it up by running these requests in parallel
* according to the docs, QNetworkAccessManager executes up to 6 requests in parallel, which fits this purpose and shouldn't overload these APIs
* don't use temporary files to download the thumbnails before converting them to pixmaps; just do it in memory
I added some instructions on how to set up a build environment for Arch Linux. I figured that if it's specifically named as being supported for building, it would make sense to have some information on how to build under it.
Fixes: https://invent.kde.org/multimedia/kdenlive/-/issues/1973
The official AsyncVideoFrameLoader loads all frames into memory, which prevents it from being used on clips longer than a few seconds.
This introduces our own version of AsyncVideoFrameLoader which doesn't cache all images.
Check out the comment https://invent.kde.org/multimedia/kdenlive/-/issues/1973#note_1199934 for more details.
I didn't bother to create a PR for the official Facebook repo. Based on the outstanding open PRs and official activity on that repo, it's not a community project. We need to fix this on our side, unfortunately.
It's basically a three-line change, as mentioned in the comment, but I needed to create our custom SAM2VideoPredictor which delegates to the official SAM2VideoPredictorOfficial except for loading the images in init, in order to fix it (I wanted to avoid forking the SAM2 repo so we don't have another repo to maintain...).
I intend to work a bit more on the SAM integration and added a few TODOs for myself. Will clean up this code and fix the TODOs in future MRs.
Also, while testing the feature, it looks like preview mode is somewhat broken (preview seems to work only for the first frame atm).
* extract layout management dialog (accessible via menu View - Manage Layouts...)
* extract layout switcher (shown in top-right corner of menu bar)
* extract functionality around the collection of layouts (like loading, ordering, getting)
While preparing for https://invent.kde.org/multimedia/kdenlive/-/issues/1999 I had a hard time understanding this code, as it's doing lots of things, so I tried to extract the self-contained functionality mentioned above and kept only the plumbing in layoutmanagement.cpp.
This change contains no user-visible changes, just refactoring of existing functionality.
Left a TODO regarding setting up the autosave label and the corner of the bar menu, where I didn't know a good place to put it. If you have a suggestion for this, I can do it in the scope of this MR; otherwise I hope I'll find something while working on the layout switcher. Cheers!
I'd like to propose some changes to the README:
After a brief welcome and an introduction to what this project is, I would like to point regular users to the website. It contains the best and most up-to-date info for everything except dev/coding documentation.
Everything after that would then focus only on developers / potential code contributors.
I imagine the README as the landing page we forward people to from the website who are interested in contributing code / hacking on the project. (On the new website this would be from the contribute page https://invent.kde.org/websites/kdenlive-org/-/merge_requests/22)
On Ubuntu 25.04, the libraries listed under `Get the build dependencies`
in `build.md` are insufficient to build the project. Add the missing
libraries.
Fixes https://bugs.kde.org/show_bug.cgi?id=471281.
Green paint mode produced brownish and other weird colors instead of green, as 0 values were not handled correctly, which produced -inf when calculating the log value.
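A minimal sketch of the guard (hypothetical helper; the actual fallback is whatever the paint mode treats as "no data"):

```cpp
#include <cassert>
#include <cmath>

// std::log(0) is -infinity, which then poisoned the subsequent color
// math and produced the brownish pixels. Clamp empty bins to a finite
// value before taking the log.
double safeLog(double value)
{
    return value > 0.0 ? std::log(value) : 0.0;
}
```

With the guard, empty bins stay finite instead of dragging the whole color computation to -inf.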
Unfortunately the script to add the version to the appstream files
relies on cmake and at the moment it grabs the version from imath, which
is included with OpenTimeLineIO, which is fetched by default.
GIT_SILENT
(cherry picked from commit 89eb8d717b)
Currently the slider only changes if the user hits Enter after making changes in the spin box. In other widgets, like the Volume effect widget, we update the slider immediately when the user changes the value via mouse wheel or up/down buttons.
We need to calculate the bounding rectangle after we've set the actual
font via setFont(). Otherwise the default font will be used for the
calculation, which may or may not be what is used later when drawing
the actual text.
As I was doing more OTIO testing, I found a freeze while opening a file with all missing media filenames. All of the `ClipCreator::createClipFromFile` callbacks seemed to fire OK, but the test hangs when trying to insert the clips into the timeline. Here is a partial stack trace:
```
QReadWriteLock::lockForRead(class QReadWriteLock * const this) (/usr/include/x86_64-linux-gnu/qt6/QtCore/qreadwritelock.h:68)
QReadLocker::relock(class QReadLocker * const this) (/usr/include/x86_64-linux-gnu/qt6/QtCore/qreadwritelock.h:115)
QReadLocker::QReadLocker(class QReadLocker * const this, class QReadWriteLock * areadWriteLock) (/usr/include/x86_64-linux-gnu/qt6/QtCore/qreadwritelock.h:134)
ClipController::getProducerIntProperty(const class ClipController * const this, const class QString & name) (src/mltcontroller/clipcontroller.cpp:596)
TimelineModel::requestClipInsertion(class TimelineModel * const this, const class QString & binClipId, int trackId, int position, int & id, bool logUndo, bool refreshView, bool useTargets, Fun & undo, Fun & redo, const QVector & allowedTracks) (src/timeline2/model/timelinemodel.cpp:2119)
OtioImport::importClip(class OtioImport * const this, const class std::shared_ptr<OtioImportData> & importData, const struct opentimelineio::v1_0::SerializableObject::Retainer<opentimelineio::v1_0::Clip> & otioClip, int trackId) (src/otio/otioimport.cpp:325)
OtioImport::importTrack(class OtioImport * const this, const class std::shared_ptr<OtioImportData> & importData, const struct opentimelineio::v1_0::SerializableObject::Retainer<opentimelineio::v1_0::Track> & otioTrack, int trackId) (src/otio/otioimport.cpp:276)
OtioImport::importTimeline(class OtioImport * const this, const class std::shared_ptr<OtioImportData> & importData) (src/otio/otioimport.cpp:248)
```
Strangely enough, the existing OTIO missing media test that only has some missing media filenames seems to pass OK.
(Note: I also edited the test OTIO files to remove some empty tracks that were not necessary for testing.)
Two fixes for clips with small durations (one and zero frames):
* Don't create clips with zero duration when importing OTIO files.
* Remove an assert in ClipModel::requestSlip() that was triggered when slipping clips with a duration of 1 frame.
The diff for the first change looks like a lot, but the change is really just adding this conditional:
```
const int duration = otioTrimmedRange.value().duration().rescaled_to(otioTimelineDuration).round().value();
if (duration > 0) {
```
- Added suffix, one decimal point, translated planes from numbers to plain text and changed type to "list"
- Added parameters for alpha channel, added comments with explanation
- Corrected max values, added alpha channel, added comments
[Kdenlive](https://kdenlive.org) is a Free and Open Source video editing application, based on MLT Framework and KDE Frameworks 6. It is distributed under the [GNU General Public License Version 3](https://www.gnu.org/licenses/gpl-3.0.en.html) or any later version that is accepted by the KDE project.
Kdenlive is a powerful, free and open-source video editor that brings professional-grade video editing capabilities to everyone. Whether you're creating a simple family video or working on a complex project, Kdenlive provides the tools you need to bring your vision to life.
For more information about Kdenlive's features, tutorials, and community, please visit our [official website](https://kdenlive.org). There you can also find downloads for both stable releases and experimental daily builds of Kdenlive.
# Building from source
[Instructions to build Kdenlive](dev-docs/build.md) are available in the dev-docs folder.
- Add the kde flatpak repository (if not already done) by typing `flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo` on a command line. (This step may be optional in your version of Flatpak.)
- Install kdenlive nightly with `flatpak install kdeapps org.kde.kdenlive`.
- Use `flatpak update` to update if the nightly is already installed.
- _Attention! If you use the stable kdenlive flatpak already, the `*.desktop` file (e.g. responsible for the start menu entry) may be replaced by the nightly (and vice versa). You can still run the stable version with `flatpak run org.kde.kdenlive/x86_64/stable` and the nightly with `flatpak run org.kde.kdenlive/x86_64/master` (replace `x86_64` with `aarch64` or `arm` depending on your system)_
Kdenlive is a community-driven project, and we welcome contributions from everyone! There are many ways to contribute beyond coding:
*Note: nightly/daily builds are not meant to be used in production.*
- Help translate Kdenlive into your language
- Report and triage bugs
- Write documentation
- Create tutorials
- Help other users on forums and bug trackers
# Contributing to Kdenlive
Visit [kdenlive.org](https://kdenlive.org) to learn more about non-code contributions.
Please note that Kdenlive's Github repo is just a mirror: read [this explanation for more details](https://community.kde.org/Infrastructure/Github_Mirror).
## Developer Information
The preferred way of submitting patches is a merge request on the [KDE GitLab on invent.kde.org](https://invent.kde.org/-/ide/project/multimedia/kdenlive): if you are not familiar with the process, there is a [step-by-step instruction on how to submit a merge request in the KDE context](https://community.kde.org/Infrastructure/GitLab#Submitting_a_Merge_Request).
### Technology Stack
We welcome all feedback and offers for help!
Kdenlive is written in C++ and uses these technologies and frameworks:
* Talk about us!
* [Report bugs](https://kdenlive.org/en/bug-reports/) you encounter (if not already done)
* Help other users [on the forum](http://forum.kde.org/viewforum.php?f=262) and bug tracker
* [Help to fill the manual](https://community.kde.org/Kdenlive/Workgroup/Documentation)
* Complete and check [application and documentation translation](http://l10n.kde.org)
* Prepare video tutorials (intro, special tricks...) in your language and send us a link to add to the homepage or docs
* Detail improvement suggestions: we don't test every (any?) other video editor, so give precise explanations
* Code! Help fixing bugs, improving usability, optimizing, porting... Register on KDE infrastructure, study its guidelines, and pick from the roadmap. See [here](dev-docs/contributing.md) for more information.
- **Core Framework**: MLT for video editing functionality
1. Check out our [build instructions](dev-docs/build.md) to set up your development environment
2. Familiarize yourself with the [architecture](dev-docs/architecture.md) and [coding guidelines](dev-docs/coding.md)
3. If the MLT library is new to you, check out the [MLT Introduction](dev-docs/mlt-intro.md)
4. Join our Matrix channel `#kdenlive-dev:kde.org` for developer discussions and support
### Contributing Code
Kdenlive's primary development happens on [KDE Invent](https://invent.kde.org/multimedia/kdenlive). While we maintain a GitHub mirror, all code contributions should be submitted through KDE's GitLab instance. For more information about KDE's development infrastructure, visit the [KDE GitLab documentation](https://community.kde.org/Infrastructure/GitLab).
### Finding Things to Work On
- Browse open issues on [KDE Invent](https://invent.kde.org/multimedia/kdenlive/-/issues)
- Check the [KDE Bug Tracker](https://bugs.kde.org) for reported issues
- Look for issues tagged with "good first issue" or "help wanted"
Need help getting started? Join our Matrix channel `#kdenlive-dev:kde.org` - our community is friendly and always ready to help new contributors!
For double values these placeholders are available:
* represented by a checkbox
##### `"multiswitch"`
* 2 possible options defined by strings (max / min)
* This special parameter type will affect 2 different parameters when changed. The `name` of this parameter will contain the names of the 2 final parameters, separated by a line-feed (LF) character. The same goes for `default`, `min` and `max`, which contain the values for these 2 parameters, also separated by an LF character. See for example the fade_to_black effect.
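As an illustration of the LF-joined convention described above (a minimal sketch, not Kdenlive's actual code; the parameter names `level` and `alpha` are only hypothetical examples), the mapping onto the two underlying parameters could look like this:

```python
# Sketch of the LF-separated "multiswitch" convention described above.
# The names "level"/"alpha" are illustrative, not taken from a real effect.
def split_multiswitch(name: str, value: str) -> dict:
    """Pair each LF-separated parameter name with its LF-separated value."""
    return dict(zip(name.split("\n"), value.split("\n")))

# A multiswitch whose name is "level\nalpha" and whose max is "1\n1"
# drives both final parameters at once:
split_multiswitch("level\nalpha", "1\n1")  # {'level': '1', 'alpha': '1'}
```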
<name>Output transfer characteristics</name>
<paramlistdisplay>BT.709,BT.470M,BT.470BG,Constant gamma of 2.2,Constant gamma of 2.8,SMPTE170M,SMPTE240M,SRGB,IEC 61966-2-1,IEC 61966-2-4,xvYCC,BT.2020 for 10-bits content, BT.2020 for 12-bits content</paramlistdisplay>
<name>Override input transfer characteristics</name>
<paramlistdisplay>BT.709,BT.470M,BT.470BG,Constant gamma of 2.2,Constant gamma of 2.8,SMPTE170M,SMPTE240M,SRGB,IEC 61966-2-1,IEC 61966-2-4,xvYCC,BT.2020 for 10-bits content, BT.2020 for 12-bits content</paramlistdisplay>
<comment>A list of times in seconds for each channel over which the instantaneous level of the input signal is averaged to determine its volume. Attacks refers to increase of volume and decays refers to decrease of volume. For most situations, the attack time (response to the audio getting louder) should be shorter than the decay time, because the human ear is more sensitive to sudden loud audio than sudden soft audio. A typical value for attack is 0.3 seconds and a typical value for decay is 0.8 seconds. If specified number of attacks and decays is lower than number of channels, the last set attack/decay will be used for all remaining channels.</comment>
<comment>A list of times in seconds for each channel over which the instantaneous level of the input signal is averaged to determine its volume. Attacks refers to increase of volume and decays refers to decrease of volume. For most situations, the attack time (response to the audio getting louder) should be shorter than the decay time, because the human ear is more sensitive to sudden loud audio than sudden soft audio. A typical value for attack is 0.3 seconds and a typical value for decay is 0.8 seconds. If specified number of attacks and decays is lower than number of channels, the last set attack/decay will be used for all remaining channels.</comment>
<comment>Set an initial volume, in dB, to be assumed for each channel when filtering starts. This permits the user to supply a nominal level initially, so that, for example, a very large gain is not applied to initial signal levels before the companding has begun to operate. A typical value for audio which is initially quiet is -90 dB.</comment>
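To illustrate the attack/decay averaging these comments describe (a minimal sketch under assumptions, not MLT's actual implementation), a one-pole level follower can switch its time constant depending on whether the signal is rising (attack) or falling (decay):

```python
import math

def envelope(samples, sample_rate, attack=0.3, decay=0.8):
    """Sketch of an attack/decay level follower (illustrative only).

    The smoothing coefficient depends on whether the instantaneous
    magnitude is rising (attack) or falling (decay); attack is usually
    shorter because the ear is more sensitive to sudden loud audio.
    """
    a_coef = math.exp(-1.0 / (attack * sample_rate))
    d_coef = math.exp(-1.0 / (decay * sample_rate))
    level, out = 0.0, []
    for s in samples:
        mag = abs(s)
        coef = a_coef if mag > level else d_coef
        level = coef * level + (1.0 - coef) * mag
        out.append(level)
    return out
```

With a short attack and a longer decay, the tracked level rises quickly when the input gets loud and falls back slowly when it goes quiet.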
For example, you have recorded guitar with two microphones placed in different locations.
The best result can be reached when you take one track as base and synchronize other tracks one by one with it. Remember that synchronization/delay tolerance depends on sample rate, too. Higher sample rates will give more tolerance. </description>
<comment><![CDATA[Banding detection range in pixels. Default is 16.<br>
If positive, a random number in the range 0 to the set value will be used. If negative, the exact absolute value will be used. The range defines a square of four pixels around the current pixel.]]></comment>
<comment><![CDATA[Sets the direction in degrees from which four pixels will be compared.<br>
If positive, a random direction from 0 to the set direction will be picked. If negative, the exact absolute value will be picked. For example, a direction of 0°, -180°, or -360° will pick only pixels on the same row, and -90° will pick only pixels in the same column]]></comment>
<comment><![CDATA[If enabled, the current pixel is compared with the average value of all four surrounding pixels.<br>
The default is enabled. If disabled, the current pixel is compared with each of the four surrounding pixels. The pixel is considered banded only if all four differences with the surrounding pixels are less than the threshold.]]></comment>
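The banded-pixel test described in these comments can be sketched roughly as follows (an assumption-laden illustration, not the filter's real code; `img` is a 2-D list of gray values and bounds checking is omitted):

```python
def is_banded(img, x, y, r, threshold):
    """True if all four pixels at distance r (left/right/up/down) differ
    from the current pixel by less than the threshold."""
    p = img[y][x]
    neighbors = (img[y][x - r], img[y][x + r], img[y - r][x], img[y + r][x])
    return all(abs(p - n) < threshold for n in neighbors)
```

A pixel in a flat region passes the test; a pixel near a strong edge fails it, because at least one difference exceeds the threshold.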
Simulates image dilation, an effect which will enlarge the lightest pixels in the image by replacing the pixel by the local (3x3) maximum.]]></description>
Simulates image erosion, an effect which will enlarge the darkest pixels in the image by replacing the pixel by the local (3x3) minimum.]]></description>
<name>Fill Borders</name>
<description>Fill borders of the input video without changing video stream dimensions. Sometimes video can have garbage at the four edges and you may not want to crop the video input to keep the size a multiple of some number</description>
<description><![CDATA[<b>-= Deprecated =-</b><br>
Debands video quickly using gradients.<br>
Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth.<br>
Interpolate the gradients that should go where the bands are, and dither them.<br>
<b>It is designed for playback only. Do not use it prior to lossy compression, because compression tends to lose the dither and bring back the bands</b>.]]></description>
<paramlistdisplay>White on Black,Black on White,White on Gray,Black on Gray,Color on Black,Color on White,Color on Gray,Black on Color,White on Color,Gray on Color</paramlistdisplay>
<description>Apply a high-quality magnification filter designed for pixel art. Scaling is done by 2, 3 or 4 using the hq*x magnification algorithm.</description>
<description>Deinterlace input video by applying Donald Graft’s adaptive kernel deinterlacing. Works on interlaced parts of a video to produce progressive frames.</description>
<description>Apply a Look Up Table (LUT) to the video. A LUT is an easy way to correct the color of a video. Supported formats: .3dl (AfterEffects), .cube (Iridas), .dat (DaVinci), .m3d (Pandora)</description>
The value specifies the variance of the Gaussian filter used to blur the image (slower if larger). If not specified, it defaults to the value set for <em>Luma radius</em>]]></comment>
The value specifies the variance of the Gaussian filter used to blur the image (slower if larger). If not specified, it defaults to the value set for <em>Luma radius</em>]]></comment>
<comment><![CDATA[Set the chroma effect strength.<br>
Reasonable values are between -1.5 and 1.5. Negative values will blur the input video, while positive values will sharpen it; a value of zero will disable the effect.]]></comment>
<comment><![CDATA[Set the alpha effect strength.<br>
Reasonable values are between -1.5 and 1.5. Negative values will blur the input video, while positive values will sharpen it; a value of zero will disable the effect.]]></comment>
<description>The waveform monitor plots color component intensity. By default, only luminance is plotted. Each column of the waveform corresponds to a column of pixels in the source video.</description>
<description>Apply the xBR high-quality magnification filter which is designed for pixel art. It follows a set of edge-detection rules.</description>
<description>Attempts to fill in zenith and nadir by stretching and blurring the image data. It samples a band of latitude near the start of the effect and stretches and blurs it over the pole.</description>
<description>Adds a black matte to the frame. Use this if you filmed using a 360 camera but only want to use part of the 360 image - for example if you and the film crew occupy the 90 degrees behind the camera.</description>
<comment><![CDATA[The <b>Start</b> is half the height in degrees of the un-matted area. The <b>End</b> is half the height in degrees where the matte is at 100%.]]></comment>
<comment><![CDATA[The <b>Start</b> is the width in degrees of the un-matted area. The <b>End</b> is the width in degrees where the matte is at 100%.]]></comment>
<comment><![CDATA[The <b>Start</b> is the width in degrees of the un-matted area. The <b>End</b> is the width in degrees where the matte is at 100%.]]></comment>
<description>Converts an equirectangular frame (panoramic) to a rectilinear frame (what you're used to seeing). Can be used to preview what will be shown in a 360 video viewer.</description>
<name>VR360 Hemispherical to Equirectangular</name>
<description>Converts a video frame with two hemispherical images to a single equirectangular frame. The plugin assumes that both hemispheres are in the frame</description>
<comment><![CDATA[The fisheye projection type. Currently only equidistant fisheyes, like the Ricoh Theta and Garmin Virb360, are supported.]]></comment>
<comment><![CDATA[360 cameras like the Theta have a problem with the nadir direction where, no matter what, you will have a little of the camera in the image.<br>
This parameter "stretches" the image near nadir to cover up the missing parts.]]></comment>
<comment><![CDATA[360 cameras like the Theta have a problem with the nadir direction where, no matter what, you will have a little of the camera in the image.<br>
This parameter "stretches" the image near nadir to cover up the missing parts.]]></comment>
<comment><![CDATA[If you use Hugin parameters, the radius should be set to the value of (0.5 * image diagonal / image width).<br>
For a 2:1 aspect dual hemispherical image, that would be 0.5590. Use the A parameter to scale the effect and avoid overexposing highlights]]></comment>
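The 0.5590 figure follows directly from the quoted formula; a quick arithmetic check:

```python
import math

def hugin_radius(width, height):
    # radius = 0.5 * image diagonal / image width (formula from the comment above)
    return 0.5 * math.hypot(width, height) / width

# For a 2:1 aspect dual hemispherical image (e.g. 3840x1920):
hugin_radius(2, 1)  # ≈ 0.5590
```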
The EMoR h(x) parameters are the same as Hugin's Ra - Re in the lens parameters. If you use Hugin-derived values for vignetting correction, you should also use these parameters, as Hugin's vignetting correction assumes that the sensor response has been corrected.]]></comment>
The EMoR h(x) parameters are the same as Hugin's Ra - Re in the lens parameters. If you use Hugin-derived values for vignetting correction, you should also use these parameters, as Hugin's vignetting correction assumes that the sensor response has been corrected.]]></comment>
The EMoR h(x) parameters are the same as Hugin's Ra - Re in the lens parameters. If you use Hugin-derived values for vignetting correction, you should also use these parameters, as Hugin's vignetting correction assumes that the sensor response has been corrected.]]></comment>
The EMoR h(x) parameters are the same as Hugin's Ra - Re in the lens parameters. If you use Hugin-derived values for vignetting correction, you should also use these parameters, as Hugin's vignetting correction assumes that the sensor response has been corrected.]]></comment>
The EMoR h(x) parameters are the same as Hugin's Ra - Re in the lens parameters. If you use Hugin-derived values for vignetting correction, you should also use these parameters, as Hugin's vignetting correction assumes that the sensor response has been corrected.]]></comment>
<description>Converts a rectilinear (a normal-looking) image to an equirectangular image. Use this together with Transform 360 to place "normal" footage in a 360 movie.</description>
<description><![CDATA[Stabilizes 360 footage.<br>
The plugin works in two phases - analysis and stabilization. When analyzing footage, it detects frame-to-frame rotation, and when stabilizing it tries to correct high-frequency motion (shake).]]></description>
<author>Leo Sutic</author>
<parameter type="bool" name="analyze">
<name>Analyze</name>
<comment><![CDATA[Switch on for analysis phase; switch off for stabilization phase.]]></comment>
<comment><![CDATA[The offset into the stabilization file that corresponds to the start of this clip.<br>
For example, if you have a 30 second clip, analyze it all, and then split it into three clips of 10 seconds each, then the start offsets should be 0s, 10s, and 20s.]]></comment>
<comment><![CDATA[The number of frames to use to smooth out the shakes. The higher the value, the slower the camera will follow any intended motion.]]></comment>
<comment><![CDATA[Shift the frames used to smooth out the shakes relative to the stabilized frame.<br>
A value less than zero will give more weight to past frames, and the camera will seem to lag behind intended movement. A value greater than zero will give more weight to future frames, and the camera will appear to move ahead of the intended camera movement. A value of zero should make the camera follow the intended path.]]></comment>
<comment><![CDATA[The amount of stabilization to apply. 100% means that the stabilizer will make the camera as steady as it can. Smaller values reduce the amount of stabilization.]]></comment>
<comment><![CDATA[Shift the frames used to smooth out the shakes relative to the stabilized frame.<br>
A value less than zero will give more weight to past frames, and the camera will seem to lag behind intended movement. A value greater than zero will give more weight to future frames, and the camera will appear to move ahead of the intended camera movement. A value of zero should make the camera follow the intended path.]]></comment>
<comment><![CDATA[Shift the frames used to smooth out the shakes relative to the stabilized frame.<br>
A value less than zero will give more weight to past frames, and the camera will seem to lag behind intended movement. A value greater than zero will give more weight to future frames, and the camera will appear to move ahead of the intended camera movement. A value of zero should make the camera follow the intended path.]]></comment>
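The smoothing and time-shift behaviour these comments describe can be sketched as a biased moving average over the per-frame rotation track (assumed behaviour for illustration only, not the plugin's actual algorithm):

```python
def smooth_path(rotations, window=10, bias=0, strength=1.0):
    """Sketch of shake smoothing: average a window of frames around each
    frame; 'bias' shifts the window toward past (<0) or future (>0)
    frames, and 'strength' (0..1) blends toward the smoothed path."""
    out = []
    n = len(rotations)
    for i in range(n):
        # Clamp the window so the current frame is always included.
        lo = max(0, min(i, i - window + bias))
        hi = min(n, max(i + 1, i + window + bias))
        avg = sum(rotations[lo:hi]) / (hi - lo)
        out.append(rotations[i] + strength * (avg - rotations[i]))
    return out
```

At `strength=1.0` the output follows the windowed average (maximum stabilization); at `strength=0.0` the original path is returned unchanged.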