mirror of
https://github.com/vondas-network/videobeaux.git
synced 2025-12-05 15:30:02 +01:00
adding new features, docs in docs, examples in examples
66
docs/docs-captburn.txt
Normal file
@@ -0,0 +1,66 @@
# videobeaux: captburn

## Description
`captburn` is a modular caption-generation and burn-in engine for videobeaux.
It ingests video and transcript data (JSON-based) and produces high-quality `.ass` subtitle files or burns captions directly into the video.
It supports multiple captioning modes — **pop-on**, **paint-on**, and **roll-up** — and preserves all ASS styling fields.

## Features
- Generates ASS subtitles from JSON transcripts or caption files.
- Supports `popon`, `painton`, and `rollup` timing modes.
- Full styling options: font, size, color, outline, background, alignment, margins, rotation.
- Generates both `.captburn.json` (metadata) and `.captburn.ass` (subtitle style) sidecars.
- Can reburn captions directly using `--caption` JSON without reprocessing.
- Threaded pipeline compatible with FFmpeg rendering.

## Parameters
| Flag | Type | Description |
|------|------|--------------|
| `-i, --input` | str | Input video file |
| `--json` | str | Path to transcript or caption JSON |
| `--caption` | str | (Optional) Caption JSON for reburn |
| `--rotate` | float | Rotation angle of the text block (ASS Angle tag) |
| `--font` | str | Font family (e.g. Arial, Helvetica) |
| `--font-size` | int | Text size |
| `--color` | str | Primary text color (hex or ASS color) |
| `--outline-color` | str | Outline color |
| `--back-color` | str | Background box color |
| `--back-opacity` | float | Background opacity (0.0–1.0) |
| `--align` | int | Alignment (ASS-style `\anN` code) |
| `--margin-v` | int | Vertical margin in pixels |
| `--tracking` | float | Character spacing multiplier |
| `-F, --force` | flag | Overwrite existing files |

## Modes
| Mode | Description |
|------|--------------|
| `popon` | Sentence-level captions (common for subtitles) |
| `painton` | Word-by-word reveal effect |
| `rollup` | Scrolling line-by-line broadcast style |

## Example Usage
```bash
# Generate and burn captions using a transcript JSON
videobeaux -P captburn -i ./media/bbb.mov --json ./media/bbb.json -F

# Burn captions from an existing caption JSON (reburn mode)
videobeaux -P captburn -i ./media/bbb.mov --caption ./media/bbb.captburn.json -F

# Specify font, size, and color
videobeaux -P captburn -i ./media/bbb.mov --json ./media/bbb.json --font "Helvetica" --font-size 42 --color "#FFFFFF" --outline-color "#000000" --align 2 --margin-v 50 -F

# Apply rotation to text (e.g., stylized tilt)
videobeaux -P captburn -i ./media/bbb.mov --json ./media/bbb.json --rotate 5 -F
```

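The mode-specific flags below come from the example commands shipped alongside this commit (`--style`, `--rollup-lines`, `--words-per-line`); combining them with `--json` as shown here is an illustrative sketch, not an exhaustive reference.

```bash
# Word-by-word paint-on reveal (flags as used in the bundled examples)
videobeaux -P captburn -i ./media/bbb.mov --json ./media/bbb.json --style painton --font "Futura" --font-size 40 -F

# Two-line broadcast-style roll-up
videobeaux -P captburn -i ./media/bbb.mov --json ./media/bbb.json --style rollup --rollup-lines 2 --words-per-line 7 --font-size 34 -F
```
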
## Outputs
- `.captburn.ass`: Styled subtitle file
- `.captburn.json`: Caption metadata file
- `.mp4`: (if burn enabled) Video with captions baked in

## Notes
- Automatically detects video resolution for accurate ASS PlayResX/Y.
- Uses actual pixel-true coordinates and alignment.
- Fully compatible with the reburn workflow using `--caption`.
- Built upon FFmpeg `subtitles` and `ass` filters for reliable rendering.
41
docs/docs-convert_dims.txt
Normal file
@@ -0,0 +1,41 @@
# videobeaux: convert_dims

## Description
`convert_dims` resizes videos according to standardized or platform-specific aspect-ratio presets.
It supports direct scaling, padding to maintain aspect ratio, and an optional “translate/stretch” mode to fill the target dimensions exactly.

## Features
- Dozens of predefined presets: HD, 4K, Instagram Reels, YouTube Shorts, TikTok, etc.
- Stretch (`--translate yes`) or maintain aspect ratio (`--translate no`).
- A "fill" mode (`--fill yes`) scales uniformly and crops to fill the target frame; see the conceptual sketch after the examples below.
- Automatically sets safe `setdar` and `setsar` filters.
- Optional forced overwrite with `-F`.

## Parameters
| Flag | Type | Description |
|------|------|--------------|
| `-i, --input` | str | Input video file |
| `--output-format` | str | Output format (e.g. mp4, mov) |
| `--preset` | str | Target dimension preset (e.g. 1080p, instagram_reels) |
| `--translate` | yes/no | Stretch to fit or preserve aspect ratio with padding |
| `--fill` | yes/no | Crop to fill the target frame (centered) |
| `-F, --force` | flag | Overwrite output file |

## Example Usage
```bash
# Resize to 1080p maintaining aspect ratio
videobeaux -P convert_dims -i ./media/bbb.mov --output-format mp4 --preset 1080p -F

# Force exact stretch to TikTok format (9:16)
videobeaux -P convert_dims -i ./media/bbb.mov --output-format mp4 --preset tiktok_video --translate yes -F

# Maintain ratio with letterboxing to square
videobeaux -P convert_dims -i ./media/bbb.mov --output-format mp4 --preset square1080 --translate no -F

# Fill target aspect ratio (crop edges to fit)
videobeaux -P convert_dims -i ./media/bbb.mov --output-format mp4 --preset instagram_reels --fill yes -F
```

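For orientation, the three behaviors map onto familiar FFmpeg scale/pad/crop patterns. The commands below are a conceptual sketch of roughly equivalent plain FFmpeg invocations, not the module's exact filter graph (the file names and the 1080x1920 target are illustrative):

```bash
# Preserve aspect ratio and letterbox/pillarbox to 1080x1920 (--translate no)
ffmpeg -i in.mov -vf "scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2,setsar=1" out_pad.mp4

# Stretch to exactly 1080x1920 (--translate yes)
ffmpeg -i in.mov -vf "scale=1080:1920,setdar=9/16,setsar=1" out_stretch.mp4

# Scale uniformly and center-crop to fill 1080x1920 (--fill yes)
ffmpeg -i in.mov -vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" out_fill.mp4
```
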
## Output
- Produces scaled MP4 or MOV with proper aspect ratio metadata.
- Compatible with social platforms and web delivery.
60
docs/docs-convert_mux.txt
Normal file
@@ -0,0 +1,60 @@
# videobeaux: convert_mux

## Description
`convert_mux` provides a unified wrapper for re-muxing or transcoding any input media file.
It’s a high-level, black-box converter supporting curated FFmpeg profiles and custom codec settings.

## Features
- Automatically infers codecs and container from the output extension.
- Supports dozens of curated profiles: mp4_h264, mp4_hevc, webm_vp9, prores_422, lossless_ffv1, etc.
- Stream-copy mode for instant rewraps.
- Handles audio-only formats (mp3, aac, flac, etc.).
- Full FFmpeg passthrough via `--` for raw arguments.

## Parameters
| Flag | Type | Description |
|------|------|--------------|
| `-i, --input` | str | Input media file |
| `-o, --output` | str | Output file (container inferred from extension) |
| `--profile` | str | Use a curated preset (see list below) |
| `--vcodec` / `--acodec` | str | Manual codec overrides |
| `--crf`, `--bitrate`, etc. | mixed | Control quality, rate, and buffer settings |
| `--copy` | flag | Stream copy without re-encoding |
| `--format` | str | Force container format |
| `--ffmpeg_args` | list | Raw FFmpeg passthrough args |
| `-F, --force` | flag | Overwrite output |

## Common Profiles
| Profile | Container | Description |
|----------|------------|-------------|
| mp4_h264 | MP4 | Web-friendly H.264/AAC |
| mp4_hevc | MP4 | HEVC (H.265), Apple-ready |
| webm_vp9 | WebM | VP9 + Opus |
| prores_422 | MOV | ProRes mezzanine |
| prores_4444 | MOV | ProRes + Alpha |
| lossless_ffv1 | MKV | Archival lossless (FFV1) |
| avi_mjpeg_fast | AVI | Fast MJPEG |
| avi_mpeg4_fast | AVI | Legacy fast MPEG-4 |

## Example Usage
```bash
# Simple H.264 conversion
videobeaux -P convert_mux -i ./media/bbb.mov -o ./out/bbb_h264.mp4 --profile mp4_h264 -F

# Convert to WebM VP9 + Opus
videobeaux -P convert_mux -i ./media/bbb.mov -o ./out/bbb_vp9.webm --profile webm_vp9 -F

# Convert to ProRes for post-production
videobeaux -P convert_mux -i ./media/bbb.mov -o ./out/bbb_prores.mov --profile prores_422 -F

# Fast rewrap (no re-encode)
videobeaux -P convert_mux -i ./media/bbb.mov -o ./out/bbb.mkv --copy -F

# Raw FFmpeg flags
videobeaux -P convert_mux -i ./media/bbb.mov -o ./out/bbb_custom.mp4 --profile mp4_h264 -- --max_muxing_queue_size 9999
```

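As a rough mental model, the profiles and `--copy` mode wrap ordinary FFmpeg invocations. The commands below are illustrative approximations only; the module's exact encoder settings (CRF, preset, bitrate defaults) are defined by the chosen profile and may differ:

```bash
# Approximate equivalent of --copy (rewrap without re-encoding)
ffmpeg -i ./media/bbb.mov -c copy ./out/bbb.mkv

# Approximate equivalent of --profile mp4_h264 (web-friendly H.264/AAC; CRF/bitrate values illustrative)
ffmpeg -i ./media/bbb.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 160k -movflags +faststart ./out/bbb_h264.mp4
```
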
## Output
- Encoded or rewrapped video/audio in the specified format.
- Preserves metadata and timestamps.
- Ideal for format conversion, web publishing, or broadcast deliverables.
93
docs/docs-gamma_fix.txt
Normal file
@@ -0,0 +1,93 @@
videobeaux — gamma_fix
========================

Purpose
-------
Normalize overall exposure for web/broadcast delivery by sampling video luma
(YAVG) with a fast prepass, computing gentle contrast/brightness (and optional
gamma), then encoding the adjusted video. Optionally clamp output to
broadcast-legal (TV/limited) range.

How It Works
------------
1) Probe pass: ffmpeg + signalstats collects per-frame YAVG values (0..255).
2) Robust center: we take the median YAVG to avoid bright/dark spikes.
3) Mapping:
   - Choose contrast ~= target/current (clamped to a friendly range). For
     example, a median YAVG of 50 with the default target of 64 suggests a
     contrast of roughly 64/50 ≈ 1.28, which sits inside the default clamps.
   - Solve brightness so the current median maps near the desired target.
   - Optional gamma curve if you request it.
4) Optional “legalize” remaps to TV range and outputs yuv420p for broad
   compatibility.

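For reference, the signalstats probe described in step 1 can be reproduced by
hand with a plain ffmpeg null-output pass (illustrative only; the module runs
its own equivalent internally):

    ffmpeg -i INPUT -map 0:v:0 -vf signalstats,metadata=print:key=lavfi.signalstats.YAVG -an -f null - 2>&1 | grep YAVG

Each matching line reports one frame's YAVG; the median of those values is the
"current" luma level used in the mapping step.
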
Basic Invocation
----------------
videobeaux -P gamma_fix -i INPUT -o OUTPUT [options]

Where -P selects the program, and -i/-o/--force and similar are provided by
videobeaux’s CLI front-end.

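A typical first pass using the defaults plus broadcast legalization might look
like this (output path illustrative, flags as documented below):

    videobeaux -P gamma_fix -i ./media/bbb.mov -o ./out/bbb_gamma.mp4 --target-yavg 64 --legalize --force
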
Inputs / Outputs
----------------
- Input:  Any ffmpeg-readable video file.
- Output: Encoded video with exposure normalization applied.
- Audio:  Encoded using --acodec/--ab (defaults aac/160k).

Key Options (program-specific)
------------------------------
--target-yavg FLOAT   Target average luma, 0..255 (default: 64.0).
                      ~64 is a balanced midpoint for web. Try 60–70 for
                      darker looks, 70–90 for brighter looks.

--min-contrast FLOAT  Lower clamp for the auto contrast (default: 0.80)
--max-contrast FLOAT  Upper clamp for the auto contrast (default: 1.35)

--gamma FLOAT         Optional gamma adjustment (default: 1.00 = neutral).
                      Leave at 1.00 to rely purely on contrast/brightness.

--sat FLOAT           Saturation multiplier via the hue filter (default: 1.00).
                      Example: 1.10 = +10% saturation.

--legalize            Clamp output to broadcast-legal (TV/limited) range,
                      then convert to yuv420p for delivery safety.

--vcodec STR          Video codec (default: libx264)
--crf STR             Quality factor for x264/x265 (default: 18)
--preset STR          Encoder preset (default: medium)
--acodec STR          Audio codec (default: aac)
--ab STR              Audio bitrate (default: 160k)

Notes on Global Options (provided by videobeaux CLI)
----------------------------------------------------
-i PATH    Input file path
-o PATH    Output file path
--force    Overwrite output if it exists

Quality / Performance Tips
--------------------------
- Start with --target-yavg 64 (neutral) and tweak by ±5–10 to taste.
- Keep contrast clamps near defaults for natural results; widening them can
  push highlights/shadows into clipping on contrasty scenes.
- --legalize is recommended for broadcast or when downstream platforms expect
  limited (TV) range. For purely web delivery, it’s optional but safe.
- A small --sat like 1.05–1.12 often livens washed-out sources without
  overshooting.
- If your footage is very dark or very bright across the entire piece, adjust
  --target-yavg rather than forcing aggressive contrast.

Edge Cases
----------
- Extremely low dynamic range (flat/cast) may require manual grading beyond
  this auto-normalizer.
- Heavily stylized content (crushed blacks, hard clipping) can benefit from a
  lower --max-contrast and modest --sat.
- If probing fails (rare), the module defaults to a neutral pass (contrast=1,
  brightness=0).

Examples
--------
See the separate “gamma_fix_EXAMPLES.txt” file for ready-to-run commands.

Versioning
----------
Keep this program additive to your baseline. Do not remove other features in
videobeaux; this module is designed as a drop-in program callable via -P.
161
docs/docs-hash_fingerprint.txt
Normal file
@@ -0,0 +1,161 @@
videobeaux — hash_fingerprint
=================================

## Description

`hash_fingerprint` is a fast, flexible hashing cataloger for media libraries within videobeaux.
It computes deterministic hashes and fingerprints to ensure data integrity, verify exports, detect duplicates, and measure perceptual similarity.

### Features
- File-level hashes: `md5`, `sha1`, `sha256` (streamed, low RAM)
- Stream-level hash: FFmpeg-based hash of decoded content
- Frame-level checksum: `framemd5` per frame
- Perceptual hash: aHash over sampled frames (Pillow required)
- Works on single files or entire directories (recursive)
- Outputs to JSON or CSV

---

## Why Use It

### 1. Integrity & Provenance
Ensure the exact same content is delivered or archived — detect even one-bit changes.

### 2. Duplicate & Version Control
Detect duplicates and content drift across export iterations.

### 3. Codec-Level Comparison
FFmpeg’s stream hash reveals content changes even when metadata or bitrates differ.

### 4. Frame-Accurate Verification
framemd5 provides true frame-level checksum comparison.

### 5. Perceptual Matching
Find visually similar clips using aHash to detect re-encodes or near-duplicates.

---

## Use Cases

- Library audits for media integrity
- Delivery verification (QC workflows)
- Regression testing for re-exports
- Duplicate detection
- Visual similarity clustering (phash)

---

## Inputs & Outputs

**Inputs**
- `-i/--input`: file or directory
- `--recursive`: traverse directories
- `--exts`: filter by extensions

**Outputs**
- `--catalog`: JSON or CSV catalog path

### Example JSON Record
```json
{
  "path": "/abs/path/to/media/bbb.mov",
  "size_bytes": 12345678,
  "file_md5": "…",
  "file_sha256": "…",
  "stream_sha256": "…",
  "framemd5": ["stream, pts, checksum…"],
  "phash_algo": "aHash",
  "phash_frames": 124,
  "phash_list": ["f3a1…", "9b7c…"]
}
```

---

## Key Flags

| Flag | Description |
|------|--------------|
| `--file-hashes` | md5, sha1, sha256 (default: md5 sha256) |
| `--stream-hash` | Compute stream hash using FFmpeg |
| `--framemd5` | Generate per-frame checksums |
| `--phash` | Enable perceptual hashing |
| `--phash-fps` | Sample frequency for phash |
| `--phash-size` | Hash matrix size (8 → 64-bit, 16 → 256-bit) |
| `--catalog` | Output catalog path (.json or .csv) |

---

## Example Commands

**Default file hash**
```bash
videobeaux -P hash_fingerprint -i ./media/bbb.mov --catalog ./out/outbbb_hashes.json -F
```

**Directory recursive hash**
```bash
videobeaux -P hash_fingerprint -i ./media --recursive --exts .mp4 .mov --catalog ./out/outdir_hashes.json -F
```

**Add stream hash**
```bash
videobeaux -P hash_fingerprint -i ./media/bbb.mov --stream-hash sha256 --stream-kind video --catalog ./out/outbbb_streamsha.json -F
```

**Frame checksum**
```bash
videobeaux -P hash_fingerprint -i ./media/bbb.mov --framemd5 --catalog ./out/outbbb_framemd5.json -F
```

**Perceptual hash**
```bash
videobeaux -P hash_fingerprint -i ./media/bbb.mov --phash --phash-fps 1.0 --phash-size 16 --catalog ./out/outbbb_phash.json -F
```

**Compare exports**
```bash
videobeaux -P hash_fingerprint -i ./out/v1 --recursive --file-hashes sha256 --catalog ./out/v1_hashes.json -F
videobeaux -P hash_fingerprint -i ./out/v2 --recursive --file-hashes sha256 --catalog ./out/v2_hashes.json -F
```
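
There is no built-in diff step yet (a `--verify` mode is listed under Future Enhancements), so comparing the two catalogs is left to external tooling. A minimal sketch using `jq` (an assumption — any JSON-aware tool works), keyed on the fields shown in the example record above:

```bash
# Reduce each catalog to "sha256  filename" lines, then diff them.
jq -r '.[] | "\(.file_sha256)  \(.path | split("/") | last)"' ./out/v1_hashes.json | sort > /tmp/v1.txt
jq -r '.[] | "\(.file_sha256)  \(.path | split("/") | last)"' ./out/v2_hashes.json | sort > /tmp/v2.txt
diff /tmp/v1.txt /tmp/v2.txt   # no output means matching files are byte-identical
```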

---

## Performance Notes

- File hashes: Fastest, limited by I/O.
- Stream hash / framemd5: CPU-intensive (requires decoding).
- Perceptual hashing: Adjustable via fps and size.
- Always prefer local disk for large scans.

---

## Best Practices

- **Ingest Audit:** `--file-hashes sha256` on daily ingest.
- **QC Re-exports:** Add `--stream-hash sha256`.
- **Forensic Accuracy:** Use `--framemd5` for exact frame-level matching.
- **Similarity:** Use `--phash --phash-fps 0.5 --phash-size 8` for clustering.

---

## Troubleshooting

- Ensure FFmpeg is installed and in PATH.
- Install Pillow for `--phash` (`pip install Pillow`).
- Create parent directories for output paths.

---

## Security & Determinism

- Hashes are deterministic and consistent across systems.
- md5 is fast and fine for duplicate detection; sha256 is collision-resistant and preferred for integrity/provenance.
- Stream and frame hashes depend on the FFmpeg decoding path.

---

## Future Enhancements

- `--verify` mode to compare current files against a stored catalog.
- Duplicate-grouping report in JSON/CSV.
146
docs/docs-meta_extraction.txt
Normal file
@@ -0,0 +1,146 @@
videobeaux — meta_extraction
================================

## Description

`meta_extraction` is a robust metadata extraction and analysis utility for video and audio files.
It leverages `ffprobe` and optional `ffmpeg` filters to capture **technical metadata**, **black-frame detection**, **audio loudness analysis**, and **sampled visual summaries**.

This function is useful for Quality Control (QC), editorial preparation, archival metadata generation, and automated video insight pipelines.

---

## Features

- **Comprehensive ffprobe core metadata**
  - Format, streams, codecs, durations, bitrates, resolution, color profile, and tags.
- **Sampled frame analysis**
  - Extract visual snapshots every N seconds.
  - Optionally compute histogram/mean color data.
- **Black frame detection**
  - Detect fade-ins/fade-outs or black commercial gaps.
- **EBU R128 Loudness**
  - Integrated Loudness (LUFS), LRA, True Peak.
- **Combined JSON catalog**
  - Merges all collected data into a structured metadata report.

---

## Why Use It

### 1. Quality Control (QC)
Detects black segments, silence, or loudness inconsistencies automatically.

### 2. Automated Asset Management
Generate sidecar JSON metadata for cataloging and search indexing.

### 3. Archival Provenance
Document codec info, stream layout, aspect ratio, and other preservation details.

### 4. Edit Prep
Helps identify fades and loudness ranges to pre-trim or normalize clips before editing.

---

## Inputs & Outputs

**Inputs**
- `-i / --input` : Input file (video or audio).
- Optional sampling & detection flags.

**Outputs**
- `--outputfile`: Output JSON metadata file.
  If omitted, defaults to `<input>.videobeaux.meta.json`.

### Example JSON Structure
```json
{
  "source": "bbb.mov",
  "format": { "duration": 8.41, "size": 45678123, "bitrate": 876000 },
  "streams": [
    { "type": "video", "codec": "h264", "width": 1920, "height": 1080 },
    { "type": "audio", "codec": "aac", "channels": 2, "samplerate": 48000 }
  ],
  "blackdetect": [ { "start": 0.00, "end": 0.20, "duration": 0.20 } ],
  "loudness": { "integrated": -14.2, "lra": 3.5, "truepeak": -1.1 },
  "samples": [ "frame_0001.jpg", "frame_0002.jpg", "..." ]
}
```
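
For automated QC gates, the report can be queried with any JSON tool. A small sketch with `jq` (an external-tool assumption; field names follow the example structure above and may differ slightly from the actual report — for instance, the bundled `media/bbb.mov.videobeaux.meta.json` nests loudness under `analysis`):

```bash
# Flag files that come in louder than a -14 LUFS web target
meta=./out/outbbb_meta_full.json
lufs=$(jq -r '.loudness.integrated // .analysis.loudness.integrated_lufs // empty' "$meta")
if [ -n "$lufs" ] && awk -v v="$lufs" 'BEGIN{exit !(v > -14)}'; then
  echo "TOO LOUD: $meta ($lufs LUFS)"
fi
```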

---

## Key Flags

| Flag | Description |
|------|--------------|
| `--sample-frames` | Enable frame sampling for visual snapshots |
| `--sample-stride` | Seconds between frame samples (default 0.5s) |
| `--sample-limit` | Max number of frames to sample |
| `--blackdetect` | Run black frame detection |
| `--black-pic-th` | Pixel threshold for black detection (default 0.06) |
| `--black-dur-min` | Minimum duration for black detection (default 0.06s) |
| `--loudness` | Run EBU R128 loudness analysis |
| `--outputfile` | Output metadata JSON |
| `-F` | Force overwrite existing output file |

---

## Example Commands

### 1. Basic Probe (format + streams + chapters)
```bash
videobeaux -P meta_extraction -i ./media/bbb.mov --outputfile ./out/outbbb_meta_basic.json -F
```

### 2. Frame Sampling
```bash
videobeaux -P meta_extraction -i ./media/bbb.mov --outputfile ./out/outbbb_meta_sampled.json --sample-frames --sample-stride 0.5 --sample-limit 200 -F
```

### 3. Black Detection (commercial fade detection)
```bash
videobeaux -P meta_extraction -i ./media/bbb.mov --outputfile ./out/outbbb_meta_black.json --blackdetect --black-pic-th 0.08 --black-dur-min 0.08 -F
```

### 4. Loudness Measurement
```bash
videobeaux -P meta_extraction -i ./media/bbb.mov --outputfile ./out/outbbb_meta_loud.json --loudness -F
```

### 5. All-in-One Comprehensive QC Report
```bash
videobeaux -P meta_extraction -i ./media/bbb.mov --outputfile ./out/outbbb_meta_full.json --sample-frames --sample-stride 0.5 --sample-limit 200 --blackdetect --black-pic-th 0.10 --black-dur-min 0.10 --loudness -F
```

---

## Performance Notes

- Frame sampling and loudness analysis invoke full decoding → expect increased runtime.
- ffprobe-only mode is extremely fast (metadata only).
- Works with all FFmpeg-supported containers (MOV, MP4, MKV, WEBM, MXF, WAV, etc.).

---

## Best Practices

- Use **ffprobe-only** mode (no `--sample-frames`/`--blackdetect`/`--loudness`) for mass ingestion.
- Use **blackdetect** for commercial-cut or fade timing analysis.
- Use **loudness** when targeting a broadcast spec (-23 LUFS EBU R128) or web delivery (-14 LUFS).
- Combine **sample-frames + loudness** for advanced QC dashboards.

---

## Troubleshooting

- Ensure FFmpeg/ffprobe are installed and in PATH.
- For noisy blackdetect results, raise `--black-pic-th` slightly.
- Loudness requires the `ebur128` filter; ensure FFmpeg is built with it (`ffmpeg -filters | grep ebur128`).

---

## Future Enhancements

- Scene detection and color histogram summaries.
- Waveform extraction for audio visualization.
- Optional export to CSV for large-scale audits.
131
docs/docs-thumbs.txt
Normal file
@@ -0,0 +1,131 @@
videobeaux — thumbs (Thumbnail / Contact Sheet)
===============================================

## Overview

`thumbs` automatically extracts representative frames and (optionally) assembles them into a single **contact sheet** image. It’s ideal for catalog previews, editorial reference, QC, and web galleries.

Two output modes:
1) **Contact Sheet** — a single tiled image with evenly spaced or scene-based thumbnails.
2) **Frame Sequence** — a directory of individual thumbnail images (e.g., for galleries or further processing).

---

## Why This Matters

- **Preview at a glance**: Summarize a clip’s content without scrubbing.
- **Editorial assistance**: Visual guide for cut points and pacing.
- **QC**: Spot exposure shifts, black frames, or artifacts quickly.
- **Web/social**: Ready-made storyboards or strips for sharing.

---

## Flags & Behavior

### Sampling
- `--fps <float>`: Sample rate in frames per second. Example: `0.5` → one frame every 2 seconds.
- `--scene`: Enable scene-based selection (uses FFmpeg scene change detection).
- `--scene-threshold <float>`: Sensitivity for `--scene`. Lower finds more cuts. Typical range: `0.3–0.6` (default: 0.4).

### Layout & Appearance
- `--tile CxR`: Grid columns x rows for the contact sheet (e.g., `6x4`). If omitted, the module can auto-fit based on sample count.
- `--scale W:H`: Scale each thumbnail. Use `-1` to preserve aspect for one dimension (e.g., `320:-1`). Use fixed values for square tiles (e.g., `320:320`).
- `--timestamps`: Overlay a timestamp on each tile (`hh:mm:ss`).
- `--label`: Add a footer label with filename and duration.
- `--bg '#RRGGBB'`: Sheet background color (default `#000000`).
- `--margin <px>`: Margin around the full sheet (default `12`).
- `--padding <px>`: Spacing between tiles (default `6`).
- `--fontfile <ttf>`: Custom font path for drawtext (optional). If omitted, the system default is used.

### Outputs
- `--contactsheet <path>`: Write a single image (PNG/JPG recommended).
- `--outdir <folder>`: Write a sequence of thumbnails (`frame_0001.jpg` etc.).
- If you provide **both**, both products are generated.
- If you only use global `-o`, it is treated as the contact sheet path.

### Defaults
- If neither `--contactsheet` nor `--outdir` is provided, the module will **require** either global `-o` or one of those flags.

---

## Examples

**1) Contact sheet, 5x4 grid, timestamps**
```bash
videobeaux -P thumbs -i ./media/bbb.mov -o ./out/bbb_contact_5x4.jpg --fps 0.5 --tile 5x4 --scale 320:-1 --timestamps --label -F
```

**2) Scene-based sheet (5x5) with timestamps**
```bash
videobeaux -P thumbs -i ./media/bbb.mov -o ./out/bbb_contact_scenes_5x5.jpg --scene --scene-threshold 0.35 --tile 5x5 --scale 320:-1 --timestamps --label -F
```

**3) Frame sequence only (no contact sheet)**
```bash
videobeaux -P thumbs -i ./media/bbb.mov --fps 1.0 --scale 480:-1 --outdir ./out/bbb_frames -F
```

**4) Contact sheet + frame sequence, custom font**
```bash
videobeaux -P thumbs -i ./media/bbb.mov -o ./out/bbb_contact_font.jpg --fps 0.5 --tile 6x4 --scale 360:-1 --timestamps --fontfile ./fonts/Inter-SemiBold.ttf --outdir ./out/bbb_frames2 -F
```

**5) Minimal (uses defaults)**
```bash
videobeaux -P thumbs -i ./media/bbb.mov -o ./out/bbb_contact_min.jpg -F
```

---

## Under the Hood

- **Frame sampling** uses `fps` or `select='gt(scene,THRESH)'` in FFmpeg filtergraphs.
- **Timestamp overlay** uses `drawtext=text='%{pts\:hms}'` positioned near the bottom of each tile.
- **Tiling** uses FFmpeg’s `tile=CxR` filter after scaling. Tile spacing and the outer border come from the `tile` filter’s `padding` and `margin` options, with `pad` available for additional framing.
- **Scene detection** leverages the scene-change score exposed through FFmpeg’s `select` filter and keeps frames whose score exceeds the threshold.

**Conceptual FFmpeg filter chain (contact sheet):**
```
-vf "fps=0.5,scale=320:-1,tile=5x4"
```
or for scenes:
```
-vf "select='gt(scene,0.4)',scale=320:-1,tile=5x4"
```

**Timestamp (drawtext) example:**
```
drawtext=text='%{pts\:hms}':x=10:y=h-th-10:fontsize=20:fontcolor=white
```
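
Putting those pieces together, here is a hedged sketch of the kind of single command the module assembles for a timestamped contact sheet (illustrative only — the real pipeline also handles labels, background color, and auto-fit tiling; the output path is hypothetical):

```bash
ffmpeg -i ./media/bbb.mov \
  -vf "fps=0.5,scale=320:-1,drawtext=text='%{pts\:hms}':x=10:y=h-th-10:fontsize=20:fontcolor=white,tile=5x4:padding=6:margin=12" \
  -frames:v 1 ./out/bbb_contact_sketch.jpg
```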

---

## Performance Notes

- Lower `--fps` or a higher `--scene-threshold` → fewer frames → faster runs.
- Large grids and high `--scale` values increase memory and processing time.
- Prefer JPG for large sheets (smaller files), PNG for lossless or text-heavy overlays.

---

## Best Practices

- For long clips, start with `--fps 0.33` or `--scene --scene-threshold 0.4` to avoid massive sheets.
- Keep `--scale` widths around `320–480` for practical sheet sizes.
- Use `--label` when sending to clients—it adds filename and duration context.

---

## Troubleshooting

- **Text rendering errors**: Provide a `--fontfile` to guarantee glyph availability.
- **Sheet too huge**: Reduce `--scale`, lower `--fps`, or reduce the `--tile` dimensions.
- **Colors look off**: Ensure correct quoting for hex colors (`'#101010'`).

---

## Future Enhancements

- Optional scene clustering and captions under tiles (shot length).
- Border styles (rounded corners, drop shadow).
- Multi-row storyboards per chapter or per detected scene.
93
docs/docs-tonemap_hdr_sdr.txt
Normal file
@@ -0,0 +1,93 @@
HDR → SDR Tone Map (videobeaux program)
=======================================

Name
----
tonemap_hdr_sdr — Convert HDR (PQ/HLG) video to SDR (BT.709) using
ffmpeg `zscale` + `tonemap` with Hable (default), Mobius, Reinhard, or Clip.

What it does
------------
• Linearizes HDR content with zscale using a specified nominal peak (nits).
• Applies an SDR-target tonemap operator (default: Hable) with optional highlight desaturation.
• Converts color primaries/transfer/matrix to BT.709 and tags the stream accordingly.
• Outputs in a user-specified pixel format (default yuv420p).
• Re-encodes video (libx264 by default); audio can be copied with --copy-audio.

When to use
-----------
Use this when you have HDR10/HLG masters that need a faithful SDR deliverable
for web or broadcast players that don’t support HDR.

Invocation pattern
------------------
videobeaux -P tonemap_hdr_sdr -i <input> --outfile <output> [options]

Arguments
---------
--outfile <path>
    Output file path for the SDR result. Use this instead of the global -o.

--algo {hable,mobius,reinhard,clip}
    The tonemap curve/operator. Defaults to "hable".
    • hable: Filmic curve with pleasing roll-off; great default.
    • mobius: Preserves mid-tones; gentle shoulder.
    • reinhard: Classic operator; can feel flatter.
    • clip: Hard clip (avoid unless you need an absolute ceiling).

--desat <float 0.0–1.0>
    Highlight desaturation applied during tonemapping. Default: 0.0.
    Try 0.15–0.35 for very hot HDR sources to avoid neon colors in speculars.

--peak <nits>
    Nominal peak luminance (nits) passed to zscale:npl for linearization.
    Default: 1000. Use 4000 for HDR masters graded to 4000 nits.

--dither {none,ordered,random,error_diffusion}
    Dithering strategy in zscale before format(). Default: error_diffusion.

--pix-fmt <ffmpeg pixel format>
    Output pixel format. Common: yuv420p (default), yuv422p10le, yuv420p10le.

--x264-preset <string>
    libx264 preset if encoding H.264. Default: medium. Examples: slow, fast.

--crf <number>
    CRF value for libx264 quality. Default: 18 (visually lossless-ish).

--copy-audio
    If set, copies audio bit-for-bit. Otherwise audio is encoded as AAC.

Standard/global flags respected (from videobeaux)
------------------------------------------------
--force    Overwrite output if it exists (injects -y into ffmpeg).

Color tagging
-------------
The program writes BT.709 tags on the output:
    -colorspace bt709 -color_trc bt709 -color_primaries bt709

Practical guidance
------------------
• Start with: --algo hable --peak 1000 --desat 0.2
• If midtones feel too flat, try --algo mobius.
• If highlights look too neon, increase --desat to ~0.25–0.35.
• For mezzanine masters, use 10-bit: --pix-fmt yuv422p10le and lower CRF.

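Putting the guidance above together, a representative first command might look
like this (input/output paths are illustrative; flags are as documented above):

    videobeaux -P tonemap_hdr_sdr -i ./media/hdr_master.mov --outfile ./out/hdr_master_sdr.mp4 --algo hable --peak 1000 --desat 0.2 --pix-fmt yuv420p --crf 18 --copy-audio --force
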
Performance notes
-----------------
• zscale + tonemap is GPU-agnostic and runs on the CPU; performance depends on your machine.
• For speed, try a faster x264 preset (e.g., --x264-preset fast) or target ProRes.

Troubleshooting
---------------
• “Washed out” SDR: confirm your player isn’t forcing HDR or BT.2020. The output is
  explicitly tagged BT.709.
• Crushed highlights: your source peak was higher than expected; try --peak 4000 or switch
  the operator to mobius.
• Banding: use a 10-bit pix-fmt (e.g., yuv420p10le) and keep --dither error_diffusion.

Changelog
---------
v1.0.1 — Switched to program-scoped --outfile (no use of global -o).
v1.0.0 — Initial release (Hable default, Mobius/Reinhard/Clip options, desat, peak, dither, pix-fmt, CRF, copy-audio).
@@ -59,3 +59,30 @@ python captburn.py -i testfile.mp4 -t testfile.json --style popon --align 2 --fo
20) Top-left locator/date (tiny & restrained)
python captburn.py -i testfile.mp4 -t testfile.json --style popon --align 7 --font "IBM Plex Sans" --font-size 28 --outline "#000000" --outline-width 2 --margin-l 80 --margin-v 80


videobeaux -P captburn --input testfile.mp4 --output out/default.mp4
videobeaux -P captburn --input testfile.mp4 --output out/trans_explicit.mp4 -t media/testfile.json
videobeaux -P captburn --input testfile.mp4 --output out/reburn_from_capton.mp4 --caption media/testfile.captburn.json
videobeaux -P captburn --input testfile.mp4 --output out/doc_center.mp4 --style popon --font "Helvetica Neue" --font-size 38 --align 2 --primary "#FFFFFF" --outline "#000000" --outline-width 2 --shadow 2
videobeaux -P captburn --input testfile.mp4 --output out/vintage_top_right.mp4 --style popon --font "Baskerville" --italic --font-size 48 --align 9 --margin-r 120 --margin-v 80
videobeaux -P captburn --input testfile.mp4 --output out/mono_terminal.mp4 --font "Courier New" --font-size 34 --primary "#00FF00" --outline "#002200"
videobeaux -P captburn --input testfile.mp4 --output out/opaque_box.mp4 --border-style 3 --back "#000000" --back-opacity 0.6
videobeaux -P captburn --input testfile.mp4 --output out/youtube_shadow.mp4 --font "Arial Black" --font-size 48 --shadow 4 --outline "#101010" --outline-width 3
videobeaux -P captburn --input testfile.mp4 --output out/bottom_center_film.mp4 --align 2 --margin-v 60 --font-size 36
videobeaux -P captburn --input testfile.mp4 --output out/bottom_left.mp4 --align 1 --margin-l 100 --margin-v 60 --font-size 36
videobeaux -P captburn --input testfile.mp4 --output out/bottom_right.mp4 --align 3 --margin-r 100 --margin-v 60 --font-size 36
videobeaux -P captburn --input testfile.mp4 --output out/top_left.mp4 --align 7 --margin-l 60 --margin-v 80 --font-size 34
videobeaux -P captburn --input testfile.mp4 --output out/top_center.mp4 --align 8 --margin-v 100 --font-size 34
videobeaux -P captburn --input testfile.mp4 --output out/top_right.mp4 --align 9 --margin-r 80 --margin-v 100 --font-size 34
videobeaux -P captburn --input testfile.mp4 --output out/rotate_slight.mp4 --align 2 --font "Source Sans 3" --font-size 40 --rotate -2.0 --outline "#000000" --outline-width 2 --margin-v 80
videobeaux -P captburn --input testfile.mp4 --output out/move_slide_in.mp4 --align 2 --move 100,620,100,540,0,1000 --font-size 36
videobeaux -P captburn --input testfile.mp4 --output out/spin_quote.mp4 --align 8 --rotate 10 --font "Baskerville" --italic --font-size 42
videobeaux -P captburn --input testfile.mp4 --output out/painton.mp4 --style painton --font "Futura" --font-size 40
videobeaux -P captburn --input testfile.mp4 --output out/rollup2.mp4 --style rollup --rollup-lines 2 --words-per-line 7 --font-size 34
videobeaux -P captburn --input testfile.mp4 --output out/rollup4_small.mp4 --style rollup --rollup-lines 4 --font-size 30
videobeaux -P captburn --input testfile.mp4 --output out/scale_compress.mp4 --scale-x 85 --scale-y 100
videobeaux -P captburn --input testfile.mp4 --output out/spacing_wide.mp4 --spacing 1.8
videobeaux -P captburn --input testfile.mp4 --output out/small_wide_screen.mp4 --font-size 28 --margin-v 120
videobeaux -P captburn --input testfile.mp4 --output out/fast_preview.mp4 --crf 28 --preset ultrafast
videobeaux -P captburn --input testfile.mp4 --output out/archival_quality.mp4 --crf 16 --preset slow --vcodec libx264
87
media/bbb.mov.videobeaux.meta.json
Normal file
@@ -0,0 +1,87 @@
|
||||
{
|
||||
"input_path": "/Users/tgm/Documents/SPLASH/videobeaux/media/bbb.mov",
|
||||
"generated_at_utc": "2025-11-09T18:39:51Z",
|
||||
"provenance": {
|
||||
"version": "videobeaux meta_extract v1",
|
||||
"ffprobe_cmd": "ffprobe -v error -print_format json -show_format -show_streams -show_chapters /Users/tgm/Documents/SPLASH/videobeaux/media/bbb.mov"
|
||||
},
|
||||
"format": {
|
||||
"filename": "/Users/tgm/Documents/SPLASH/videobeaux/media/bbb.mov",
|
||||
"format_name": "mov,mp4,m4a,3gp,3g2,mj2",
|
||||
"duration_sec": 8.4055,
|
||||
"size_bytes": 3668431,
|
||||
"bitrate_mbps": 3.491457,
|
||||
"tags": {
|
||||
"major_brand": "qt ",
|
||||
"minor_version": "0",
|
||||
"compatible_brands": "qt ",
|
||||
"creation_time": "2025-09-09T01:31:13.000000Z",
|
||||
"comment": "Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net",
|
||||
"artist": "Blender Foundation 2008, Janus Bager Kristensen 2013",
|
||||
"title": "Big Buck Bunny, Sunflower version",
|
||||
"genre": "Animation",
|
||||
"composer": "Sacha Goedegebure"
|
||||
}
|
||||
},
|
||||
"streams": {
|
||||
"video": [
|
||||
{
|
||||
"index": 1,
|
||||
"codec_name": "h264",
|
||||
"profile": "High",
|
||||
"width": 1920,
|
||||
"height": 1080,
|
||||
"pix_fmt": "yuv420p",
|
||||
"sar": "1:1",
|
||||
"dar": "16:9",
|
||||
"avg_fps": "3380000/56357",
|
||||
"avg_fps_float": 59.974803484926454,
|
||||
"time_base": "1/60000",
|
||||
"color": {
|
||||
"space": null,
|
||||
"primaries": null,
|
||||
"transfer": null
|
||||
},
|
||||
"rotation": null,
|
||||
"nb_frames": 507.0
|
||||
}
|
||||
],
|
||||
"audio": [
|
||||
{
|
||||
"index": 0,
|
||||
"codec_name": "aac",
|
||||
"sample_rate": 48000.0,
|
||||
"channels": 6,
|
||||
"channel_layout": "5.1",
|
||||
"bit_rate": 262717.0
|
||||
}
|
||||
],
|
||||
"subtitle": []
|
||||
},
|
||||
"chapters": [],
|
||||
"derived": {
|
||||
"has_video": true,
|
||||
"has_audio": true,
|
||||
"has_subtitles": false,
|
||||
"video_codecs": [
|
||||
"h264"
|
||||
],
|
||||
"audio_codecs": [
|
||||
"aac"
|
||||
],
|
||||
"container": "mov",
|
||||
"display_aspect_ratio": "16:9",
|
||||
"duration_hms": "00:00:08.41"
|
||||
},
|
||||
"sampling": {
|
||||
"enabled": false
|
||||
},
|
||||
"analysis": {
|
||||
"loudness": {
|
||||
"enabled": true,
|
||||
"integrated_lufs": null,
|
||||
"lra": null,
|
||||
"true_peak_dbfs": null
|
||||
}
|
||||
}
|
||||
}
|
||||
BIN
media/logo.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 5.6 KiB
BIN
out/bbb_contact_scenes_5x5.jpg
Normal file
Binary file not shown.
After Width: | Height: | Size: 50 KiB
8
out/outbbb_hashes.json
Normal file
@@ -0,0 +1,8 @@
[
  {
    "path": "/Users/tgm/Documents/SPLASH/videobeaux/media/bbb.mov",
    "size_bytes": 3668431,
    "file_md5": "0960b8a00d672b5f6513dad980eedf91",
    "file_sha256": "4c93138bf95abe47b9b5f5164fb25abef51e7ade288696858c76028e8f12ac10"
  }
]
109
out/rollup4_small.captburn.ass
Normal file
@@ -0,0 +1,109 @@
|
||||
[Script Info]
|
||||
; Script generated by captburn (module)
|
||||
ScriptType: v4.00+
|
||||
WrapStyle: 2
|
||||
ScaledBorderAndShadow: yes
|
||||
PlayResX: 1280
|
||||
PlayResY: 720
|
||||
|
||||
[V4+ Styles]
|
||||
Format: Name,Fontname,Fontsize,PrimaryColour,SecondaryColour,OutlineColour,BackColour,Bold,Italic,Underline,StrikeOut,ScaleX,ScaleY,Spacing,Angle,BorderStyle,Outline,Shadow,Alignment,MarginL,MarginR,MarginV,Encoding
|
||||
Style: CaptBurn,Arial,30,&H00FFFFFF,&H00FFFFFF,&H00000000,&H00000000,0,0,0,0,100,100,0.0,0,1,3.0,0.0,2,60,60,40,1
|
||||
[Events]
|
||||
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
|
||||
Dialogue: 0,0:00:00.12,0:00:00.84,CaptBurn,,0,0,0,,{\an2}because
|
||||
Dialogue: 0,0:00:00.12,0:00:02.13,CaptBurn,,0,0,0,,{\an2}because as
|
||||
Dialogue: 0,0:00:01.98,0:00:02.23,CaptBurn,,0,0,0,,{\an2}because as we
|
||||
Dialogue: 0,0:00:02.13,0:00:02.38,CaptBurn,,0,0,0,,{\an2}because as we were
|
||||
Dialogue: 0,0:00:02.22,0:00:02.47,CaptBurn,,0,0,0,,{\an2}because as we were just
|
||||
Dialogue: 0,0:00:02.28,0:00:02.73,CaptBurn,,0,0,0,,{\an2}because as we were just talking
|
||||
Dialogue: 0,0:00:02.43,0:00:02.88,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout
|
||||
Dialogue: 0,0:00:02.73,0:00:02.98,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a
|
||||
Dialogue: 0,0:00:02.88,0:00:03.27,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment
|
||||
Dialogue: 0,0:00:02.94,0:00:03.69,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago
|
||||
Dialogue: 0,0:00:03.27,0:00:04.17,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with
|
||||
Dialogue: 0,0:00:03.78,0:00:04.41,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith
|
||||
Dialogue: 0,0:00:04.17,0:00:04.95,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan
|
||||
Dialogue: 0,0:00:04.41,0:00:06.24,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after
|
||||
Dialogue: 0,0:00:05.46,0:00:06.69,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one
|
||||
Dialogue: 0,0:00:06.30,0:00:06.84,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of
|
||||
Dialogue: 0,0:00:06.69,0:00:07.08,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of\Nthe
|
||||
Dialogue: 0,0:00:06.84,0:00:07.83,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of\Nthe doge
|
||||
Dialogue: 0,0:00:07.08,0:00:09.12,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of\Nthe doge employees
|
||||
Dialogue: 0,0:00:08.31,0:00:09.66,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of\Nthe doge employees was
|
||||
Dialogue: 0,0:00:09.12,0:00:10.14,CaptBurn,,0,0,0,,{\an2}because as we were just talking\Nabout a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly
|
||||
Dialogue: 0,0:00:09.72,0:00:10.53,CaptBurn,,0,0,0,,{\an2}about a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly\Nattacked
|
||||
Dialogue: 0,0:00:10.14,0:00:10.62,CaptBurn,,0,0,0,,{\an2}about a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly\Nattacked in
|
||||
Dialogue: 0,0:00:10.53,0:00:11.16,CaptBurn,,0,0,0,,{\an2}about a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly\Nattacked in washington
|
||||
Dialogue: 0,0:00:10.62,0:00:11.34,CaptBurn,,0,0,0,,{\an2}about a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly\Nattacked in washington d
|
||||
Dialogue: 0,0:00:11.16,0:00:11.63,CaptBurn,,0,0,0,,{\an2}about a moment ago with\Nwith alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c
|
||||
Dialogue: 0,0:00:11.34,0:00:11.85,CaptBurn,,0,0,0,,{\an2}with alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c\Nthat's
|
||||
Dialogue: 0,0:00:11.65,0:00:11.97,CaptBurn,,0,0,0,,{\an2}with alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c\Nthat's what
|
||||
Dialogue: 0,0:00:11.85,0:00:12.21,CaptBurn,,0,0,0,,{\an2}with alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c\Nthat's what donald
|
||||
Dialogue: 0,0:00:11.97,0:00:12.42,CaptBurn,,0,0,0,,{\an2}with alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump
|
||||
Dialogue: 0,0:00:12.21,0:00:12.87,CaptBurn,,0,0,0,,{\an2}with alan after one of\Nthe doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used
|
||||
Dialogue: 0,0:00:12.42,0:00:13.23,CaptBurn,,0,0,0,,{\an2}the doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used\Nas
|
||||
Dialogue: 0,0:00:12.87,0:00:14.52,CaptBurn,,0,0,0,,{\an2}the doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used\Nas justification
|
||||
Dialogue: 0,0:00:13.77,0:00:14.61,CaptBurn,,0,0,0,,{\an2}the doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used\Nas justification to
|
||||
Dialogue: 0,0:00:14.52,0:00:15.00,CaptBurn,,0,0,0,,{\an2}the doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used\Nas justification to send
|
||||
Dialogue: 0,0:00:14.61,0:00:15.36,CaptBurn,,0,0,0,,{\an2}the doge employees was allegedly\Nattacked in washington d c\Nthat's what donald trump used\Nas justification to send in
|
||||
Dialogue: 0,0:00:15.00,0:00:16.95,CaptBurn,,0,0,0,,{\an2}attacked in washington d c\Nthat's what donald trump used\Nas justification to send in\Nfederal
|
||||
Dialogue: 0,0:00:16.59,0:00:17.31,CaptBurn,,0,0,0,,{\an2}attacked in washington d c\Nthat's what donald trump used\Nas justification to send in\Nfederal troops
|
||||
Dialogue: 0,0:00:16.95,0:00:17.61,CaptBurn,,0,0,0,,{\an2}attacked in washington d c\Nthat's what donald trump used\Nas justification to send in\Nfederal troops into
|
||||
Dialogue: 0,0:00:17.31,0:00:18.18,CaptBurn,,0,0,0,,{\an2}attacked in washington d c\Nthat's what donald trump used\Nas justification to send in\Nfederal troops into washington
|
||||
Dialogue: 0,0:00:17.61,0:00:18.39,CaptBurn,,0,0,0,,{\an2}attacked in washington d c\Nthat's what donald trump used\Nas justification to send in\Nfederal troops into washington d
|
||||
Dialogue: 0,0:00:18.18,0:00:18.62,CaptBurn,,0,0,0,,{\an2}that's what donald trump used\Nas justification to send in\Nfederal troops into washington d\Nc
|
||||
Dialogue: 0,0:00:18.39,0:00:18.84,CaptBurn,,0,0,0,,{\an2}that's what donald trump used\Nas justification to send in\Nfederal troops into washington d\Nc to
|
||||
Dialogue: 0,0:00:18.63,0:00:19.02,CaptBurn,,0,0,0,,{\an2}that's what donald trump used\Nas justification to send in\Nfederal troops into washington d\Nc to to
|
||||
Dialogue: 0,0:00:18.90,0:00:19.20,CaptBurn,,0,0,0,,{\an2}that's what donald trump used\Nas justification to send in\Nfederal troops into washington d\Nc to to get
|
||||
Dialogue: 0,0:00:19.02,0:00:19.38,CaptBurn,,0,0,0,,{\an2}that's what donald trump used\Nas justification to send in\Nfederal troops into washington d\Nc to to get things
|
||||
Dialogue: 0,0:00:19.20,0:00:19.56,CaptBurn,,0,0,0,,{\an2}as justification to send in\Nfederal troops into washington d\Nc to to get things\Nunder
|
||||
Dialogue: 0,0:00:19.38,0:00:20.01,CaptBurn,,0,0,0,,{\an2}as justification to send in\Nfederal troops into washington d\Nc to to get things\Nunder control
|
||||
Dialogue: 0,0:00:19.56,0:00:20.10,CaptBurn,,0,0,0,,{\an2}as justification to send in\Nfederal troops into washington d\Nc to to get things\Nunder control the
|
||||
Dialogue: 0,0:00:20.01,0:00:20.31,CaptBurn,,0,0,0,,{\an2}as justification to send in\Nfederal troops into washington d\Nc to to get things\Nunder control the car
|
||||
Dialogue: 0,0:00:20.10,0:00:20.70,CaptBurn,,0,0,0,,{\an2}as justification to send in\Nfederal troops into washington d\Nc to to get things\Nunder control the car jacking
|
||||
Dialogue: 0,0:00:20.31,0:00:21.33,CaptBurn,,0,0,0,,{\an2}federal troops into washington d\Nc to to get things\Nunder control the car jacking\Nsituation
|
||||
Dialogue: 0,0:00:20.70,0:00:21.51,CaptBurn,,0,0,0,,{\an2}federal troops into washington d\Nc to to get things\Nunder control the car jacking\Nsituation he
|
||||
Dialogue: 0,0:00:21.33,0:00:21.81,CaptBurn,,0,0,0,,{\an2}federal troops into washington d\Nc to to get things\Nunder control the car jacking\Nsituation he used
|
||||
Dialogue: 0,0:00:21.51,0:00:22.11,CaptBurn,,0,0,0,,{\an2}federal troops into washington d\Nc to to get things\Nunder control the car jacking\Nsituation he used that
|
||||
Dialogue: 0,0:00:21.81,0:00:23.31,CaptBurn,,0,0,0,,{\an2}federal troops into washington d\Nc to to get things\Nunder control the car jacking\Nsituation he used that and
|
||||
Dialogue: 0,0:00:22.83,0:00:23.50,CaptBurn,,0,0,0,,{\an2}c to to get things\Nunder control the car jacking\Nsituation he used that and\Ni
|
||||
Dialogue: 0,0:00:23.34,0:00:23.70,CaptBurn,,0,0,0,,{\an2}c to to get things\Nunder control the car jacking\Nsituation he used that and\Ni i
|
||||
Dialogue: 0,0:00:23.50,0:00:23.85,CaptBurn,,0,0,0,,{\an2}c to to get things\Nunder control the car jacking\Nsituation he used that and\Ni i know
|
||||
Dialogue: 0,0:00:23.70,0:00:23.97,CaptBurn,,0,0,0,,{\an2}c to to get things\Nunder control the car jacking\Nsituation he used that and\Ni i know it's
|
||||
Dialogue: 0,0:00:23.85,0:00:24.15,CaptBurn,,0,0,0,,{\an2}c to to get things\Nunder control the car jacking\Nsituation he used that and\Ni i know it's hard
|
||||
Dialogue: 0,0:00:23.97,0:00:24.24,CaptBurn,,0,0,0,,{\an2}under control the car jacking\Nsituation he used that and\Ni i know it's hard\Nto
|
||||
Dialogue: 0,0:00:24.15,0:00:24.48,CaptBurn,,0,0,0,,{\an2}under control the car jacking\Nsituation he used that and\Ni i know it's hard\Nto predict
|
||||
Dialogue: 0,0:00:24.24,0:00:24.57,CaptBurn,,0,0,0,,{\an2}under control the car jacking\Nsituation he used that and\Ni i know it's hard\Nto predict the
|
||||
Dialogue: 0,0:00:24.48,0:00:24.93,CaptBurn,,0,0,0,,{\an2}under control the car jacking\Nsituation he used that and\Ni i know it's hard\Nto predict the future
|
||||
Dialogue: 0,0:00:24.57,0:00:25.26,CaptBurn,,0,0,0,,{\an2}under control the car jacking\Nsituation he used that and\Ni i know it's hard\Nto predict the future mark
|
||||
Dialogue: 0,0:00:24.93,0:00:25.71,CaptBurn,,0,0,0,,{\an2}situation he used that and\Ni i know it's hard\Nto predict the future mark\Nbut
|
||||
Dialogue: 0,0:00:25.26,0:00:26.07,CaptBurn,,0,0,0,,{\an2}situation he used that and\Ni i know it's hard\Nto predict the future mark\Nbut you
|
||||
Dialogue: 0,0:00:25.92,0:00:26.22,CaptBurn,,0,0,0,,{\an2}situation he used that and\Ni i know it's hard\Nto predict the future mark\Nbut you can
|
||||
Dialogue: 0,0:00:26.07,0:00:26.61,CaptBurn,,0,0,0,,{\an2}situation he used that and\Ni i know it's hard\Nto predict the future mark\Nbut you can imagine
|
||||
Dialogue: 0,0:00:26.22,0:00:26.70,CaptBurn,,0,0,0,,{\an2}situation he used that and\Ni i know it's hard\Nto predict the future mark\Nbut you can imagine the
|
||||
Dialogue: 0,0:00:26.61,0:00:27.36,CaptBurn,,0,0,0,,{\an2}i i know it's hard\Nto predict the future mark\Nbut you can imagine the\Nadministration
|
||||
Dialogue: 0,0:00:26.70,0:00:27.84,CaptBurn,,0,0,0,,{\an2}i i know it's hard\Nto predict the future mark\Nbut you can imagine the\Nadministration using
|
||||
Dialogue: 0,0:00:27.36,0:00:28.14,CaptBurn,,0,0,0,,{\an2}i i know it's hard\Nto predict the future mark\Nbut you can imagine the\Nadministration using this
|
||||
Dialogue: 0,0:00:27.84,0:00:28.26,CaptBurn,,0,0,0,,{\an2}i i know it's hard\Nto predict the future mark\Nbut you can imagine the\Nadministration using this as
|
||||
Dialogue: 0,0:00:28.14,0:00:28.39,CaptBurn,,0,0,0,,{\an2}i i know it's hard\Nto predict the future mark\Nbut you can imagine the\Nadministration using this as a
|
||||
Dialogue: 0,0:00:28.26,0:00:29.10,CaptBurn,,0,0,0,,{\an2}to predict the future mark\Nbut you can imagine the\Nadministration using this as a\Njustification
|
||||
Dialogue: 0,0:00:28.32,0:00:29.25,CaptBurn,,0,0,0,,{\an2}to predict the future mark\Nbut you can imagine the\Nadministration using this as a\Njustification for
|
||||
Dialogue: 0,0:00:29.10,0:00:29.73,CaptBurn,,0,0,0,,{\an2}to predict the future mark\Nbut you can imagine the\Nadministration using this as a\Njustification for something
|
||||
Dialogue: 0,0:00:29.25,0:00:31.80,CaptBurn,,0,0,0,,{\an2}to predict the future mark\Nbut you can imagine the\Nadministration using this as a\Njustification for something i
|
||||
Dialogue: 0,0:00:31.53,0:00:32.19,CaptBurn,,0,0,0,,{\an2}to predict the future mark\Nbut you can imagine the\Nadministration using this as a\Njustification for something i must
|
||||
Dialogue: 0,0:00:32.01,0:00:32.40,CaptBurn,,0,0,0,,{\an2}but you can imagine the\Nadministration using this as a\Njustification for something i must\Nadmit
|
||||
Dialogue: 0,0:00:32.19,0:00:32.67,CaptBurn,,0,0,0,,{\an2}but you can imagine the\Nadministration using this as a\Njustification for something i must\Nadmit i'm
|
||||
Dialogue: 0,0:00:32.50,0:00:32.94,CaptBurn,,0,0,0,,{\an2}but you can imagine the\Nadministration using this as a\Njustification for something i must\Nadmit i'm on
|
||||
Dialogue: 0,0:00:32.67,0:00:33.14,CaptBurn,,0,0,0,,{\an2}but you can imagine the\Nadministration using this as a\Njustification for something i must\Nadmit i'm on a
|
||||
Dialogue: 0,0:00:33.06,0:00:33.33,CaptBurn,,0,0,0,,{\an2}but you can imagine the\Nadministration using this as a\Njustification for something i must\Nadmit i'm on a metal
|
||||
Dialogue: 0,0:00:33.14,0:00:33.63,CaptBurn,,0,0,0,,{\an2}administration using this as a\Njustification for something i must\Nadmit i'm on a metal\Nloss
|
||||
Dialogue: 0,0:00:33.33,0:00:33.72,CaptBurn,,0,0,0,,{\an2}administration using this as a\Njustification for something i must\Nadmit i'm on a metal\Nloss to
|
||||
Dialogue: 0,0:00:33.63,0:00:33.99,CaptBurn,,0,0,0,,{\an2}administration using this as a\Njustification for something i must\Nadmit i'm on a metal\Nloss to guess
|
||||
Dialogue: 0,0:00:33.72,0:00:34.14,CaptBurn,,0,0,0,,{\an2}administration using this as a\Njustification for something i must\Nadmit i'm on a metal\Nloss to guess as
|
||||
Dialogue: 0,0:00:33.99,0:00:34.24,CaptBurn,,0,0,0,,{\an2}administration using this as a\Njustification for something i must\Nadmit i'm on a metal\Nloss to guess as to
|
||||
Dialogue: 0,0:00:34.14,0:00:34.41,CaptBurn,,0,0,0,,{\an2}justification for something i must\Nadmit i'm on a metal\Nloss to guess as to\Nwhat
|
||||
Dialogue: 0,0:00:34.23,0:00:34.77,CaptBurn,,0,0,0,,{\an2}justification for something i must\Nadmit i'm on a metal\Nloss to guess as to\Nwhat happens
|
||||
Dialogue: 0,0:00:34.41,0:00:35.19,CaptBurn,,0,0,0,,{\an2}justification for something i must\Nadmit i'm on a metal\Nloss to guess as to\Nwhat happens next
Dialogue: 0,0:00:34.77,0:00:35.97,CaptBurn,,0,0,0,,{\an2}justification for something i must\Nadmit i'm on a metal\Nloss to guess as to\Nwhat happens next but
Dialogue: 0,0:00:35.58,0:00:36.75,CaptBurn,,0,0,0,,{\an2}justification for something i must\Nadmit i'm on a metal\Nloss to guess as to\Nwhat happens next but courage
506
out/rollup4_small.captburn.json
Normal file
@@ -0,0 +1,506 @@
|
||||
{
|
||||
"version": "1.0.2",
|
||||
"style": {
|
||||
"name": "CaptBurn",
|
||||
"fontname": "Arial",
|
||||
"fontsize": 30,
|
||||
"primary": "#FFFFFF",
|
||||
"outline": "#000000",
|
||||
"outline_width": 3.0,
|
||||
"shadow": 0.0,
|
||||
"back": "#000000",
|
||||
"back_opacity": 0.0,
|
||||
"bold": false,
|
||||
"italic": false,
|
||||
"scale_x": 100,
|
||||
"scale_y": 100,
|
||||
"spacing": 0.0,
|
||||
"margin_l": 60,
|
||||
"margin_r": 60,
|
||||
"margin_v": 40,
|
||||
"align": 2,
|
||||
"border_style": 1
|
||||
},
|
||||
"events": [
|
||||
{
|
||||
"start": 0.12,
|
||||
"end": 0.84,
|
||||
"text": "because"
|
||||
},
|
||||
{
|
||||
"start": 0.12,
|
||||
"end": 2.13,
|
||||
"text": "because as"
|
||||
},
|
||||
{
|
||||
"start": 1.98,
|
||||
"end": 2.23,
|
||||
"text": "because as we"
|
||||
},
|
||||
{
|
||||
"start": 2.13,
|
||||
"end": 2.38,
|
||||
"text": "because as we were"
|
||||
},
|
||||
{
|
||||
"start": 2.219213,
|
||||
"end": 2.469213,
|
||||
"text": "because as we were just"
|
||||
},
|
||||
{
|
||||
"start": 2.28,
|
||||
"end": 2.73,
|
||||
"text": "because as we were just talking"
|
||||
},
|
||||
{
|
||||
"start": 2.43,
|
||||
"end": 2.88,
|
||||
"text": "because as we were just talking\\Nabout"
|
||||
},
|
||||
{
|
||||
"start": 2.73,
|
||||
"end": 2.98,
|
||||
"text": "because as we were just talking\\Nabout a"
|
||||
},
|
||||
{
|
||||
"start": 2.88,
|
||||
"end": 3.27,
|
||||
"text": "because as we were just talking\\Nabout a moment"
|
||||
},
|
||||
{
|
||||
"start": 2.94,
|
||||
"end": 3.69,
|
||||
"text": "because as we were just talking\\Nabout a moment ago"
|
||||
},
|
||||
{
|
||||
"start": 3.27,
|
||||
"end": 4.17,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with"
|
||||
},
|
||||
{
|
||||
"start": 3.78,
|
||||
"end": 4.41,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith"
|
||||
},
|
||||
{
|
||||
"start": 4.17,
|
||||
"end": 4.95,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan"
|
||||
},
|
||||
{
|
||||
"start": 4.41,
|
||||
"end": 6.239729,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after"
|
||||
},
|
||||
{
|
||||
"start": 5.46,
|
||||
"end": 6.69,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one"
|
||||
},
|
||||
{
|
||||
"start": 6.3,
|
||||
"end": 6.84,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of"
|
||||
},
|
||||
{
|
||||
"start": 6.69,
|
||||
"end": 7.08,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of\\Nthe"
|
||||
},
|
||||
{
|
||||
"start": 6.84,
|
||||
"end": 7.829846,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of\\Nthe doge"
|
||||
},
|
||||
{
|
||||
"start": 7.08,
|
||||
"end": 9.12,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of\\Nthe doge employees"
|
||||
},
|
||||
{
|
||||
"start": 8.310181,
|
||||
"end": 9.66,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of\\Nthe doge employees was"
|
||||
},
|
||||
{
|
||||
"start": 9.12,
|
||||
"end": 10.14,
|
||||
"text": "because as we were just talking\\Nabout a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly"
|
||||
},
|
||||
{
|
||||
"start": 9.72,
|
||||
"end": 10.53,
|
||||
"text": "about a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly\\Nattacked"
|
||||
},
|
||||
{
|
||||
"start": 10.14,
|
||||
"end": 10.62,
|
||||
"text": "about a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly\\Nattacked in"
|
||||
},
|
||||
{
|
||||
"start": 10.53,
|
||||
"end": 11.16,
|
||||
"text": "about a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington"
|
||||
},
|
||||
{
|
||||
"start": 10.62,
|
||||
"end": 11.34,
|
||||
"text": "about a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d"
|
||||
},
|
||||
{
|
||||
"start": 11.16,
|
||||
"end": 11.633912,
|
||||
"text": "about a moment ago with\\Nwith alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c"
|
||||
},
|
||||
{
|
||||
"start": 11.34,
|
||||
"end": 11.85,
|
||||
"text": "with alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c\\Nthat's"
|
||||
},
|
||||
{
|
||||
"start": 11.645056,
|
||||
"end": 11.96972,
|
||||
"text": "with alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c\\Nthat's what"
|
||||
},
|
||||
{
|
||||
"start": 11.85,
|
||||
"end": 12.21,
|
||||
"text": "with alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald"
|
||||
},
|
||||
{
|
||||
"start": 11.969919,
|
||||
"end": 12.42,
|
||||
"text": "with alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump"
|
||||
},
|
||||
{
|
||||
"start": 12.21,
|
||||
"end": 12.87,
|
||||
"text": "with alan after one of\\Nthe doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used"
|
||||
},
|
||||
{
|
||||
"start": 12.42,
|
||||
"end": 13.23,
|
||||
"text": "the doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used\\Nas"
|
||||
},
|
||||
{
|
||||
"start": 12.87,
|
||||
"end": 14.52,
|
||||
"text": "the doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used\\Nas justification"
|
||||
},
|
||||
{
|
||||
"start": 13.77,
|
||||
"end": 14.61,
|
||||
"text": "the doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used\\Nas justification to"
|
||||
},
|
||||
{
|
||||
"start": 14.52,
|
||||
"end": 15.0,
|
||||
"text": "the doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used\\Nas justification to send"
|
||||
},
|
||||
{
|
||||
"start": 14.61,
|
||||
"end": 15.36,
|
||||
"text": "the doge employees was allegedly\\Nattacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in"
|
||||
},
|
||||
{
|
||||
"start": 15.0,
|
||||
"end": 16.95,
|
||||
"text": "attacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in\\Nfederal"
|
||||
},
|
||||
{
|
||||
"start": 16.590125,
|
||||
"end": 17.31,
|
||||
"text": "attacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in\\Nfederal troops"
|
||||
},
|
||||
{
|
||||
"start": 16.95,
|
||||
"end": 17.61,
|
||||
"text": "attacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in\\Nfederal troops into"
|
||||
},
|
||||
{
|
||||
"start": 17.31,
|
||||
"end": 18.18,
|
||||
"text": "attacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in\\Nfederal troops into washington"
|
||||
},
|
||||
{
|
||||
"start": 17.61,
|
||||
"end": 18.39,
|
||||
"text": "attacked in washington d c\\Nthat's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d"
|
||||
},
|
||||
{
|
||||
"start": 18.18,
|
||||
"end": 18.617108,
|
||||
"text": "that's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d\\Nc"
|
||||
},
|
||||
{
|
||||
"start": 18.39,
|
||||
"end": 18.838445,
|
||||
"text": "that's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d\\Nc to"
|
||||
},
|
||||
{
|
||||
"start": 18.63,
|
||||
"end": 19.02,
|
||||
"text": "that's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d\\Nc to to"
|
||||
},
|
||||
{
|
||||
"start": 18.899874,
|
||||
"end": 19.2,
|
||||
"text": "that's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d\\Nc to to get"
|
||||
},
|
||||
{
|
||||
"start": 19.02,
|
||||
"end": 19.38,
|
||||
"text": "that's what donald trump used\\Nas justification to send in\\Nfederal troops into washington d\\Nc to to get things"
|
||||
},
|
||||
{
|
||||
"start": 19.2,
|
||||
"end": 19.56,
|
||||
"text": "as justification to send in\\Nfederal troops into washington d\\Nc to to get things\\Nunder"
|
||||
},
|
||||
{
|
||||
"start": 19.38,
|
||||
"end": 20.009216,
|
||||
"text": "as justification to send in\\Nfederal troops into washington d\\Nc to to get things\\Nunder control"
|
||||
},
|
||||
{
|
||||
"start": 19.56,
|
||||
"end": 20.099663,
|
||||
"text": "as justification to send in\\Nfederal troops into washington d\\Nc to to get things\\Nunder control the"
|
||||
},
|
||||
{
|
||||
"start": 20.010784,
|
||||
"end": 20.31,
|
||||
"text": "as justification to send in\\Nfederal troops into washington d\\Nc to to get things\\Nunder control the car"
|
||||
},
|
||||
{
|
||||
"start": 20.1,
|
||||
"end": 20.7,
|
||||
"text": "as justification to send in\\Nfederal troops into washington d\\Nc to to get things\\Nunder control the car jacking"
|
||||
},
|
||||
{
|
||||
"start": 20.31,
|
||||
"end": 21.33,
|
||||
"text": "federal troops into washington d\\Nc to to get things\\Nunder control the car jacking\\Nsituation"
|
||||
},
|
||||
{
|
||||
"start": 20.700273,
|
||||
"end": 21.51,
|
||||
"text": "federal troops into washington d\\Nc to to get things\\Nunder control the car jacking\\Nsituation he"
|
||||
},
|
||||
{
|
||||
"start": 21.33,
|
||||
"end": 21.806191,
|
||||
"text": "federal troops into washington d\\Nc to to get things\\Nunder control the car jacking\\Nsituation he used"
|
||||
},
|
||||
{
|
||||
"start": 21.51,
|
||||
"end": 22.11,
|
||||
"text": "federal troops into washington d\\Nc to to get things\\Nunder control the car jacking\\Nsituation he used that"
|
||||
},
|
||||
{
|
||||
"start": 21.806191,
|
||||
"end": 23.31,
|
||||
"text": "federal troops into washington d\\Nc to to get things\\Nunder control the car jacking\\Nsituation he used that and"
|
||||
},
|
||||
{
|
||||
"start": 22.83,
|
||||
"end": 23.500197,
|
||||
"text": "c to to get things\\Nunder control the car jacking\\Nsituation he used that and\\Ni"
|
||||
},
|
||||
{
|
||||
"start": 23.34,
|
||||
"end": 23.699352,
|
||||
"text": "c to to get things\\Nunder control the car jacking\\Nsituation he used that and\\Ni i"
|
||||
},
|
||||
{
|
||||
"start": 23.500197,
|
||||
"end": 23.85,
|
||||
"text": "c to to get things\\Nunder control the car jacking\\Nsituation he used that and\\Ni i know"
|
||||
},
|
||||
{
|
||||
"start": 23.7,
|
||||
"end": 23.97,
|
||||
"text": "c to to get things\\Nunder control the car jacking\\Nsituation he used that and\\Ni i know it's"
|
||||
},
|
||||
{
|
||||
"start": 23.85,
|
||||
"end": 24.15,
|
||||
"text": "c to to get things\\Nunder control the car jacking\\Nsituation he used that and\\Ni i know it's hard"
|
||||
},
|
||||
{
|
||||
"start": 23.97,
|
||||
"end": 24.24,
|
||||
"text": "under control the car jacking\\Nsituation he used that and\\Ni i know it's hard\\Nto"
|
||||
},
|
||||
{
|
||||
"start": 24.15,
|
||||
"end": 24.48,
|
||||
"text": "under control the car jacking\\Nsituation he used that and\\Ni i know it's hard\\Nto predict"
|
||||
},
|
||||
{
|
||||
"start": 24.24,
|
||||
"end": 24.57,
|
||||
"text": "under control the car jacking\\Nsituation he used that and\\Ni i know it's hard\\Nto predict the"
|
||||
},
|
||||
{
|
||||
"start": 24.48,
|
||||
"end": 24.929727,
|
||||
"text": "under control the car jacking\\Nsituation he used that and\\Ni i know it's hard\\Nto predict the future"
|
||||
},
|
||||
{
|
||||
"start": 24.57,
|
||||
"end": 25.26,
|
||||
"text": "under control the car jacking\\Nsituation he used that and\\Ni i know it's hard\\Nto predict the future mark"
|
||||
},
|
||||
{
|
||||
"start": 24.930461,
|
||||
"end": 25.71,
|
||||
"text": "situation he used that and\\Ni i know it's hard\\Nto predict the future mark\\Nbut"
|
||||
},
|
||||
{
|
||||
"start": 25.26,
|
||||
"end": 26.07,
|
||||
"text": "situation he used that and\\Ni i know it's hard\\Nto predict the future mark\\Nbut you"
|
||||
},
|
||||
{
|
||||
"start": 25.92,
|
||||
"end": 26.22,
|
||||
"text": "situation he used that and\\Ni i know it's hard\\Nto predict the future mark\\Nbut you can"
|
||||
},
|
||||
{
|
||||
"start": 26.07,
|
||||
"end": 26.61,
|
||||
"text": "situation he used that and\\Ni i know it's hard\\Nto predict the future mark\\Nbut you can imagine"
|
||||
},
|
||||
{
|
||||
"start": 26.22,
|
||||
"end": 26.7,
|
||||
"text": "situation he used that and\\Ni i know it's hard\\Nto predict the future mark\\Nbut you can imagine the"
|
||||
},
|
||||
{
|
||||
"start": 26.61,
|
||||
"end": 27.36,
|
||||
"text": "i i know it's hard\\Nto predict the future mark\\Nbut you can imagine the\\Nadministration"
|
||||
},
|
||||
{
|
||||
"start": 26.7,
|
||||
"end": 27.84,
|
||||
"text": "i i know it's hard\\Nto predict the future mark\\Nbut you can imagine the\\Nadministration using"
|
||||
},
|
||||
{
|
||||
"start": 27.36,
|
||||
"end": 28.14,
|
||||
"text": "i i know it's hard\\Nto predict the future mark\\Nbut you can imagine the\\Nadministration using this"
|
||||
},
|
||||
{
|
||||
"start": 27.84,
|
||||
"end": 28.26,
|
||||
"text": "i i know it's hard\\Nto predict the future mark\\Nbut you can imagine the\\Nadministration using this as"
|
||||
},
|
||||
{
|
||||
"start": 28.14,
|
||||
"end": 28.39,
|
||||
"text": "i i know it's hard\\Nto predict the future mark\\Nbut you can imagine the\\Nadministration using this as a"
|
||||
},
|
||||
{
|
||||
"start": 28.26,
|
||||
"end": 29.1,
|
||||
"text": "to predict the future mark\\Nbut you can imagine the\\Nadministration using this as a\\Njustification"
|
||||
},
|
||||
{
|
||||
"start": 28.32,
|
||||
"end": 29.25,
|
||||
"text": "to predict the future mark\\Nbut you can imagine the\\Nadministration using this as a\\Njustification for"
|
||||
},
|
||||
{
|
||||
"start": 29.1,
|
||||
"end": 29.73,
|
||||
"text": "to predict the future mark\\Nbut you can imagine the\\Nadministration using this as a\\Njustification for something"
|
||||
},
|
||||
{
|
||||
"start": 29.25,
|
||||
"end": 31.8,
|
||||
"text": "to predict the future mark\\Nbut you can imagine the\\Nadministration using this as a\\Njustification for something i"
|
||||
},
|
||||
{
|
||||
"start": 31.53,
|
||||
"end": 32.19,
|
||||
"text": "to predict the future mark\\Nbut you can imagine the\\Nadministration using this as a\\Njustification for something i must"
|
||||
},
|
||||
{
|
||||
"start": 32.007415,
|
||||
"end": 32.4,
|
||||
"text": "but you can imagine the\\Nadministration using this as a\\Njustification for something i must\\Nadmit"
|
||||
},
|
||||
{
|
||||
"start": 32.19,
|
||||
"end": 32.665836,
|
||||
"text": "but you can imagine the\\Nadministration using this as a\\Njustification for something i must\\Nadmit i'm"
|
||||
},
|
||||
{
|
||||
"start": 32.496588,
|
||||
"end": 32.939832,
|
||||
"text": "but you can imagine the\\Nadministration using this as a\\Njustification for something i must\\Nadmit i'm on"
|
||||
},
|
||||
{
|
||||
"start": 32.665836,
|
||||
"end": 33.138182,
|
||||
"text": "but you can imagine the\\Nadministration using this as a\\Njustification for something i must\\Nadmit i'm on a"
|
||||
},
|
||||
{
|
||||
"start": 33.059432,
|
||||
"end": 33.33,
|
||||
"text": "but you can imagine the\\Nadministration using this as a\\Njustification for something i must\\Nadmit i'm on a metal"
|
||||
},
|
||||
{
|
||||
"start": 33.138182,
|
||||
"end": 33.631571,
|
||||
"text": "administration using this as a\\Njustification for something i must\\Nadmit i'm on a metal\\Nloss"
|
||||
},
|
||||
{
|
||||
"start": 33.33,
|
||||
"end": 33.718253,
|
||||
"text": "administration using this as a\\Njustification for something i must\\Nadmit i'm on a metal\\Nloss to"
|
||||
},
|
||||
{
|
||||
"start": 33.634603,
|
||||
"end": 33.99,
|
||||
"text": "administration using this as a\\Njustification for something i must\\Nadmit i'm on a metal\\Nloss to guess"
|
||||
},
|
||||
{
|
||||
"start": 33.718253,
|
||||
"end": 34.14,
|
||||
"text": "administration using this as a\\Njustification for something i must\\Nadmit i'm on a metal\\Nloss to guess as"
|
||||
},
|
||||
{
|
||||
"start": 33.99,
|
||||
"end": 34.24,
|
||||
"text": "administration using this as a\\Njustification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to"
|
||||
},
|
||||
{
|
||||
"start": 34.14,
|
||||
"end": 34.41,
|
||||
"text": "justification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to\\Nwhat"
|
||||
},
|
||||
{
|
||||
"start": 34.23,
|
||||
"end": 34.77,
|
||||
"text": "justification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to\\Nwhat happens"
|
||||
},
|
||||
{
|
||||
"start": 34.41,
|
||||
"end": 35.19,
|
||||
"text": "justification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to\\Nwhat happens next"
|
||||
},
|
||||
{
|
||||
"start": 34.77,
|
||||
"end": 35.97,
|
||||
"text": "justification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to\\Nwhat happens next but"
|
||||
},
|
||||
{
|
||||
"start": 35.58,
|
||||
"end": 36.75,
|
||||
"text": "justification for something i must\\Nadmit i'm on a metal\\Nloss to guess as to\\Nwhat happens next but courage"
|
||||
}
|
||||
]
|
||||
}
|
||||
26
out/spacing_wide.captburn.ass
Normal file
@@ -0,0 +1,26 @@
[Script Info]
; Script generated by captburn (module)
ScriptType: v4.00+
WrapStyle: 2
ScaledBorderAndShadow: yes
PlayResX: 1280
PlayResY: 720
[V4+ Styles]
Format: Name,Fontname,Fontsize,PrimaryColour,SecondaryColour,OutlineColour,BackColour,Bold,Italic,Underline,StrikeOut,ScaleX,ScaleY,Spacing,Angle,BorderStyle,Outline,Shadow,Alignment,MarginL,MarginR,MarginV,Encoding
Style: CaptBurn,Arial,42,&H00FFFFFF,&H00FFFFFF,&H00000000,&H00000000,0,0,0,0,100,100,1.8,0,1,3.0,0.0,2,60,60,40,1
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:00.12,0:00:02.88,CaptBurn,,0,0,0,,{\an2}because as we were just talking about
Dialogue: 0,0:00:02.88,0:00:06.69,CaptBurn,,0,0,0,,{\an2}a moment ago with with alan after one
Dialogue: 0,0:00:06.69,0:00:10.53,CaptBurn,,0,0,0,,{\an2}of the doge employees was allegedly attacked
Dialogue: 0,0:00:10.53,0:00:12.21,CaptBurn,,0,0,0,,{\an2}in washington d c that's what donald
Dialogue: 0,0:00:12.21,0:00:15.36,CaptBurn,,0,0,0,,{\an2}trump used as justification to send in
Dialogue: 0,0:00:16.59,0:00:18.84,CaptBurn,,0,0,0,,{\an2}federal troops into washington d c to
Dialogue: 0,0:00:18.90,0:00:20.70,CaptBurn,,0,0,0,,{\an2}to get things under control the car jacking
Dialogue: 0,0:00:20.70,0:00:23.97,CaptBurn,,0,0,0,,{\an2}situation he used that and i i know it's
Dialogue: 0,0:00:23.97,0:00:26.07,CaptBurn,,0,0,0,,{\an2}hard to predict the future mark but you
Dialogue: 0,0:00:26.07,0:00:27.84,CaptBurn,,0,0,0,,{\an2}can imagine the administration using
Dialogue: 0,0:00:27.84,0:00:29.73,CaptBurn,,0,0,0,,{\an2}this as a justification for something
Dialogue: 0,0:00:31.53,0:00:33.99,CaptBurn,,0,0,0,,{\an2}i must admit i'm on a metal loss to guess
Dialogue: 0,0:00:33.99,0:00:36.75,CaptBurn,,0,0,0,,{\an2}as to what happens next but courage
91
out/spacing_wide.captburn.json
Normal file
@@ -0,0 +1,91 @@
|
||||
{
|
||||
"version": "1.0.2",
|
||||
"style": {
|
||||
"name": "CaptBurn",
|
||||
"fontname": "Arial",
|
||||
"fontsize": 42,
|
||||
"primary": "#FFFFFF",
|
||||
"outline": "#000000",
|
||||
"outline_width": 3.0,
|
||||
"shadow": 0.0,
|
||||
"back": "#000000",
|
||||
"back_opacity": 0.0,
|
||||
"bold": false,
|
||||
"italic": false,
|
||||
"scale_x": 100,
|
||||
"scale_y": 100,
|
||||
"spacing": 1.8,
|
||||
"margin_l": 60,
|
||||
"margin_r": 60,
|
||||
"margin_v": 40,
|
||||
"align": 2,
|
||||
"border_style": 1
|
||||
},
|
||||
"events": [
|
||||
{
|
||||
"start": 0.12,
|
||||
"end": 2.88,
|
||||
"text": "because as we were just talking about"
|
||||
},
|
||||
{
|
||||
"start": 2.88,
|
||||
"end": 6.69,
|
||||
"text": "a moment ago with with alan after one"
|
||||
},
|
||||
{
|
||||
"start": 6.69,
|
||||
"end": 10.53,
|
||||
"text": "of the doge employees was allegedly attacked"
|
||||
},
|
||||
{
|
||||
"start": 10.53,
|
||||
"end": 12.21,
|
||||
"text": "in washington d c that's what donald"
|
||||
},
|
||||
{
|
||||
"start": 12.21,
|
||||
"end": 15.36,
|
||||
"text": "trump used as justification to send in"
|
||||
},
|
||||
{
|
||||
"start": 16.590125,
|
||||
"end": 18.838445,
|
||||
"text": "federal troops into washington d c to"
|
||||
},
|
||||
{
|
||||
"start": 18.899874,
|
||||
"end": 20.7,
|
||||
"text": "to get things under control the car jacking"
|
||||
},
|
||||
{
|
||||
"start": 20.700273,
|
||||
"end": 23.97,
|
||||
"text": "situation he used that and i i know it's"
|
||||
},
|
||||
{
|
||||
"start": 23.97,
|
||||
"end": 26.07,
|
||||
"text": "hard to predict the future mark but you"
|
||||
},
|
||||
{
|
||||
"start": 26.07,
|
||||
"end": 27.84,
|
||||
"text": "can imagine the administration using"
|
||||
},
|
||||
{
|
||||
"start": 27.84,
|
||||
"end": 29.73,
|
||||
"text": "this as a justification for something"
|
||||
},
|
||||
{
|
||||
"start": 31.53,
|
||||
"end": 33.99,
|
||||
"text": "i must admit i'm on a metal loss to guess"
|
||||
},
|
||||
{
|
||||
"start": 33.99,
|
||||
"end": 36.75,
|
||||
"text": "as to what happens next but courage"
|
||||
}
|
||||
]
|
||||
}
|
||||
1
testfile.json
Normal file
File diff suppressed because one or more lines are too long
225
videobeaux/programs/gamma_fix.py
Normal file
@@ -0,0 +1,225 @@
|
||||
import subprocess
|
||||
import re
|
||||
import statistics
|
||||
from pathlib import Path
|
||||
|
||||
from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress
|
||||
|
||||
# -----------------------------
|
||||
# Helpers
|
||||
# -----------------------------
|
||||
|
||||
_YAVG_RE = re.compile(r"YAVG[:=]\s*([0-9]+(?:\.[0-9]+)?)")
|
||||
|
||||
def _probe_yavg_values(input_path: str, max_samples: int = 200) -> list[float]:
|
||||
"""
|
||||
Run a fast ffmpeg prepass with signalstats to gather YAVG samples.
|
||||
Returns a list of YAVG values in 0..255.
|
||||
"""
|
||||
# We frame-step + downscale during the probe for speed, without altering the stats trend much.
# (signalstats runs before framestep/scale in the chain, so the global mean is measured on full frames.)
|
||||
cmd = [
|
||||
"ffmpeg",
|
||||
"-hide_banner", "-nostdin",
|
||||
"-i", input_path,
|
||||
# Keep it quick: analyze every 2nd frame, downscale, and print the YAVG frame metadata to the log.
"-vf", "signalstats,metadata=mode=print:key=lavfi.signalstats.YAVG,framestep=2,scale=iw*0.25:ih*0.25",
|
||||
"-f", "null", "-"
|
||||
]
|
||||
proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
text = proc.stderr + proc.stdout
|
||||
|
||||
yvals = [float(m.group(1)) for m in _YAVG_RE.finditer(text)]
|
||||
if not yvals:
|
||||
# Fallback: try again without framestep/scale if a weird codec/stream blocks it
cmd = ["ffmpeg", "-hide_banner", "-nostdin", "-i", input_path, "-vf", "signalstats,metadata=mode=print", "-f", "null", "-"]
|
||||
proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
text = proc.stderr + proc.stdout
|
||||
yvals = [float(m.group(1)) for m in _YAVG_RE.finditer(text)]
|
||||
|
||||
if len(yvals) > max_samples:
|
||||
# Uniformly downsample the list to max_samples for median stability
|
||||
step = max(1, len(yvals) // max_samples)
|
||||
yvals = yvals[::step]
|
||||
|
||||
return yvals
|
||||
|
||||
def _compute_eq_params(yavg_values: list[float], target_yavg: float, min_contrast: float, max_contrast: float):
|
||||
"""
|
||||
Compute eq filter brightness/contrast that maps the measured median YAVG close to target.
|
||||
eq works in normalized [0..1] domain as: y' = (y - 0.5)*contrast + 0.5 + brightness
|
||||
|
||||
We choose a contrast around (target/current), clamped; then derive brightness to hit target.
|
||||
Returns (contrast, brightness). Brightness is in [-1, 1]; contrast is typically [0.5, 1.5].
|
||||
"""
|
||||
if not yavg_values:
|
||||
# If probing failed, do nothing (neutral)
|
||||
return 1.0, 0.0
|
||||
|
||||
current = statistics.median(yavg_values) # robust against bright/dark spikes
|
||||
# Convert 8-bit YAVG to normalized [0,1]
|
||||
y_cur = max(0.0, min(1.0, current / 255.0))
|
||||
y_tgt = max(0.0, min(1.0, target_yavg / 255.0))
|
||||
|
||||
# Initial contrast guess: keep gentle moves, clamp to avoid harsh clipping
|
||||
raw_gain = (y_tgt / y_cur) if y_cur > 1e-6 else 1.0
|
||||
contrast = max(min_contrast, min(max_contrast, raw_gain))
|
||||
|
||||
# Solve for brightness that maps the current median to target
|
||||
# t = (y_cur - 0.5)*c + 0.5 + b => b = t - ((y_cur - 0.5)*c + 0.5)
|
||||
brightness = y_tgt - ((y_cur - 0.5) * contrast + 0.5)
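# Worked example (a sketch with the defaults: target 64, contrast clamps 0.80-1.35): a dark clip with
# median YAVG 40 gives y_cur ≈ 0.157 and y_tgt ≈ 0.251, so raw_gain 1.6 is clamped to 1.35 and
# brightness = 0.251 - ((0.157 - 0.5)*1.35 + 0.5) ≈ +0.214, a gentle lift toward the target.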
|
||||
|
||||
# Clamp brightness to eq valid range [-1, 1]
|
||||
brightness = max(-1.0, min(1.0, brightness))
|
||||
|
||||
return contrast, brightness
|
||||
|
||||
def _build_filter_chain(contrast: float, brightness: float, gamma: float | None, legalize: bool, sat_boost: float) -> str:
|
||||
"""
|
||||
Build the ffmpeg -vf filter chain.
|
||||
- eq for exposure normalization
|
||||
- (optional) gamma tweak
|
||||
- (optional) saturation boost
|
||||
- (optional) legalize to broadcast-safe luma/chroma (TV range)
|
||||
"""
|
||||
chain = []
|
||||
|
||||
# exposure/contrast/brightness normalize
|
||||
eq_parts = [f"contrast={contrast:.3f}", f"brightness={brightness:.3f}"]
|
||||
if gamma is not None and abs(gamma - 1.0) > 1e-6:
|
||||
eq_parts.append(f"gamma={gamma:.3f}")
|
||||
chain.append("eq=" + ":".join(eq_parts))
|
||||
|
||||
# saturation tweak (via hsv / hue sat)
|
||||
if abs(sat_boost - 1.0) > 1e-6:
|
||||
# hue=s=multiplier; 1.10 = +10%
|
||||
chain.append(f"hue=s={sat_boost:.3f}")
|
||||
|
||||
# broadcast legalize: convert from full->TV range if needed
|
||||
if legalize:
|
||||
# Use zscale to remap to TV (limited) range safely
|
||||
# rangein=auto tries to detect; range=tv enforces legal range outputs
|
||||
chain.append("zscale=range=tv")
|
||||
# yuv420p for web/broadcast delivery compat
|
||||
chain.append("format=yuv420p")
|
||||
|
||||
return ",".join(chain)
|
||||
|
||||
# -----------------------------
|
||||
# Public API expected by cli.py
|
||||
# -----------------------------
|
||||
|
||||
def register_arguments(parser):
|
||||
parser.description = (
|
||||
"Gamma / Exposure Fix — auto-detect overall luminance and normalize for web/broadcast.\n"
|
||||
"Prepass samples luma (YAVG) with signalstats, computes friendly contrast/brightness (and optional gamma),\n"
|
||||
"and applies safe clamping if requested."
|
||||
)
|
||||
# Core behavior flags (global --input/--output/--force are provided by cli.py)
|
||||
parser.add_argument(
|
||||
"--target-yavg",
|
||||
type=float,
|
||||
default=64.0,
|
||||
help="Target average luma (0..255). ~64 is a balanced web midpoint. Try 60–70 for darker footage, 70–90 for bright."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--min-contrast",
|
||||
type=float,
|
||||
default=0.80,
|
||||
help="Lower clamp for auto contrast mapping. Default 0.80."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max-contrast",
|
||||
type=float,
|
||||
default=1.35,
|
||||
help="Upper clamp for auto contrast mapping. Default 1.35."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--gamma",
|
||||
type=float,
|
||||
default=1.00,
|
||||
help="Optional gamma override (1.00 = neutral). Leave at 1.00 to rely on contrast/brightness mapping."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--sat",
|
||||
type=float,
|
||||
default=1.00,
|
||||
help="Optional saturation multiplier via hue filter (1.00 = unchanged). e.g., 1.10 = +10%% saturation."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--legalize",
|
||||
action="store_true",
|
||||
help="Clamp output to broadcast-legal (TV) range using zscale and output yuv420p."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--vcodec",
|
||||
type=str,
|
||||
default="libx264",
|
||||
help="Video codec for output. Default libx264."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--crf",
|
||||
type=str,
|
||||
default="18",
|
||||
help="CRF for output quality (x264/x265). Default 18."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--preset",
|
||||
type=str,
|
||||
default="medium",
|
||||
help="Encoder preset (x264/x265). Default medium."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--acodec",
|
||||
type=str,
|
||||
default="aac",
|
||||
help="Audio codec. Default aac."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--ab",
|
||||
type=str,
|
||||
default="160k",
|
||||
help="Audio bitrate. Default 160k."
|
||||
)
|
||||
|
||||
def run(args):
|
||||
# 1) Probe luminance stats
|
||||
yvals = _probe_yavg_values(args.input)
|
||||
contrast, brightness = _compute_eq_params(
|
||||
yavg_values=yvals,
|
||||
target_yavg=args.target_yavg,
|
||||
min_contrast=args.min_contrast,
|
||||
max_contrast=args.max_contrast
|
||||
)
|
||||
# If user forcibly set gamma != 1.0, honor it; otherwise pass None to omit param
|
||||
gamma = args.gamma if args.gamma and abs(args.gamma - 1.0) > 1e-6 else None
|
||||
|
||||
# 2) Build filter graph
|
||||
vf = _build_filter_chain(
|
||||
contrast=contrast,
|
||||
brightness=brightness,
|
||||
gamma=gamma,
|
||||
legalize=args.legalize,
|
||||
sat_boost=args.sat
|
||||
)
|
||||
|
||||
# 3) Encode
|
||||
command = [
|
||||
"ffmpeg",
|
||||
"-i", args.input,
|
||||
"-vf", vf,
|
||||
"-c:v", args.vcodec,
|
||||
"-crf", args.crf,
|
||||
"-preset", args.preset,
|
||||
"-c:a", args.acodec,
|
||||
"-b:a", args.ab,
|
||||
"-ac", "2",
|
||||
args.output
|
||||
]
|
||||
|
||||
# Use your standard progress runner that respects --force just like other programs
|
||||
run_ffmpeg_with_progress(
|
||||
(command[:1] + ["-y"] + command[1:]) if args.force else command,
|
||||
args.input,
|
||||
args.output
|
||||
)
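# Example invocation (a sketch, not from the repo docs; assumes cli.py registers this module as
# "gamma_fix" and supplies the global -i/--input, -o/--output and -F/--force flags referenced above):
#   videobeaux -P gamma_fix -i ./media/bbb.mov -o ./out/bbb_levels.mp4 --target-yavg 64 --legalize -F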
|
||||
251
videobeaux/programs/hash_fingerprint.py
Normal file
@@ -0,0 +1,251 @@
|
||||
# videobeaux/programs/hash_fingerprint.py
|
||||
from __future__ import annotations
|
||||
import argparse, csv, hashlib, json, subprocess, sys
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Optional
|
||||
|
||||
# Optional deps for perceptual hashing (gracefully degrade if missing)
|
||||
try:
|
||||
from PIL import Image
|
||||
PIL_OK = True
|
||||
except Exception:
|
||||
PIL_OK = False
|
||||
|
||||
VERSION = "videobeaux hash_fingerprint v1"
|
||||
|
||||
# Default file extensions we’ll scan in batch mode (can be overridden)
|
||||
DEFAULT_EXTS = [
|
||||
".mp4", ".mov", ".m4v", ".mkv", ".webm", ".avi", ".wmv", ".mxf",
|
||||
".mp3", ".m4a", ".aac", ".wav", ".flac", ".ogg", ".opus",
|
||||
".jpg", ".jpeg", ".png", ".bmp", ".tif", ".tiff", ".gif"
|
||||
]
|
||||
|
||||
def register_arguments(parser: argparse.ArgumentParser):
|
||||
parser.description = (
|
||||
"Compute file hashes (md5/sha1/sha256), optional FFmpeg stream hash, per-frame checksums, "
|
||||
"and optional perceptual hashes over sampled frames. Works on a single input or a directory."
|
||||
)
|
||||
|
||||
# Discovery / batch
|
||||
parser.add_argument("--recursive", action="store_true",
|
||||
help="If input is a directory, scan recursively.")
|
||||
parser.add_argument("--exts", nargs="+", default=DEFAULT_EXTS,
|
||||
help="File extensions to include when scanning a directory (case-insensitive).")
|
||||
|
||||
# Algorithms
|
||||
parser.add_argument("--file-hashes", nargs="+",
|
||||
choices=["md5", "sha1", "sha256"], default=["md5", "sha256"],
|
||||
help="File-level hashes to compute (streamed, no load into RAM).")
|
||||
parser.add_argument("--stream-hash", choices=["none", "md5", "sha256"], default="none",
|
||||
help="Use FFmpeg -f hash to hash the primary video stream (fast, codec-level).")
|
||||
parser.add_argument("--framemd5", action="store_true",
|
||||
help="Emit per-frame checksums via FFmpeg -f framemd5 (verbose).")
|
||||
|
||||
# Perceptual hashing (over sampled frames)
|
||||
parser.add_argument("--phash", action="store_true",
|
||||
help="Compute perceptual hashes (average hash) over sampled frames. Requires Pillow.")
|
||||
parser.add_argument("--phash-fps", type=float, default=0.5,
|
||||
help="Approx frames-per-second to sample for perceptual hashing (0.5 = one frame every 2s).")
|
||||
parser.add_argument("--phash-size", type=int, default=8,
|
||||
help="aHash size NxN (default 8 -> 64-bit hash).")
|
||||
|
||||
# Output catalog
|
||||
parser.add_argument("--catalog", required=False,
|
||||
help="Output catalog path (.json or .csv). If not provided, writes <first_input>.hashes.json")
|
||||
|
||||
# Stream selection (advanced)
|
||||
parser.add_argument("--stream-kind", choices=["video", "audio"], default="video",
|
||||
help="Which primary stream to hash for --stream-hash/--framemd5.")
|
||||
|
||||
# Force overwrite behavior is handled at top-level; we just respect existing files for CSV/JSON if desired.
|
||||
|
||||
# ----------------- helpers -----------------
|
||||
|
||||
def _iter_files(entry: Path, exts: List[str], recursive: bool) -> List[Path]:
|
||||
if entry.is_file():
|
||||
return [entry]
|
||||
exts_lower = {e.lower() for e in exts}
|
||||
files: List[Path] = []
|
||||
walker = entry.rglob("*") if recursive else entry.glob("*")
|
||||
for p in walker:
|
||||
if p.is_file() and p.suffix.lower() in exts_lower:
|
||||
files.append(p)
|
||||
return sorted(files)
|
||||
|
||||
def _hash_file(path: Path, method: str) -> str:
|
||||
h = hashlib.new(method)
|
||||
with path.open("rb") as f:
|
||||
for chunk in iter(lambda: f.read(1024 * 1024), b""):
|
||||
h.update(chunk)
|
||||
return h.hexdigest()
|
||||
|
||||
def _ffmpeg_stream_hash(path: Path, algo: str, kind: str) -> Optional[str]:
|
||||
# Map selection
|
||||
stream_map = "0:v:0" if kind == "video" else "0:a:0"
|
||||
cmd = [
|
||||
"ffmpeg", "-v", "error", "-i", str(path),
|
||||
"-map", stream_map,
|
||||
"-f", "hash", "-hash", algo, "-"
|
||||
]
|
||||
proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
if proc.returncode != 0:
|
||||
return None
|
||||
# Output line like "MD5=xxxxxxxx" or "SHA256=xxxxxxxx"
|
||||
for line in (proc.stdout or "").splitlines():
|
||||
line = line.strip()
|
||||
if "=" in line:
|
||||
return line.split("=", 1)[1].strip()
|
||||
return None
|
||||
|
||||
def _ffmpeg_framemd5(path: Path, kind: str) -> List[str]:
|
||||
stream_map = "0:v:0" if kind == "video" else "0:a:0"
|
||||
cmd = [
|
||||
"ffmpeg", "-v", "error", "-i", str(path),
|
||||
"-map", stream_map,
|
||||
"-f", "framemd5", "-"
|
||||
]
|
||||
proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||
if proc.returncode != 0:
|
||||
return []
|
||||
# Return raw lines (CSV/JSON writer can embed or omit)
|
||||
return [ln.rstrip("\n") for ln in (proc.stdout or "").splitlines()]
|
||||
|
||||
def _extract_sample_frames(path: Path, fps: float) -> List[Path]:
|
||||
"""
|
||||
Extract sampled frames to a temp folder next to file. Caller cleans up or keeps ephemeral.
|
||||
For catalog reproducibility we won't delete by default (caller may choose).
|
||||
"""
|
||||
out_dir = path.parent / (path.stem + ".hashframes")
|
||||
out_dir.mkdir(parents=True, exist_ok=True)
|
||||
# Use -q:v 4 for decent JPEG; names frame_000001.jpg etc.
|
||||
cmd = [
|
||||
"ffmpeg", "-v", "error", "-i", str(path),
|
||||
"-vf", f"fps={fps}",
|
||||
"-qscale:v", "4",
|
||||
str(out_dir / "frame_%06d.jpg")
|
||||
]
|
||||
subprocess.run(cmd, check=False)
|
||||
return sorted(out_dir.glob("frame_*.jpg"))
|
||||
|
||||
def _ahash_image(p: Path, size: int) -> Optional[str]:
|
||||
if not PIL_OK:
|
||||
return None
|
||||
try:
|
||||
img = Image.open(p).convert("L").resize((size, size))
|
||||
px = list(img.getdata())
|
||||
avg = sum(px) / float(len(px))
|
||||
bits = "".join("1" if val >= avg else "0" for val in px)
|
||||
# pack into hex (4 bits per hex char)
|
||||
width = size * size
|
||||
hex_len = (width + 3) // 4
|
||||
return f"{int(bits, 2):0{hex_len}x}"
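# With the default size=8 this is a 64-bit hash packed into 16 hex chars; two frames can then be
# compared by Hamming distance (a sketch, not implemented here):
#   bin(int(h1, 16) ^ int(h2, 16)).count("1")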
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
def _catalog_default_path(first_input: Path) -> Path:
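# e.g. ./media/clip.mp4 -> ./media/clip.mp4.hashes.json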
|
||||
return first_input.with_suffix(first_input.suffix + ".hashes.json")
|
||||
|
||||
def _write_json(path: Path, rows: List[Dict]):
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with path.open("w", encoding="utf-8") as f:
|
||||
json.dump(rows, f, indent=2, ensure_ascii=False)
|
||||
|
||||
def _write_csv(path: Path, rows: List[Dict]):
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
# stable column order
|
||||
field_order = [
|
||||
"path", "size_bytes",
|
||||
"file_md5", "file_sha1", "file_sha256",
|
||||
"stream_md5", "stream_sha256",
|
||||
"phash_algo", "phash_size", "phash_frames",
|
||||
]
|
||||
# framemd5 output is omitted from CSV (too verbose); the JSON catalog includes it when requested.
|
||||
with path.open("w", newline="", encoding="utf-8") as f:
|
||||
w = csv.DictWriter(f, fieldnames=field_order, extrasaction="ignore")
|
||||
w.writeheader()
|
||||
for r in rows:
|
||||
w.writerow(r)
|
||||
|
||||
# ----------------- main -----------------
|
||||
|
||||
def run(args: argparse.Namespace):
|
||||
input_path = Path(args.input) # global CLI provides this
|
||||
entries: List[Path] = []
|
||||
|
||||
if not input_path.exists():
|
||||
print(f"❌ Input not found: {input_path}")
|
||||
sys.exit(1)
|
||||
|
||||
if input_path.is_dir():
|
||||
entries = _iter_files(input_path, args.exts, args.recursive)
|
||||
if not entries:
|
||||
print(f"⚠️ No files found in {input_path} (recursive={args.recursive}, exts={args.exts})")
|
||||
sys.exit(0)
|
||||
else:
|
||||
entries = [input_path]
|
||||
|
||||
# Determine catalog path
|
||||
if args.catalog:
|
||||
catalog_path = Path(args.catalog)
|
||||
else:
|
||||
catalog_path = _catalog_default_path(entries[0])
|
||||
|
||||
results: List[Dict] = []
|
||||
|
||||
for p in entries:
|
||||
rec: Dict[str, Optional[str] | int | List[str] | Dict] = {}
|
||||
rec["path"] = str(p.resolve())
|
||||
try:
|
||||
rec["size_bytes"] = p.stat().st_size
|
||||
except Exception:
|
||||
rec["size_bytes"] = None
|
||||
|
||||
# File-level hashes
|
||||
for h in args.file_hashes:
|
||||
try:
|
||||
rec[f"file_{h}"] = _hash_file(p, h)
|
||||
except Exception as e:
|
||||
rec[f"file_{h}"] = None
|
||||
|
||||
# Stream hash via FFmpeg
|
||||
if args.stream_hash != "none":
|
||||
try:
|
||||
sh = _ffmpeg_stream_hash(p, args.stream_hash, args.stream_kind)
|
||||
rec[f"stream_{args.stream_hash}"] = sh
|
||||
except Exception:
|
||||
rec[f"stream_{args.stream_hash}"] = None
|
||||
|
||||
# framemd5 (verbose)
|
||||
if args.framemd5:
|
||||
try:
|
||||
rec["framemd5"] = _ffmpeg_framemd5(p, args.stream_kind)
|
||||
except Exception:
|
||||
rec["framemd5"] = []
|
||||
|
||||
# Perceptual hashing over sampled frames
|
||||
if args.phash:
|
||||
if not PIL_OK:
|
||||
rec["phash_error"] = "Pillow not installed; install Pillow to enable perceptual hashing."
|
||||
else:
|
||||
frames = _extract_sample_frames(p, fps=max(0.01, float(args.phash_fps)))
|
||||
hashes = []
|
||||
for fp in frames:
|
||||
h = _ahash_image(fp, size=max(4, int(args.phash_size)))
|
||||
if h:
|
||||
hashes.append(h)
|
||||
rec["phash_algo"] = "aHash"
|
||||
rec["phash_size"] = int(args.phash_size)
|
||||
rec["phash_frames"] = len(hashes)
|
||||
rec["phash_list"] = hashes # keep full list in JSON; CSV will ignore
|
||||
|
||||
results.append(rec)
|
||||
|
||||
# Write catalog
|
||||
suffix = catalog_path.suffix.lower()
|
||||
if suffix == ".csv":
|
||||
_write_csv(catalog_path, results)
|
||||
else:
|
||||
# default JSON
|
||||
_write_json(catalog_path, results)
|
||||
|
||||
print(f"📒 Wrote catalog → {catalog_path} ({len(results)} item(s))")
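# Example invocation (a sketch; assumes cli.py registers this module as "hash_fingerprint" and
# supplies the global -i/--input flag used above):
#   videobeaux -P hash_fingerprint -i ./media --recursive --stream-hash sha256 --phash --catalog ./out/media.hashes.json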
|
||||
122
videobeaux/programs/lut_apply.py
Normal file
@@ -0,0 +1,122 @@
|
||||
# videobeaux/programs/lut_apply.py
|
||||
# Color Correction / LUT Apply — requires --outfile (no -o/--output fallback)
|
||||
|
||||
from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress
|
||||
|
||||
def _is_hi_bit_pf(pix_fmt: str) -> bool:
|
||||
if not pix_fmt:
|
||||
return False
|
||||
pf = pix_fmt.lower()
|
||||
return "p10" in pf or "p12" in pf
|
||||
|
||||
def register_arguments(parser):
|
||||
parser.description = (
|
||||
"Color Correction / LUT Apply\n"
|
||||
"• Apply a 3D LUT (.cube/.3dl) with adjustable intensity.\n"
|
||||
"• Basic color tweaks: brightness, contrast, saturation, gamma.\n"
|
||||
"• Uses only --outfile for output (no -o/--output)."
|
||||
)
|
||||
|
||||
# Program-specific output ONLY (no short alias; avoids global -o)
|
||||
parser.add_argument(
|
||||
"--outfile",
|
||||
required=True,
|
||||
help="Output video file (required)"
|
||||
)
|
||||
|
||||
# Optional explicit vcodec; if omitted we auto-pick based on pix_fmt
|
||||
parser.add_argument(
|
||||
"--vcodec",
|
||||
choices=["libx264", "libx265", "prores_ks", "dnxhd"],
|
||||
help="Force a specific video codec (else auto-select)."
|
||||
)
|
||||
|
||||
# LUT controls
|
||||
parser.add_argument("--lut", help="Path to a 3D LUT file (.cube, .3dl).")
|
||||
parser.add_argument("--interp", choices=["tetrahedral", "trilinear", "nearest"],
|
||||
default="tetrahedral", help="LUT interpolation. Default: tetrahedral")
|
||||
parser.add_argument("--intensity", type=float, default=1.0,
|
||||
help="Mix of LUT with original [0.0–1.0]. Default: 1.0")
|
||||
|
||||
# EQ (basic color)
|
||||
parser.add_argument("--brightness", type=float, default=0.0, help="Brightness offset [-1..1].")
|
||||
parser.add_argument("--contrast", type=float, default=1.0, help="Contrast multiplier [0..2].")
|
||||
parser.add_argument("--saturation", type=float, default=1.0, help="Saturation multiplier [0..3].")
|
||||
parser.add_argument("--gamma", type=float, default=1.0, help="Gamma multiplier [0.1..10].")
|
||||
|
||||
# Output / encode
|
||||
parser.add_argument("--pix-fmt", default="yuv420p",
|
||||
help="Output pixel format (e.g., yuv420p, yuv422p10le).")
|
||||
parser.add_argument("--x264-preset", default="medium", help="Encoder preset (x264/x265). Default: medium")
|
||||
parser.add_argument("--crf", type=float, default=18.0,
|
||||
help="CRF (x264/x265). Lower = higher quality.")
|
||||
parser.add_argument("--copy-audio", action="store_true",
|
||||
help="Copy audio instead of re-encoding.")
|
||||
|
||||
# NOTE: Do NOT declare --force here — it’s global. We’ll still read args.force if present.
|
||||
|
||||
def run(args):
|
||||
infile = getattr(args, "input", None) # provided by global CLI
|
||||
outfile = args.outfile # required here
|
||||
if not infile:
|
||||
raise SystemExit("❌ Missing input. Provide -i/--input globally.")
|
||||
|
||||
# EQ chain
|
||||
eq = (
|
||||
f"eq=brightness={args.brightness}:"
|
||||
f"contrast={args.contrast}:"
|
||||
f"saturation={args.saturation}:"
|
||||
f"gamma={args.gamma}"
|
||||
)
|
||||
|
||||
fg_parts = []
|
||||
|
||||
# LUT branch w/ intensity blend
|
||||
if args.lut:
|
||||
intensity = max(0.0, min(1.0, float(args.intensity)))
|
||||
if intensity >= 0.9999:
|
||||
fg_parts.append(f"[0:v]lut3d=file='{args.lut}':interp={args.interp}[v_lut]")
|
||||
src = "[v_lut]"
|
||||
elif intensity <= 0.0001:
|
||||
src = "[0:v]"
|
||||
else:
|
||||
fg_parts.append(
|
||||
f"[0:v]split[v_o][v_b];"
|
||||
f"[v_b]lut3d=file='{args.lut}':interp={args.interp}[v_lut];"
|
||||
f"[v_o][v_lut]blend=all_mode=normal:all_opacity={intensity}[v_mix]"
|
||||
)
|
||||
src = "[v_mix]"
|
||||
else:
|
||||
src = "[0:v]"
|
||||
|
||||
fg_parts.append(f"{src},{eq}[v_eq]")
|
||||
fg_parts.append(f"[v_eq]format={args.pix_fmt}[out_v]")
|
||||
filtergraph = ";".join(fg_parts)
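# Example (a sketch): --lut grade.cube --intensity 0.6 with the default EQ values and pix_fmt composes roughly:
# [0:v]split[v_o][v_b];[v_b]lut3d=file='grade.cube':interp=tetrahedral[v_lut];
# [v_o][v_lut]blend=all_mode=normal:all_opacity=0.6[v_mix];
# [v_mix]eq=brightness=0.0:contrast=1.0:saturation=1.0:gamma=1.0[v_eq];[v_eq]format=yuv420p[out_v]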
|
||||
|
||||
# Decide codec (auto if not forced)
|
||||
pix_fmt = args.pix_fmt
|
||||
vcodec = args.vcodec or ("libx265" if _is_hi_bit_pf(pix_fmt) else "libx264")
|
||||
|
||||
# Optional audio map so silent inputs don’t fail
|
||||
audio_map = ["-map", "0:a?"]
|
||||
audio_codec = ["-c:a", "copy" if getattr(args, "copy_audio", False) else "aac"]
|
||||
|
||||
cmd = [
|
||||
"ffmpeg",
|
||||
"-err_detect", "ignore_err",
|
||||
"-fflags", "+genpts+discardcorrupt",
|
||||
"-i", infile,
|
||||
"-filter_complex", filtergraph,
|
||||
"-map", "[out_v]",
|
||||
*audio_map,
|
||||
"-c:v", vcodec,
|
||||
"-crf", f"{args.crf}",
|
||||
"-preset", f"{args.x264_preset}", # accepted by x264/x265
|
||||
*audio_codec,
|
||||
"-pix_fmt", f"{pix_fmt}",
|
||||
outfile
|
||||
]
|
||||
|
||||
# Respect global --force if present (we didn’t declare it locally)
|
||||
final_cmd = (cmd[:1] + ["-y"] + cmd[1:]) if getattr(args, "force", False) else cmd
|
||||
run_ffmpeg_with_progress(final_cmd, infile, outfile)
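# Example invocation (a sketch; the LUT path is hypothetical and cli.py is assumed to register this
# module as "lut_apply" and to provide the global -i/--input and -F/--force flags):
#   videobeaux -P lut_apply -i ./media/bbb.mov --outfile ./out/bbb_graded.mp4 --lut ./luts/grade.cube --intensity 0.8 -F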
|
||||
324
videobeaux/programs/subs_convert.py
Normal file
@@ -0,0 +1,324 @@
|
||||
#!/usr/bin/env python3
|
||||
# videobeaux/programs/subs_convert.py
|
||||
#
|
||||
# Subtitles Extract / Convert for videobeaux.
|
||||
#
|
||||
# Modes:
|
||||
# A) VIDEO INPUT (-i video.{mp4,mov,mkv,...})
|
||||
# - --list : print subtitle streams and exit
|
||||
# - extract/convert tracks to files:
|
||||
# --indexes 0,2 : extract by stream index
|
||||
# --langs eng,spa : extract by language code (ffprobe 'tags:language')
|
||||
# --all : extract all subtitle streams
|
||||
# --forced-only : only streams with disposition.forced == 1
|
||||
# --exclude-hi : exclude hearing_impaired disposition
|
||||
# --format srt|vtt|ass: convert to target format (default: inferred)
|
||||
# --outdir DIR : write multiple outputs
|
||||
# --outputfile PATH : write exactly one output (only valid when extracting a single stream)
|
||||
# --time-shift +/-S : shift subs by seconds (float; may be negative)
|
||||
#
|
||||
# B) SUBTITLE INPUT (-i subs.{srt,ass,vtt})
|
||||
# - Convert single file to target format:
|
||||
# --format srt|vtt|ass (required)
|
||||
# --outputfile PATH (required)
|
||||
# --time-shift +/-S (optional)
|
||||
#
|
||||
# Notes:
|
||||
# - We prefer --outputfile for single-output artifacts and --outdir for multi-output batches.
|
||||
# - ffprobe + ffmpeg must be on PATH.
|
||||
|
||||
from __future__ import annotations
|
||||
import argparse, json, subprocess, sys
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, List, Optional, Tuple
|
||||
|
||||
from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress
|
||||
|
||||
|
||||
# ------------------------------
|
||||
# Helpers
|
||||
# ------------------------------
|
||||
def _run_ffprobe_streams(input_path: Path) -> List[Dict[str, Any]]:
|
||||
"""Return list of subtitle streams from ffprobe (may be empty)."""
|
||||
cmd = [
|
||||
"ffprobe", "-v", "error",
|
||||
"-print_format", "json",
|
||||
"-show_entries", "stream=index,codec_name,codec_type,disposition:stream_tags=language,title",
|
||||
"-select_streams", "s",
|
||||
str(input_path)
|
||||
]
|
||||
try:
|
||||
out = subprocess.check_output(cmd)
|
||||
data = json.loads(out.decode("utf-8", errors="replace"))
|
||||
return data.get("streams", []) or []
|
||||
except subprocess.CalledProcessError as e:
|
||||
raise SystemExit(f"❌ ffprobe failed: {e}")
|
||||
|
||||
def _is_subtitle_file(path: Path) -> bool:
|
||||
return path.suffix.lower() in {".srt", ".ass", ".ssa", ".vtt", ".sub"}
|
||||
|
||||
def _infer_target_ext(codec_name: Optional[str]) -> str:
|
||||
# Sensible defaults when --format not provided for video-extract mode
|
||||
if not codec_name:
|
||||
return ".srt"
|
||||
c = codec_name.lower()
|
||||
if c in {"subrip"}: # ffprobe calls SRT 'subrip'
|
||||
return ".srt"
|
||||
if c in {"webvtt", "vtt"}:
|
||||
return ".vtt"
|
||||
if c in {"ass", "ssa"}:
|
||||
return ".ass"
|
||||
if c in {"mov_text"}:
|
||||
return ".srt" # transcode mov_text to SRT by default
|
||||
return ".srt"
|
||||
|
||||
def _format_to_ext(fmt: str) -> str:
|
||||
fmt = fmt.lower()
|
||||
if fmt not in {"srt","vtt","ass"}:
|
||||
raise SystemExit("❌ --format must be one of: srt, vtt, ass")
|
||||
return f".{fmt}"
|
||||
|
||||
def _parse_index_list(val: str) -> List[int]:
|
||||
try:
|
||||
return [int(x.strip()) for x in val.split(",") if x.strip() != ""]
|
||||
except Exception:
|
||||
raise SystemExit("❌ --indexes expects a comma-separated list of integers, e.g., 0,2,3")
|
||||
|
||||
def _parse_langs(val: str) -> List[str]:
|
||||
return [x.strip().lower() for x in val.split(",") if x.strip() != ""]
|
||||
|
||||
def _disp_is_forced(disp: Dict[str, Any]) -> bool:
|
||||
return bool(disp.get("forced", 0))
|
||||
|
||||
def _disp_is_hi(disp: Dict[str, Any]) -> bool:
|
||||
# Some containers mark 'hearing_impaired'; if absent, assume False
|
||||
return bool(disp.get("hearing_impaired", 0))
|
||||
|
||||
def _select_streams(streams: List[Dict[str, Any]],
|
||||
indexes: Optional[List[int]],
|
||||
langs: Optional[List[str]],
|
||||
forced_only: bool,
|
||||
exclude_hi: bool) -> List[Dict[str, Any]]:
|
||||
sel = []
|
||||
for st in streams:
|
||||
if st.get("codec_type") != "subtitle":
|
||||
continue
|
||||
idx_ok = True
|
||||
lang_ok = True
|
||||
forced_ok = True
|
||||
hi_ok = True
|
||||
|
||||
if indexes is not None:
|
||||
idx_ok = (st.get("index") in indexes)
|
||||
|
||||
if langs is not None:
|
||||
lang_tag = (st.get("tags", {}) or {}).get("language", "")
|
||||
lang_ok = (lang_tag.lower() in langs)
|
||||
|
||||
if forced_only:
|
||||
forced_ok = _disp_is_forced(st.get("disposition", {}) or {})
|
||||
|
||||
if exclude_hi:
|
||||
hi_ok = not _disp_is_hi(st.get("disposition", {}) or {})
|
||||
|
||||
if idx_ok and lang_ok and forced_ok and hi_ok:
|
||||
sel.append(st)
|
||||
return sel
|
||||
|
||||
def _shift_args(seconds: float) -> List[str]:
|
||||
# Apply time shift using -itsoffset on the subtitle input branch.
|
||||
# We’ll insert these flags just before the subtitle -i when needed.
|
||||
return ["-itsoffset", str(seconds)]
|
||||
|
||||
def _target_codec_for(fmt: str) -> str:
|
||||
# ffmpeg subtitle encoders by container:
|
||||
# srt: -c:s srt
|
||||
# webvtt: -c:s webvtt
|
||||
# ass: -c:s ass
|
||||
m = {"srt":"srt", "vtt":"webvtt", "ass":"ass"}
|
||||
return m[fmt]
|
||||
|
||||
def _print_list(streams: List[Dict[str, Any]], src: Path) -> None:
|
||||
if not streams:
|
||||
print(f"(no subtitle streams) — {src}")
|
||||
return
|
||||
print(f"Subtitle streams in: {src}")
|
||||
for st in streams:
|
||||
i = st.get("index")
|
||||
c = st.get("codec_name", "?")
|
||||
tag = st.get("tags", {}) or {}
|
||||
lang = tag.get("language", "")
|
||||
title = tag.get("title", "")
|
||||
disp = st.get("disposition", {}) or {}
|
||||
forced = "forced" if disp.get("forced",0)==1 else ""
|
||||
hi = "hearing_impaired" if disp.get("hearing_impaired",0)==1 else ""
|
||||
flags = ", ".join(x for x in (forced, hi) if x)
|
||||
flags = f" [{flags}]" if flags else ""
|
||||
print(f" index={i:>2} codec={c:8} lang={lang or '-':3} title={title or '-'}{flags}")
|
||||
|
||||
# ------------------------------
|
||||
# CLI
|
||||
# ------------------------------
|
||||
def register_arguments(parser: argparse.ArgumentParser):
|
||||
parser.description = (
|
||||
"List, extract, and convert subtitle tracks. "
|
||||
"Works with container-embedded subtitles or standalone .srt/.ass/.vtt files."
|
||||
)
|
||||
|
||||
# Selection (video mode)
|
||||
parser.add_argument("--list", action="store_true",
|
||||
help="List subtitle streams in the input video and exit.")
|
||||
parser.add_argument("--indexes", type=str,
|
||||
help="Comma-separated list of subtitle stream indexes to extract (e.g., '0,2').")
|
||||
parser.add_argument("--langs", type=str,
|
||||
help="Comma-separated list of language codes to extract (e.g., 'eng,spa').")
|
||||
parser.add_argument("--all", action="store_true",
|
||||
help="Extract all subtitle streams.")
|
||||
parser.add_argument("--forced-only", action="store_true",
|
||||
help="Only include streams with 'forced' disposition.")
|
||||
parser.add_argument("--exclude-hi", action="store_true",
|
||||
help="Exclude streams with 'hearing_impaired' disposition.")
|
||||
|
||||
# Output control
|
||||
parser.add_argument("--format", choices=["srt","vtt","ass"],
|
||||
help="Target subtitle format for output. Required for standalone subtitle conversion; optional for video mode.")
|
||||
parser.add_argument("--outdir", type=str,
|
||||
help="Directory for multiple extracted subtitle files.")
|
||||
parser.add_argument("--outputfile", type=str,
|
||||
help="Single output file path (only valid when extracting a single stream or converting a single subtitle file).")
|
||||
|
||||
# Timing
|
||||
parser.add_argument("--time-shift", type=float, default=0.0,
|
||||
help="Apply time shift in seconds (can be negative).")
|
||||
|
||||
# Note: -i/--input and -F/--force are handled by top-level CLI.
|
||||
|
||||
# ------------------------------
|
||||
# Main execution
|
||||
# ------------------------------
|
||||
def run(args: argparse.Namespace):
|
||||
in_path = Path(args.input)
|
||||
if not in_path.exists():
|
||||
raise SystemExit(f"❌ Input not found: {in_path}")
|
||||
|
||||
is_sub_file = _is_subtitle_file(in_path)
|
||||
|
||||
# Standalone subtitle conversion mode
|
||||
if is_sub_file:
|
||||
if not args.format:
|
||||
raise SystemExit("❌ --format is required when input is a subtitle file.")
|
||||
if not args.outputfile:
|
||||
raise SystemExit("❌ --outputfile is required when input is a subtitle file.")
|
||||
|
||||
fmt = args.format.lower()
|
||||
out_path = Path(args.outputfile)
|
||||
if out_path.suffix.lower() != f".{fmt}":
|
||||
out_path = out_path.with_suffix(f".{fmt}")
|
||||
out_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# ffmpeg: sub file -> target format
|
||||
# Use -itsoffset if time-shift != 0
|
||||
cmd = ["ffmpeg"]
|
||||
if args.time_shift and args.time_shift != 0.0:
|
||||
cmd += _shift_args(args.time_shift)
|
||||
cmd += ["-i", str(in_path),
|
||||
"-map", "0:s:0",
|
||||
"-c:s", _target_codec_for(fmt),
|
||||
str(out_path)]
|
||||
if getattr(args, "force", False):
|
||||
cmd = cmd[:1] + ["-y"] + cmd[1:]
|
||||
run_ffmpeg_with_progress(cmd, args.input, out_path)
|
||||
return
|
||||
|
||||
# Video mode (container with possible subtitle streams)
|
||||
streams = _run_ffprobe_streams(in_path)
|
||||
|
||||
if args.list:
|
||||
_print_list(streams, in_path)
|
||||
return
|
||||
|
||||
# Build selection
|
||||
indexes = _parse_index_list(args.indexes) if args.indexes else None
|
||||
langs = _parse_langs(args.langs) if args.langs else None
|
||||
selected = _select_streams(streams, indexes, langs, args.forced_only, args.exclude_hi)
|
||||
|
||||
if args.all:
|
||||
selected = streams[:] # all subtitle streams (already filtered by ffprobe)
|
||||
|
||||
if not selected:
|
||||
raise SystemExit("❌ No subtitle streams matched your selection. Use --list to inspect indices and languages.")
|
||||
|
||||
# Output policy
|
||||
single_output = (len(selected) == 1)
|
||||
if single_output and args.outputfile:
|
||||
# Write exactly one file
|
||||
st = selected[0]
|
||||
fmt = (args.format or _infer_target_ext(st.get("codec_name"))[1:]).lower()
|
||||
out_path = Path(args.outputfile)
|
||||
if out_path.suffix.lower() != f".{fmt}":
|
||||
out_path = out_path.with_suffix(f".{fmt}")
|
||||
out_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Build command
|
||||
# If time-shift, we insert -itsoffset before the video input, then map the subtitle stream.
|
||||
# Careful: stream['index'] from ffprobe is the global stream index, not the subtitle-relative index,
# so we select by absolute index with -map 0:<stream_index> rather than -map 0:s:<n>.
        abs_index = st.get("index")
        if abs_index is None:
            raise SystemExit("❌ Unexpected: stream lacks 'index' field.")

        fmt_target = (args.format or _infer_target_ext(st.get("codec_name"))[1:]).lower()
        cmd = ["ffmpeg"]
        if args.time_shift and args.time_shift != 0.0:
            cmd += _shift_args(args.time_shift)
        cmd += [
            "-i", str(in_path),
            "-map", f"0:{abs_index}",
            "-c:s", _target_codec_for(fmt_target),
            str(out_path)
        ]
        if getattr(args, "force", False):
            cmd = cmd[:1] + ["-y"] + cmd[1:]
        run_ffmpeg_with_progress(cmd, args.input, out_path)
        return

    # Multiple outputs → --outdir required
    if not args.outdir:
        raise SystemExit("❌ Multiple streams selected. Provide --outdir to write batch outputs.")

    outdir = Path(args.outdir)
    outdir.mkdir(parents=True, exist_ok=True)

    # Build and run per-stream extraction
    for st in selected:
        idx = st.get("index")
        codec = st.get("codec_name", "")
        tags = (st.get("tags", {}) or {})
        lang = (tags.get("language") or "und").lower()
        title = tags.get("title") or ""
        # Decide extension
        ext = _format_to_ext(args.format) if args.format else _infer_target_ext(codec)
        # out name: <basename>_sIDX_LANG[optional_title_sanitized].ext
        base = in_path.stem
        title_part = f"_{_sanitize_filename(title)}" if title else ""
        out_path = outdir / f"{base}_s{idx}_{lang}{title_part}{ext}"

        # Build command
        cmd = ["ffmpeg"]
        if args.time_shift and args.time_shift != 0.0:
            cmd += _shift_args(args.time_shift)
        cmd += [
            "-i", str(in_path),
            "-map", f"0:{idx}",
            "-c:s", _target_codec_for((args.format or ext[1:]).lower()),
            str(out_path)
        ]
        if getattr(args, "force", False):
            cmd = cmd[:1] + ["-y"] + cmd[1:]
        run_ffmpeg_with_progress(cmd, args.input, out_path)


def _sanitize_filename(s: str) -> str:
    bad = '<>:"/\\|?*'
    out = "".join("_" if ch in bad else ch for ch in s)
    return out.strip()
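
# Illustrative output naming (assumes _infer_target_ext("subrip") returns ".srt"):
# for an input bbb.mov, a stream with index=3, language=eng and title "Forced: SDH"
# is written by the batch branch above as "bbb_s3_eng_Forced_ SDH.srt", since
# _sanitize_filename() only swaps <>:"/\|?* for underscores and keeps spaces.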
183
videobeaux/programs/thumbs.py
Normal file
@@ -0,0 +1,183 @@
#!/usr/bin/env python3
# videobeaux/programs/thumbs.py
# Thumbnail / Contact Sheet generator for videobeaux.
#
# Supports:
# - Interval sampling (--fps)
# - Scene-based sampling (--scene)
# - Timestamp overlays (--timestamps)
# - Custom fonts, colors, margins, padding
# - Frame sequences (--outdir)
# - Contact sheets (--outputfile)
#
# Example:
#   videobeaux -P thumbs -i ./media/bbb.mov --outputfile ./out/bbb_contact.jpg --fps 0.5 --tile 5x4 -F

from __future__ import annotations
import argparse
from pathlib import Path
from typing import List

from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress

DEFAULT_EXT = "jpg"

# ------------------------------
# Helper functions
# ------------------------------
def _parse_tile(tile: str | None) -> tuple[int, int]:
    if not tile:
        return (6, 4)
    try:
        parts = tile.lower().replace("x", " ").split()
        c, r = int(parts[0]), int(parts[1])
        return (max(1, c), max(1, r))
    except Exception:
        raise SystemExit("❌ Invalid --tile format. Use like 6x4 (columns x rows).")

def _scale_expr(scale: str | None) -> str:
    if not scale:
        return "320:-1"
    if ":" not in scale:
        raise SystemExit("❌ Invalid --scale format. Use WIDTH:HEIGHT (e.g., 320:-1 or 360:360).")
    return scale

def _escape_text(s: str) -> str:
    return s.replace(":", r"\:").replace("'", r"\'")

def _sanitize_color(c: str) -> str:
    """Accepts '#RRGGBB', '0xRRGGBB', or named colors like 'black'."""
    c = (c or "").strip()
    if not c:
        return "black"
    if c.startswith("#"):
        hx = c[1:]
        if len(hx) == 3:
            hx = "".join(ch * 2 for ch in hx)
        return "0x" + hx.lower()
    if c.lower().startswith("0x"):
        return c.lower()
    return c  # Named colors
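
# Quick examples (illustrative): _sanitize_color("#fff") -> "0xffffff" (3-digit hex expands),
# _sanitize_color("0xAABBCC") -> "0xaabbcc", _sanitize_color("") -> "black"; named colors
# such as "white" pass through unchanged.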

def _drawtext_chain(timestamps: bool, fontfile: str | None) -> list[str]:
    chain: list[str] = []
    if timestamps:
        dt = "drawtext=text='%{pts\\:hms}':x=10:y=h-th-10:fontsize=20:fontcolor=white"
        if fontfile:
            dt += f":fontfile='{_escape_text(fontfile)}'"
        chain.append(dt)
    return chain
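
# Illustrative: _drawtext_chain(True, None) yields one drawtext filter that stamps the frame's
# running time ('%{pts\:hms}', i.e. HH:MM:SS.mmm) near the bottom-left of each thumb; with
# timestamps=False it returns an empty list and the chain is a no-op.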

def _ensure_parent(path: Path):
    path.parent.mkdir(parents=True, exist_ok=True)

# ------------------------------
# Argument registration
# ------------------------------
def register_arguments(parser: argparse.ArgumentParser):
    parser.description = (
        "Generate thumbnails and/or a tiled contact sheet from a video. "
        "Supports interval or scene-based selection, timestamps, custom fonts, and layout styling."
    )

    # Sampling
    parser.add_argument("--fps", type=float, default=0.5, help="Frames per second to sample (e.g., 0.5 = one every 2s).")
    parser.add_argument("--scene", action="store_true", help="Use scene-based selection instead of fixed intervals.")
    parser.add_argument("--scene-threshold", type=float, default=0.4, help="Scene detection sensitivity (lower = more cuts).")

    # Appearance
    parser.add_argument("--tile", type=str, default="6x4", help="Contact sheet grid 'COLUMNSxROWS' (e.g., 6x4).")
    parser.add_argument("--scale", type=str, default="320:-1", help="Per-thumb scale (e.g., 320:-1).")
    parser.add_argument("--timestamps", action="store_true", help="Overlay timestamps on thumbnails.")
    parser.add_argument("--label", action="store_true", help="Add a footer label with filename.")
    parser.add_argument("--fontfile", type=str, help="Custom font path for drawtext.")
    parser.add_argument("--bg", type=str, default="#000000", help="Background color ('black', '#111111', or '0x111111').")
    parser.add_argument("--margin", type=int, default=12, help="Outer margin (pixels).")
    parser.add_argument("--padding", type=int, default=6, help="Padding between tiles (pixels).")

    # Outputs
    parser.add_argument("--outdir", type=str, help="Directory to export frame sequence.")
    parser.add_argument("--outputfile", type=str, help="Output contact sheet path (e.g., ./out/sheet.jpg).")
    parser.add_argument("--image-format", choices=["jpg", "png"], default=None, help="Output format if --outputfile has no extension.")
    parser.add_argument("--jpeg-quality", type=int, default=3, help="JPEG quality (2=high, 3=good, 5=ok).")

# ------------------------------
# Main execution
# ------------------------------
def run(args: argparse.Namespace):
    in_path = Path(args.input)
    if not in_path.exists():
        raise SystemExit(f"❌ Input not found: {in_path}")

    contactsheet_path: Path | None = None
    if getattr(args, "outputfile", None):
        contactsheet_path = Path(args.outputfile)
    if contactsheet_path and contactsheet_path.suffix == "":
        ext = args.image_format or DEFAULT_EXT
        contactsheet_path = contactsheet_path.with_suffix(f".{ext}")

    outdir_path: Path | None = None
    if getattr(args, "outdir", None):
        outdir_path = Path(args.outdir)

    if not contactsheet_path and not outdir_path:
        raise SystemExit("❌ Provide at least one output: --outputfile or --outdir.")

    scale = _scale_expr(args.scale)
    draw_chain = _drawtext_chain(args.timestamps, args.fontfile)

    # -------------- Frame Sequence --------------
    if outdir_path:
        outdir_path.mkdir(parents=True, exist_ok=True)
        seq_filters: list[str] = []
        if args.scene:
            seq_filters.append(f"select='gt(scene,{args.scene_threshold})'")
        else:
            seq_filters.append(f"fps={max(0.001, float(args.fps))}")
        seq_filters.append(f"scale={scale}")
        seq_filters.extend(draw_chain)
        seq_vf = ",".join(seq_filters)
        pattern = str(outdir_path / "frame_%06d.jpg")
        cmd = [
            "ffmpeg",
            "-i", str(in_path),
            "-vf", seq_vf,
            "-q:v", str(max(1, min(31, int(args.jpeg_quality)))),
            pattern
        ]
        if getattr(args, "force", False):
            cmd = cmd[:1] + ["-y"] + cmd[1:]
        run_ffmpeg_with_progress(cmd, args.input, outdir_path / "frame_%06d.jpg")

    # -------------- Contact Sheet --------------
    if contactsheet_path:
        _ensure_parent(contactsheet_path)
        cols, rows = _parse_tile(args.tile)
        bg_color = _sanitize_color(args.bg)

        filters: list[str] = []
        if args.scene:
            filters.append(f"select='gt(scene,{args.scene_threshold})'")
        else:
            filters.append(f"fps={max(0.001, float(args.fps))}")
        filters.append(f"scale={scale}")
        filters.extend(draw_chain)
filters.append(f"tile={cols}x{rows}:{int(args.margin)}:{int(args.padding)}:{bg_color}")

        if args.label:
            label_txt = _escape_text(in_path.name)
            label = f"drawtext=text='{label_txt}':x=10:y=h-th-10:fontsize=22:fontcolor=white"
            if args.fontfile:
                label += f":fontfile='{_escape_text(args.fontfile)}'"
            filters.append(label)

        full_chain = ",".join(filters)
        is_png = contactsheet_path.suffix.lower() == ".png"

        cmd = ["ffmpeg", "-i", str(in_path), "-vf", full_chain, "-frames:v", "1"]
        if not is_png:
            cmd += ["-q:v", str(max(1, min(31, int(args.jpeg_quality))))]
        cmd += [str(contactsheet_path)]
        if getattr(args, "force", False):
            cmd = cmd[:1] + ["-y"] + cmd[1:]
        run_ffmpeg_with_progress(cmd, args.input, contactsheet_path)
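
        # Illustrative full -vf chain with the defaults plus --timestamps (a sketch,
        # assuming the tile options used above):
        #   fps=0.5,scale=320:-1,
        #   drawtext=text='%{pts\:hms}':x=10:y=h-th-10:fontsize=20:fontcolor=white,
        #   tile=6x4:margin=12:padding=6:color=0x000000
        # rendered with -frames:v 1 so the whole grid lands in a single sheet image.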
120
videobeaux/programs/tonemap_hdr_sdr.py
Normal file
@@ -0,0 +1,120 @@
# videobeaux/programs/tonemap_hdr_sdr.py
# HDR → SDR tone mapping using zscale + tonemap (default: hable).
# Matches videobeaux program structure: register_arguments() + run(args).

from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress

def register_arguments(parser):
    parser.description = (
        "HDR → SDR Tone Map\n"
        "Convert HDR (PQ/HLG) video to SDR (BT.709) using zscale + tonemap.\n"
"Default mapping is Hable with mild desaturation and 1000-nit peak."
    )
    # IO
    parser.add_argument(
        "--outfile",
        required=True,
        help="Output file path for the SDR result (use this instead of the global -o)."
    )

    # Tonemap controls
    parser.add_argument(
        "--algo",
        choices=["hable", "mobius", "reinhard", "clip"],
        default="hable",
        help="Tonemap operator. Default: hable"
    )
    parser.add_argument(
        "--desat",
        type=float,
        default=0.0,
        help="Desaturate highlights during tonemap [0.0–1.0]. Default: 0.0"
    )
    parser.add_argument(
        "--peak",
        type=float,
        default=1000.0,
        help="Nominal HDR peak (nits) for linearization (zscale npl). Default: 1000"
    )
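    # Typical values: 1000 nits is a common mastering peak for consumer HDR releases,
    # while some masters are graded to 4000 nits; set --peak to match the source when known.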
    # Output color / dithering / pixfmt
    parser.add_argument(
        "--dither",
        choices=["none", "ordered", "random", "error_diffusion"],
        default="error_diffusion",
        help="Dither mode applied in zscale prior to format(). Default: error_diffusion"
    )
    parser.add_argument(
        "--pix-fmt",
        default="yuv420p",
        help="Output pixel format. Common picks: yuv420p, yuv422p10le. Default: yuv420p"
    )
    parser.add_argument(
        "--x264-preset",
        default="medium",
        help="libx264 preset (if re-encoding). Default: medium"
    )
    parser.add_argument(
        "--crf",
        type=float,
        default=18.0,
        help="CRF when encoding with libx264. Default: 18"
    )
    parser.add_argument(
        "--copy-audio",
        action="store_true",
        help="Copy audio stream instead of re-encoding."
    )

def run(args):
    """
    Pipeline:
      1) zscale=transfer=linear:npl=PEAK    # Convert to linear using nominal peak
      2) tonemap=ALGO:desat=DESAT           # Apply tonemap curve
      3) zscale=primaries=bt709:transfer=bt709:matrix=bt709:dither=DITHER
      4) format=PIX_FMT
    Notes:
      - We set explicit BT.709 flags on the stream to keep players honest.
      - We re-encode video (libx264). Audio can be copied with --copy-audio.
    """

    outfile = args.outfile

    # Build filtergraph
    filtergraph = (
        f"zscale=transfer=linear:npl={args.peak},"
        f"tonemap={args.algo}:desat={args.desat},"
        f"zscale=primaries=bt709:transfer=bt709:matrix=bt709:dither={args.dither},"
        f"format={args.pix_fmt}"
    )

    # Core command
    command = [
        "ffmpeg",
        "-err_detect", "ignore_err",
        "-fflags", "+genpts+discardcorrupt",
        "-i", args.input,

        "-vf", filtergraph,

        # Color tags (make sure containers/players see BT.709 SDR)
        "-colorspace", "bt709",
        "-color_trc", "bt709",
        "-color_primaries", "bt709",

        # Encode video
        "-c:v", "libx264",
        "-preset", f"{args.x264_preset}",
        "-crf", f"{args.crf}",

        # Audio strategy
        "-c:a", "copy" if getattr(args, "copy_audio", False) else "aac",

        # Output path from --outfile
        outfile,
    ]

    # Respect --force like other programs (inject -y right after 'ffmpeg')
    final_cmd = (command[:1] + ["-y"] + command[1:]) if getattr(args, "force", False) else command

    # Progress helper consistent with other programs
    run_ffmpeg_with_progress(final_cmd, args.input, outfile)
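
    # Illustrative expansion with the defaults (a sketch of the single string -vf receives,
    # wrapped here for readability):
    #   zscale=transfer=linear:npl=1000.0,tonemap=hable:desat=0.0,
    #   zscale=primaries=bt709:transfer=bt709:matrix=bt709:dither=error_diffusion,format=yuv420p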
199
videobeaux/programs/watermark.py
Normal file
@@ -0,0 +1,199 @@
#!/usr/bin/env python3
# videobeaux/programs/watermark.py
#
# Overlay a static/animated watermark (PNG/JPG/GIF) onto a video.
# - Robust GIF handling (looping, -ignore_loop, optional -stream_loop)
# - Placement presets with margin
# - Scale factor relative to watermark's intrinsic width (iw*scale)
# - Opacity via colorchannelmixer (alpha)
# - Optional spin (continuous rotation over time)
# - Timed enable window (start/end seconds)
# - Safe stream mapping and mp4-friendly output
#
# Example:
#   videobeaux -P watermark \
#     -i ./media/bbb.mov -o ./out/bbb_wm_windowed.mp4 \
#     --watermark ./media/badge.gif --placement bottom-right --margin 24 \
#     --scale 0.25 --opacity 0.7 --spin 12.0 --start 1.0 --end 7.0 \
#     --wm-loop -1 -F
#
# Notes:
# - spin is degrees per second (float). angle(t) = spin_deg_per_sec * t * pi/180
# - wm-loop behaves like ffmpeg -stream_loop for the watermark input:
#     -1 = infinite, 0 = no extra loops, N>0 loop N times after first play
# - By default we pass -ignore_loop 0 for GIF so the decoder honors the file's intrinsic loop count; --ignore-loop switches this to -ignore_loop 1.
# - For non-GIF stills, ffmpeg holds the frame; for sequences/GIF we add -stream_loop as requested.

from __future__ import annotations
import argparse
from pathlib import Path
from typing import Tuple

from videobeaux.utils.ffmpeg_operations import run_ffmpeg_with_progress


def _placement_xy(placement: str, margin: int) -> Tuple[str, str]:
    pm = placement.lower().strip()
    m = int(margin)
    if pm == "top-left":
        return (f"{m}", f"{m}")
    if pm == "top-right":
        return (f"W-w-{m}", f"{m}")
    if pm == "bottom-left":
        return (f"{m}", f"H-h-{m}")
    if pm == "bottom-right":
        return (f"W-w-{m}", f"H-h-{m}")
    if pm == "center":
        return ("(W-w)/2", "(H-h)/2")
    # fallback
    return (f"W-w-{m}", f"H-h-{m}")


def _sanitize_scale(scale: float) -> float:
    try:
        s = float(scale)
    except Exception:
        raise SystemExit("❌ --scale must be a number (e.g., 0.25).")
    if s <= 0:
        raise SystemExit("❌ --scale must be > 0.")
    return s


def _sanitize_opacity(opacity: float) -> float:
    try:
        a = float(opacity)
    except Exception:
        raise SystemExit("❌ --opacity must be a number between 0.0 and 1.0.")
    if not (0.0 <= a <= 1.0):
        raise SystemExit("❌ --opacity must be between 0.0 and 1.0.")
    return a


def _gif_input_flags(wm_path: Path, wm_loop: int, ignore_loop_flag: bool) -> list[str]:
    """
    Build input flags for GIF/animated watermark.
    We default to respecting GIF's intrinsic loop: -ignore_loop 0
    Then optionally add -stream_loop <N> to extend looping.
    """
    flags: list[str] = []
    if wm_path.suffix.lower() == ".gif":
        # If user asked to ignore the gif's intrinsic loop, set -ignore_loop 1
        if ignore_loop_flag:
            flags += ["-ignore_loop", "1"]
        else:
            flags += ["-ignore_loop", "0"]

    # -stream_loop <N>: -1 infinite, 0 none, N>0 N extra loops after first play.
    # Only add when user provides a value different from None and not 0.
    # (If 0, we omit; if -1 or >0, we set it.)
    if wm_loop is not None and wm_loop != 0:
        flags += ["-stream_loop", str(int(wm_loop))]
    return flags
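
# Example (illustrative): _gif_input_flags(Path("badge.gif"), -1, False)
#   -> ["-ignore_loop", "0", "-stream_loop", "-1"]
# i.e. honor the GIF's own loop count, then loop the whole watermark input indefinitely.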


def register_arguments(parser: argparse.ArgumentParser):
    parser.description = (
        "Burn a watermark (PNG/JPG/GIF) into a video with placement, scale, opacity, "
        "optional spin, and timed enable window."
    )
    parser.add_argument("--watermark", required=True, help="Path to watermark image (PNG/JPG/GIF).")
    parser.add_argument("--placement", default="bottom-right",
                        choices=["top-left", "top-right", "bottom-left", "bottom-right", "center"],
                        help="Watermark placement.")
    parser.add_argument("--margin", type=int, default=24, help="Margin (px) from edges for placement.")
    parser.add_argument("--scale", type=float, default=0.25,
                        help="Scale factor relative to watermark intrinsic width (iw*scale).")
    parser.add_argument("--opacity", type=float, default=0.8,
                        help="Watermark opacity (0.0–1.0).")
    parser.add_argument("--spin", type=float, default=0.0,
                        help="Watermark spin in degrees per second (0 = no rotation).")
    parser.add_argument("--start", type=float, default=0.0, help="Enable overlay starting at t seconds.")
    parser.add_argument("--end", type=float, default=0.0,
                        help="Disable overlay after t seconds (0 = until end).")

    # GIF/animated controls
    parser.add_argument("--wm-loop", type=int, default=0,
                        help="Additional loops for watermark input (-1=infinite, 0=none, N>0 times).")
    parser.add_argument("--ignore-loop", action="store_true",
                        help="For GIF watermark: ignore intrinsic loop (use frames once).")

    # Video encode controls
    parser.add_argument("--video-crf", type=int, default=18, help="CRF for libx264.")
    parser.add_argument("--video-preset", type=str, default="fast", help="x264 preset.")

    # NOTE: -i/--input, -o/--output, -F/--force are provided by the top-level CLI.


def run(args: argparse.Namespace):
    in_path = Path(args.input)
    if not in_path.exists():
        raise SystemExit(f"❌ Input not found: {in_path}")

    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)

    wm_path = Path(args.watermark)
    if not wm_path.exists():
        raise SystemExit(f"❌ Watermark not found: {wm_path}")

    # Validate numerics
    scale = _sanitize_scale(args.scale)
    opacity = _sanitize_opacity(args.opacity)

    # Placement math
    x_expr, y_expr = _placement_xy(args.placement, args.margin)

    # Enable expression
    if args.end and args.end > 0:
        enable_expr = f"between(t,{float(args.start)},{float(args.end)})"
    else:
        enable_expr = f"gte(t,{float(args.start)})"

    # Build the watermark processing chain
    # 1) scale relative to its own width (iw*scale)
    # 2) convert to RGBA, then apply alpha multiplier via colorchannelmixer
    # 3) optional rotation with 'rotate' (angle in radians)
    wm_chain_parts = [f"scale=iw*{scale}:-1", "format=rgba", f"colorchannelmixer=aa={opacity}"]

    spin = float(args.spin or 0.0)
    if spin != 0.0:
        # angle(t) = spin_deg_per_sec * t * pi/180
        # Use ffmpeg expr: (spin*pi/180)*t
        wm_chain_parts.append(f"rotate={spin}*PI/180*t:fillcolor=0x00000000")

    wm_chain = ",".join(wm_chain_parts)

    # Overlay (+ enable)
    overlay = f"overlay={x_expr}:{y_expr}:enable='{enable_expr}'"

    # Assemble filter_complex with named pads
    # [1:v]wm_chain[wm];[0:v][wm]overlay=...   (alpha premult handled by format=rgba)
    filter_complex = f"[1:v]{wm_chain}[wm];[0:v][wm]{overlay}"

    # Inputs (include GIF flags when appropriate)
    input_flags: list[str] = ["-i", str(in_path)]
    gif_flags = _gif_input_flags(wm_path, int(args.wm_loop), bool(args.ignore_loop))
    input_flags += gif_flags + ["-i", str(wm_path)]

    # Safe mapping: map main video/audio from #0, output yuv420p mp4 with x264
    command = [
        "ffmpeg",
        *input_flags,
        "-filter_complex", filter_complex,
        "-map", "0:v:0",
"-map", "0:a?:0",
        "-c:v", "libx264",
        "-crf", str(int(args.video_crf)),
        "-preset", str(args.video_preset),
        "-pix_fmt", "yuv420p",
        "-c:a", "aac",
        "-b:a", "192k",
        "-shortest",
        str(out_path),
    ]

    if getattr(args, "force", False):
        command = command[:1] + ["-y"] + command[1:]

    # Run
    run_ffmpeg_with_progress(command, args.input, out_path)
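
    # Illustrative filter_complex for the header example (--scale 0.25 --opacity 0.7
    # --spin 12.0 --start 1.0 --end 7.0, default bottom-right placement with margin 24):
    #   [1:v]scale=iw*0.25:-1,format=rgba,colorchannelmixer=aa=0.7,
    #        rotate=12.0*PI/180*t:fillcolor=0x00000000[wm];
    #   [0:v][wm]overlay=W-w-24:H-h-24:enable='between(t,1.0,7.0)'
    # (a single string in practice, wrapped here for readability).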