ai tools finished

2026-04-15 17:13:56 -06:00
parent d11e26cf2d
commit 024b9bd806
17 changed files with 566 additions and 328 deletions

View File

@ -1,327 +1,129 @@
# AI Dev Plan (Must-Haves Only)

## Purpose

This is the minimum implementation needed for AI to reliably build, test, and debug TalkEdit with high confidence.

Target: reliable 80-90% autonomous implementation/debugging on scoped tasks.

## Must-Have Pillars

## 1. Single Validation Command

Required:
1. One command that runs lint, build, backend tests, and smoke checks.
2. Works locally and in CI.

Current status:
1. Implemented via scripts/validate-all.sh.
2. Enforced in CI via .github/workflows/validate-all.yml.
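A fail-fast single entry point can be sketched as below. The repository's real script is scripts/validate-all.sh; the step commands named in the comments are assumptions about what it chains, not its actual contents.

```shell
#!/usr/bin/env bash
# Sketch of a "one command validates everything" entry point.
set -euo pipefail

run_step() {
  # Run one named validation step; any failure aborts the whole run.
  local name="$1"; shift
  echo "==> ${name}"
  "$@" || { echo "FAILED: ${name}" >&2; exit 1; }
}

# A real run would chain the project's own commands, for example:
#   run_step "frontend lint"  npm run lint
#   run_step "frontend build" npm run build
#   run_step "backend tests"  python -m pytest backend/tests -q
run_step "demo step" true
echo "validation passed"
```

Failing fast on the first broken step keeps diagnostics focused on a single failure instead of a wall of cascading errors.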
## 2. CI Quality Gate

Required:
1. Pull requests fail if validation fails.
2. Failures produce diagnostics artifacts.

Current status:
1. Implemented in .github/workflows/validate-all.yml.
2. Diagnostics collected by scripts/collect-diagnostics.sh on failure.
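The gate above can take roughly this shape. This is an illustrative sketch only; the repository's real workflow lives at .github/workflows/validate-all.yml and may differ in step names and actions used.

```yaml
# Hypothetical PR gate: run validation, collect diagnostics only on failure.
name: validate-all
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run full validation
        run: ./scripts/validate-all.sh
      - name: Collect diagnostics on failure
        if: failure()
        run: ./scripts/collect-diagnostics.sh
      - name: Upload diagnostics artifact
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: diagnostics
          path: .diagnostics/
```

The `if: failure()` steps are what turn a red run into an AI-readable artifact instead of a dead end.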
## 3. Spec Requirement for Feature Changes

Required:
1. Feature code changes must include a spec file update.
2. Spec format must be standardized.

Current status:
1. Implemented via scripts/check-feature-spec.sh.
2. Spec template exists at docs/spec-template.md.
3. Specs folder guidance exists at docs/specs/README.md.
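The spec gate can be sketched as a single function over the changed-paths list. The real check is scripts/check-feature-spec.sh; the path patterns below (frontend/src, backend/app) are assumptions about the repo layout.

```shell
# Minimal sketch of a spec gate: fail when feature code changed without a
# matching docs/specs/ update. Path patterns are illustrative assumptions.
spec_gate() {
  # $1: newline-separated list of changed paths (e.g. git diff --name-only)
  local changed="$1"
  if printf '%s\n' "$changed" | grep -qE '^(frontend/src|backend/app)/' &&
     ! printf '%s\n' "$changed" | grep -q '^docs/specs/'; then
    echo "feature change without a docs/specs update" >&2
    return 1
  fi
}
```

In CI this would typically be fed the output of `git diff --name-only origin/main...HEAD`.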
## 4. Backend Contract Test Coverage

Required:
1. Router-level contract tests for success and error paths.
2. Tests are deterministic and mock heavy services.

Current status:
1. Implemented in backend/tests/test_router_contracts.py.
2. Cache utility baseline tests implemented in backend/tests/test_cache_utils.py.
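The pattern behind these tests: mock the heavy service so the contract (status mapping, payload shape) is asserted deterministically. Every name below is illustrative; the real tests live in backend/tests/test_router_contracts.py and use the actual routers.

```python
# Deterministic contract test sketch: the transcription service is mocked,
# so only the handler's contract is under test. Names are illustrative.
from unittest.mock import Mock

class HTTPError(Exception):
    def __init__(self, status: int, detail: str):
        self.status, self.detail = status, detail

def handle_transcribe(service, path: str) -> dict:
    # Contract: unsupported input -> 400, service crash -> 500, success -> dict.
    if not path.endswith((".wav", ".mp4")):
        raise HTTPError(400, "unsupported media type")
    try:
        return {"segments": service.transcribe(path)}
    except Exception as exc:
        raise HTTPError(500, str(exc))

ok = Mock()
ok.transcribe.return_value = [{"start": 0.0, "text": "hi"}]
assert handle_transcribe(ok, "clip.wav") == {"segments": [{"start": 0.0, "text": "hi"}]}
```

Because the service is a mock, the test never touches FFmpeg or a model, which is what keeps it deterministic.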
## 5. Error-Tolerant Router Contracts

Required:
1. Expected client errors must remain 4xx.
2. Server failures must return 5xx with useful detail.

Current status:
1. Implemented for captions/export HTTPException passthrough.
2. Covered by contract tests.
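The passthrough fix reduces to one rule: re-raise deliberate HTTPExceptions before the blanket 500 handler can swallow them. A stripped-down model, with a stand-in class for FastAPI's HTTPException:

```python
# Model of the "except HTTPException: raise" pattern used in the
# captions/export routers; this class stands in for FastAPI's HTTPException.
class HTTPException(Exception):
    def __init__(self, status_code: int, detail: str):
        self.status_code, self.detail = status_code, detail

def run_endpoint(work):
    try:
        return work()
    except HTTPException:
        raise  # deliberate 4xx/5xx raised deeper in the handler stays intact
    except ValueError as exc:
        raise HTTPException(status_code=400, detail=str(exc))  # expected client error
    except Exception as exc:
        raise HTTPException(status_code=500, detail=str(exc))  # unexpected server failure
```

Without the passthrough clause, a 404 raised inside the handler would be caught by `except Exception` and re-surfaced as a misleading 500.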
## 6. Basic Autonomy Policy

Required:
1. Clear autonomous scope and escalation rules.
2. Clear restrictions for high-risk changes.

Current status:
1. Implemented in docs/ai-policy.md.
## Must-Have Remaining Work

No remaining must-have items.

Completed in this pass:
1. Added lightweight frontend tests and integrated them into scripts/validate-all.sh.
2. Added pull request template with required spec link and acceptance criteria checklist.
3. Added endpoint-level contract assertions for /file range requests and /audio/waveform cache-hit/cache-miss behavior.
4. Confirmed scripts/validate-all.sh passes end-to-end with frontend tests + expanded backend contracts.
## Out of Scope for Must-Have Baseline

Useful later, but not required for strong day-to-day autonomous implementation:
1. Full quality dashboards.
2. Advanced autonomy telemetry.
3. Complete long-term governance expansion.
4. High-autonomy optimization beyond 90% reliability target.
## Definition of Done (Must-Have Plan)

Must-have plan is complete when all are true:
1. scripts/validate-all.sh passes locally and in CI.
2. Feature PRs without spec updates are blocked.
3. Backend router contracts cover core success and error paths.
4. Frontend has at least one stable test command integrated into validation.
5. AI policy + diagnostics workflow are active.
## Current State Summary

Completed:
1. Validation and CI enforcement.
2. Diagnostics capture.
3. Spec policy and templates.
4. Backend contract test foundation (including AI endpoints).
5. Core router error-path correctness.
6. Autonomy policy baseline.
7. Frontend test command integrated into validation.
8. PR template requirement added.
9. /file and /audio/waveform contract assertions implemented.
Remaining:
1. No must-have items remaining.

View File

@ -60,6 +60,8 @@ async def generate_captions(req: CaptionRequest):
        return PlainTextResponse(content, media_type="text/plain")
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Caption generation failed: {e}", exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))

View File

@ -200,6 +200,8 @@ async def export_video(req: ExportRequest):
            result["srt_path"] = srt_path
        return result
    except HTTPException:
        raise
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    except RuntimeError as e:

View File

@ -7,23 +7,52 @@ import logging
import re
import subprocess
import tempfile
import warnings
from pathlib import Path

logger = logging.getLogger(__name__)

DEEPFILTER_AVAILABLE = None
enhance = None
init_df = None
load_audio = None
save_audio = None

_df_model = None
_df_state = None


def _ensure_deepfilter_loaded() -> bool:
    global DEEPFILTER_AVAILABLE, enhance, init_df, load_audio, save_audio
    if DEEPFILTER_AVAILABLE is not None:
        return DEEPFILTER_AVAILABLE
    try:
        # DeepFilterNet currently triggers a third-party torchaudio deprecation warning
        # on import in some environments; suppress only this known warning.
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore",
                message=r".*torchaudio\._backend\.common\.AudioMetaData has been moved.*",
                category=UserWarning,
            )
            from df.enhance import enhance as _enhance, init_df as _init_df, load_audio as _load_audio, save_audio as _save_audio
        enhance = _enhance
        init_df = _init_df
        load_audio = _load_audio
        save_audio = _save_audio
        DEEPFILTER_AVAILABLE = True
    except ImportError:
        DEEPFILTER_AVAILABLE = False
    return DEEPFILTER_AVAILABLE


def _init_deepfilter():
    global _df_model, _df_state
    if not _ensure_deepfilter_loaded():
        raise RuntimeError("DeepFilterNet is not available")
    if _df_model is None:
        logger.info("Initializing DeepFilterNet model")
        _df_model, _df_state, _ = init_df()

@ -46,7 +75,7 @@ def clean_audio(
    if not output_path:
        output_path = str(input_path.with_stem(input_path.stem + "_clean"))
    if is_deepfilter_available():
        return _clean_with_deepfilter(str(input_path), output_path)
    else:
        return _clean_with_ffmpeg(str(input_path), output_path)

@ -77,7 +106,7 @@ def _clean_with_ffmpeg(input_path: str, output_path: str) -> str:

def is_deepfilter_available() -> bool:
    return _ensure_deepfilter_loaded()


def detect_silence_ranges(input_path: str, min_silence_ms: int, silence_db: float):

View File

@ -34,7 +34,8 @@
        "tailwindcss": "^3.4.0",
        "typescript": "^5.7.0",
        "typescript-eslint": "^8.58.2",
        "vite": "^6.0.0",
        "vitest": "^4.1.4"
      }
    },
    "node_modules/@alloc/quick-lru": {
@ -1428,6 +1429,13 @@
        "win32"
      ]
    },
"node_modules/@standard-schema/spec": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.1.0.tgz",
"integrity": "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==",
"dev": true,
"license": "MIT"
},
    "node_modules/@tauri-apps/api": {
      "version": "2.10.1",
      "resolved": "https://registry.npmjs.org/@tauri-apps/api/-/api-2.10.1.tgz",
@ -1718,6 +1726,24 @@
        "@babel/types": "^7.28.2"
      }
    },
"node_modules/@types/chai": {
"version": "5.2.3",
"resolved": "https://registry.npmjs.org/@types/chai/-/chai-5.2.3.tgz",
"integrity": "sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR8A7GQleRWeI6cGFC2UA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/deep-eql": "*",
"assertion-error": "^2.0.1"
}
},
"node_modules/@types/deep-eql": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@types/deep-eql/-/deep-eql-4.0.2.tgz",
"integrity": "sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw==",
"dev": true,
"license": "MIT"
},
    "node_modules/@types/estree": {
      "version": "1.0.8",
      "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
@ -2068,6 +2094,119 @@
        "vite": "^4.2.0 || ^5.0.0 || ^6.0.0 || ^7.0.0"
      }
    },
"node_modules/@vitest/expect": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/expect/-/expect-4.1.4.tgz",
"integrity": "sha512-iPBpra+VDuXmBFI3FMKHSFXp3Gx5HfmSCE8X67Dn+bwephCnQCaB7qWK2ldHa+8ncN8hJU8VTMcxjPpyMkUjww==",
"dev": true,
"license": "MIT",
"dependencies": {
"@standard-schema/spec": "^1.1.0",
"@types/chai": "^5.2.2",
"@vitest/spy": "4.1.4",
"@vitest/utils": "4.1.4",
"chai": "^6.2.2",
"tinyrainbow": "^3.1.0"
},
"funding": {
"url": "https://opencollective.com/vitest"
}
},
"node_modules/@vitest/mocker": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/mocker/-/mocker-4.1.4.tgz",
"integrity": "sha512-R9HTZBhW6yCSGbGQnDnH3QHfJxokKN4KB+Yvk9Q1le7eQNYwiCyKxmLmurSpFy6BzJanSLuEUDrD+j97Q+ZLPg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@vitest/spy": "4.1.4",
"estree-walker": "^3.0.3",
"magic-string": "^0.30.21"
},
"funding": {
"url": "https://opencollective.com/vitest"
},
"peerDependencies": {
"msw": "^2.4.9",
"vite": "^6.0.0 || ^7.0.0 || ^8.0.0"
},
"peerDependenciesMeta": {
"msw": {
"optional": true
},
"vite": {
"optional": true
}
}
},
"node_modules/@vitest/pretty-format": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/pretty-format/-/pretty-format-4.1.4.tgz",
"integrity": "sha512-ddmDHU0gjEUyEVLxtZa7xamrpIefdEETu3nZjWtHeZX4QxqJ7tRxSteHVXJOcr8jhiLoGAhkK4WJ3WqBpjx42A==",
"dev": true,
"license": "MIT",
"dependencies": {
"tinyrainbow": "^3.1.0"
},
"funding": {
"url": "https://opencollective.com/vitest"
}
},
"node_modules/@vitest/runner": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/runner/-/runner-4.1.4.tgz",
"integrity": "sha512-xTp7VZ5aXP5ZJrn15UtJUWlx6qXLnGtF6jNxHepdPHpMfz/aVPx+htHtgcAL2mDXJgKhpoo2e9/hVJsIeFbytQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@vitest/utils": "4.1.4",
"pathe": "^2.0.3"
},
"funding": {
"url": "https://opencollective.com/vitest"
}
},
"node_modules/@vitest/snapshot": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/snapshot/-/snapshot-4.1.4.tgz",
"integrity": "sha512-MCjCFgaS8aZz+m5nTcEcgk/xhWv0rEH4Yl53PPlMXOZ1/Ka2VcZU6CJ+MgYCZbcJvzGhQRjVrGQNZqkGPttIKw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@vitest/pretty-format": "4.1.4",
"@vitest/utils": "4.1.4",
"magic-string": "^0.30.21",
"pathe": "^2.0.3"
},
"funding": {
"url": "https://opencollective.com/vitest"
}
},
"node_modules/@vitest/spy": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/spy/-/spy-4.1.4.tgz",
"integrity": "sha512-XxNdAsKW7C+FLydqFJLb5KhJtl3PGCMmYwFRfhvIgxJvLSXhhVI1zM8f1qD3Zg7RCjTSzDVyct6sghs9UEgBEQ==",
"dev": true,
"license": "MIT",
"funding": {
"url": "https://opencollective.com/vitest"
}
},
"node_modules/@vitest/utils": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/@vitest/utils/-/utils-4.1.4.tgz",
"integrity": "sha512-13QMT+eysM5uVGa1rG4kegGYNp6cnQcsTc67ELFbhNLQO+vgsygtYJx2khvdt4gVQqSSpC/KT5FZZxUpP3Oatw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@vitest/pretty-format": "4.1.4",
"convert-source-map": "^2.0.0",
"tinyrainbow": "^3.1.0"
},
"funding": {
"url": "https://opencollective.com/vitest"
}
},
    "node_modules/acorn": {
      "version": "8.16.0",
      "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz",
@ -2159,6 +2298,16 @@
      "dev": true,
      "license": "Python-2.0"
    },
"node_modules/assertion-error": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/assertion-error/-/assertion-error-2.0.1.tgz",
"integrity": "sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
}
},
    "node_modules/autoprefixer": {
      "version": "10.4.27",
      "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-10.4.27.tgz",
@ -2328,6 +2477,16 @@
      ],
      "license": "CC-BY-4.0"
    },
"node_modules/chai": {
"version": "6.2.2",
"resolved": "https://registry.npmjs.org/chai/-/chai-6.2.2.tgz",
"integrity": "sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=18"
}
},
    "node_modules/chalk": {
      "version": "4.1.2",
      "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
@ -2508,6 +2667,13 @@
      "dev": true,
      "license": "ISC"
    },
"node_modules/es-module-lexer": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-2.0.0.tgz",
"integrity": "sha512-5POEcUuZybH7IdmGsD8wlf0AI55wMecM9rVBTI/qEAy2c1kTOm3DjFYjrBdI2K3BaJjJYfYFeRtM0t9ssnRuxw==",
"dev": true,
"license": "MIT"
},
    "node_modules/esbuild": {
      "version": "0.25.12",
      "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.12.tgz",
@ -2747,6 +2913,16 @@
        "node": ">=4.0"
      }
    },
"node_modules/estree-walker": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-3.0.3.tgz",
"integrity": "sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/estree": "^1.0.0"
}
},
    "node_modules/esutils": {
      "version": "2.0.3",
      "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz",
@ -2757,6 +2933,16 @@
        "node": ">=0.10.0"
      }
    },
"node_modules/expect-type": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/expect-type/-/expect-type-1.3.0.tgz",
"integrity": "sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==",
"dev": true,
"license": "Apache-2.0",
"engines": {
"node": ">=12.0.0"
}
},
    "node_modules/fast-deep-equal": {
      "version": "3.1.3",
      "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
@ -3266,6 +3452,16 @@
        "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0-rc"
      }
    },
"node_modules/magic-string": {
"version": "0.30.21",
"resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.21.tgz",
"integrity": "sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@jridgewell/sourcemap-codec": "^1.5.5"
}
},
    "node_modules/merge2": {
      "version": "1.4.1",
      "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
@ -3385,6 +3581,17 @@
        "node": ">= 6"
      }
    },
"node_modules/obug": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/obug/-/obug-2.1.1.tgz",
"integrity": "sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ==",
"dev": true,
"funding": [
"https://github.com/sponsors/sxzz",
"https://opencollective.com/debug"
],
"license": "MIT"
},
    "node_modules/optionator": {
      "version": "0.9.4",
      "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
@ -3475,6 +3682,13 @@
      "dev": true,
      "license": "MIT"
    },
"node_modules/pathe": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/pathe/-/pathe-2.0.3.tgz",
"integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==",
"dev": true,
"license": "MIT"
},
    "node_modules/picocolors": {
      "version": "1.1.1",
      "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
@ -3933,6 +4147,13 @@
        "node": ">=8"
      }
    },
"node_modules/siginfo": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/siginfo/-/siginfo-2.0.0.tgz",
"integrity": "sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==",
"dev": true,
"license": "ISC"
},
    "node_modules/source-map-js": {
      "version": "1.2.1",
      "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
@ -3943,6 +4164,20 @@
        "node": ">=0.10.0"
      }
    },
"node_modules/stackback": {
"version": "0.0.2",
"resolved": "https://registry.npmjs.org/stackback/-/stackback-0.0.2.tgz",
"integrity": "sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==",
"dev": true,
"license": "MIT"
},
"node_modules/std-env": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/std-env/-/std-env-4.1.0.tgz",
"integrity": "sha512-Rq7ybcX2RuC55r9oaPVEW7/xu3tj8u4GeBYHBWCychFtzMIr86A7e3PPEBPT37sHStKX3+TiX/Fr/ACmJLVlLQ==",
"dev": true,
"license": "MIT"
},
    "node_modules/strip-json-comments": {
      "version": "3.1.1",
      "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz",
@ -4066,6 +4301,23 @@
        "node": ">=0.8"
      }
    },
"node_modules/tinybench": {
"version": "2.9.0",
"resolved": "https://registry.npmjs.org/tinybench/-/tinybench-2.9.0.tgz",
"integrity": "sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg==",
"dev": true,
"license": "MIT"
},
"node_modules/tinyexec": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/tinyexec/-/tinyexec-1.1.1.tgz",
"integrity": "sha512-VKS/ZaQhhkKFMANmAOhhXVoIfBXblQxGX1myCQ2faQrfmobMftXeJPcZGp0gS07ocvGJWDLZGyOZDadDBqYIJg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=18"
}
},
    "node_modules/tinyglobby": {
      "version": "0.2.15",
      "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
@ -4114,6 +4366,16 @@
        "url": "https://github.com/sponsors/jonschlinkert"
      }
    },
"node_modules/tinyrainbow": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/tinyrainbow/-/tinyrainbow-3.1.0.tgz",
"integrity": "sha512-Bf+ILmBgretUrdJxzXM0SgXLZ3XfiaUuOj/IKQHuTXip+05Xn+uyEYdVg0kYDipTBcLrCVyUzAPz7QmArb0mmw==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=14.0.0"
}
},
    "node_modules/to-regex-range": {
      "version": "5.0.1",
      "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
@ -4352,6 +4614,109 @@
        "url": "https://github.com/sponsors/jonschlinkert"
      }
    },
"node_modules/vitest": {
"version": "4.1.4",
"resolved": "https://registry.npmjs.org/vitest/-/vitest-4.1.4.tgz",
"integrity": "sha512-tFuJqTxKb8AvfyqMfnavXdzfy3h3sWZRWwfluGbkeR7n0HUev+FmNgZ8SDrRBTVrVCjgH5cA21qGbCffMNtWvg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@vitest/expect": "4.1.4",
"@vitest/mocker": "4.1.4",
"@vitest/pretty-format": "4.1.4",
"@vitest/runner": "4.1.4",
"@vitest/snapshot": "4.1.4",
"@vitest/spy": "4.1.4",
"@vitest/utils": "4.1.4",
"es-module-lexer": "^2.0.0",
"expect-type": "^1.3.0",
"magic-string": "^0.30.21",
"obug": "^2.1.1",
"pathe": "^2.0.3",
"picomatch": "^4.0.3",
"std-env": "^4.0.0-rc.1",
"tinybench": "^2.9.0",
"tinyexec": "^1.0.2",
"tinyglobby": "^0.2.15",
"tinyrainbow": "^3.1.0",
"vite": "^6.0.0 || ^7.0.0 || ^8.0.0",
"why-is-node-running": "^2.3.0"
},
"bin": {
"vitest": "vitest.mjs"
},
"engines": {
"node": "^20.0.0 || ^22.0.0 || >=24.0.0"
},
"funding": {
"url": "https://opencollective.com/vitest"
},
"peerDependencies": {
"@edge-runtime/vm": "*",
"@opentelemetry/api": "^1.9.0",
"@types/node": "^20.0.0 || ^22.0.0 || >=24.0.0",
"@vitest/browser-playwright": "4.1.4",
"@vitest/browser-preview": "4.1.4",
"@vitest/browser-webdriverio": "4.1.4",
"@vitest/coverage-istanbul": "4.1.4",
"@vitest/coverage-v8": "4.1.4",
"@vitest/ui": "4.1.4",
"happy-dom": "*",
"jsdom": "*",
"vite": "^6.0.0 || ^7.0.0 || ^8.0.0"
},
"peerDependenciesMeta": {
"@edge-runtime/vm": {
"optional": true
},
"@opentelemetry/api": {
"optional": true
},
"@types/node": {
"optional": true
},
"@vitest/browser-playwright": {
"optional": true
},
"@vitest/browser-preview": {
"optional": true
},
"@vitest/browser-webdriverio": {
"optional": true
},
"@vitest/coverage-istanbul": {
"optional": true
},
"@vitest/coverage-v8": {
"optional": true
},
"@vitest/ui": {
"optional": true
},
"happy-dom": {
"optional": true
},
"jsdom": {
"optional": true
},
"vite": {
"optional": false
}
}
},
"node_modules/vitest/node_modules/picomatch": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/wavesurfer.js": { "node_modules/wavesurfer.js": {
"version": "7.12.1", "version": "7.12.1",
"resolved": "https://registry.npmjs.org/wavesurfer.js/-/wavesurfer.js-7.12.1.tgz", "resolved": "https://registry.npmjs.org/wavesurfer.js/-/wavesurfer.js-7.12.1.tgz",
@@ -4374,6 +4739,23 @@
"node": ">= 8"
}
},
"node_modules/why-is-node-running": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/why-is-node-running/-/why-is-node-running-2.3.0.tgz",
"integrity": "sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==",
"dev": true,
"license": "MIT",
"dependencies": {
"siginfo": "^2.0.0",
"stackback": "0.0.2"
},
"bin": {
"why-is-node-running": "cli.js"
},
"engines": {
"node": ">=8"
}
},
"node_modules/word-wrap": { "node_modules/word-wrap": {
"version": "1.2.5", "version": "1.2.5",
"resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz", "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz",
View File
@@ -7,7 +7,8 @@
"dev": "vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
-"preview": "vite preview"
+"preview": "vite preview",
+"test": "vitest run"
},
"dependencies": { "dependencies": {
"@tauri-apps/api": "^2", "@tauri-apps/api": "^2",
@ -36,6 +37,7 @@
"tailwindcss": "^3.4.0", "tailwindcss": "^3.4.0",
"typescript": "^5.7.0", "typescript": "^5.7.0",
"typescript-eslint": "^8.58.2", "typescript-eslint": "^8.58.2",
"vite": "^6.0.0" "vite": "^6.0.0",
"vitest": "^4.1.4"
} }
} }
View File
@@ -47,14 +47,12 @@ export default function App() {
transcriptionModel,
language,
isTranscribing,
-transcriptionProgress,
transcriptionStatus,
loadVideo,
setBackendUrl,
setTranscription,
setTranscriptionModel,
setTranscribing,
-backendUrl,
selectedWordIndices,
addCutRange,
addMuteRange,
View File
@@ -1,10 +1,10 @@
-import { useState, useCallback, useMemo } from 'react';
+import { useState, useCallback } from 'react';
import { useEditorStore } from '../store/editorStore';
import { Download, Loader2, Zap, Cog, Info } from 'lucide-react';
import type { ExportOptions } from '../types/project';

export default function ExportDialog() {
-const { videoPath, words, deletedRanges, cutRanges, muteRanges, gainRanges, globalGainDb, isExporting, exportProgress, backendUrl, setExporting, getKeepSegments } =
+const { videoPath, words, deletedRanges, muteRanges, gainRanges, globalGainDb, isExporting, exportProgress, backendUrl, setExporting, getKeepSegments } =
useEditorStore();
const hasCuts = deletedRanges.length > 0;
@@ -60,7 +60,7 @@ export default function ExportDialog() {
console.error('Export error:', err);
setExporting(false);
}
-}, [videoPath, options, backendUrl, setExporting, getKeepSegments]);
+}, [videoPath, options, backendUrl, setExporting, getKeepSegments, deletedRanges, muteRanges, gainRanges, globalGainDb, words]);

return (
<div className="p-4 space-y-5">
View File
@@ -1,5 +1,5 @@
import { useAIStore } from '../store/aiStore';
-import { useState, useEffect } from 'react';
+import { useState, useEffect, useCallback } from 'react';
import type { AIProvider } from '../types/project';
import { useEditorStore } from '../store/editorStore';
import { Bot, Cloud, Brain, RefreshCw } from 'lucide-react';
@@ -10,7 +10,7 @@ export default function SettingsPanel() {
const [ollamaModels, setOllamaModels] = useState<string[]>([]);
const [loadingModels, setLoadingModels] = useState(false);

-const fetchOllamaModels = async () => {
+const fetchOllamaModels = useCallback(async () => {
setLoadingModels(true);
try {
const res = await fetch(`${backendUrl}/ai/ollama-models`);
@@ -23,11 +23,11 @@
} finally {
setLoadingModels(false);
}
-};
+}, [backendUrl]);

useEffect(() => {
fetchOllamaModels();
-}, [backendUrl]);
+}, [fetchOllamaModels]);

const providerIcons: Record<AIProvider, React.ReactNode> = {
ollama: <Bot className="w-4 h-4" />,
@@ -35,12 +35,6 @@ export default function SettingsPanel() {
claude: <Brain className="w-4 h-4" />,
};

-const providerLabels: Record<AIProvider, string> = {
-ollama: 'Ollama (Local)',
-openai: 'OpenAI',
-claude: 'Claude (Anthropic)',
-};
-
return (
<div className="p-4 space-y-6">
<h3 className="text-sm font-semibold">AI Settings</h3>
View File
@@ -13,7 +13,6 @@ export default function TranscriptEditor() {
const hoveredWordIndex = useEditorStore((s) => s.hoveredWordIndex);
const setSelectedWordIndices = useEditorStore((s) => s.setSelectedWordIndices);
const setHoveredWordIndex = useEditorStore((s) => s.setHoveredWordIndex);
-const deleteSelectedWords = useEditorStore((s) => s.deleteSelectedWords);
const restoreRange = useEditorStore((s) => s.restoreRange);
const removeCutRange = useEditorStore((s) => s.removeCutRange);
const removeMuteRange = useEditorStore((s) => s.removeMuteRange);
View File
@@ -150,7 +150,6 @@ export default function WaveformTimeline({
const [selectionStart, setSelectionStart] = useState<number | null>(null);
const [selectionEnd, setSelectionEnd] = useState<number | null>(null);
const [selectedZone, setSelectedZone] = useState<{type: 'cut' | 'mute' | 'gain', id: string} | null>(null);
-const [editingZone, setEditingZone] = useState<{type: 'cut' | 'mute' | 'gain', id: string, edge: 'start' | 'end' | 'move'} | null>(null);
const [hoverCursor, setHoverCursor] = useState<string>('crosshair');
const editingZoneRef = useRef<{type: 'cut' | 'mute' | 'gain', id: string, edge: 'start' | 'end' | 'move'} | null>(null);
const [showCutZones, setShowCutZones] = useState(true);
@@ -210,7 +209,7 @@
);
if (cancelled) return;
waveformDataRef.current = waveformData;
-drawStaticWaveform();
+drawStaticWaveformRef.current();
} catch (err) {
if (cancelled || (err instanceof DOMException && err.name === 'AbortError')) {
console.log('[WaveformTimeline] req=', requestId, 'aborted/cancelled');
@@ -594,7 +593,6 @@
// Check if click is in waveform area
if (y < waveTop || y > waveTop + waveH) return null;

-const clickTime = scroll + x / pxPerSec;
const handleSize = forHover ? 6 : 8; // Smaller hit area for hover, larger for click

// Check cut ranges
@@ -743,7 +741,6 @@
setSelectedZone({ type: zoneHit.type, id: zoneHit.id });
} else {
setSelectedZone({ type: zoneHit.type, id: zoneHit.id });
-setEditingZone(zoneHit);
editingZoneRef.current = zoneHit;
}
isDraggingRef.current = true;
@@ -795,7 +792,6 @@
const onUp = () => {
isDraggingRef.current = false;
setIsDragging(false);
-setEditingZone(null);
editingZoneRef.current = null;
window.removeEventListener('mousemove', onMove);
window.removeEventListener('mouseup', onUp);
@@ -808,7 +804,6 @@
// Clear selection if clicking elsewhere
setSelectedZone(null);
-setEditingZone(null);

if (cutMode || muteMode || gainMode) {
// Range selection mode
@@ -886,7 +881,6 @@
if (e.key === 'Escape') {
setSelectedZone(null);
-setEditingZone(null);
editingZoneRef.current = null;
} else if (e.key === 'Delete' || e.key === 'Backspace') {
if (selectedZone) {
@@ -901,7 +895,6 @@
removeGainRange(selectedZone.id);
}
setSelectedZone(null);
-setEditingZone(null);
editingZoneRef.current = null;
}
}
View File
@@ -2,7 +2,6 @@ import { useEffect, useRef } from 'react';
import { useEditorStore } from '../store/editorStore';

export function useKeyboardShortcuts() {
-const deleteSelectedWords = useEditorStore((s) => s.deleteSelectedWords);
const addCutRange = useEditorStore((s) => s.addCutRange);
const selectedWordIndices = useEditorStore((s) => s.selectedWordIndices);
const words = useEditorStore((s) => s.words);
@@ -148,7 +147,7 @@ export function useKeyboardShortcuts() {
window.addEventListener('keydown', handler);
return () => window.removeEventListener('keydown', handler);
-}, [deleteSelectedWords, selectedWordIndices]);
+}, [addCutRange, selectedWordIndices, words]);
}

async function saveProject() {
@@ -190,24 +189,19 @@ async function saveProject() {
}
}

-let cheatsheetVisible = false;
-
function toggleCheatsheet() {
const existing = document.getElementById('keyboard-cheatsheet');
if (existing) {
existing.remove();
-cheatsheetVisible = false;
return;
}
-cheatsheetVisible = true;

const overlay = document.createElement('div');
overlay.id = 'keyboard-cheatsheet';
overlay.style.cssText =
'position:fixed;inset:0;z-index:9999;display:flex;align-items:center;justify-content:center;background:rgba(0,0,0,0.7);';
overlay.onclick = () => {
overlay.remove();
-cheatsheetVisible = false;
};

const shortcuts = [
View File
@@ -27,6 +27,7 @@ const EXPORT_FILTERS = [
window.electronAPI = {
openFile: async (_options?: Record<string, unknown>): Promise<string | null> => {
+void _options;
const result = await open({
multiple: false,
filters: VIDEO_FILTERS,
@@ -35,6 +36,7 @@
},
saveFile: async (_options?: Record<string, unknown>): Promise<string | null> => {
+void _options;
const result = await save({ filters: EXPORT_FILTERS });
return result ?? null;
},
View File
@@ -155,8 +155,12 @@ export const useEditorStore = create<EditorState & EditorActions>()(
const { videoPath, words, segments, deletedRanges, cutRanges, muteRanges, gainRanges, globalGainDb, silenceTrimGroups, transcriptionModel, language, exportedAudioPath } = get();
if (!videoPath) throw new Error('No video loaded');
const now = new Date().toISOString();
-// Strip globalStartIndex (runtime-only field) before persisting
-const persistSegments = segments.map(({ globalStartIndex: _drop, ...rest }) => rest);
+// Strip globalStartIndex (runtime-only field) before persisting.
+const persistSegments = segments.map((seg) => {
+const rest = { ...seg };
+delete (rest as Partial<Segment>).globalStartIndex;
+return rest;
+});
return {
version: 1,
videoPath,
View File
@@ -1 +1 @@
-{"root":["./src/App.tsx","./src/main.tsx","./src/vite-env.d.ts","./src/components/AIPanel.tsx","./src/components/DevPanel.tsx","./src/components/ExportDialog.tsx","./src/components/SettingsPanel.tsx","./src/components/SilenceTrimmerPanel.tsx","./src/components/TranscriptEditor.tsx","./src/components/VideoPlayer.tsx","./src/components/VolumePanel.tsx","./src/components/WaveformTimeline.tsx","./src/hooks/useKeyboardShortcuts.ts","./src/hooks/useVideoSync.ts","./src/lib/dev-logger.ts","./src/lib/tauri-bridge.ts","./src/store/aiStore.ts","./src/store/editorStore.ts","./src/types/project.ts"],"version":"5.9.3"}
+{"root":["./src/App.tsx","./src/main.tsx","./src/vite-env.d.ts","./src/components/AIPanel.tsx","./src/components/DevPanel.tsx","./src/components/ExportDialog.tsx","./src/components/SettingsPanel.tsx","./src/components/SilenceTrimmerPanel.tsx","./src/components/TranscriptEditor.tsx","./src/components/VideoPlayer.tsx","./src/components/VolumePanel.tsx","./src/components/WaveformTimeline.tsx","./src/hooks/useKeyboardShortcuts.ts","./src/hooks/useVideoSync.ts","./src/lib/dev-logger.ts","./src/lib/tauri-bridge.ts","./src/store/aiStore.ts","./src/store/editorStore.test.ts","./src/store/editorStore.ts","./src/types/project.ts"],"version":"5.9.3"}
View File
@@ -59,7 +59,7 @@ done
if [[ -n "$PY" ]]; then
capture_cmd "backend_python_version" "$PY" --version
-capture_cmd "backend_health_check" "$PY" -c "import importlib; importlib.import_module('backend.main'); print('backend import OK')"
+capture_cmd "backend_health_check" env PYTHONPATH="$ROOT_DIR/backend:$ROOT_DIR" "$PY" -c "import importlib; importlib.import_module('backend.main'); print('backend import OK')"
fi

capture_cmd "list_recent_files" find "$ROOT_DIR" -maxdepth 2 -type f | head -n 200
View File
@@ -19,12 +19,12 @@ run_if_present() {
log "root: $ROOT_DIR"
cd "$ROOT_DIR"

-log "Step 1/5: frontend dependency check"
+log "Step 1/7: frontend dependency check"
if [[ ! -d "frontend/node_modules" ]]; then
log "frontend/node_modules missing; install with: cd frontend && npm install"
fi

-log "Step 2/5: frontend lint"
+log "Step 2/7: frontend lint"
if [[ -f "frontend/package.json" ]]; then
(
cd frontend
@@ -39,7 +39,7 @@ else
log "frontend/package.json not found; skipping"
fi

-log "Step 3/5: frontend build"
+log "Step 3/7: frontend build"
if [[ -f "frontend/package.json" ]]; then
(
cd frontend
@@ -52,7 +52,20 @@ if [[ -f "frontend/package.json" ]]; then
)
fi

-log "Step 4/5: backend syntax check"
+log "Step 4/7: frontend tests"
+if [[ -f "frontend/package.json" ]]; then
+(
+cd frontend
+if npm run -s test; then
+log "frontend tests: OK"
+else
+log "frontend tests failed"
+exit 1
+fi
+)
+fi
+
+log "Step 5/7: backend syntax check"
PY=""
for p in \
"$ROOT_DIR/.venv312/bin/python3.12" \
@@ -67,6 +80,14 @@ for p in \
fi
done

+if [[ -z "$PY" ]]; then
+if command -v python3 >/dev/null 2>&1; then
+PY="$(command -v python3)"
+elif command -v python >/dev/null 2>&1; then
+PY="$(command -v python)"
+fi
+fi
+
if [[ -n "$PY" ]]; then
log "using python: $PY"
"$PY" -m py_compile "$ROOT_DIR/backend/main.py" "$ROOT_DIR/backend/routers/export.py"
@@ -75,9 +96,22 @@ else
log "no project python found (.venv312/.venv/venv); skipping backend syntax check"
fi

-log "Step 5/5: backend health import smoke"
+log "Step 6/7: backend unit tests"
if [[ -n "$PY" ]]; then
-"$PY" - <<'PYCODE'
+if find "$ROOT_DIR/backend/tests" -type f -name 'test_*.py' -print -quit 2>/dev/null | grep -q .; then
+PYTHONPATH="$ROOT_DIR/backend:$ROOT_DIR" "$PY" -m unittest discover -s "$ROOT_DIR/backend/tests" -p 'test_*.py' -v
+log "backend unit tests: OK"
+else
+log "backend unit tests: skipped (no tests found)"
+fi
+fi
+
+log "Step 7/7: backend health import smoke"
+if [[ -n "$PY" ]]; then
+if [[ "${SKIP_BACKEND_IMPORT_SMOKE:-0}" == "1" ]]; then
+log "backend import smoke: skipped (SKIP_BACKEND_IMPORT_SMOKE=1)"
+else
+PYTHONPATH="$ROOT_DIR/backend:$ROOT_DIR" "$PY" - <<'PYCODE'
import importlib
mods = [
"backend.main",
@@ -89,5 +123,6 @@ for m in mods:
print("backend import smoke: OK")
PYCODE
fi
+fi

log "Validation complete"