added tk gui

commit 43a34aaf00 (parent 550f61b1a8), 2025-12-23 18:12:01 -07:00
2 changed files with 54 additions and 557 deletions

README.md

@@ -1,30 +1,23 @@
-# Saw Mill Knot Detection (YOLOX/YOLO)
+# Saw Mill Knot Detection
-This repository contains a complete wood defect detection system using YOLOX/YOLO models, trained to detect 10 different types of wood surface defects. The system includes a web-based annotation GUI, automated training pipeline, and is optimized for deployment on OAK-D cameras.
+This repository contains a complete wood defect detection system with a web-based annotation GUI and separate training/deployment scripts. It supports multiple model frameworks (RF-DETR, RT-DETR, YOLOv6, YOLOX) and is optimized for deployment on OAK-D cameras.
 ## 🎯 Project Overview
-- **Model**: YOLOX-nano (Ultralytics YOLO framework)
-- **Dataset**: 20,276 wood surface defect images with 10 defect categories
-- **Training**: 5 epochs, mAP50: 0.612, mAP50-95: 0.357
-- **Deployment Target**: OAK-D 4 Pro camera
-- **Framework**: Ultralytics 8.3.240
+- **Models**: RF-DETR, RT-DETR, YOLOv6, YOLOX (all MIT/Apache 2.0 licensed)
+- **Dataset**: 20,276 wood surface defect images
+- **Annotation GUI**: Gradio-based web interface for manual annotation
+- **Training Scripts**: Separate Python scripts for model training
+- **Deployment**: OAK-D camera optimization with OpenVINO conversion
+- **License**: All models free for commercial use
 ## 📊 Dataset Information
 **Source**: [Kaggle Wood Surface Defects Dataset](https://www.kaggle.com/datasets/kirs0816/wood-surface-defects)
 **Classes** (10 total):
-- Live knot
-- Dead knot
-- Knot with crack
-- Crack
-- Resin
-- Marrow
-- Quartzity
-- Knot missing
-- Blue stain
-- Overgrown
+- Live knot, Dead knot, Knot with crack, Crack, Resin
+- Marrow, Quartzity, Knot missing, Blue stain, Overgrown
 **Dataset Split**:
 - Train: 16,220 images
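When exporting to COCO or YOLO formats, the 10 classes above need a stable id→name mapping. A minimal sketch of how that could be encoded (the `DEFECT_CLASSES` name is hypothetical, not necessarily what this repository's `config.py` uses):

```python
# Hypothetical class mapping for the 10 defect categories.
# List order defines the category ids used in annotation files.
DEFECT_CLASSES = [
    "Live knot", "Dead knot", "Knot with crack", "Crack", "Resin",
    "Marrow", "Quartzity", "Knot missing", "Blue stain", "Overgrown",
]

# COCO-style category entries derived from the list
categories = [
    {"id": i, "name": name, "supercategory": "defect"}
    for i, name in enumerate(DEFECT_CLASSES)
]

print(len(categories))  # 10
```

Keeping a single list as the source of truth avoids id drift between annotation export and training configs.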
@@ -65,7 +58,8 @@ source .venv/bin/activate  # or conda activate your_env
 python annotation_gui.py
 ```
-# Install dependencies
+Install dependencies:
+```bash
 pip install -U pip
 pip install ultralytics gradio rfdetr
 ```
@@ -88,44 +82,45 @@ python setup_datasets.py  # Creates dataset_coco/ and updates configs
 python annotation_gui.py
 ```
+Tkinter version (new):
+```bash
+python tk_annotation_gui.py
+# or
+./run_tk_gui.sh
+```
 Open http://localhost:7860 in your browser to access the web-based annotation interface with:
 - Image navigation with index display
-- Auto-labeling with trained YOLOX model
-- Manual annotation tools
+- Auto-labeling with trained models
+- Manual annotation tools with delete buttons
 - Real-time result visualization
+- Export to COCO format
 ### 4. Train Models
-Choose from three different frameworks:
-#### RF-DETR (Highest accuracy, slower training)
+Use the dedicated training script for all frameworks:
 ```bash
-python train_rfdetr.py \
-    --dataset-dir dataset_coco \
-    --output-dir runs/rfdetr_medium \
-    --model medium \
-    --epochs 50 \
-    --batch-size 4 \
-    --grad-accum-steps 4 \
-    --lr 1e-4
+# Prepare dataset from annotations (optional)
+python train_model.py --prepare-dataset --images-dir IMAGE --annotations annotations.json --dataset dataset_prepared
+
+# Train models with different frameworks
+python train_model.py --framework rf-detr --dataset dataset_prepared --output runs/rfdetr_training --model-size medium --epochs 50
+python train_model.py --framework rtdetr --dataset dataset_prepared --output runs/rtdetr_training --model-size small --epochs 30
+python train_model.py --framework yolox --dataset dataset_prepared --output runs/yolox_training --model-size nano --epochs 50
+python train_model.py --framework yolov6 --dataset dataset_prepared --output runs/yolov6_training --model-size nano --epochs 50
+
+# See TRAINING_README.md for detailed training options
 ```
-#### YOLOX (Balanced performance/speed)
-```bash
-python train_yolox.py \
-    --dataset-dir dataset_yolo \
-    --model yolox-nano \
-    --epochs 50 \
-    --batch-size 8
-```
-#### YOLOv6 (Fastest, edge-optimized)
+### 5. Convert for OAK-D Deployment
 ```bash
-python train_yolov6.py \
-    --dataset-dir dataset_yolo \
-    --model yolov6n \
-    --epochs 50 \
-    --batch-size 8
+# Convert trained model for edge deployment
+python convert_for_deployment.py --model runs/training/weights/best.pt --output oak_d_deployment --img-size 640
+
+# See TRAINING_README.md for deployment instructions
 ```
 ## 📁 Project Structure
@@ -133,9 +128,9 @@ python train_yolov6.py \
 ```
 saw_mill_knot_detection/
 ├── annotation_gui.py          # Gradio web interface for annotation
-├── train_rfdetr.py            # RF-DETR training script
-├── train_yolox.py             # YOLOX training script
-├── train_yolov6.py            # YOLOv6 training script
+├── train_model.py             # Unified training script for all frameworks
+├── convert_for_deployment.py  # Model conversion for OAK-D deployment
+├── TRAINING_README.md         # Detailed training and deployment guide
 ├── setup_datasets.py          # Multi-format dataset setup script
 ├── split_coco_dataset.py      # Dataset splitting utility
 ├── config.py                  # Configuration settings
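The README hunks above add `tk_annotation_gui.py`, but the file itself is not shown in this commit view. As a rough, hypothetical illustration of what a minimal Tkinter drag-to-draw annotation window involves (none of these names are from the actual implementation), a sketch:

```python
def clamp_box(x1, y1, x2, y2, width, height):
    """Sort a dragged rectangle's corners and clip them to the image bounds."""
    xs = sorted(min(max(v, 0), width) for v in (x1, x2))
    ys = sorted(min(max(v, 0), height) for v in (y1, y2))
    return xs[0], ys[0], xs[1], ys[1]


def build_gui(width=640, height=480):
    """Open a bare Tk window with a canvas that records drag-to-draw boxes."""
    import tkinter as tk  # deferred so the helper above stays importable headless

    root = tk.Tk()
    root.title("Knot Annotation (Tk sketch)")
    canvas = tk.Canvas(root, width=width, height=height, bg="grey")
    canvas.pack()
    boxes = []   # committed boxes as (x1, y1, x2, y2)
    drag = {}    # in-progress drag state

    def on_press(event):
        drag["start"] = (event.x, event.y)
        drag["rect"] = canvas.create_rectangle(
            event.x, event.y, event.x, event.y, outline="red", width=2)

    def on_drag(event):
        x0, y0 = drag["start"]
        canvas.coords(drag["rect"], x0, y0, event.x, event.y)

    def on_release(event):
        x0, y0 = drag["start"]
        boxes.append(clamp_box(x0, y0, event.x, event.y, width, height))

    canvas.bind("<ButtonPress-1>", on_press)
    canvas.bind("<B1-Motion>", on_drag)
    canvas.bind("<ButtonRelease-1>", on_release)
    return root, boxes
```

To launch interactively: `root, boxes = build_gui(); root.mainloop()` — after closing the window, `boxes` holds the drawn rectangles in pixel coordinates.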

annotation_gui.py

@@ -296,9 +296,6 @@ class AnnotationApp:
         self.current_idx = 0
         self.annotations = {}  # image_name -> list of boxes
         self.model = None
-        self.training_process = None
-        self.training_thread = None
-        self.training_status = "Not training"

         # Load images if directory provided
         if images_dir and images_dir.exists():
@@ -958,7 +955,13 @@ class AnnotationApp:
             # Add annotations
             boxes = self.annotations.get(filename, [])
             for box in boxes:
-                x1, y1, x2, y2 = box["bbox"]
+                if isinstance(box, dict) and "bbox" in box:
+                    x1, y1, x2, y2 = box["bbox"]
+                    score = box.get("confidence", 1.0)
+                else:
+                    # Backward/experimental compatibility: [x1, y1, x2, y2]
+                    x1, y1, x2, y2 = box
+                    score = 1.0
                 w = x2 - x1
                 h = y2 - y1
@@ -969,7 +972,7 @@ class AnnotationApp:
                     "bbox": [x1, y1, w, h],
                     "area": w * h,
                     "iscrowd": 0,
-                    "score": box.get("confidence", 1.0)
+                    "score": score
                 })
                 ann_id += 1
@@ -978,224 +981,6 @@ class AnnotationApp:
         return f"✓ Exported {len(coco_data['annotations'])} annotations to {output_path}"
-    def prepare_training_dataset(self, output_dir: Path, train_split: float = 0.8, valid_split: float = 0.1):
-        """Prepare dataset in RF-DETR format (train/valid/test splits)."""
-        output_dir.mkdir(parents=True, exist_ok=True)
-
-        # Create splits
-        import random
-        annotated_images = [img for img in self.image_paths if img.name in self.annotations and self.annotations[img.name]]
-
-        if len(annotated_images) < 10:
-            return f"⚠️ Need at least 10 annotated images, have {len(annotated_images)}"
-
-        random.shuffle(annotated_images)
-        n = len(annotated_images)
-        train_n = int(n * train_split)
-        valid_n = int(n * valid_split)
-
-        splits = {
-            "train": annotated_images[:train_n],
-            "valid": annotated_images[train_n:train_n + valid_n],
-            "test": annotated_images[train_n + valid_n:]
-        }
-
-        # Create directories and copy images
-        import shutil
-        for split_name, split_images in splits.items():
-            split_dir = output_dir / split_name
-            split_dir.mkdir(exist_ok=True)
-
-            # Prepare COCO JSON for this split
-            coco_data = {
-                "images": [],
-                "annotations": [],
-                "categories": [{"id": 0, "name": "knot", "supercategory": "defect"}]
-            }
-
-            ann_id = 0
-            for img_id, img_path in enumerate(split_images):
-                # Copy image
-                dest = split_dir / img_path.name
-                shutil.copy2(img_path, dest)
-
-                # Add to COCO
-                img = Image.open(img_path)
-                width, height = img.size
-                coco_data["images"].append({
-                    "id": img_id,
-                    "file_name": img_path.name,
-                    "width": width,
-                    "height": height
-                })
-
-                # Add annotations
-                boxes = self.annotations.get(img_path.name, [])
-                for box in boxes:
-                    x1, y1, x2, y2 = box["bbox"]
-                    w = x2 - x1
-                    h = y2 - y1
-                    coco_data["annotations"].append({
-                        "id": ann_id,
-                        "image_id": img_id,
-                        "category_id": 0,
-                        "bbox": [x1, y1, w, h],
-                        "area": w * h,
-                        "iscrowd": 0
-                    })
-                    ann_id += 1
-
-            # Save COCO JSON
-            with (split_dir / "_annotations.coco.json").open("w") as f:
-                json.dump(coco_data, f, indent=2)
-
-        return f"✓ Dataset prepared: {len(splits['train'])} train, {len(splits['valid'])} valid, {len(splits['test'])} test"
-
-    def start_training(self, framework: str, dataset_dir: str, output_dir: str, model_size: str,
-                       epochs: int, batch_size: int, lr: float, progress=gr.Progress()):
-        """Start training in background."""
-        dataset_path = Path(dataset_dir)
-        output_path = Path(output_dir)
-
-        if not dataset_path.exists():
-            return "❌ Dataset directory not found"
-
-        if self.training_process and self.training_process.poll() is None:
-            return "⚠️ Training already in progress"
-
-        output_path.mkdir(parents=True, exist_ok=True)
-
-        # Build training command based on framework
-        venv_python = Path(__file__).parent / ".venv/bin/python"
-
-        if framework == "RF-DETR":
-            train_script = Path(__file__).parent / "train_rfdetr.py"
-            # Map sizes: nano->nano, small->small, medium->medium, base->base
-            size_map = {"nano": "nano", "small": "small", "medium": "medium", "base": "base"}
-            model_arg = size_map.get(model_size, "medium")
-            cmd = [
-                str(venv_python),
-                str(train_script),
-                "--dataset-dir", str(dataset_path),
-                "--output-dir", str(output_path),
-                "--model", model_arg,
-                "--epochs", str(epochs),
-                "--batch-size", str(batch_size),
-                "--grad-accum-steps", "2",  # Default grad accum
-                "--lr", str(lr)
-            ]
-        elif framework == "RT-DETR":
-            train_script = Path(__file__).parent / "train_rtdetr.py"
-            # Map sizes: nano->r18, small->r34, medium->r50, base->l
-            size_map = {"nano": "rtdetr-r18", "small": "rtdetr-r34", "medium": "rtdetr-r50", "base": "rtdetr-l"}
-            model_arg = size_map.get(model_size, "rtdetr-r18")
-            cmd = [
-                str(venv_python),
-                str(train_script),
-                "--dataset-dir", str(dataset_path),
-                "--output-dir", str(output_path),
-                "--model", model_arg,
-                "--epochs", str(epochs),
-                "--batch-size", str(batch_size),
-                "--lr", str(lr)
-            ]
-        elif framework == "YOLOv6":
-            train_script = Path(__file__).parent / "train_yolov6.py"
-            # Map sizes: nano->n, small->s, medium->m, base->l
-            size_map = {"nano": "yolov6n", "small": "yolov6s", "medium": "yolov6m", "base": "yolov6l"}
-            model_arg = size_map.get(model_size, "yolov6n")
-            cmd = [
-                str(venv_python),
-                str(train_script),
-                "--dataset-dir", str(dataset_path),
-                "--output-dir", str(output_path),
-                "--model", model_arg,
-                "--epochs", str(epochs),
-                "--batch-size", str(batch_size),
-                "--lr", str(lr)
-            ]
-        elif framework == "YOLOX":
-            train_script = Path(__file__).parent / "train_yolox.py"
-            # Map sizes: nano->nano, small->s, medium->m, base->l
-            size_map = {"nano": "yolox-nano", "small": "yolox-s", "medium": "yolox-m", "base": "yolox-l"}
-            model_arg = size_map.get(model_size, "yolox-nano")
-            cmd = [
-                str(venv_python),
-                str(train_script),
-                "--dataset-dir", str(dataset_path),
-                "--output-dir", str(output_path),
-                "--model", model_arg,
-                "--epochs", str(epochs),
-                "--batch-size", str(batch_size),
-                "--lr", str(lr)
-            ]
-        else:
-            return f"❌ Unknown framework: {framework}"
-
-        # Start training process
-        log_file = output_path / "training.log"
-        self.training_status = f"🚀 Starting {framework} training..."
-
-        def run_training():
-            try:
-                with log_file.open("w") as f:
-                    self.training_process = subprocess.Popen(
-                        cmd,
-                        stdout=f,
-                        stderr=subprocess.STDOUT,
-                        text=True
-                    )
-                self.training_status = f"⏳ Training in progress (PID: {self.training_process.pid})"
-                self.training_process.wait()
-
-                if self.training_process.returncode == 0:
-                    self.training_status = "[OK] Training completed successfully!"
-                    # Reload model with new weights
-                    if framework == "RF-DETR":
-                        # RF-DETR uses checkpoint_best_total.pth
-                        best_weights = output_path / "checkpoint_best_total.pth"
-                        model_type = "rf-detr"
-                    elif framework == "RT-DETR":
-                        # RT-DETR uses best.pt in weights/ subdirectory (Ultralytics)
-                        best_weights = output_path / "weights" / "best.pt"
-                        model_type = "rt-detr"
-                    elif framework == "YOLOv6":
-                        best_weights = output_path / "weights" / "best.pt"
-                        model_type = "yolov6"
-                    elif framework == "YOLOX":
-                        best_weights = output_path / "weights" / "best.pt"
-                        model_type = "yolox"
-                    if best_weights.exists():
-                        self._load_model(best_weights, model_type)
-                else:
-                    self.training_status = f"❌ Training failed (exit code {self.training_process.returncode})"
-            except Exception as e:
-                self.training_status = f"❌ Error: {e}"
-
-        self.training_thread = threading.Thread(target=run_training, daemon=True)
-        self.training_thread.start()
-
-        return f"✓ Training started! Check {log_file} for progress"
-
-    def get_training_status(self):
-        """Get current training status."""
-        return self.training_status
-
-    def stop_training(self):
-        """Stop the training process."""
-        if self.training_process and self.training_process.poll() is None:
-            self.training_process.terminate()
-            self.training_status = "⏹️ Training stopped by user"
-            return "✓ Training process terminated"
-        return "⚠️ No training in progress"
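The split arithmetic in the removed `prepare_training_dataset` above (presumably reproduced by `train_model.py --prepare-dataset`) is easy to sanity-check in isolation: with `train_split=0.8` and `valid_split=0.1`, the test split is simply the remainder. A standalone sketch:

```python
def split_counts(n, train_split=0.8, valid_split=0.1):
    """Mirror the slicing in prepare_training_dataset:
    train and valid are truncated fractions, test is the remainder."""
    train_n = int(n * train_split)
    valid_n = int(n * valid_split)
    return train_n, valid_n, n - train_n - valid_n

print(split_counts(100))    # (80, 10, 10)
print(split_counts(20276))  # (16220, 2027, 2029)
```

Note that `int()` truncation pushes rounding leftovers into the test split, which is why the test split can end up slightly larger than the valid split.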
     def get_model_path_from_display(self, model_display: str) -> Path | None:
         """Get the actual model path from a display name."""
         if not hasattr(self, 'available_models') or not self.available_models:
@@ -1206,104 +991,6 @@ class AnnotationApp:
                 return model['path']
         return None
-    def export_for_oak_d(self, model_display: str, output_dir: str = "oak_d_export", img_size: int = 640):
-        """Export trained model for OAK-D camera deployment."""
-        try:
-            # Convert display name to actual path
-            weights_path = self.get_model_path_from_display(model_display)
-            if not weights_path:
-                return f"❌ Model '{model_display}' not found. Try clicking '🔍 Scan for Models' first."
-
-            output_path = Path(output_dir)
-            if not weights_path.exists():
-                return f"❌ Model weights not found at: {weights_path}"
-
-            output_path.mkdir(parents=True, exist_ok=True)
-
-            # Determine model type
-            model_type = self._guess_model_type_from_path(weights_path)
-            print(f"Exporting {model_type} model for OAK-D...")
-
-            if model_type == "rf-detr":
-                # RF-DETR export - use existing export_onnx.py logic
-                from rfdetr import RFDETRBase
-                model = RFDETRBase(pretrain_weights=str(weights_path))
-                model.export()  # Creates output/model.onnx
-
-                # Move to output directory
-                onnx_source = Path("output/model.onnx")
-                if onnx_source.exists():
-                    onnx_dest = output_path / "rf_detr_model.onnx"
-                    onnx_source.rename(onnx_dest)
-                    return f"✓ RF-DETR exported for OAK-D!\n📁 Output: {output_path}\n🔗 Next: Convert ONNX to blob using blobconverter.luxonis.com"
-                else:
-                    return "❌ ONNX export failed"
-            else:
-                # Ultralytics models (RT-DETR, YOLOv6, YOLOX)
-                if model_type == "rt-detr":
-                    from ultralytics import RTDETR
-                    model = RTDETR(str(weights_path))
-                else:
-                    from ultralytics import YOLO
-                    model = YOLO(str(weights_path))
-
-                # Export to ONNX
-                onnx_path = model.export(
-                    format="onnx",
-                    imgsz=img_size,
-                    simplify=True,
-                    opset=11,  # OAK-compatible opset
-                )
-
-                # Move ONNX to output directory
-                if Path(onnx_path).exists():
-                    final_onnx = output_path / f"{model_type}_model.onnx"
-                    Path(onnx_path).rename(final_onnx)
-                    onnx_path = final_onnx
-
-                # Try to export to OpenVINO if available
-                try:
-                    openvino_path = model.export(
-                        format="openvino",
-                        imgsz=img_size,
-                        half=False,  # Use FP32 for better compatibility
-                    )
-                    # Move OpenVINO files to output directory
-                    if Path(openvino_path).exists():
-                        import shutil
-                        openvino_dir = Path(openvino_path)
-                        for file in openvino_dir.glob("*"):
-                            if file.is_file():
-                                shutil.move(str(file), str(output_path / file.name))
-                        openvino_dir.rmdir()  # Remove empty dir
-                    return f"{model_type.upper()} exported for OAK-D!\n📁 Output: {output_path}\n🔗 Next: Convert .xml/.bin to blob using blobconverter.luxonis.com"
-                except Exception as e:
-                    # OpenVINO not available, just return ONNX
-                    import shutil
-                    docker_hint = ""
-                    if shutil.which("docker") is None:
-                        docker_hint = "\n⚠️ Docker not found (needed for offline conversion via ModelConverter)."
-                    return (
-                        f"{model_type.upper()} exported to ONNX!\n"
-                        f"📁 Output: {output_path}\n"
-                        f"Next: Convert ONNX -> RVC using HubAI (online) or ModelConverter (offline).\n"
-                        f"Docs: https://docs.luxonis.com/software-v3/ai-inference/conversion/\n"
-                        f"💡 Offline conversion: Use Luxonis ModelConverter with Docker\n"
-                        f"⚠️ OpenVINO export not available: {str(e)}"
-                        f"{docker_hint}"
-                    )
-        except Exception as e:
-            return f"❌ Export failed: {str(e)}"
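The dict-or-list compatibility branch this commit adds to the COCO export (see the `@@ -958,7 +955,13 @@` hunk above) can be factored into a single helper that always yields a COCO `[x, y, w, h]` bbox plus a score. A sketch with a hypothetical helper name:

```python
def to_coco_entry(box):
    """Accept either {"bbox": [x1, y1, x2, y2], "confidence": ...} or a bare
    [x1, y1, x2, y2] list; return (coco_xywh_bbox, score)."""
    if isinstance(box, dict) and "bbox" in box:
        x1, y1, x2, y2 = box["bbox"]
        score = box.get("confidence", 1.0)
    else:
        # Backward/experimental compatibility: bare corner list, full confidence
        x1, y1, x2, y2 = box
        score = 1.0
    return [x1, y1, x2 - x1, y2 - y1], score

print(to_coco_entry({"bbox": [10, 20, 50, 80], "confidence": 0.9}))  # ([10, 20, 40, 60], 0.9)
print(to_coco_entry([0, 0, 30, 30]))  # ([0, 0, 30, 30], 1.0)
```

Centralizing the conversion keeps the corner-to-width/height arithmetic in one place instead of duplicating it per export path.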
 def create_ui(app: AnnotationApp) -> gr.Blocks:
@@ -1312,13 +999,11 @@ def create_ui(app: AnnotationApp) -> gr.Blocks:
     with gr.Blocks(title="Knot Annotation Tool") as demo:
         gr.Markdown("""
         # Wood Knot Annotation Tool
-        **Label -> Train -> Auto-Label -> Repeat**
+        **Label -> Auto-Label -> Export**
         - Manually annotate images or use **Auto-Label** with your trained model
-        - Export and prepare dataset for training
-        - Train **RF-DETR, RT-DETR, YOLOv6, or YOLOX** (all free for commercial use!)
-        - Optimized for OAK-D camera deployment
-        - Use trained model to auto-label more images
+        - Export annotations to COCO format for training
+        - Use separate training and deployment scripts for model development
         """)
         # Settings section at the top
@@ -1398,7 +1083,7 @@ def create_ui(app: AnnotationApp) -> gr.Blocks:
                     delete_btn = gr.Button("🗑️ Delete Last")
                     clear_btn = gr.Button("❌ Clear All")

-                gr.Markdown("### Export & Training")
+                gr.Markdown("### Export Annotations")
                 export_path = gr.Textbox(
                     label="Export Path",
                     value="annotations_coco.json"
@@ -1406,151 +1091,6 @@ def create_ui(app: AnnotationApp) -> gr.Blocks:
                 export_btn = gr.Button("Export COCO")
                 export_result = gr.Textbox(label="Export Result", lines=1)
-        # Training tab
-        with gr.Tab("Training"):
-            gr.Markdown("""
-            ### Train Object Detection Model
-
-            **Choose your framework:**
-            - **RF-DETR** (MIT): Custom transformer, high accuracy
-            - **RT-DETR** (Apache 2.0): Ultralytics transformer, great accuracy
-            - **YOLOv6** (MIT): Fast, proven on OAK cameras
-            - **YOLOX** (MIT): Similar to YOLOv6, slight differences
-
-            **All MIT/Apache 2.0 licensed - free for commercial use!**
-
-            **Steps:**
-            1. Annotate at least 50-100 images in the Annotation tab
-            2. Click "Prepare Dataset" to create train/valid/test splits
-            3. Select your framework and configure training parameters
-            4. Click "Start Training" (runs in background)
-            5. After training, export for OAK-D deployment
-            """)
-
-            with gr.Row():
-                with gr.Column():
-                    dataset_prep_dir = gr.Textbox(
-                        label="Dataset Output Directory",
-                        value="dataset_prepared"
-                    )
-                    train_split = gr.Slider(0.5, 0.9, 0.8, label="Train Split Ratio")
-                    valid_split = gr.Slider(0.05, 0.3, 0.1, label="Valid Split Ratio")
-                    prep_btn = gr.Button("📦 Prepare Dataset", variant="secondary")
-                    prep_result = gr.Textbox(label="Preparation Result", lines=2)
-
-                with gr.Column():
-                    gr.Markdown("### Training Configuration")
-                    model_framework = gr.Dropdown(
-                        choices=["RF-DETR", "RT-DETR", "YOLOv6", "YOLOX"],
-                        value="RT-DETR",
-                        label="Model Framework",
-                        info="All MIT/Apache 2.0 licensed - free for commercial use. Optimized for OAK cameras."
-                    )
-                    train_dataset_dir = gr.Textbox(
-                        label="Dataset Directory",
-                        value="dataset_prepared"
-                    )
-                    train_output_dir = gr.Textbox(
-                        label="Output Directory",
-                        value="runs/gui_training"
-                    )
-                    model_size = gr.Dropdown(
-                        choices=["nano", "small", "medium", "base"],
-                        value=DEFAULT_MODEL_SIZE,
-                        label="Model Size"
-                    )
-                    epochs = gr.Slider(5, 100, DEFAULT_TRAIN_EPOCHS, step=5, label="Epochs")
-                    batch_size = gr.Slider(1, 16, DEFAULT_BATCH_SIZE, step=1, label="Batch Size")
-                    learning_rate = gr.Number(value=DEFAULT_LEARNING_RATE, label="Learning Rate")
-
-            with gr.Row():
-                start_train_btn = gr.Button("🚀 Start Training", variant="primary")
-                stop_train_btn = gr.Button("⏹️ Stop Training", variant="stop")
-                refresh_status_btn = gr.Button("🔄 Refresh Status")
-
-            training_status = gr.Textbox(
-                label="Training Status",
-                value="Not training",
-                lines=3
-            )
-
-            gr.Markdown("""
-            **Note**: Training runs in the background. You can continue annotating while training.
-            Check the training log file for detailed progress.
-            """)
-
-        # OAK-D Deployment tab
-        with gr.Tab("🚀 OAK-D Deployment"):
-            gr.Markdown("""
-            ### Deploy Trained Model to OAK-D Camera
-
-            Convert your trained model to work with the **OAK-D 4 Pro** camera for real-time edge inference.
-
-            **Supported Models**: RF-DETR, RT-DETR, YOLOv6, YOLOX
-
-            **Process**:
-            1. Select a trained model from your runs/ directory
-            2. Export to ONNX and OpenVINO formats
-            3. Convert OpenVINO model to blob for OAK-D
-            4. Deploy blob to your OAK-D camera
-            """)
-
-            with gr.Row():
-                with gr.Column():
-                    oak_model_selector = gr.Dropdown(
-                        choices=app.get_available_models_list(),
-                        value=None,
-                        label="Select Trained Model",
-                        info="Choose a model from your training runs",
-                        allow_custom_value=True
-                    )
-                    oak_output_dir = gr.Textbox(
-                        label="Output Directory",
-                        value="oak_d_deployment",
-                        placeholder="oak_d_deployment"
-                    )
-                    oak_img_size = gr.Dropdown(
-                        choices=[320, 416, 512, 640, 800, 1024],
-                        value=640,
-                        label="Image Size",
-                        info="Input size for the model (should match training)"
-                    )
-                    with gr.Row():
-                        oak_scan_btn = gr.Button("🔍 Scan for Models")
-                        oak_export_btn = gr.Button("🚀 Export for OAK-D", variant="primary")
-                    oak_status = gr.Textbox(
-                        label="Export Status",
-                        value="Ready to export",
-                        lines=4
-                    )
-
-                with gr.Column():
-                    gr.Markdown("""
-                    ### 📋 Deployment Instructions
-
-                    **After Export:**
-                    1. **Test OpenVINO Model** (optional):
-                       ```bash
-                       python -c "from openvino.runtime import Core; core = Core(); model = core.read_model('model.xml'); print('✓ Model loaded')"
-                       ```
-                    2. **Convert to RVC compiled format** (recommended by Luxonis):
-                       - Online: HubAI conversion (fastest setup)
-                       - Offline: ModelConverter (requires Docker)
-                       - Docs: https://docs.luxonis.com/software-v3/ai-inference/conversion/
-                    3. **Deploy to OAK-D**:
-                       - Use DepthAI Python API
-                       - Or use OAK-D examples with your blob
-
-                    ### 💡 Tips
-                    - **Nano models** work best on edge devices
-                    - If you quantize, use real calibration images for best accuracy
-                    - Test inference speed vs accuracy trade-off
-                    """)
         # Event handlers
         def on_load():
             return app.load_image("current")
@@ -1643,44 +1183,6 @@ def create_ui(app: AnnotationApp) -> gr.Blocks:
             outputs=[export_result]
         )
-        # Training handlers
-        prep_btn.click(
-            lambda out, train, valid: app.prepare_training_dataset(Path(out), train, valid),
-            inputs=[dataset_prep_dir, train_split, valid_split],
-            outputs=[prep_result]
-        )
-
-        start_train_btn.click(
-            app.start_training,
-            inputs=[model_framework, train_dataset_dir, train_output_dir, model_size, epochs, batch_size, learning_rate],
-            outputs=[training_status]
-        )
-
-        stop_train_btn.click(
-            app.stop_training,
-            outputs=[training_status]
-        )
-
-        refresh_status_btn.click(
-            app.get_training_status,
-            outputs=[training_status]
-        )
-
-        # OAK-D Deployment handlers
-        oak_scan_btn.click(
-            app.scan_for_models,
-            outputs=[oak_status]
-        ).then(
-            app.get_available_models_list,
-            outputs=[oak_model_selector]
-        )
-
-        oak_export_btn.click(
-            app.export_for_oak_d,
-            inputs=[oak_model_selector, oak_output_dir, oak_img_size],
-            outputs=[oak_status]
-        )
         # Load first image on start
         demo.load(on_load, outputs=[image_display, boxes_html, image_index_text, info_text])
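The `Popen`-plus-daemon-thread pattern that the removed `start_training` used (now presumably living in `train_model.py`, or dropped entirely) is worth seeing in isolation. A self-contained sketch; the command and callback here are illustrative stand-ins, not the project's actual training invocation:

```python
import os
import subprocess
import sys
import tempfile
import threading

def run_in_background(cmd, log_path, on_done):
    """Launch cmd, send stdout+stderr to log_path, call on_done(returncode) when it exits."""
    def worker():
        with open(log_path, "w") as log:
            proc = subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT, text=True)
            proc.wait()
        on_done(proc.returncode)

    # Daemon thread: the GUI stays responsive and exits cleanly even mid-training
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread

# Trivial stand-in for a long-running training script
log_path = os.path.join(tempfile.gettempdir(), "training_demo.log")
results = []
t = run_in_background([sys.executable, "-c", "print('epoch done')"],
                      log_path, results.append)
t.join()
print(results)  # [0]
```

Returning the thread lets callers `join()` when they need a synchronous result, while the GUI path simply polls a status field instead.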