# Training and Deployment Scripts
This directory contains separate scripts for training models and converting them for deployment, extracted from the annotation GUI.
## Training Script

### `train_model.py`
Train object detection models for wood knot detection.
Supported frameworks:
- RF-DETR (MIT license)
- RT-DETR (Apache 2.0 license)
- YOLOv6 (MIT license)
- YOLOX (MIT license)
Usage:

```bash
# Basic usage
python train_model.py --framework rtdetr --dataset dataset_prepared --output runs/training

# Full options
python train_model.py \
    --framework rtdetr \
    --dataset dataset_prepared \
    --output runs/training \
    --model-size small \
    --epochs 20 \
    --batch-size 4 \
    --lr 0.001 \
    --prepare-dataset \
    --images-dir IMAGE \
    --annotations annotations.json
```
Options:

- `--framework`: Model framework (`rf-detr`, `rt-detr`, `yolov6`, `yolox`)
- `--dataset`: Path to prepared dataset directory
- `--output`: Output directory for the trained model
- `--model-size`: Model size/variant (`nano`, `small`, `medium`, `base`)
- `--epochs`: Number of training epochs
- `--batch-size`: Batch size for training
- `--lr`: Learning rate
- `--prepare-dataset`: Prepare the dataset from annotations first
- `--images-dir`: Images directory (for `--prepare-dataset`)
- `--annotations`: Annotations file (for `--prepare-dataset`)
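The internals of `train_model.py` are not shown here, but its command-line interface can be sketched with `argparse` roughly as follows. This is an assumption-based sketch, not the script's actual code: the flag names come from the options above, while the defaults and accepted spellings (e.g. both `rt-detr` and `rtdetr`, as the examples use the latter) are guesses.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of train_model.py's CLI; defaults here are assumptions."""
    p = argparse.ArgumentParser(description="Train a wood-knot detection model")
    p.add_argument("--framework", required=True,
                   choices=["rf-detr", "rfdetr", "rt-detr", "rtdetr",
                            "yolov6", "yolox"],
                   help="Model framework")
    p.add_argument("--dataset", default="dataset_prepared",
                   help="Path to prepared dataset directory")
    p.add_argument("--output", default="runs/training",
                   help="Output directory for the trained model")
    p.add_argument("--model-size", default="small",
                   choices=["nano", "small", "medium", "base"])
    p.add_argument("--epochs", type=int, default=20)
    p.add_argument("--batch-size", type=int, default=4)
    p.add_argument("--lr", type=float, default=0.001)
    p.add_argument("--prepare-dataset", action="store_true",
                   help="Prepare the dataset from annotations first")
    p.add_argument("--images-dir", help="Images directory (for --prepare-dataset)")
    p.add_argument("--annotations", help="Annotations file (for --prepare-dataset)")
    return p

if __name__ == "__main__":
    # Parse the "basic usage" example from above instead of sys.argv
    args = build_parser().parse_args(
        ["--framework", "rtdetr", "--dataset", "dataset_prepared",
         "--output", "runs/training"]
    )
    print(args.framework, args.epochs, args.lr)
```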
## Deployment Conversion Script

### `convert_for_deployment.py`
Convert trained models for OAK-D deployment.
Supported conversions:
- ONNX export
- OpenVINO IR export
- Model optimization for edge devices
Usage:

```bash
# Basic usage
python convert_for_deployment.py --model runs/training/weights/best.pt --output oak_d_deployment

# Full options
python convert_for_deployment.py \
    --model runs/training/weights/best.pt \
    --output oak_d_deployment \
    --img-size 640 \
    --framework auto
```
Options:

- `--model`: Path to trained model weights (`.pt` file)
- `--output`: Output directory for converted models
- `--img-size`: Input image size for the model (320, 416, 512, 640, 800, 1024)
- `--framework`: Model framework (auto-detected if not specified)
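How `--framework auto` detects the framework is not documented here. One plausible approach — a sketch under stated assumptions, not the script's actual logic, with hypothetical helper names — is to validate `--img-size` against the allowed values and guess the framework from the checkpoint path:

```python
VALID_IMG_SIZES = (320, 416, 512, 640, 800, 1024)

def check_img_size(size: int) -> int:
    """Reject input sizes the conversion script does not list as supported."""
    if size not in VALID_IMG_SIZES:
        raise ValueError(f"--img-size must be one of {VALID_IMG_SIZES}, got {size}")
    return size

def guess_framework(model_path: str) -> str:
    """Hypothetical auto-detection: infer the framework from the filename."""
    name = model_path.lower()
    for key, framework in (("rf-detr", "rf-detr"), ("rfdetr", "rf-detr"),
                           ("rt-detr", "rt-detr"), ("rtdetr", "rt-detr"),
                           ("yolov6", "yolov6"), ("yolox", "yolox")):
        if key in name:
            return framework
    return "unknown"

print(check_img_size(640))                            # 640
print(guess_framework("runs/rtdetr_small/best.pt"))   # rt-detr
```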
## Workflow
1. Annotate images using the Tkinter GUI (`./run_tk_gui.sh` or `python tk_annotation_gui.py`)
2. Export annotations to COCO format from the GUI
3. Prepare the dataset (optional; the training script can also do this):
   `python train_model.py --prepare-dataset --images-dir IMAGE --annotations annotations.json --dataset dataset_prepared`
4. Train the model:
   `python train_model.py --framework rtdetr --dataset dataset_prepared --output runs/training`
5. Convert for deployment:
   `python convert_for_deployment.py --model runs/training/weights/best.pt --output oak_d_deployment`
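The workflow steps above can be chained from a single driver script. This sketch only builds the command lines exactly as shown in the examples and runs them with `subprocess`; it assumes the two scripts live in the working directory, and by default it just prints the commands instead of executing them:

```python
import subprocess
import sys

def run_pipeline(dry_run: bool = True) -> list:
    """Build (and optionally run) the prepare -> train -> convert commands."""
    commands = [
        [sys.executable, "train_model.py",
         "--prepare-dataset", "--images-dir", "IMAGE",
         "--annotations", "annotations.json", "--dataset", "dataset_prepared"],
        [sys.executable, "train_model.py",
         "--framework", "rtdetr", "--dataset", "dataset_prepared",
         "--output", "runs/training"],
        [sys.executable, "convert_for_deployment.py",
         "--model", "runs/training/weights/best.pt",
         "--output", "oak_d_deployment"],
    ]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # stop on the first failing step
    return commands

if __name__ == "__main__":
    run_pipeline(dry_run=True)
```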
## Next Steps After Conversion

After running `convert_for_deployment.py`:
1. Test the OpenVINO model (optional):
   `python -c "from openvino.runtime import Core; core = Core(); model = core.read_model('model.xml'); print('✓ Model loaded')"`
2. Convert to RVC compiled format (recommended by Luxonis):
   - Online: HubAI conversion (fastest setup)
   - Offline: ModelConverter (requires Docker)
   - Docs: https://docs.luxonis.com/software-v3/ai-inference/conversion/
3. Deploy to OAK-D:
   - Use the DepthAI Python API
   - Or use the OAK-D examples with your blob
## Tips
- Nano models work best on edge devices
- If you quantize, use real calibration images for best accuracy
- Test inference speed vs accuracy trade-off
- All models are MIT/Apache 2.0 licensed - free for commercial use!