2025-12-22 14:20:32 -07:00
parent 661d63699d
commit 8590f1495d

# Saw Mill Knot Detection (YOLOX/YOLO)
This repository contains a complete wood defect detection system built on YOLOX/YOLO models, trained to detect 10 types of wood surface defects. It includes a web-based annotation GUI and an automated training pipeline, and is optimized for deployment on OAK-D cameras.
**Dataset Source**: The wood defect images and annotations used in this project come from [Kaggle Wood Surface Defects Dataset](https://www.kaggle.com/datasets/kirs0816/wood-surface-defects?resource=download).
## 🎯 Project Overview
- **Model**: YOLOX-nano (Ultralytics YOLO framework)
- **Dataset**: 20,276 wood surface defect images with 10 defect categories
- **Training**: 5 epochs, mAP50: 0.612, mAP50-95: 0.357
- **Deployment Target**: OAK-D 4 Pro camera
- **Framework**: Ultralytics 8.3.240
## 📊 Dataset Information
**Source**: [Kaggle Wood Surface Defects Dataset](https://www.kaggle.com/datasets/kirs0816/wood-surface-defects)
**Classes** (10 total):
- Live knot
- Dead knot
- Knot with crack
- Crack
- Resin
- Marrow
- Quartzity
- Knot missing
- Blue stain
- Overgrown
**Dataset Split**:
- Train: 16,220 images
- Valid: 2,027 images
- Test: 2,029 images
**Format**: YOLO format (images/ and labels/ subdirectories with data.yaml configuration)
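In YOLO format, each image has a matching `.txt` file under `labels/` with one line per box: `class cx cy w h`, all coordinates normalized to the image size. A small sketch of the conversion back to pixel corners (the function name is mine, not part of this repo):

```python
def yolo_to_xyxy(line: str, img_w: int, img_h: int):
    """Convert one YOLO label line 'cls cx cy w h' (normalized) to (x1, y1, x2, y2) pixels."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A centered box covering half the image in each dimension:
cls_id, box = yolo_to_xyxy("0 0.5 0.5 0.5 0.5", 640, 640)
```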
## 🚀 Quick Start
### 1. Environment Setup
```bash
# Clone the repository
git clone git@143.244.157.110:dillon_stuff/saw_mill_knot_detection.git
cd saw_mill_knot_detection
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install -U pip
pip install ultralytics gradio
```
### 2. Download Dataset
The dataset is not included in the repository due to size. Download from Kaggle and organize as follows:
```bash
# Download from Kaggle (requires Kaggle API)
kaggle datasets download -d kirs0816/wood-surface-defects
unzip wood-surface-defects.zip
# Run the dataset preparation script
python split_coco_dataset.py
python reorganize_dataset.py
```
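After the preparation scripts finish, a quick sanity check that each split landed where training expects it helps catch path mistakes early. A stdlib sketch, assuming the `<root>/<split>/images/` layout this repo uses:

```python
from pathlib import Path

def count_images(root: str) -> dict:
    """Count image files per split, assuming a <root>/<split>/images/ layout."""
    counts = {}
    for split in ("train", "valid", "test"):
        img_dir = Path(root) / split / "images"
        counts[split] = sum(
            1 for p in img_dir.glob("*")
            if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
        ) if img_dir.exists() else 0
    return counts
```

With the full dataset in place, the counts should match the split sizes listed above (16,220 / 2,027 / 2,029).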
### 3. Launch Annotation GUI
```bash
python annotation_gui.py
```
Open http://localhost:7860 in your browser to access the web-based annotation interface with:
- Image navigation with index display
- Auto-labeling with trained YOLOX model
- Manual annotation tools
- Real-time result visualization
### 4. Train Model
```bash
python train_yolox.py --dataset-dir dataset_split --model yolox-nano --epochs 5 --batch-size 4
```
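For planning run times, the number of optimizer steps per epoch follows directly from the split sizes above (a back-of-envelope helper, not part of the repo):

```python
import math

def steps_per_epoch(n_images: int, batch_size: int) -> int:
    """Number of optimizer steps per epoch at a given batch size."""
    return math.ceil(n_images / batch_size)

# With the 16,220 training images and batch size 4 used above:
steps = steps_per_epoch(16220, 4)  # 4055 steps per epoch
```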
## 📁 Project Structure
```
saw_mill_knot_detection/
├── annotation_gui.py          # Gradio web interface for annotation
├── train_yolox.py             # YOLOX training script
├── split_coco_dataset.py      # Dataset splitting utility
├── reorganize_dataset.py      # Dataset reorganization to YOLO format
├── config.py                  # Configuration settings
├── dataset_split/             # Training data (excluded from git)
│   ├── train/
│   │   ├── images/            # Training images
│   │   └── labels/            # YOLO format labels
│   ├── valid/
│   │   ├── images/            # Validation images
│   │   └── labels/            # YOLO format labels
│   ├── test/
│   │   ├── images/            # Test images
│   │   └── labels/            # YOLO format labels
│   └── data.yaml              # YOLO dataset configuration
├── runs/                      # Training outputs (excluded from git)
│   └── yolox_training/
│       └── training/
│           └── weights/
│               ├── best.pt    # Best model weights
│               └── last.pt    # Latest model weights
├── bbox_coco_dataset.json     # Original COCO annotations
├── requirements.txt           # Python dependencies
├── .gitignore                 # Excludes large data files
└── README.md                  # This file
```
## 🛠️ Usage Guide
### Annotation GUI Features
The Gradio-based annotation interface provides:
- **Image Navigation**: Browse through dataset with current index display
- **Auto-Labeling**: One-click defect detection using trained YOLOX model
- **Manual Annotation**: Draw bounding boxes for corrections
- **Real-time Visualization**: Immediate display of detection results
- **Export Options**: Save annotations in multiple formats
### Training
```bash
# Basic training
python train_yolox.py --dataset-dir dataset_split --model yolox-nano --epochs 10

# Advanced training with custom parameters
python train_yolox.py \
  --dataset-dir dataset_split \
  --model yolox-nano \
  --epochs 20 \
  --batch-size 8 \
  --img-size 640
```
### Inference
```python
from ultralytics import YOLO

# Load trained model
model = YOLO('runs/yolox_training/training/weights/best.pt')

# Predict on image
results = model.predict('path/to/image.jpg', conf=0.4)

# Process results
for result in results:
    boxes = result.boxes  # Bounding boxes
    for box in boxes:
        cls = int(box.cls)  # Class index
        conf = float(box.conf)  # Confidence score
        xyxy = box.xyxy.tolist()[0]  # Box coordinates
```
## 🔧 Configuration
Key settings in `config.py`:
```python
DEFAULT_MODEL_WEIGHTS = "runs/yolox_training/training/weights/best.pt"
DEFAULT_IMAGES_DIR = "IMAGE/"
WOOD_DEFECT_CLASSES = [
    'Live knot', 'Dead knot', 'Knot with crack', 'Crack',
    'Resin', 'Marrow', 'Quartzity', 'Knot missing',
    'Blue stain', 'Overgrown'
]
```
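Model predictions come back as integer class indices; they map to defect names by position in `WOOD_DEFECT_CLASSES`. A minimal illustration using the list above (the helper function is mine, not part of `config.py`):

```python
WOOD_DEFECT_CLASSES = [
    'Live knot', 'Dead knot', 'Knot with crack', 'Crack',
    'Resin', 'Marrow', 'Quartzity', 'Knot missing',
    'Blue stain', 'Overgrown'
]

def class_name(idx: int) -> str:
    """Translate a predicted class index into its defect label."""
    return WOOD_DEFECT_CLASSES[idx]
```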
## 📈 Model Performance
**YOLOX-nano Results** (5 epochs):
- mAP50: 0.612
- mAP50-95: 0.357
- Precision: 0.68
- Recall: 0.55
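For interpreting these numbers: mAP50 counts a prediction as correct when its IoU with a ground-truth box is at least 0.5, while mAP50-95 averages over IoU thresholds from 0.5 to 0.95. A minimal IoU computation for reference (the helper name is mine):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```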
## 🎯 Deployment on OAK-D
The trained model can be exported for OAK-D deployment:
```python
from ultralytics import YOLO
# Load and export model
model = YOLO('runs/yolox_training/training/weights/best.pt')
model.export(format='onnx') # Export to ONNX for OAK-D
```
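OAK-D pipelines typically feed the network a fixed square input, so camera frames are letterboxed: scaled to fit, then padded to square. A stdlib sketch of that geometry (the 640 input size and function name are assumptions, not taken from this repo):

```python
def letterbox_params(w: int, h: int, size: int = 640):
    """Scale and padding that fit a w x h frame into a size x size input, aspect preserved."""
    scale = min(size / w, size / h)
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x, pad_y = (size - new_w) // 2, (size - new_h) // 2
    return scale, (new_w, new_h), (pad_x, pad_y)

# A 1280x720 camera frame scaled into a 640x640 network input:
scale, (nw, nh), (px, py) = letterbox_params(1280, 720)
```

The same scale and padding must be inverted on the device's output boxes to map detections back to frame coordinates.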
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request
## 📄 License
This project uses the Kaggle Wood Surface Defects dataset. Please refer to the original dataset license for usage terms.
## 🙏 Acknowledgments
- Kaggle for providing the wood surface defects dataset
- Ultralytics for the YOLO framework
- Gradio for the web interface framework