# Saw Mill Knot Detection
This repository contains a complete wood defect detection system with a web-based annotation GUI and separate training/deployment scripts. It supports multiple model frameworks (RF-DETR, RT-DETR, YOLOv6, YOLOX) and is optimized for deployment on OAK-D cameras.
## 🎯 Project Overview
- **Models**: RF-DETR, RT-DETR, YOLOv6, YOLOX (all MIT/Apache 2.0 licensed)
- **Dataset**: 20,276 wood surface defect images
- **Annotation GUI**: Gradio-based web interface for manual annotation
- **Training Scripts**: Separate Python scripts for model training
- **Deployment**: OAK-D camera optimization with OpenVINO conversion
- **License**: All models free for commercial use
## 📊 Dataset Information
**Source**: [Kaggle Wood Surface Defects Dataset](https://www.kaggle.com/datasets/kirs0816/wood-surface-defects)

**Classes** (10 total):
- Live knot, Dead knot, Knot with crack, Crack, Resin
- Marrow, Quartzity, Knot missing, Blue stain, Overgrown
**Dataset Split**:

- Train: 16,220 images
- Valid: 2,027 images
- Test: 2,029 images

**Formats Available**:
- `dataset_coco/` → COCO format for RF-DETR
- `dataset_yolo/` → YOLO format for YOLOX, YOLOv6, YOLOv8
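The two layouts store the same boxes in different conventions: COCO uses `[x, y, width, height]` in pixels, while YOLO labels use a class index plus center coordinates and size normalized to the image. A minimal conversion sketch (not part of the repository scripts):

```python
def coco_to_yolo(box, img_w, img_h):
    """Convert a COCO [x, y, w, h] pixel box to a YOLO
    [cx, cy, w, h] box normalized to the image size."""
    x, y, w, h = box
    return [
        (x + w / 2) / img_w,  # normalized center x
        (y + h / 2) / img_h,  # normalized center y
        w / img_w,            # normalized width
        h / img_h,            # normalized height
    ]

# Example: a 100x50 box at (200, 100) in a 640x480 image
print(coco_to_yolo([200, 100, 100, 50], 640, 480))
```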
## 🚀 Quick Start
### 1. Environment Setup
```bash
# Clone the repository
git clone git@143.244.157.110:dillon_stuff/saw_mill_knot_detection.git
cd saw_mill_knot_detection

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```
Alternatively, install the core packages directly:

```bash
pip install -U pip
pip install ultralytics gradio rfdetr
```
### 2. Setup Datasets
```bash
# Download dataset from Kaggle (requires Kaggle API)
kaggle datasets download -d kirs0816/wood-surface-defects
unzip wood-surface-defects.zip
# Create multi-format datasets
python split_coco_dataset.py # Creates dataset_yolo/
python setup_datasets.py # Creates dataset_coco/ and updates configs
```
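Once the setup scripts finish, each dataset root should contain `train/`, `valid/`, and `test/` subdirectories. A small sanity-check sketch (directory names taken from the dataset split above; not part of the repository scripts):

```python
from pathlib import Path

def missing_splits(root, splits=("train", "valid", "test")):
    """Return the expected split directories missing under a dataset root."""
    root = Path(root)
    return [s for s in splits if not (root / s).is_dir()]

# Example: warn if dataset_coco/ is incomplete
for missing in missing_splits("dataset_coco"):
    print(f"dataset_coco is missing the '{missing}/' split")
```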
### 3. Launch Annotation GUI

A helper script detects and activates your virtual environment automatically:

```bash
# Run the GUI (auto-detects and activates venv/conda environment)
./run_gui.sh

# Or run manually
source .venv/bin/activate  # or conda activate your_env
python annotation_gui.py
```

A Tkinter version (new) is also available:

```bash
python tk_annotation_gui.py
# or
./run_tk_gui.sh
```
Open http://localhost:7860 in your browser to access the web-based annotation interface with:

- Image navigation with index display
- Auto-labeling with trained models
- Manual annotation tools with delete buttons
- Real-time result visualization
- Export to COCO format
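The COCO export groups images, annotations, and categories into one JSON file; an illustrative skeleton for a single annotated image (all values here are made-up placeholders, not real repository output):

```python
import json

# Illustrative single-image COCO export (values are placeholders)
coco_export = {
    "images": [
        {"id": 1, "file_name": "board_0001.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 2,             # index into "categories" below
            "bbox": [200, 100, 100, 50],  # [x, y, width, height] in pixels
            "area": 5000,
            "iscrowd": 0,
        }
    ],
    "categories": [
        {"id": 1, "name": "Live knot"},
        {"id": 2, "name": "Dead knot"},
    ],
}

serialized = json.dumps(coco_export, indent=2)
```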
### 4. Train Models
Use the dedicated training script for all frameworks:
```bash
# Prepare dataset from annotations (optional)
python train_model.py --prepare-dataset --images-dir IMAGE --annotations annotations.json --dataset dataset_prepared

# Train models with different frameworks
python train_model.py --framework rf-detr --dataset dataset_prepared --output runs/rfdetr_training --model-size medium --epochs 50
python train_model.py --framework rtdetr --dataset dataset_prepared --output runs/rtdetr_training --model-size small --epochs 30
python train_model.py --framework yolox --dataset dataset_prepared --output runs/yolox_training --model-size nano --epochs 50
python train_model.py --framework yolov6 --dataset dataset_prepared --output runs/yolov6_training --model-size nano --epochs 50
# See TRAINING_README.md for detailed training options
```
### 5. Convert for OAK-D Deployment
```bash
# Convert trained model for edge deployment
python convert_for_deployment.py --model runs/training/weights/best.pt --output oak_d_deployment --img-size 640
# See TRAINING_README.md for deployment instructions
```
## 📁 Project Structure
```
saw_mill_knot_detection/
├── annotation_gui.py            # Gradio web interface for annotation
├── train_model.py               # Unified training script for all frameworks
├── convert_for_deployment.py    # Model conversion for OAK-D deployment
├── TRAINING_README.md           # Detailed training and deployment guide
├── setup_datasets.py            # Multi-format dataset setup script
├── split_coco_dataset.py        # Dataset splitting utility
├── config.py                    # Configuration settings
├── dataset_coco/                # RF-DETR dataset (COCO format)
│   ├── train/
│   │   ├── *.jpg                # Training images
│   │   └── _annotations.coco.json
│   ├── valid/
│   │   ├── *.jpg                # Validation images
│   │   └── _annotations.coco.json
│   └── test/
│       ├── *.jpg                # Test images
│       └── _annotations.coco.json
├── dataset_yolo/                # YOLOX/YOLOv6/YOLOv8 dataset (YOLO format)
│   ├── train/
│   │   ├── images/              # Training images
│   │   └── labels/              # YOLO format labels
│   ├── valid/
│   │   ├── images/              # Validation images
│   │   └── labels/              # YOLO format labels
│   ├── test/
│   │   ├── images/              # Test images
│   │   └── labels/              # YOLO format labels
│   └── data.yaml                # YOLO dataset configuration
├── runs/                        # Training outputs (excluded from git)
├── bbox_coco_dataset.json       # Original COCO annotations
├── requirements.txt             # Python dependencies
├── .gitignore                   # Excludes large data files
└── README.md                    # This file
```
## 🤖 Framework Comparison
| Framework | Accuracy | Speed | Memory | Deployment | Best For |
|-----------|----------|-------|--------|------------|----------|
| **RF-DETR** | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ | CPU/GPU | Highest accuracy, research |
| **YOLOX** | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ | Edge devices | Balanced performance |
| **YOLOv6** | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | Mobile/Edge | Fast inference, production |
## 🛠️ Usage Guide
### Annotation GUI Features
The Gradio-based annotation interface provides:
- **Image Navigation**: Browse through dataset with current index display
- **Auto-Labeling**: One-click defect detection using trained YOLOX model
- **Manual Annotation**: Draw bounding boxes for corrections
- **Real-time Visualization**: Immediate display of detection results
- **Export Options**: Save annotations in multiple formats
### Training
```bash
# Basic training
python train_yolox.py --dataset-dir dataset_split --model yolox-nano --epochs 10

# Advanced training with custom parameters
python train_yolox.py \
    --dataset-dir dataset_split \
    --model yolox-nano \
    --epochs 20 \
    --batch-size 8 \
    --img-size 640
```
### Inference
```python
from ultralytics import YOLO

# Load trained model
model = YOLO('runs/yolox_training/training/weights/best.pt')

# Predict on image
results = model.predict('path/to/image.jpg', conf=0.4)

# Process results
for result in results:
    boxes = result.boxes               # Bounding boxes
    for box in boxes:
        cls = int(box.cls)             # Class index
        conf = float(box.conf)         # Confidence score
        xyxy = box.xyxy.tolist()[0]    # Box coordinates
```
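If detections need to go back into a COCO annotation file, the `xyxy` corner coordinates convert to COCO's `[x, y, width, height]` with simple arithmetic. A standalone sketch (helper name is illustrative, not a repository function):

```python
def xyxy_to_coco(xyxy):
    """Convert [x1, y1, x2, y2] corner coordinates to COCO [x, y, w, h]."""
    x1, y1, x2, y2 = xyxy
    return [x1, y1, x2 - x1, y2 - y1]

# Example: a box from (200, 100) to (300, 150)
print(xyxy_to_coco([200, 100, 300, 150]))  # → [200, 100, 100, 50]
```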
## 🔧 Configuration
Key settings in `config.py`:
```python
DEFAULT_MODEL_WEIGHTS = "runs/yolox_training/training/weights/best.pt"
DEFAULT_IMAGES_DIR = "IMAGE/"
WOOD_DEFECT_CLASSES = [
    'Live knot', 'Dead knot', 'Knot with crack', 'Crack',
    'Resin', 'Marrow', 'Quartzity', 'Knot missing',
    'Blue stain', 'Overgrown'
]
```
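Model outputs are class indices, so mapping a prediction back to a readable label is a lookup into `WOOD_DEFECT_CLASSES` (list reproduced here so the sketch is self-contained):

```python
WOOD_DEFECT_CLASSES = [
    'Live knot', 'Dead knot', 'Knot with crack', 'Crack',
    'Resin', 'Marrow', 'Quartzity', 'Knot missing',
    'Blue stain', 'Overgrown'
]

def class_name(cls_index):
    """Map a predicted class index to its human-readable defect name."""
    return WOOD_DEFECT_CLASSES[int(cls_index)]

print(class_name(3))  # prints "Crack"
```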
## 📈 Model Performance
**YOLOX-nano Results** (5 epochs):
- mAP50: 0.612
- mAP50-95: 0.357
- Precision: 0.68
- Recall: 0.55
## 🎯 Deployment on OAK-D
The trained model can be exported for OAK-D deployment:
```python
from ultralytics import YOLO

# Load and export model
model = YOLO('runs/yolox_training/training/weights/best.pt')
model.export(format='onnx')  # Export to ONNX for OAK-D
```
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request
## 📄 License
This project uses the Kaggle Wood Surface Defects dataset. Please refer to the original dataset license for usage terms.
## 🙏 Acknowledgments
- Kaggle for providing the wood surface defects dataset
- Ultralytics for the YOLO framework
- Gradio for the web interface framework