Flagship · Live · 2024
EyeRace Pro
An AI grading app for racing-pigeon eyes. End-to-end ML pipeline — capture, quality check, ONNX inference, 5-dimension scoring — in one app.
- Role: Lead + full-stack
- Duration: 2024 — present
Stack
Flutter · Riverpod · Go · Gin · PyTorch · ONNX Runtime · PostgreSQL
The pigeon-racing community has long relied on master breeders to "read the eye" of a pigeon to judge its quality — but that expertise is hard to pass on, judgments drift between experts, and there's no easy entry point for newcomers. EyeRace Pro turns this tacit knowledge into a quantified AI scoring system, delivering an expert-grade evaluation in 30 seconds.
Problem
- Master-level evaluation skills are hard to transfer; the next generation has no efficient path to learn
- Different experts give widely varying scores for the same pigeon — no shared baseline
- 1:1 manual evaluation doesn't scale for breeding farms with hundreds of birds
Solution
A four-step pipeline running entirely inside a Flutter app: Capture → Quality check → ONNX inference → 5-dimension scoring.
- Smart capture engine: Real-time accelerometer-based shake detection; the shutter is blocked when blur or exposure falls outside thresholds. A three-mode flash auto-adjusts to lighting.
- Quality check: Post-shot brightness/contrast/blur thresholds; failures auto-retry. Garbage in, garbage out — solved at the source.
- ONNX inference: A PyTorch-trained EfficientNet-B0 exported to ONNX runs on-device via ONNX Runtime mobile. Average inference < 400ms.
- 5-dimension scoring: Each eye scored across pupil, inner ring, correlation ring, iris, and outer ring; final weighted recommendation per pigeon.
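The quality gate and weighted scoring steps above can be sketched in a few lines. This is an illustrative sketch only — the thresholds, the sharpness metric, and the per-dimension weights below are made-up placeholders, not the production values:

```python
import numpy as np

# Illustrative thresholds; the app's real values are tuned per device.
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 60.0, 200.0  # mean pixel value (0-255)
MIN_SHARPNESS = 80.0                          # variance-of-Laplacian floor

def laplacian_variance(gray: np.ndarray) -> float:
    """Blur proxy: variance of a 4-neighbour Laplacian over the image."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def passes_quality_gate(gray: np.ndarray) -> bool:
    """Post-shot check: reject frames that are too dark, too bright, or soft."""
    mean = float(gray.mean())
    if not (MIN_BRIGHTNESS <= mean <= MAX_BRIGHTNESS):
        return False
    return laplacian_variance(gray.astype(np.float64)) >= MIN_SHARPNESS

# Hypothetical weights for the five scored dimensions (sum to 1.0).
WEIGHTS = {"pupil": 0.25, "inner_ring": 0.20, "correlation_ring": 0.25,
           "iris": 0.15, "outer_ring": 0.15}

def weighted_score(scores: dict[str, float]) -> float:
    """Fold per-dimension scores (0-100) into one recommendation score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

A flat, mid-gray frame fails the gate (zero sharpness), while a detailed frame at similar brightness passes — the same "garbage in, garbage out" filter described above, before any model runs.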
Stack
- Mobile: Flutter 3.10+, Dart 3, Riverpod 3, GoRouter, Camera + VisionKit, accelerometer
- Backend: Go 1.26 + Gin, GORM, PostgreSQL, JWT, Sign in with Apple / Google
- ML pipeline: PyTorch + EfficientNet-B0, Label Studio annotation, ONNX Runtime mobile
- Billing: StoreKit 2 subscriptions
Outcomes
- iOS / Android MVP shipped, stable for 6+ months
- Backend at eyebird.ailoop.uk, < 200ms average response
- Single-photo evaluation in 30 seconds (capture + upload + inference + render)
- 5-dimension score correlates with senior breeder ratings at r > 0.85
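The r > 0.85 agreement figure is an ordinary Pearson correlation between app scores and breeder ratings. A minimal sketch of that validation, using made-up sample data (the real study's data is not shown here):

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: app scores vs. senior-breeder ratings for six birds.
app_scores = [72.0, 85.0, 64.0, 90.0, 78.0, 55.0]
breeder_ratings = [70.0, 88.0, 60.0, 92.0, 75.0, 58.0]
```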
Lessons
- Mobile ONNX has trade-offs: int8 quantization shrinks the model from 50MB to 12MB, but accuracy drops 4%. We landed on a hybrid: cloud inference primary, on-device fallback.
- Capture quality > model quality: The smart capture engine improved overall accuracy more than backend model tuning. GIGO matters at every layer.
- Vertical SaaS wins on depth: Generic OCR or image-classification APIs can't solve domain problems. Going deep is the moat.
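The cloud-primary / on-device-fallback split from the first lesson amounts to a simple dispatch policy. A sketch under stated assumptions — the function names are illustrative, and the app's real client is Dart rather than Python:

```python
from typing import Callable

def hybrid_infer(image: bytes,
                 cloud: Callable[[bytes], dict],
                 on_device: Callable[[bytes], dict]) -> dict:
    """Prefer the full-precision cloud model; fall back to the
    quantized on-device model when the network call fails."""
    try:
        result = cloud(image)
        result["source"] = "cloud"      # fp32 model, best accuracy
    except Exception:
        result = on_device(image)
        result["source"] = "on_device"  # int8 model, ~4% accuracy drop
    return result

# Stub backends standing in for the real inference calls.
def cloud_ok(_: bytes) -> dict:
    return {"score": 81.0}

def cloud_offline(_: bytes) -> dict:
    raise ConnectionError("no network")
```

The design choice this encodes: accuracy-sensitive requests get the unquantized model whenever the network allows, and the 12 MB int8 model only has to be good enough for offline moments.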