What you will learn
By the end of this workshop, participants will be able to:
- decide when to use open-vocabulary detection versus distillation
- deploy YOLOE-26 with text or visual prompts for zero-shot detection
- run the full Autodistill pipeline, from raw images to a trained model
- train YOLO26 models optimized for edge deployment
- configure multi-GPU training for HPC environments
- export models to ONNX, TensorRT, CoreML, and TFLite
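The zero-shot workflow above can be sketched in a few lines. This is a minimal sketch assuming the current Ultralytics YOLOE interface (`set_classes` / `get_text_pe`) carries over to YOLOE-26; the weight file name `yoloe-26s.pt` is a placeholder, not a confirmed release artifact.

```python
def detect_with_text_prompts(image_path, class_names, weights="yoloe-26s.pt"):
    """Detect only the classes named at inference time -- no retraining.

    Assumes the Ultralytics YOLOE text-prompt API; the weight name is a
    placeholder for a future YOLOE-26 checkpoint.
    """
    from ultralytics import YOLOE  # imported lazily so the sketch loads without ultralytics

    model = YOLOE(weights)
    # Text prompts: embed the class names and restrict detection to them.
    model.set_classes(class_names, model.get_text_pe(class_names))
    return model.predict(image_path)
```

For example, `detect_with_text_prompts("site.jpg", ["forklift", "hard hat"])` would detect only those two classes, even though neither appears in a fixed training label set.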
Key technologies covered
- YOLO26
- YOLOE-26
- YOLO11
- SAM 3
- Grounding DINO
- Autodistill
- RF-DETR
Tuesday, 3 March 2026
| Time | Programme |
| --- | --- |
| 15:45 | Join in |
| 16:00 | Welcome, Motivation & Introduction |
| 16:05 | The 2026 Computer-Vision Landscape |
| | – Distillation vs. Open-Vocabulary: When to use what? |
| | Talk |
| 16:20 | Conceptual Foundations |
| | – CLIP, Vision Transformers, Zero-Shot Learning |
| | Talk & Demo |
| 16:40 | Demo: YOLOE-26 |
| | – Text Prompts, Visual Prompts, Prompt-Free Detection |
| | Demo |
| 17:00 | Demo: Full Distillation Pipeline |
| | – GroundedSAM -> YOLO26 |
| | Demo |
| 17:25 | Q&A, Setup Verification & Environment Check |
| | – Answer open questions and verify prerequisites for the hands-on labs |
| | Discussion & Hands-on |
| 18:00 | End of first day |
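The GroundedSAM → YOLO26 distillation pipeline demoed at 17:00 can be sketched with the Autodistill API. In this sketch, GroundedSAM auto-labels raw images and a compact target model is trained on the result; the YOLOv8 connector stands in for the target, since a dedicated YOLO26 connector is an assumption here, and the prompt/class mapping is purely illustrative.

```python
def distill(image_dir="./images", dataset_dir="./dataset"):
    """Raw images -> auto-labeled dataset -> trained edge model."""
    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM
    from autodistill_yolov8 import YOLOv8  # stand-in target; a YOLO26 connector would slot in here

    # Map free-text prompts (what the base model sees) to class names.
    ontology = CaptionOntology({
        "a forklift": "forklift",
        "a person wearing a safety vest": "vest",
    })

    # 1) Auto-annotate the raw images with the large open-vocabulary model.
    base = GroundedSAM(ontology=ontology)
    base.label(input_folder=image_dir, output_folder=dataset_dir)

    # 2) Distill: train a small, fast model on the auto-labeled dataset.
    target = YOLOv8("yolov8n.pt")
    target.train(f"{dataset_dir}/data.yaml", epochs=100)
```

The key design point: the expensive open-vocabulary model runs once, offline, to produce labels; only the small distilled model ships to the edge.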
Wednesday, 4 March 2026
| Time | Programme |
| --- | --- |
| 15:45 | Join in |
| 16:00 | Welcome to Day 2 |
| 16:05 | Hands-on Exercise |
| | – Choose your path: Direct Prompting OR Distillation |
| | Hands-on lab |
| 16:35 | HPC Deployment |
| | – Multi-GPU Training, Slurm, Batch Inference |
| | Hands-on lab |
| 16:55 | Decision Framework |
| | – Matching approach to use case |
| | Hands-on lab |
| 17:15 | Q&A, Hands-on Playground |
| | Discussion & Hands-on lab |
| 18:00 | End of second day (course) |
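For the HPC deployment session, a minimal Slurm batch sketch for multi-GPU training might look like the following. Partition name, module names, GPU count, and the `yolo26n.pt` weight name are all site-specific or assumed placeholders; the `device=0,1,2,3` flag follows the current Ultralytics CLI convention for launching DDP training across GPUs.

```shell
#!/bin/bash
# Sketch of a Slurm job for multi-GPU YOLO26 training.
# Partition, modules, and paths are placeholders -- adapt to your cluster.
#SBATCH --job-name=yolo26-train
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gres=gpu:4
#SBATCH --time=04:00:00

module load cuda                      # placeholder; use your site's module names
source ~/venvs/yolo/bin/activate      # placeholder virtual environment

# Ultralytics-style CLI: device=0,1,2,3 spreads training across the 4 GPUs
yolo detect train model=yolo26n.pt data=dataset/data.yaml epochs=100 device=0,1,2,3
```

Submit with `sbatch train.sh`; batch inference jobs follow the same pattern with `yolo predict` in place of `yolo detect train`.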