3–4 Mar 2026
ONLINE
Europe/Vienna timezone

Agenda & Content

What you will learn

By the end of this workshop, participants will be able to:

  • decide when to use open-vocabulary detection vs. distillation
  • deploy YOLOE-26 with text or visual prompts for zero-shot detection
  • run the Autodistill pipeline from raw images to a trained model
  • train YOLO26 models optimized for edge deployment
  • configure Multi-GPU training for HPC environments
  • export models to ONNX, TensorRT, CoreML, TFLite
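To give a flavour of the tooling, here is a minimal training-and-export sketch using the Ultralytics Python API. The checkpoint name, dataset path and epoch count are placeholder assumptions; the YOLO26 weights used in the course are loaded through the same interface.

    from ultralytics import YOLO

    # Start from a small pretrained checkpoint ("yolo11n.pt" is a
    # placeholder; YOLO26 weights would be loaded the same way)
    model = YOLO("yolo11n.pt")

    # Fine-tune on a custom dataset described by a YOLO-format data.yaml
    model.train(data="dataset/data.yaml", epochs=50, imgsz=640)

    # Export for edge deployment; other formats covered in the course
    # include "engine" (TensorRT), "coreml" and "tflite"
    model.export(format="onnx")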

Key technologies covered

  • YOLO26 
  • YOLOE-26
  • YOLO11
  • SAM 3
  • Grounding DINO
  • Autodistill
  • RF-DETR

Tuesday, 3 March 2026

15:45 Join in
16:00 Welcome, Motivation & Introduction
16:05 The 2026 Computer-Vision Landscape
 – Distillation vs. Open-Vocabulary: When to use what?
 Talk
16:20 Conceptual Foundations
 – CLIP, Vision Transformers, Zero-Shot Learning
 Talk & Demo
16:40 Demo: YOLOE-26
 – Text Prompts, Visual Prompts, Prompt-Free Detection
 Demo
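As a preview of this demo, text-prompted open-vocabulary detection with YOLOE looks roughly like this in the Ultralytics API; the checkpoint name, class list and image path are illustrative assumptions, and the YOLOE-26 weights shown live may differ.

    from ultralytics import YOLOE

    # Load an open-vocabulary YOLOE checkpoint (name is illustrative)
    model = YOLOE("yoloe-11s-seg.pt")

    # Define the vocabulary with text prompts and set the class embeddings
    names = ["helmet", "safety vest"]
    model.set_classes(names, model.get_text_pe(names))

    # Zero-shot detection on an arbitrary image
    results = model.predict("site.jpg")
    results[0].show()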
17:00 Demo: Full Distillation Pipeline
 – GroundedSAM → YOLO26
 Demo
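This demo follows the Autodistill pattern: a large base model auto-labels raw images, then a small target model is trained on the result. A minimal sketch, with the ontology, folder paths and checkpoint name as placeholder assumptions:

    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM
    from ultralytics import YOLO

    # Base model: GroundedSAM auto-labels raw images. The ontology maps
    # a text prompt to the class name written into the dataset.
    base_model = GroundedSAM(
        ontology=CaptionOntology({"shipping container": "container"})
    )
    base_model.label(input_folder="./images", extension=".jpg",
                     output_folder="./dataset")

    # Target model: train a small detector on the auto-labelled data
    # ("yolo11n.pt" stands in for the YOLO26 weights used in the demo)
    target = YOLO("yolo11n.pt")
    target.train(data="./dataset/data.yaml", epochs=50)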
17:25 Q&A, Setup Verification & Environment Check
 – Answer open questions and check prerequisites for the hands-on lab
 Discussion & Hands-on
18:00 End of first day

Wednesday, 4 March 2026

15:45 Join in
16:00 Welcome to Day 2
16:05 Hands-on Exercise
 – Choose your path: Direct Prompting OR Distillation
 Hands-on lab
16:35 HPC Deployment
 – Multi-GPU Training, Slurm, Batch Inference
 Hands-on lab
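In the Ultralytics API, multi-GPU training is a one-line change: pass a list of device IDs and distributed data-parallel training is launched under the hood. On the HPC system this would typically run inside a Slurm batch job; the submission script itself is site-specific and not shown here. Paths and epoch count are placeholders.

    from ultralytics import YOLO

    # Distributed data-parallel training across four GPUs on one node
    model = YOLO("yolo11n.pt")
    model.train(data="dataset/data.yaml", epochs=100, device=[0, 1, 2, 3])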
16:55 Decision Framework
 – Matching approach to use case
 Hands-on lab
17:15 Q&A, Hands-on Playground
 Discussion & Hands-on lab
18:00 End of second day (end of course)