Pre-Conference Workshop Day | Tuesday, October 14
8:00 am Check-In & Breakfast
9:00 am | Workshop A | Lab-in-the-Loop AI Validation Systems for Improving Accuracy & Developability in De Novo Design
Synopsis
When experimental results are inconsistent, siloed, or simply too limited, how can we meaningfully train or refine AI models for biologics? Without standardized benchmarks or validation metrics, how do we know which models perform best, or even perform reliably at all? And if lab feedback is critical, what happens when infrastructure gaps, slow turnaround times, and limited experimental capacity make integration into AI pipelines difficult, slowing iteration and impeding improvement?
This workshop will gather experts to:
- Explore strategies for integrating experimental feedback into AI pipelines to refine de novo biologic designs in real time
- Evaluate key metrics and benchmarks for validating AI-generated candidates across affinity, stability, and manufacturability
- Discuss infrastructure and workflow challenges in setting up scalable lab-in-the-loop systems across R&D environments
12:00 pm Lunch Break & Networking
1:30 pm | Workshop B | Benchmarking Generative AI for Biologics to Define Metrics & Deliver Meaningful Models
Synopsis
How do we know if a generative AI model for biologics is truly effective? With new models emerging daily, what metrics reflect biological relevance, therapeutic potential, or developability? And when experimental validation is limited, how can we confidently compare, benchmark, and trust these models to guide critical drug design decisions?
This workshop will gather experts to:
- Identify and prioritize key benchmarking criteria for generative biologic models, including developability, stability, and target affinity
- Examine approaches for validating generative model outputs in the absence of large-scale experimental datasets
- Discuss emerging frameworks and community efforts for establishing shared standards in biologics AI benchmarking