Encord Blog
Webinar: Are Visual Foundation Models (VFMs) on par with SOTA?
With Foundation Models increasing in prominence, Encord's President and Co-Founder sat down with our Lead ML Engineer to dissect Meta's new Visual Foundation Model, the Segment Anything Model (SAM). After combining the model with Grounding-DINO to enable zero-shot segmentation, the team compares it to a SOTA Mask R-CNN model to assess whether the development of SAM really is revolutionary for segmentation. You'll get insights into the following:
- The rise of VFMs and how they differ from standard models
- How SAM and Grounding-DINO compare to previous segmentation models in performance and predictions
- What Meta's release of DINOv2 means for Grounding-DINO + SAM
- Evaluating model performance using Encord Active
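The Grounding-DINO + SAM combination above follows a simple two-stage pattern: a text-prompted detector proposes bounding boxes for an arbitrary label, and a promptable segmenter turns each box into a mask. The sketch below illustrates that flow only; the `detect` and `segment` functions are hypothetical stand-ins (the real models, via Grounding-DINO's inference utilities and SAM's `SamPredictor`, require downloaded checkpoints and a GPU), so treat this as a shape of the pipeline, not an implementation.

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Box:
    """A detection: box coordinates, the text label that matched, and a confidence score."""
    x0: float
    y0: float
    x1: float
    y1: float
    label: str
    score: float

def detect(image, prompt: str) -> List[Box]:
    """Stand-in for the text-prompted detector (Grounding-DINO in the webinar).
    Given a free-text prompt, it returns candidate boxes for that concept."""
    # Hypothetical fixed output for illustration; a real detector is image-dependent.
    return [Box(10, 20, 110, 220, prompt, 0.92)]

def segment(image, box: Box) -> Dict:
    """Stand-in for the promptable segmenter (SAM in the webinar), which accepts
    a box prompt and returns a mask. Here we report the box area as a proxy."""
    return {"label": box.label, "mask_area": (box.x1 - box.x0) * (box.y1 - box.y0)}

def zero_shot_segment(image, prompt: str, score_threshold: float = 0.5) -> List[Dict]:
    """Zero-shot segmentation: detect boxes from text, then segment each box."""
    boxes = detect(image, prompt)
    return [segment(image, b) for b in boxes if b.score >= score_threshold]
```

The point of the design is that neither model was trained on your label set: the detector generalizes from text, and the segmenter generalizes from geometric prompts, so any noun phrase becomes a segmentation class without retraining.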
_________________
Ulrik is the President & Co-Founder of Encord. He started his career in the Emerging Markets team at J.P. Morgan and holds an M.S. in Computer Science from Imperial College London.
In his spare time, Ulrik enjoys writing ultra-low latency software applications in C++ and experimenting with sushi making.
Frederik is the Machine Learning Lead at Encord. He has an extensive background in computer vision and deep learning, completed a Ph.D. in Explainable Deep Learning and Generative Models at Aarhus University, and published research on Efficient Counterfactuals from Invertible Neural Networks and Back-propagation through Fréchet Inception Distance. Before his Ph.D., Frederik studied for an M.Sc. in Computer Science while working as a teaching assistant for "Introduction to Databases" and "Pervasive Computing and Software Architecture."
In his spare time, Frederik enjoys spending time with his two kids and occasionally goes for long hikes around his hometown in the west of Denmark.
Written by
Ulrik Stig Hansen