
Luke Davies-Cooke April 8, 2022

Startup Taps Finance Micromodels for Data Annotation Automation


After meeting at an entrepreneur matchmaking event, Ulrik Hansen and Eric Landau teamed up to parlay their experience in financial trading systems into a platform for faster data labeling.

In 2020, the pair of finance industry veterans founded Encord to adapt micromodels typical in finance to automated data annotation. Micromodels are neural networks that require less time to deploy because they’re trained on less data and used for specific tasks.

Encord’s NVIDIA GPU-driven service promises to automate as much as 99 percent of businesses’ manual data labeling with its micromodels.

“Instead of building one big model that does everything, we’re just combining a lot of smaller models together, and that’s very similar to how a lot of these trading systems work,” said Landau.
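The idea of combining many small, single-purpose models can be sketched in a few lines. This is a hypothetical illustration, not Encord's actual API: each "micromodel" is a stand-in function for a small neural network trained on limited data for one narrow task, and an orchestrator merges their outputs into annotations.

```python
# Hypothetical sketch of the micromodel idea: several small, task-specific
# models run on the same input, and their outputs are combined — rather
# than one large model doing everything.

def detect_polyp(frame):
    # Stand-in for a small network trained only to find polyps.
    return {"label": "polyp", "confidence": 0.92}

def classify_tissue(frame):
    # Stand-in for a second micromodel with a different narrow task.
    return {"label": "inflamed", "confidence": 0.81}

MICROMODELS = [detect_polyp, classify_tissue]

def annotate(frame, threshold=0.5):
    """Run every micromodel and keep only confident annotations."""
    results = [model(frame) for model in MICROMODELS]
    return [r for r in results if r["confidence"] >= threshold]

annotations = annotate(frame=None)
print(annotations)
```

Because each model is small and narrowly scoped, it can be trained and redeployed quickly, which is the property the article attributes to micromodels.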

The startup, based in London, recently landed $12.5 million in Series A funding.

Encord is an NVIDIA Metropolis partner and a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology for AI, data science and HPC startups. NVIDIA Metropolis is an application framework that makes it easier for developers to combine video cameras and sensors with AI-enabled video analytics.

The company said it has attracted business in gastrointestinal endoscopy, radiology, thermal imaging, smart cities, agriculture, autonomous transportation and retail applications.

‘Augmenting Doctors’ for SurgEase

Back in 2021, the partners hunkered down near Laguna Beach, Calif., at the home of Landau’s parents, to build Encord while attending Y Combinator. They had also just landed their first customer, SurgEase.

London-based SurgEase offers telepresence technology for gastroenterology. The company’s hardware device and software enable remote physicians to monitor high-definition images and video captured in colonoscopies.

“You could have a doctor in an emerging economy do the diagnostics or detection, as well as a doctor from one of the very best hospitals in the U.S.,” said Hansen.

To improve diagnostics, SurgEase is also using video data to train AI models for detection. Encord’s micromodels annotate the video data used to train SurgEase’s models. The idea is to give doctors a second set of eyes on procedures.

“Encord’s software has been instrumental in aiding us in solving some of the hardest problems in endoscopic disease assessment,” said SurgEase CEO Fareed Iqbal.

With AI-aided diagnostics, clinicians using SurgEase might spot issues sooner, so patients can avoid more severe procedures down the line, said Hansen. And since doctors don’t always agree, it can help cut through the noise with another opinion, said Landau.

“It’s really augmenting doctors,” said Landau.

King’s College London: 6x Faster

King’s College London faced the challenge of annotating images in precancerous polyp videos. Because labeling such large datasets with highly skilled clinicians was costly, it turned to Encord for annotation automation.

The result: the micromodels annotated about 6.4x faster than manual labeling, handling roughly 97 percent of the datasets automatically, with the remainder requiring manual labeling by clinicians.
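The automate-or-escalate split described above can be sketched as a simple confidence-threshold router. This is a hypothetical illustration of the workflow, not Encord's implementation: predictions the model is confident about are accepted automatically, while ambiguous ones are routed to clinicians.

```python
# Hypothetical sketch: split model predictions into auto-accepted
# annotations and frames escalated for manual clinician review.

def route(predictions, confidence_threshold=0.9):
    auto, manual = [], []
    for p in predictions:
        if p["confidence"] >= confidence_threshold:
            auto.append(p)    # confident enough to annotate automatically
        else:
            manual.append(p)  # ambiguous — send to a clinician
    return auto, manual

preds = [
    {"frame": 0, "confidence": 0.98},
    {"frame": 1, "confidence": 0.95},
    {"frame": 2, "confidence": 0.42},  # ambiguous case
]
auto, manual = route(preds)
print(f"automated: {len(auto)}, manual review: {len(manual)}")
```

Tuning the threshold trades automation rate against how much clinician time is spent on review.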

Encord enabled King’s College London to cut model development time from one year to two months, moving AI into production faster.

Triton: Quickly Into Inference

Encord initially set out to build its own inference engine, running on its API server. But Hansen and Landau decided that using NVIDIA Triton would save significant engineering time and get them into production quickly.

Triton offers open-source software for taking AI into production by simplifying how models run in any framework and on any GPU or CPU for all inference types.
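In practice, Triton serves a model from a repository directory alongside a `config.pbtxt` describing its inputs and outputs. The fragment below is an illustrative example for a small PyTorch model; the model name and tensor shapes are assumptions, not details from Encord.

```
name: "polyp_micromodel"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 2 ]
  }
]
```

With a configuration like this in place, the same Triton server can host many such small models side by side, which fits the micromodel approach.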

It also let them focus on their early customers rather than building inference-engine architecture themselves.

People using Encord’s platform can train a micromodel and run inference shortly afterward, enabled by Triton, Hansen said.

“With Triton, we get the native support for all these machine learning libraries like PyTorch and it’s compatible with CUDA,” said Hansen. “It saved us a lot of time and hassles.”