ADAS (Advanced Driver Assistance Systems)

What are Advanced Driver Assistance Systems (ADAS)?

ADAS is the collective term for safety and convenience systems that use sensors and AI to support the driver. The category spans a wide range, from lane departure warnings and automatic emergency braking to adaptive cruise control and traffic jam assist. What they share is a reliance on real-time perception: cameras, radar, LiDAR, and ultrasonic sensors feeding data into models that detect, classify, and respond to the environment.

Most modern vehicles include some ADAS features. The more advanced the system, the more it resembles the full AV stack, with the key difference that a human remains responsible for the vehicle.

How ADAS Perception Works

ADAS systems typically run multiple perception tasks in parallel: detecting lane markings, tracking nearby vehicles, identifying pedestrians and cyclists, reading road signs, and monitoring the driver's attention. Each task is typically handled by a dedicated model, or a dedicated head on a shared backbone, trained on labeled data specific to that function.
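
As a rough illustration of this parallelism, the sketch below (Python, with hypothetical stand-in functions in place of real trained models) dispatches several perception tasks against the same camera frame:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Hypothetical stand-ins for trained perception models; a real stack
# would run neural network inference (e.g. ONNX/TensorRT engines) here.
def detect_lanes(frame):       return {"task": "lanes", "lines": []}
def track_vehicles(frame):     return {"task": "vehicles", "boxes": []}
def detect_pedestrians(frame): return {"task": "pedestrians", "boxes": []}
def read_signs(frame):         return {"task": "signs", "labels": []}

TASKS = [detect_lanes, track_vehicles, detect_pedestrians, read_signs]

def perceive(frame):
    """Run all perception tasks on one camera frame in parallel."""
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        futures = [pool.submit(task, frame) for task in TASKS]
        return [f.result() for f in futures]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # placeholder camera frame
for result in perceive(frame):
    print(result["task"], result)
```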

The challenge is that these tasks have to work reliably across an enormous range of conditions: different lighting, weather, road types, and geographies. A lane detection system trained primarily on motorway data will struggle on unmarked rural roads. An emergency braking system that works in clear daylight needs to perform equally well in rain and low visibility. Edge case coverage is what separates ADAS systems that are genuinely safe from those that are safe only in favourable conditions.
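
One common way to make that coverage measurable is to slice evaluation results by capture conditions. The minimal sketch below, using made-up records and an assumed per-clip "correct" field derived from ground-truth comparison, surfaces the slices where accuracy drops:

```python
from collections import defaultdict

# Hypothetical evaluation records: one per test clip, tagged with the
# conditions under which it was captured.
results = [
    {"weather": "clear", "road": "motorway", "correct": True},
    {"weather": "rain",  "road": "motorway", "correct": False},
    {"weather": "rain",  "road": "rural",    "correct": False},
    {"weather": "clear", "road": "rural",    "correct": True},
]

def accuracy_by_slice(records, key):
    """Aggregate accuracy per condition value to expose weak slices."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += r["correct"]
    return {value: hits[value] / totals[value] for value in totals}

for key in ("weather", "road"):
    print(key, accuracy_by_slice(results, key))
```

A low-accuracy slice (here, rain) points to exactly the kind of data that needs targeted collection and labeling.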

Encord for ADAS Data

Encord supports the multimodal annotation workflows that ADAS development requires: camera image labeling, LiDAR point cloud annotation, radar overlays, and sensor fusion views, all in one platform. Active learning surfaces edge cases that are underrepresented in training data, and the data flywheel ensures that deployment data feeds back into annotation queues. For teams managing large-scale ADAS annotation programs, Encord's quality review and workflow automation tools keep consistency high across distributed teams.

Explore Encord for Physical AI

Explore Annotation & Labeling

Related Terms

See also: Autonomous Vehicle (AV) · Sensor Fusion · Object Detection (AV context) · LiDAR · 3D Bounding Box / Cuboid · Bird's Eye View (BEV)

Frequently Asked Questions:

Q: What's the difference between ADAS and autonomous driving?

ADAS assists a human driver with specific tasks; the human remains in control and responsible. Autonomous driving removes the human from the control loop entirely. Most ADAS features map to SAE Levels 1–2; at Level 3 the driver must still take over on request, and driver-free operation begins at Level 4, within a defined operational domain.

Q: What sensors do ADAS systems use?

Typically, a combination of cameras (for visual perception), radar (for distance and speed of objects), LiDAR (for precise 3D mapping in more advanced systems), and ultrasonic sensors (for close-range detection like parking). The exact combination depends on the feature and the vehicle platform.
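
As a hedged sketch of how two of these sensors might combine for a single feature, the example below pairs a camera-supplied object class with radar range and closing speed to compute a time-to-collision for an illustrative emergency braking threshold (all names and numbers here are hypothetical, and real systems must first associate detections across sensors):

```python
from dataclasses import dataclass

# Hypothetical fused track: camera supplies the object class, radar the
# range and closing speed. Cross-sensor association is assumed done.
@dataclass
class FusedTrack:
    label: str          # from camera classifier
    range_m: float      # from radar
    closing_mps: float  # from radar (positive = approaching)

def time_to_collision(track: FusedTrack) -> float:
    """Seconds until impact at the current closing speed."""
    if track.closing_mps <= 0:
        return float("inf")  # not approaching: no collision predicted
    return track.range_m / track.closing_mps

track = FusedTrack(label="vehicle", range_m=24.0, closing_mps=8.0)
if time_to_collision(track) < 2.0:  # illustrative AEB threshold
    print("automatic emergency braking triggered")
```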

Q: Why do ADAS systems still fail in edge cases?

Because training data doesn't cover them adequately. ADAS models learn from labeled examples; if rare scenarios like unusual lighting, degraded road markings, or atypical pedestrian behaviour are underrepresented in training data, the model hasn't learned to handle them reliably. Edge case identification and targeted data collection are ongoing processes, not a one-time fix.
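
One minimal sketch of that targeted collection loop is uncertainty sampling: assuming per-frame confidence scores logged from a deployed model (the scores below are synthetic), queue the frames the model is least sure about for annotation:

```python
import numpy as np

# Synthetic deployment logs: per-frame detection confidences standing in
# for scores a real deployed model would emit.
rng = np.random.default_rng(0)
frame_ids = np.arange(1000)
confidences = rng.uniform(0.3, 1.0, size=1000)

def select_for_annotation(frame_ids, confidences, budget=50):
    """Pick the frames the model is least confident on (uncertainty sampling)."""
    order = np.argsort(confidences)  # lowest confidence first
    return frame_ids[order[:budget]]

queue = select_for_annotation(frame_ids, confidences)
print(f"queued {len(queue)} frames for labeling, e.g. {queue[:5]}")
```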
