
LiDAR

Encord Computer Vision Glossary

What Is LiDAR?

LiDAR (Light Detection and Ranging) is a remote sensing technology that measures distance by emitting laser pulses and timing their return. Each pulse that reflects off a surface returns a point with a precise (x, y, z) coordinate in 3D space. Fire enough pulses in all directions, fast enough, and you get a dense 3D map of the environment: a point cloud.

Modern automotive LiDAR sensors generate hundreds of thousands to millions of points per second, rotating or using solid-state arrays to cover 360 degrees around the vehicle. The result is a real-time 3D representation of everything within range, typically 50 to 200 metres, depending on the sensor.
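The timing-to-geometry step described above can be sketched in a few lines: the one-way range is half the round-trip distance at the speed of light, and the beam's azimuth and elevation angles place the point in 3D. The function name and the example angles are illustrative, not from any particular sensor's API:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pulse_to_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Convert one laser return into an (x, y, z) point in the sensor frame.

    The pulse travels to the surface and back, so the range is half the
    round-trip distance. Azimuth and elevation give the beam direction.
    """
    r = C * time_of_flight_s / 2.0  # one-way range in metres
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A return after roughly 667 ns corresponds to a surface about 100 m ahead.
point = pulse_to_point(667e-9, azimuth_rad=0.0, elevation_rad=0.0)
```

A spinning sensor repeats this conversion for every beam in its array at every azimuth step, which is how the hundreds of thousands of pulses per second become a full point cloud.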

Why LiDAR Matters in Physical AI

Cameras are passive; they capture reflected ambient light, which means their output changes dramatically with lighting conditions, shadows, and glare. LiDAR is active; it generates its own light and measures the return, making it robust to most lighting conditions. At night, in tunnels, or in direct sunlight, LiDAR continues to produce reliable 3D measurements where cameras struggle.

For physical AI systems that need to know precisely where objects are in 3D space, not just that they exist in an image, LiDAR is the primary source of ground truth. It's what makes precise 3D bounding box annotation tractable, and what enables accurate distance and velocity estimation for downstream planning.

LiDAR in AV and Robotics Pipelines

In autonomous vehicles, LiDAR handles the geometry: precise object localisation, ground plane estimation, free-space detection, and distance measurement. Cameras handle the semantics: classification, colour, text recognition, traffic light state. Sensor fusion combines both into a complete scene representation.
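The fusion step hinges on a geometric link between the two sensors: each LiDAR point is moved into the camera frame by a rigid extrinsic transform, then projected onto the image plane by the camera intrinsics, so camera semantics can be attached to LiDAR geometry. A minimal sketch of that projection, with hypothetical calibration values (the transform and intrinsics below are made up for illustration, not from any real rig):

```python
import numpy as np

def project_to_image(points_lidar, T_cam_from_lidar, K):
    """Project LiDAR points (N, 3) into camera pixel coordinates.

    T_cam_from_lidar: 4x4 rigid transform from the LiDAR frame to the
    camera frame. K: 3x3 pinhole intrinsics. Returns (N, 2) pixels and
    a mask marking points with positive depth (in front of the camera).
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # (N, 4) homogeneous
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]          # points in camera frame
    in_front = cam[:, 2] > 0                            # keep positive depth
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                      # perspective divide
    return pix, in_front

# Hypothetical calibration: LiDAR frame is x forward, y left, z up;
# camera frame is x right, y down, z forward, co-located with the LiDAR.
T = np.array([[0., -1.,  0., 0.],
              [0.,  0., -1., 0.],
              [1.,  0.,  0., 0.],
              [0.,  0.,  0., 1.]])
K = np.array([[720.,   0., 640.],
              [  0., 720., 360.],
              [  0.,   0.,   1.]])
pix, mask = project_to_image(np.array([[10.0, 0.0, 0.0]]), T, K)
# A point 10 m straight ahead lands at the principal point, (640, 360).
```

Once points land on pixels, a camera detection (say, "pedestrian") can be assigned to the LiDAR points inside its box, giving the planner both a class label and an exact range.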

In robotics, LiDAR plays a similar role, providing the 3D map of the workspace that manipulation and navigation systems reason over. For drones, LiDAR enables precise terrain following and obstacle avoidance in GPS-denied environments.

LiDAR Data Annotation

Annotating LiDAR data means working with point clouds: sparse, unstructured 3D data that requires different tooling and expertise than image annotation. The primary annotation task is 3D bounding box (cuboid) labeling: drawing a precisely fitted box around each object of interest, with correct heading orientation and dimensions.

The challenge is density. A distant object might be represented by only a handful of points; annotators need to infer the full object shape from limited evidence and place an accurate cuboid regardless. Consistency across frames (tracking objects and maintaining stable cuboid sizes as they move) is essential for training data quality.
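The geometry behind a cuboid label is compact: a centre, dimensions, and a heading (yaw) angle. One common quality check, counting how many points a cuboid actually captures so that near-empty boxes can be flagged for review, can be sketched like this. The function and thresholds are hypothetical, not any particular tool's API:

```python
import math

def points_in_cuboid(points, center, dims, yaw):
    """Count points inside a yaw-oriented 3D cuboid.

    center: (cx, cy, cz). dims: (length, width, height) along the box's
    own axes. yaw: rotation about the z-axis in radians. Each point is
    translated to the box centre, rotated by -yaw into the box frame,
    and checked against the half-dimensions.
    """
    cx, cy, cz = center
    l, w, h = dims
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    inside = 0
    for x, y, z in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        bx = cos_y * dx + sin_y * dy   # rotate into the box frame
        by = -sin_y * dx + cos_y * dy
        if abs(bx) <= l / 2 and abs(by) <= w / 2 and abs(dz) <= h / 2:
            inside += 1
    return inside

# Only the first point falls inside this axis-aligned example box.
n = points_in_cuboid([(1.0, 0.2, 0.0), (5.0, 5.0, 5.0)],
                     center=(1.0, 0.0, 0.0), dims=(4.0, 2.0, 1.5), yaw=0.0)
```

The same box-frame transform underlies interactive cuboid editing: dragging a face adjusts one dimension in the box's own axes rather than in world coordinates.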

Encord for LiDAR Annotation

Encord's 3D annotation tools are built for LiDAR point cloud labeling at scale, supporting cuboid annotation, segmentation, and object tracking across sequences, with simultaneous camera view projection for cross-sensor verification. Automated pre-labeling seeds initial cuboid placements, reducing manual workload on high-density scenes. Quality review workflows catch inconsistencies across frames before they reach training.



Frequently Asked Questions

Q1: Why do AV systems use LiDAR when cameras are cheaper and higher resolution?

Cameras produce rich images but don't directly measure depth or work reliably in all lighting conditions. LiDAR provides precise 3D geometry regardless of ambient light. For safety-critical applications where knowing exact distances is essential, LiDAR's reliability justifies the cost. Many production AV systems use both, fusing camera semantics with LiDAR geometry.

Q2: What's the difference between mechanical spinning LiDAR and solid-state LiDAR?

Mechanical LiDAR rotates a laser array to cover 360 degrees; it is effective but expensive and mechanically complex. Solid-state LiDAR uses no moving parts, making it cheaper and more durable, but it typically covers a narrower field of view. Most production AV programs are transitioning toward solid-state as the technology matures.

Q3: How is LiDAR data annotated differently from camera data?

LiDAR produces point clouds, which are unstructured 3D data rather than 2D images. Annotation requires 3D tools that let annotators place and adjust cuboids in 3D space, view the data from multiple angles, and handle the sparsity of distant objects. The expertise required is different from 2D image annotation.
