DRIVEN BASE

Pittsburgh Innovation Lab Data Set

An autonomous driving dataset from Downtown Pittsburgh for testing self-driving technologies.

Overview

City of Pittsburgh Perception Data Set

This dataset from the Pittsburgh area is valuable for training and testing algorithms for object detection, tracking, and perception, providing realistic real-world data for developing autonomous driving and computer vision technologies.


Data Set Details


17 diverse routes in the Pittsburgh area


Comprising a total of 360,000 frames, the dataset provides ample material for training and evaluating perception algorithms.


10% of the data (36,000 frames) has been meticulously labeled with 3D bounding boxes, identifying key elements like vehicles, pedestrians, and cyclists, enabling the development and validation of object detection and tracking models.


Segment lengths range from a minimum of 31 minutes to a maximum of 95 minutes


All data is synchronized at 10 Hz


The dataset includes cyclists, pedestrians, and construction zones
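As a minimal illustration of how the 10 Hz synchronization above might be consumed, the sketch below snaps jittered raw sensor timestamps to the nearest 100 ms tick. The timestamp values and the helper function are hypothetical, not part of the dataset's published tooling.

```python
# Hypothetical sketch: aligning raw sensor timestamps (in seconds) to a
# 10 Hz grid. The example values are illustrative, not from the dataset.

def snap_to_10hz(timestamps):
    """Round each timestamp to the nearest 0.1 s tick (10 Hz)."""
    return [round(t * 10) / 10 for t in timestamps]

raw = [0.03, 0.11, 0.19, 0.32]   # jittered sensor timestamps
ticks = snap_to_10hz(raw)        # -> [0.0, 0.1, 0.2, 0.3]
```

In practice a pipeline would also need a policy for dropped frames and for ties between adjacent ticks; this sketch shows only the rounding step.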

Pittsburgh Domain Diversity

Outlined below are the unique aspects and focus areas encompassed by the dataset. Its seventeen diverse routes through the Pittsburgh area cover a variety of driving environments, and its labeled frames identify key elements such as vehicles, pedestrians, and cyclists, supporting the development and validation of object detection and tracking models.

 

 


Perception Object Assets

Sensor Data

The sensor data in this dataset is derived from the seamless integration of multiple advanced sources, including high-resolution cameras, LiDAR systems for precise 3D mapping, GPS for accurate geolocation, and CAN bus data capturing vehicle dynamics. These diverse inputs are combined to create a comprehensive dataset that supports robust analysis and development in the field of autonomous vehicle technology.


    Cameras

    High-resolution images captured from six cameras provide a 360-degree view of the surrounding environment. This comprehensive visual coverage enables detailed analysis of road conditions, traffic patterns, object detection, and scene understanding.


    LiDARs

    A 360-degree view constructed using data from seven SPAD LiDARs (Single Photon Avalanche Diode LiDARs). These advanced sensors provide detailed and accurate 3D spatial data, enabling precise mapping of the environment, obstacle detection, and depth perception. This data is essential for ensuring the safety and navigation capabilities of autonomous vehicles, even in challenging conditions such as low light or adverse weather.


    GPS Data

    The system employs high-precision GPS to provide accurate localization, ensuring the vehicle's position is reliably tracked. GPS data localizes the imagery and supports time synchronization of the sensor data.


    Vehicle Dynamic Data

    Vehicle dynamic data is utilized to deliver real-time insights into the internal state of the vehicle, offering detailed information on critical parameters such as engine performance, battery health, and sensor functionality. This data ensures proactive monitoring and diagnostics, enabling timely interventions to maintain optimal vehicle performance and support reliable autonomous operation.
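To make the integration of the four modalities above concrete, the sketch below models one synchronized 10 Hz sample combining camera, LiDAR, GPS, and vehicle dynamics data. All field names, shapes, and values here are hypothetical assumptions for illustration; they are not the dataset's published schema.

```python
# Hypothetical sketch of one synchronized sample combining the four sensor
# modalities described above. Field names and contents are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple, Dict

@dataclass
class Sample:
    timestamp: float                       # seconds, on the 10 Hz grid
    camera_images: List[str]               # paths to the six camera images
    lidar_points: List[Tuple[float, float, float]]  # merged (x, y, z) points
    gps: Tuple[float, float]               # (latitude, longitude)
    can: Dict[str, float]                  # vehicle dynamics readings

sample = Sample(
    timestamp=12.3,
    camera_images=[f"cam{i}.jpg" for i in range(6)],   # hypothetical paths
    lidar_points=[(1.0, 2.0, 0.5)],
    gps=(40.4406, -79.9959),               # downtown Pittsburgh
    can={"speed_mps": 8.9},
)
```

A real loader would read these fields from the distributed files; the dataclass simply shows how the modalities line up per frame.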


Bounding Box Labels

In this dataset, 10% of the data, totaling 36,000 frames, has been carefully labeled with 3D bounding boxes to identify key elements such as vehicles, pedestrians, and cyclists. This labeled data is essential for training and validating object detection and tracking models, enhancing the accuracy and reliability of autonomous vehicle systems in dynamic environments.
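As a minimal sketch of how such 3D bounding box annotations might be represented and queried, the example below defines a box record and a simple class filter. The field layout (center, size, yaw, category) and the example values are assumptions for illustration and do not reflect the dataset's actual annotation format.

```python
# Hypothetical sketch of a 3D bounding box label and a category filter.
# The fields and values are illustrative, not the dataset's real format.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box3D:
    center: Tuple[float, float, float]   # (x, y, z) in meters, vehicle frame
    size: Tuple[float, float, float]     # (length, width, height) in meters
    yaw: float                           # heading angle in radians
    category: str                        # e.g. "vehicle", "pedestrian", "cyclist"

def filter_by_category(boxes: List[Box3D], category: str) -> List[Box3D]:
    """Return only the boxes whose category matches."""
    return [b for b in boxes if b.category == category]

boxes = [
    Box3D((5.0, 1.2, 0.8), (4.5, 1.8, 1.6), 0.0, "vehicle"),
    Box3D((8.0, -2.0, 0.9), (0.6, 0.6, 1.7), 1.2, "pedestrian"),
]
peds = filter_by_category(boxes, "pedestrian")
```

Filtering by category like this is a typical first step when evaluating a detector against one class at a time.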