Pittsburgh Innovation Lab Data Set
An autonomous driving dataset from Downtown Pittsburgh for testing self-driving technologies.
Overview
City of Pittsburgh Perception Data Set
This dataset from the Pittsburgh area is valuable for training and testing algorithms for object detection, tracking, and perception,
providing realistic, real-world data for developing autonomous driving and computer vision technologies.
Data Set Details
- 17 diverse routes in the Pittsburgh area
- 360,000 total frames, providing ample material for training and evaluating perception algorithms
- 10% of the data (36,000 frames) meticulously labeled with 3D bounding boxes, identifying key elements like vehicles, pedestrians, and cyclists, enabling the development and validation of object detection and tracking models
- Segment length: 31 minutes minimum, 95 minutes maximum
- All data synchronized at 10 Hz
- Includes cyclists, pedestrians, and construction
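As a quick sanity check on the figures above, the total frame count and the 10 Hz synchronization rate imply roughly ten hours of driving data. A minimal sketch, using only the numbers stated in this description:

```python
# Back-of-the-envelope check on the dataset figures quoted above.
TOTAL_FRAMES = 360_000      # total frames in the dataset
LABELED_FRACTION = 0.10     # 10% of frames carry 3D bounding box labels
SYNC_RATE_HZ = 10           # all sensor data synchronized at 10 Hz

labeled_frames = int(TOTAL_FRAMES * LABELED_FRACTION)
# frames / (frames per second) -> seconds; divide by 3600 -> hours
total_hours = TOTAL_FRAMES / SYNC_RATE_HZ / 3600

print(labeled_frames)  # 36000
print(total_hours)     # 10.0
```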
Pittsburgh Domain Diversity
Outlined below are the unique aspects and focus areas encompassed by the dataset. The seventeen routes span downtown streets and surrounding neighborhoods, exposing perception models to the varied road users and conditions listed above.
Perception Object Assets
Sensor Data
The sensor data in this dataset is derived from the seamless integration of multiple advanced sources, including high-resolution cameras, LiDAR systems for precise 3D mapping, GPS for accurate geolocation, and CAN bus data capturing vehicle dynamics. These diverse inputs are combined to create a comprehensive dataset that supports robust analysis and development in the field of autonomous vehicle technology.
- Cameras
High-resolution images captured from six cameras provide a 360-degree view of the surrounding environment. This comprehensive visual coverage enables detailed analysis of road conditions, traffic patterns, object detection, and scene understanding.
- LiDARs
A 360-degree view constructed using data from seven SPAD LiDARs (Single Photon Avalanche Diode LiDARs). These advanced sensors provide detailed and accurate 3D spatial data, enabling precise mapping of the environment, obstacle detection, and depth perception. This data is essential for ensuring the safety and navigation capabilities of autonomous vehicles, even in challenging conditions such as low light or adverse weather.
- GPS Data
The system employs high-precision GPS to provide accurate localization, ensuring the vehicle's position is reliably tracked. GPS data localizes the imagery and supports time synchronization of the sensor data.
- Vehicle Dynamic Data
Vehicle dynamic data delivers real-time insights into the internal state of the vehicle, offering detailed information on critical parameters such as engine performance, battery health, and sensor functionality. This data enables proactive monitoring and diagnostics, supporting timely interventions to maintain optimal vehicle performance and reliable autonomous operation.
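To illustrate how one synchronized 10 Hz capture might bundle these four streams, here is a minimal sketch of a per-frame record. All field names are hypothetical; the dataset's actual schema may differ:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record for one synchronized 10 Hz capture.
# Field names are illustrative, not the dataset's real schema.
@dataclass
class SyncedFrame:
    timestamp_s: float  # GPS-derived timestamp used for sensor synchronization
    camera_images: List[str] = field(default_factory=list)  # paths to the six camera images
    lidar_sweeps: List[str] = field(default_factory=list)   # paths to the seven SPAD LiDAR sweeps
    latitude: float = 0.0    # GPS localization
    longitude: float = 0.0
    speed_mps: float = 0.0   # one example of CAN bus vehicle-dynamics data

frame = SyncedFrame(
    timestamp_s=0.0,
    camera_images=[f"cam{i}.jpg" for i in range(6)],
    lidar_sweeps=[f"lidar{i}.bin" for i in range(7)],
)
print(len(frame.camera_images), len(frame.lidar_sweeps))  # 6 7
```

The single GPS-derived timestamp per record reflects the description above: all streams are aligned to a common 10 Hz clock rather than consumed at their native rates.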
Bounding Box Labels
In this dataset, 10% of the data, totaling 36,000 frames, has been carefully labeled with 3D bounding boxes to identify key elements such as vehicles, pedestrians, and cyclists. This labeled data is essential for training and validating object detection and tracking models, enhancing the accuracy and reliability of autonomous vehicle systems in dynamic environments.
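A 3D bounding box label of this kind is commonly represented by a class, a center, box extents, and a heading angle. The following is a minimal sketch with hypothetical field names (the dataset's actual label format may differ):

```python
from dataclasses import dataclass

# Hypothetical 3D bounding box label; field names are illustrative only.
@dataclass
class Box3D:
    category: str    # e.g. "vehicle", "pedestrian", "cyclist"
    x: float         # box center in the ego/world frame (meters)
    y: float
    z: float
    length: float    # box extents (meters)
    width: float
    height: float
    yaw: float       # heading angle about the vertical axis (radians)

    def volume(self) -> float:
        return self.length * self.width * self.height

labels = [
    Box3D("vehicle", 10.0, 2.0, 0.9, 4.5, 1.9, 1.6, 0.0),
    Box3D("pedestrian", 5.0, -1.0, 0.9, 0.6, 0.6, 1.7, 1.2),
]
# Filter the labels down to a single class for training or evaluation.
vehicles = [b for b in labels if b.category == "vehicle"]
print(len(vehicles))  # 1
```

Filtering by category as shown is a typical first step when validating detectors on a single object class such as vehicles or cyclists.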