Data details

The Apollo-SouthBay dataset contains six routes, BaylandsToSeafood, ColumbiaPark, Highway237, MathildaAVE, SanJoseDowntown, and SunnyvaleBigLoop, covering scenarios that include, but are not limited to, residential areas, urban downtown areas, and highways. In addition, the dataset contains 3D LiDAR scans (as well as point clouds generated by accumulating scans with motion compensation), post-processed ground truth poses, and online estimated poses from the GNSS/IMU integrated solution. Figure 1 shows the six routes in the southern San Francisco Bay area for reference, and Figure 2 shows camera images of selected scenarios to demonstrate the coverage diversity of our dataset.
Figure 1. The six routes of the collected dataset in the southern San Francisco Bay area. The routes are carefully selected to cover different scenarios.
  • Bridge

  • Overpass

  • Tree-lined road

  • Intersection

  • Highway

  • Downtown

Figure 2. Illustration of the different scenarios in our dataset. The multiple trials of repeated data collection in these different scenarios make our dataset well suited for testing localization systems.

Folder description

The Apollo-SouthBay dataset provides three folders: TrainData for training models, TestData for testing models, and MapData for generating maps. The main storage structure of Apollo-SouthBay is organized as SceneName/SamplingDate/{pcds, poses}: the pcds subfolder contains the PCD files, and the poses subfolder contains the pose files, in which every line describes one LiDAR pose (see Section 2 below). TrainData follows this main structure, but MapData and TestData contain some special cases. For example, the storage structures of SunnyvaleBigloop and ColumbiaPark in MapData differ: SunnyvaleBigloop has 7 subfolders, each of which follows the main storage structure, while the date folder of ColumbiaPark contains four subfolders named 1, 2, 3, and 4. The directory trees in Section 1 below show the organization in detail.
1. Organization of folders
MapData
  └───BaylandsToSeafood
  │    └───2018-09-26
  │          └─── pcds
  │          │   1.pcd
  │          │   2.pcd
  │          │  ...
  │          └─── poses
  │             gt_poses.txt
  └───OTHER Folders
  │ ...
  └───SunnyvaleBigloop
  │    └───Borrgas
  │    │    └───2017-12-13
  │    │    │   └─── pcds
  │    │    │   │  1.pcd
  │    │    │   │  2.pcd
  │    │    │   │   ...
  │    │    │   └─── poses
  │    │    │   │  gt_poses.txt
  │    └───OTHER Folders
  └───ColumbiaPark
  │    └───2018-09-21
  │    │     └─── 1
  │    │     │ └─── pcds
  │    │     │ │  1.pcd
  │    │     │ │  2.pcd
  │    │     │ │  ...
  │    │     │ └─── poses
  │    │     │ │  gt_poses.txt
  │    │     └───OTHER Folders


TestData
  └───BaylandsToSeafood
  │    └───2018-10-12
  │         └─── pcds
  │         │    1.pcd
  │         │    2.pcd
  │         │    ...
  │         └─── poses
  │         │   gt_poses.txt
  │         │   init_poses.txt
  │         │   valid_range.txt
  └───OTHER Folders
  │ ...
  └───SanJoseDowntown
  │    └───2018-10-11
  │    │     └─── 1
  │    │     │   └───pcds
  │    │     │   │   1.pcd
  │    │     │   │   2.pcd
  │    │     │   │   ...
  │    │     │   └───poses
  │    │     │   │   gt_poses.txt
  │    │     │   │   init_poses.txt
  │    │     │   │   valid_range.txt
  │    │     └───OTHER Folders
2. Files in poses folder
2.1 gt_poses.txt and init_poses.txt
gt_poses.txt contains the post-processed ground truth poses, and init_poses.txt contains the online estimated poses from the GNSS/IMU integrated solution.
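Judging from the load_pcd_poses API in the code section below, each line of these files appears to contain the PCD index, a timestamp, the translation (x, y, z), and the orientation quaternion (qx, qy, qz, qw). A hypothetical line (values made up purely for illustration) would look like:

    25 1537939000.241906 -2312.415 4123.876 12.302 0.001 -0.003 0.707 0.707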
2.2 valid_range.txt
valid_range.txt specifies the range of PCD file indices to be used when validating the algorithm. The first column is the starting index and the second column is the ending index.
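A minimal sketch for reading valid_range.txt, assuming each row holds a whitespace-separated start index and end index (the function name read_valid_range is ours, for illustration only):

#include <fstream>
#include <string>
#include <utility>
#include <vector>

// Read (start_index, end_index) pairs from valid_range.txt.
bool read_valid_range(const std::string& valid_range_path,
                      std::vector<std::pair<int, int> >& ranges) {
    std::ifstream inf(valid_range_path.c_str());
    if (!inf) {
        return false;
    }
    int start = 0;
    int end = 0;
    while (inf >> start >> end) {
        ranges.push_back(std::make_pair(start, end));
    }
    return !ranges.empty();
}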
3. File format header
We use a special PCD file format header, shown in Figure 3, whose point type is not built into PCL. A method for loading this format is provided below (see "API for loading pcd files" in the code section).
Figure 3. The PCD file format header of the Apollo-SouthBay dataset
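Figure 3 is the authoritative reference. As a rough textual stand-in, a header consistent with the PointXYZIT registration in the code section below would look roughly as follows (this is an inferred sketch, not a copy of the figure; WIDTH, POINTS, and the DATA encoding depend on the individual file):

    # .PCD v0.7 - Point Cloud Data file format
    VERSION 0.7
    FIELDS x y z intensity timestamp
    SIZE 4 4 4 1 8
    TYPE F F F U F
    COUNT 1 1 1 1 1
    WIDTH <number of points>
    HEIGHT 1
    VIEWPOINT 0 0 0 1 0 0 0
    POINTS <number of points>
    DATA <ascii or binary>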

Code for loading data

1. API for loading pose files
The API generates poses of type Eigen::Affine3d, which can be used to transform the corresponding PCD files into the global coordinate system directly.
/**@brief load pose file.
 * @param <pose_file_path> path of the pose file.
 * @param <indexs> indices of the PCD files.
 * @param <timestamps> timestamps of the PCD files.
 * @param <poses> poses of the PCD files, each built from a translation and a quaternion.
 */
bool load_pcd_poses(
        const std::string& pose_file_path, 
        std::vector<int>& indexs,
        std::vector<std::string>& timestamps,
        std::vector<Eigen::Affine3d>& poses) {
    std::ifstream inf(pose_file_path.c_str());
    if (!inf) {
        return false;
    }
    int index = 0;
    std::string timestamp;
    double tran_x = 0.0;
    double tran_y = 0.0;
    double tran_z = 0.0;
    double quat_x = 0.0;
    double quat_y = 0.0;
    double quat_z = 0.0;
    double quat_r = 0.0;
    
    // Each line: index, timestamp, translation (x, y, z), quaternion (x, y, z, w).
    while (inf >> index >> timestamp >> tran_x >> tran_y
            >> tran_z >> quat_x >> quat_y >> quat_z >> quat_r) {
        indexs.push_back(index);
        timestamps.push_back(timestamp);

        // Eigen::Quaterniond takes (w, x, y, z); quat_r is the scalar part.
        Eigen::Quaterniond quat(quat_r, quat_x, quat_y, quat_z);
        Eigen::Translation3d trans(tran_x, tran_y, tran_z);
        Eigen::Affine3d pose = trans * quat;
        poses.push_back(pose);
    }
    inf.close();
    return !poses.empty();
}        
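A minimal usage sketch, following the MapData layout shown in Section 1 (the route and date are just an example taken from the directory tree above):

// Assumes the includes and the load_pcd_poses definition above.
std::vector<int> indexs;
std::vector<std::string> timestamps;
std::vector<Eigen::Affine3d> poses;
const std::string pose_file =
        "MapData/BaylandsToSeafood/2018-09-26/poses/gt_poses.txt";
if (load_pcd_poses(pose_file, indexs, timestamps, poses)) {
    std::cout << "Loaded " << poses.size() << " poses." << std::endl;
}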
            
2. API for loading pcd files
The API generates a point cloud of the PointXYZIT type defined below.
/**@brief point type matching the dataset's PCD format, registered into PCL below. */
struct PointXYZIT {
    float x;
    float y;
    float z;
    unsigned char intensity;
    double timestamp;
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW 
} EIGEN_ALIGN16;

POINT_CLOUD_REGISTER_POINT_STRUCT (PointXYZIT,
                                    (float, x, x)
                                    (float, y, y)
                                    (float, z, z)
                                    (uint8_t, intensity, intensity)
                                    (double, timestamp, timestamp)
                                    )
    
/**@brief load PCD file.
 * @param <load_pcd_path> path of the PCD file.
 * @param <cloud> a PointXYZIT cloud pointer to store the content of the PCD file.
 */
bool load_pcd_file(
        const std::string& load_pcd_path, 
        pcl::PointCloud<PointXYZIT>::Ptr cloud) {
    if (pcl::io::loadPCDFile(load_pcd_path.c_str(), *cloud) != 0) {
        std::cerr << "Can't load pcd file from " << load_pcd_path << std::endl;   
        return false;            
    }    
    return true;
}
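
A matching usage sketch (the path again follows the MapData example from Section 1):

// Assumes the PointXYZIT registration and the load_pcd_file definition above.
pcl::PointCloud<PointXYZIT>::Ptr cloud(new pcl::PointCloud<PointXYZIT>());
if (load_pcd_file("MapData/BaylandsToSeafood/2018-09-26/pcds/1.pcd", cloud)) {
    std::cout << "Loaded " << cloud->size() << " points." << std::endl;
}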
            
3. API for transforming pcd files according to pose
The API transforms a point cloud into a common (global) coordinate system according to its corresponding pose.
/**@brief transform a source cloud into the global coordinate system according to the given pose.
 * @param <pose> the pose of the PCD file.
 * @param <src_cloud> the source point cloud to be transformed.
 * @param <dst_cloud> the transformed result of the source point cloud.
 */
void transform_pointcloud(
        const Eigen::Affine3d& pose,
        pcl::PointCloud<PointXYZIT>::Ptr src_cloud,
        pcl::PointCloud<PointXYZIT>::Ptr dst_cloud) {
    pcl::transformPointCloud(*src_cloud, *dst_cloud, pose);
}
            
4. Demo result
Finally, we provide demo code that explains how to use this dataset in detail. The code performs the following steps; a minimal sketch is given after the list.

● (1) Load poses from the pose file.
● (2) Load PCD files from the pcds folder.
● (3) Transform the point clouds from (2) into the global coordinate system according to their corresponding poses from (1).
● (4) Combine the transformed point clouds, color them, and display them in a PCL window.
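
A minimal end-to-end sketch of these four steps, assuming the helper APIs defined above and the MapData layout from Section 1. The chosen route/date, the index-to-filename mapping, and the single-color rendering are our assumptions for illustration, not the official demo code:

#include <string>
#include <vector>
#include <pcl/visualization/pcl_visualizer.h>
// Also assumes PointXYZIT, load_pcd_poses, load_pcd_file, and transform_pointcloud from above.

int main() {
    const std::string base = "MapData/BaylandsToSeafood/2018-09-26";

    // (1) Load poses from the pose file.
    std::vector<int> indexs;
    std::vector<std::string> timestamps;
    std::vector<Eigen::Affine3d> poses;
    if (!load_pcd_poses(base + "/poses/gt_poses.txt", indexs, timestamps, poses)) {
        return -1;
    }

    pcl::visualization::PCLVisualizer viewer("Apollo-SouthBay demo");
    for (size_t i = 0; i < indexs.size(); ++i) {
        // (2) Load the PCD file whose file name matches the pose index.
        pcl::PointCloud<PointXYZIT>::Ptr cloud(new pcl::PointCloud<PointXYZIT>());
        if (!load_pcd_file(base + "/pcds/" + std::to_string(indexs[i]) + ".pcd", cloud)) {
            continue;
        }

        // (3) Transform the cloud into the global coordinate system.
        pcl::PointCloud<PointXYZIT>::Ptr global_cloud(new pcl::PointCloud<PointXYZIT>());
        transform_pointcloud(poses[i], cloud, global_cloud);

        // (4) Add the transformed cloud to the viewer; each cloud needs a unique id.
        pcl::visualization::PointCloudColorHandlerCustom<PointXYZIT> color(
                global_cloud, 255, 255, 255);
        viewer.addPointCloud<PointXYZIT>(global_cloud, color, "cloud_" + std::to_string(i));
    }
    viewer.spin();
    return 0;
}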

One of the results from the demo code is shown in Figure 4.
Figure 4. The result of combining multiple PCD files

Publication


To use this dataset in your publications, please cite:
@inproceedings{L3NET_2019_CVPR,
    title={L3-net: Towards learning based lidar localization for autonomous driving},
    author={Lu, Weixin and Zhou, Yao and Wan, Guowei and Hou, Shenhua and Song, Shiyu},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
    pages={6389--6398},
    year={2019}
}