Key features of our system
High-Resolution Analysis
High Robustness
High Accuracy
Large-Scale Image Analytics
Semantic Segmentation
Depth Estimation
3D Driving Environment Estimation
Fusion of Deep Learning Detection and Explainable Machine Learning
The purpose of this system is to extract and understand the driving visual environment along roads using street view images (e.g., Google Street View panoramas) and videos from vehicle-mounted cameras. Semantic segmentation and depth estimation are performed first to obtain a class label and a depth value for each pixel in the images or videos. An orthographic transformation is then applied to convert the 2D images into a 3D representation that reflects the driving visual view in the real world. Based on the proposed system, information such as semantic segmentation maps, per-pixel depth, and 3D driving environment estimates can be generated from street view images and videos.
The system can be applied to street-level images and videos collected at different types of road facilities, such as freeways, arterials, intersections, bike lanes, and sidewalks. A minimal sketch of the 2D-to-3D step is shown below.
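To make the 2D-to-3D step more concrete, the following is a minimal sketch of back-projecting a per-pixel depth map into labeled 3D points under a pinhole camera model. The function name backproject_depth, the camera intrinsics (fx, fy, cx, cy), and the placeholder depth and label arrays are illustrative assumptions, not the system's actual implementation; the system itself applies an orthographic transformation, which may differ from this pinhole sketch.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3D camera-frame points
    using a pinhole camera model (hypothetical intrinsics fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (H, W, 3)

# Placeholder inputs standing in for the outputs of the depth-estimation
# and semantic-segmentation models described above.
depth = np.random.uniform(1.0, 50.0, size=(480, 640))   # depth in meters (placeholder)
labels = np.random.randint(0, 19, size=(480, 640))      # per-pixel class IDs (placeholder)

points = backproject_depth(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
labeled_points = np.concatenate([points, labels[..., None]], axis=-1)  # (H, W, 4): x, y, z, class
```

In practice, the depth and label maps would come from the segmentation and depth-estimation stages, and the resulting labeled point set would feed the 3D driving environment estimation.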
The ones who make this happen
P.E., F.ASCE Trustee Chair
Research Associate Professor
Research Assistant Professor
Software Engineer
What we’ve done for safety