CitySurfaces: City-scale Semantic Segmentation of Sidewalk Surfaces
Paper
Temporary GitHub page for the CitySurfaces paper. More soon!
While designing sustainable and resilient urban built environments is increasingly promoted around the world, significant data gaps have made research on pressing sustainability issues challenging to carry out. Surface pavements are known to have strong economic, environmental, and social implications; however, most cities still lack a spatial catalogue of their surfaces due to the cost-prohibitive and time-consuming nature of data collection. Recent advancements in computer vision, together with the growing availability of street-level images, provide new opportunities for cities to extract large-scale built environment data with lower implementation costs and higher accuracy. In this paper, we propose the CitySurfaces framework, which adopts an active learning strategy combined with computer vision techniques for spatial localization and granular categorization of sidewalk materials using widely available street-level images. Through an iterative, active learning scheme with expert feedback, we train the proposed CitySurfaces framework on New York City and Boston to achieve a segmentation accuracy of 90.5% mean Intersection over Union (mIoU) on held-out test images. CitySurfaces can provide researchers as well as city agencies with a low-cost, accurate, and extendable method to collect sidewalk surface data, which plays a critical role in addressing two major sustainability issues: climate change and water management.
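For reference, the mean Intersection over Union (mIoU) metric reported above can be computed from predicted and ground-truth label maps as sketched below. This is an illustrative implementation, not the authors' code; the function name and the handling of absent classes are assumptions.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Illustrative mIoU: average per-class IoU over the label maps.

    pred, target: integer arrays of class IDs with the same shape.
    Classes absent from both prediction and ground truth are skipped,
    a common (but not universal) convention.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny example: 4 pixels, 2 classes.
# Class 0: intersection 1, union 2 -> IoU 0.5
# Class 1: intersection 2, union 3 -> IoU 2/3
miou = mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), num_classes=2)
```

In practice, segmentation frameworks accumulate per-class intersections and unions over the whole test set before averaging, rather than averaging per-image scores.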