Our research has been generously supported by ARO, NSF, AFRL, IARPA, BlueHalo, and Salesforce.
2018
Qi, Hang; Xu, Yuanlu; Yuan, Tao; Wu, Tianfu; Zhu, Song-Chun
Joint Parsing of Cross-view Scenes with Spatio-temporal Semantic Parse Graphs Proceedings Article
In: Proceedings of The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana, USA, February 2–7, pp. 1–4, 2018.
@inproceedings{JointParsing,
title = {Joint Parsing of Cross-view Scenes with Spatio-temporal Semantic Parse Graphs},
author = {Hang Qi and Yuanlu Xu and Tao Yuan and Tianfu Wu and Song-Chun Zhu},
url = {https://arxiv.org/pdf/1709.05436.pdf},
year = {2018},
date = {2018-01-01},
booktitle = {Proceedings of The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana, USA, February 2\textendash7},
pages = {1--4},
abstract = {Cross-view video understanding is an important yet underexplored area in computer vision. In this paper, we introduce a joint parsing method that takes view-centric proposals from pre-trained computer vision models and produces spatiotemporal parse graphs that represent a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of view embed rich appearance and geometry correlations and that knowledge segments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework models such correlations and constraints explicitly and generates semantic parse graphs about the scene. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}