In this dissertation, I present my work on exploiting temporal information for better video understanding. Specifically, I have worked on two problems: action recognition and semantic segmentation. For action recognition, I have proposed a framework, termed hidden two-stream networks, that learns an optimal motion representation without requiring the computation of optical flow. My framework alleviates several challenges in video classification, such as learning motion representations, real-time inference, handling multiple frame rates, and generalizing to unseen actions. For semantic segmentation, I have introduced a general framework that uses video prediction models to synthesize new training samples. By scaling up the training dataset, my trained models are more accurate and robust than previous models, even without modifications to the network architectures or objective functions.
Along these lines of research, I have worked on several related problems. I performed the first investigation into depth cues for large-scale video action recognition, where the depth is estimated from the videos themselves. I further improved my hidden two-stream networks for action recognition through several strategies, including a novel random temporal skipping data sampling method, an occlusion-aware motion estimation network, and a global segment framework. For zero-shot action recognition, I proposed a pipeline that uses a large-scale training source to learn a universal representation that generalizes to more realistic cross-dataset unseen action recognition scenarios. To learn better motion information from video, I introduced several techniques to improve optical flow estimation, including guided learning, DenseNet upsampling, and occlusion-aware estimation.
I believe videos hold much untapped potential, and that temporal information is one of the most important cues for machines to better perceive the visual world.