Learning to Understand Dynamic Scenes
Dynamic scene understanding encompasses many of the classic problems in Computer Vision. Semantically labelling every pixel in a video sequence is a major step towards understanding the world around us, and is key to applications such as autonomous driving and human-robot interaction. In this talk, I will present several works that bring us closer to solving the problem of dynamic scene understanding, focusing especially on the role of Deep Learning in video analysis. I will present our recent works on multiple object tracking, video object segmentation and visual localisation. Finally, I will briefly discuss future research plans and the accepted socialMaps project.
Prof. Laura Leal-Taixé leads the Dynamic Vision and Learning group at the Technical University of Munich, Germany. She received her Bachelor's and Master's degrees in Telecommunications Engineering from the Technical University of Catalonia (UPC), Barcelona. She completed her Master's thesis at Northeastern University, Boston, USA, and received her PhD degree (Dr.-Ing.) from the Leibniz University Hannover, Germany. During her PhD she spent a year visiting the Vision Lab at the University of Michigan, USA. She also spent two years as a postdoc at the Institute of Geodesy and Photogrammetry of ETH Zurich, Switzerland, and one year at the Technical University of Munich. In 2017, she won the Sofja Kovalevskaja Award of 1.65 million euros from the prestigious Humboldt Foundation for her project socialMaps. Her research interests are dynamic scene understanding, in particular multiple object tracking and segmentation, as well as machine learning for video analysis.