Author: Michael Ying Yang (Editor), Bodo Rosenhahn (Editor), Vittorio Murino (Editor)
Publisher: Academic Press
Year: 2019
Format: PDF (converted)
Pages: 422
Size: 81.9 MB
Language: English
Multimodal Scene Understanding: Algorithms, Applications and Deep Learning presents recent advances in multi-modal computing, with a focus on computer vision and photogrammetry. It provides the latest algorithms and applications that combine multiple sources of information, and describes the role of multi-sensory data and approaches to multi-modal deep learning. The book is ideal for researchers from the fields of computer vision, remote sensing, robotics, and photogrammetry, helping to foster interdisciplinary interaction and collaboration between these fields.
Researchers collecting and analyzing multi-sensory data, for example the KITTI benchmark (stereo + laser), from platforms such as autonomous vehicles, surveillance cameras, UAVs, planes, and satellites will find this book very useful.
Contains state-of-the-art developments on multi-modal computing
Focuses on algorithms and applications
Presents novel deep learning topics on multi-sensor fusion and multi-modal deep learning
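To give a concrete flavor of the multi-sensor fusion theme the book covers, here is a minimal late-fusion sketch in Python/PyTorch. It is not taken from the book; the image + laser pairing, feature dimensions, and layer sizes are illustrative assumptions. Two modality-specific encoders are concatenated and fed to a shared classification head.

# Minimal late-fusion sketch (illustrative only, not from the book):
# combine a pre-extracted image feature vector with a laser/range feature
# vector and classify the fused representation. All sizes are assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, img_dim=512, laser_dim=64, num_classes=10):
        super().__init__()
        # Per-modality encoders map each input to a common embedding size.
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.laser_encoder = nn.Sequential(nn.Linear(laser_dim, 128), nn.ReLU())
        # Fusion head operates on the concatenated embeddings.
        self.head = nn.Linear(128 + 128, num_classes)

    def forward(self, img_feat, laser_feat):
        fused = torch.cat([self.img_encoder(img_feat),
                           self.laser_encoder(laser_feat)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    model = LateFusionClassifier()
    # Random stand-ins for image and laser descriptors (batch of 4).
    logits = model(torch.randn(4, 512), torch.randn(4, 64))
    print(logits.shape)  # torch.Size([4, 10])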