Machine Learning Systems for Multimodal Affect Recognition

Posted by: literator on 20-11-2019, 20:40, Comments: 0

Category: BOOKS » PROGRAMMING

Title: Machine Learning Systems for Multimodal Affect Recognition
Author: Markus Kachele
Publisher: Springer Vieweg
Year: 2020
Pages: 198
Language: English
Format: pdf (true), djvu
Size: 10.1 MB

Markus Kachele offers a detailed view of the different steps in the affective computing pipeline, ranging from corpus design and recording, through annotation and feature extraction, to post-processing, classification of individual modalities, and fusion in the context of ensemble classifiers. He focuses on multimodal recognition of discrete and continuous emotional and medical states. In particular, the peculiarities that arise during annotation and processing of continuous signals are highlighted. Furthermore, methods are presented that allow personalization of datasets and adaptation of classifiers to new situations and persons.
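The final pipeline stage mentioned above, fusion of individual modality classifiers into an ensemble, can be illustrated with a minimal late-fusion sketch. The per-modality models and their outputs here are hypothetical placeholders, not taken from the book; only the weighted-averaging scheme itself is shown.

```python
# Hypothetical per-modality classifiers returning class probabilities
# for (negative, positive) affect. Real models would be trained on
# audio and video features, respectively.

def audio_model(sample):
    return (0.3, 0.7)

def video_model(sample):
    return (0.6, 0.4)

def late_fusion(sample, models, weights):
    """Combine per-modality probability vectors by weighted averaging."""
    n_classes = len(models[0](sample))
    fused = [0.0] * n_classes
    for model, weight in zip(models, weights):
        for i, p in enumerate(model(sample)):
            fused[i] += weight * p
    total = sum(weights)
    return tuple(f / total for f in fused)

fused = late_fusion(None, [audio_model, video_model], [0.5, 0.5])
# fused == (0.45, 0.55): the ensemble leans toward the "positive" class.
```

More elaborate schemes replace the fixed weights with learned ones, or feed the per-modality outputs into a second-stage classifier (stacking).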

With the rise of smart devices and the Internet of Things (IoT), computers continue to advance into everyday life. The common personal computer that many people use at home or at work is only one of many devices that offer computational power, and its dominant position has started to waver with the introduction of powerful portable devices. Smartphones, watches, glasses, tablets and fitness trackers can be seen everywhere, and even the head-up displays of consumer cars are equipped with powerful processors. Those devices are not just tools for work anymore. They can be used to keep us healthy, to support us with our daily tasks or to entertain us. They act as companions with which we share information about ourselves, such as our daily routines, preferences and dislikes. Interaction with those companion devices has mostly shifted towards touch, voice or gesture input, leaving the typical keyboard and mouse setup aside. The ways in which we interact with smart devices gave rise to new research questions, helping fields such as human-computer interaction (HCI) to become more prominent.

While progress in technology has not advanced to this point yet, currently available devices already show similar functionality (although in a simpler, less intuitive way). To achieve smart, personalized companions that are able to detect our mood and decipher our affective states, effort has to be put into data recording, feature extraction and the design of machine learning algorithms, but also into understanding the human in front of the system. The focus of this work is to present an in-depth look at each of the stages that are necessary to build an affect recognition system. This includes signal processing and machine learning algorithms, but also data recording and its annotation. Furthermore, as an important point, a discussion is presented of how to properly measure the prediction quality of such systems.
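On measuring prediction quality: for continuous affect recognition, a commonly used metric in the field is the concordance correlation coefficient (CCC), which penalizes both poor correlation and systematic offset between predicted and annotated traces. Whether the book uses exactly this metric is an assumption here; the sketch below is a minimal, self-contained implementation.

```python
def ccc(x, y):
    """Concordance correlation coefficient between two equal-length sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variance and covariance.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# A prediction identical to the annotation scores 1.0; a constant offset
# lowers the score even though the Pearson correlation would still be 1.0.
print(ccc([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # 1.0
```

This sensitivity to offset and scale is why CCC is often preferred over plain correlation when evaluating continuous valence or arousal predictions.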

Download Machine Learning Systems for Multimodal Affect Recognition



