Author: Rahul Chandnani
Publisher: O'Reilly
Year: 2018
Video: H264 MP4 AVC, 1280x720
Audio: AAC, 44.1 kHz, 2 ch
Duration: 01:30:00
Size: 190 MB
Language: English
Machine learning models are exposed to a variety of tasks and are required to perform well on them over time. But what if the datasets change dramatically over time? For example, an image classification model may be trained on one distribution, yet over time new images arrive that follow a different distribution. Machine learning systems must adapt to accommodate these new examples without forgetting what they learned previously; if a model does not adapt, its performance can degrade drastically. This ability to learn sequentially and continuously, injecting new knowledge without forgetting previous knowledge, is known as “continuous learning”. Continuous learning is a domain of machine learning that tries to mimic the human cognitive system.
This video series explores continuous learning through nine clips:
• Continuous Learning Overview. This first video in the series introduces continuous learning and provides a comparative analysis of the human mind and machine learning models.
• Continuous Learning Properties. This second video in the series defines continuous learning and introduces the challenge of Catastrophic Forgetting (CF), which arises from dissimilarity between tasks, along with the importance of Formal Learning (FL).
• Memory Replay Continuous Learning Methods. This third video in the series explains naive methods for continuous learning, including Rehearsal- and Pseudo-Rehearsal-based methods. The main idea is to reuse old data when training on new data (a minimal replay sketch appears after this list).
• Selective Regularization Continuous Learning Methods. This fourth video in the series covers recent research on Selective Regularization based approaches and explores various loss functions. Learn about three approaches: Elastic Weight Consolidation, Learning What Not to Forget, and Continual Learning through Synaptic Intelligence (an EWC-style penalty is sketched after this list).
• Knowledge Distillation Continuous Learning Methods. This fifth video in the series covers Knowledge Distillation based approaches, where the main idea is to keep the model’s responses close to the old optimal responses while learning a new task (see the distillation sketch after this list). Learn about two approaches: Learning Without Forgetting and Incremental Learning of Object Detectors.
• Continuous Learning Use Cases. This sixth video in the series explores three scenarios where continuous learning can inject new knowledge into existing systems: Concept Drift Adaptation, Class Incremental Learning (CIL), and Sequential Multi-Task Learning. We also cover the architectures suited to each of these scenarios.
• Continuous Learning and TensorFlow. This seventh video in the series provides an overview of TensorFlow to help you follow the continuous learning example in the ninth video.
• Continuous Learning and Keras. This eighth video in the series explains Keras, an open-source, high-level neural network API. We cover both the functional and sequential APIs and show how to build a custom loss function in Keras (a custom-loss sketch appears after this list). This intro to Keras will help you better understand the continuous learning example in the ninth video.
• Continuous Learning In Practice. This ninth video in the series works through a detailed example of continuous learning, using TensorFlow and Keras to perform Selective Regularization and Knowledge Distillation and to produce accuracy-based performance plots. We end the series by discussing future work in continuous learning.
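
To make the rehearsal idea from the third video concrete, here is a minimal sketch of replay-based training. It is not taken from the course itself: the network, the synthetic old_x/new_x arrays, and the buffer size are all placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for the old and new tasks (placeholders, not course data).
old_x = np.random.rand(5000, 784).astype("float32")
old_y = np.random.randint(0, 10, 5000)
new_x = np.random.rand(5000, 784).astype("float32")
new_y = np.random.randint(0, 10, 5000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(old_x, old_y, epochs=1)  # train on the old task first

# Rehearsal: keep a small buffer of old examples and mix it into new-task
# training so the model keeps seeing the old distribution.
buffer_x, buffer_y = old_x[:2000], old_y[:2000]
mixed_x = np.concatenate([new_x, buffer_x])
mixed_y = np.concatenate([new_y, buffer_y])
model.fit(mixed_x, mixed_y, epochs=5, shuffle=True)
```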
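The Elastic Weight Consolidation approach named in the fourth video adds a quadratic penalty that anchors weights important to the old task near their old-task values. A minimal sketch follows, with the Fisher estimates and old-weight copies stubbed out so the code runs; in a real run the Fisher terms would be estimated from squared gradients on old-task data.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Assumed precomputed after old-task training: per-variable Fisher information
# estimates and a frozen copy of the old-task weights. Stubbed here with
# ones/copies so the sketch executes.
old_params = [tf.identity(v) for v in model.trainable_variables]
fisher = [tf.ones_like(v) for v in model.trainable_variables]

lam = 100.0  # regularization strength (hyperparameter)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

def ewc_penalty():
    # (lambda / 2) * sum_i F_i * (theta_i - theta_old_i)^2
    return 0.5 * lam * tf.add_n([
        tf.reduce_sum(f * tf.square(v - old))
        for v, f, old in zip(model.trainable_variables, fisher, old_params)
    ])

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True)) + ewc_penalty()
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```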
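The fifth video's distillation idea, keeping the model's responses close to the old optimal responses, can be sketched roughly as below. The frozen old_model, the temperature T, and the mixing weight alpha are illustrative assumptions, not the course's exact setup.

```python
import tensorflow as tf

def make_net():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),  # logits
    ])

model = make_net()      # copy being trained on the new task
old_model = make_net()  # frozen copy taken before new-task training
old_model.set_weights(model.get_weights())
old_model.trainable = False

T, alpha = 2.0, 0.5  # distillation temperature and mixing weight (illustrative)
ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
kld = tf.keras.losses.KLDivergence()
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(x, y):
    # The frozen old model's softened outputs act as "don't forget" targets.
    old_probs = tf.nn.softmax(old_model(x, training=False) / T)
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = (alpha * ce(y, logits)
                + (1.0 - alpha) * kld(old_probs, tf.nn.softmax(logits / T)))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```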
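Finally, for the custom loss function mentioned in the eighth video: in Keras, any callable taking (y_true, y_pred) can be passed to model.compile. The confidence-penalized cross-entropy below is one generic example, not the loss built in the course.

```python
import tensorflow as tf

def confidence_penalized_ce(y_true, y_pred):
    # Standard cross-entropy plus a small confidence penalty (illustrative
    # choice): adding sum(p * log p) discourages overconfident predictions.
    ce = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
    neg_entropy = tf.reduce_sum(y_pred * tf.math.log(y_pred + 1e-8), axis=-1)
    return ce + 0.1 * neg_entropy

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss=confidence_penalized_ce)
```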