Practical Learning To Learn

YOW! Data 2019

Gradient descent continues to be our main workhorse for training neural networks. One recurring problem, though, is the large amount of data it requires. Meta-learning frames the problem not as learning from a single large dataset, but as learning how to learn from multiple related, smaller datasets. In this talk we'll first discuss some key concepts around gradient descent: fine-tuning, transfer learning, joint training and catastrophic forgetting, and then compare them with simple meta-learning techniques that can make optimisation feasible on much smaller datasets.
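
To make the "learning to learn" framing concrete, below is a minimal sketch of one simple meta-learning technique, Reptile (Nichol et al., 2018): an initialisation is trained across many small related tasks so that plain gradient descent adapts to a new task in just a few steps. The sine-wave task family, network size and hyperparameters here are illustrative assumptions, not details from the talk.

```python
# A minimal sketch of Reptile, one of the simple meta-learning techniques
# in this family: instead of training one model on a single large dataset,
# we learn an initialisation from many small related tasks so that a few
# gradient steps suffice on a new task. Task family and hyperparameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is a sine wave with random amplitude and phase."""
    amplitude = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    return lambda x: amplitude * np.sin(x + phase)

def predict(weights, x):
    """Tiny MLP: 1 -> 64 -> 1 with tanh activation."""
    w1, b1, w2, b2 = weights
    return np.tanh(x @ w1 + b1) @ w2 + b2

def grads(weights, x, y):
    """Manual backprop for mean-squared error on the tiny MLP."""
    w1, b1, w2, b2 = weights
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    d_pred = 2.0 * (pred - y) / len(x)
    d_w2 = h.T @ d_pred
    d_b2 = d_pred.sum(axis=0)
    d_h = (d_pred @ w2.T) * (1.0 - h ** 2)
    return [x.T @ d_h, d_h.sum(axis=0), d_w2, d_b2]

def inner_sgd(weights, task, steps=5, lr=0.02, batch=10):
    """Ordinary gradient descent on one small task-specific dataset."""
    weights = [w.copy() for w in weights]
    for _ in range(steps):
        x = rng.uniform(-5.0, 5.0, size=(batch, 1))
        y = task(x)
        weights = [w - lr * g for w, g in zip(weights, grads(weights, x, y))]
    return weights

# Meta-initialisation to be learned across tasks.
weights = [rng.normal(0, 0.1, (1, 64)), np.zeros(64),
           rng.normal(0, 0.1, (64, 1)), np.zeros(1)]

meta_lr = 0.1
for it in range(1000):
    task = sample_task()
    adapted = inner_sgd(weights, task)
    # Reptile update: nudge the initialisation toward the adapted weights.
    weights = [w + meta_lr * (a - w) for w, a in zip(weights, adapted)]

# At test time, a handful of gradient steps adapts to an unseen task.
test_task = sample_task()
fast = inner_sgd(weights, test_task, steps=10)
x_test = np.linspace(-5, 5, 50).reshape(-1, 1)
print("test MSE:", float(np.mean((predict(fast, x_test) - test_task(x_test)) ** 2)))
```

The outer loop never needs a large dataset for any single task; it only needs many small, related ones, which is the trade-off the talk contrasts with fine-tuning and joint training.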