Practical Learning To Learn

YOW! Data 2019

Gradient descent continues to be our main workhorse for training neural networks. One recurring problem, though, is the large amount of data required. Meta learning frames the problem not as learning from a single large dataset, but as learning how to learn from multiple related smaller datasets. In this talk we'll first discuss some key concepts around gradient descent: fine-tuning, transfer learning, joint training and catastrophic forgetting. We'll then compare them to simple meta-learning techniques that can make optimisation feasible for much smaller datasets.
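As a rough illustration of the "learning how to learn" idea, here is a minimal sketch of one simple meta-learning technique, a Reptile-style update over toy regression tasks. The abstract doesn't name a specific algorithm, so the algorithm choice, the toy tasks and all names below are illustrative assumptions, not the method presented in the talk.

```python
# Minimal Reptile-style meta-learning sketch (illustrative only; the talk
# does not specify this algorithm). Tasks are tiny regression datasets that
# share structure, so a meta-learned initialisation adapts in a few steps.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is a small regression dataset; tasks are related because
    their true weights are drawn around a common centre."""
    true_w = np.array([2.0, -1.0]) + 0.3 * rng.standard_normal(2)
    X = rng.standard_normal((10, 2))           # only 10 examples per task
    y = X @ true_w + 0.05 * rng.standard_normal(10)
    return X, y

def sgd_steps(w, X, y, lr=0.05, steps=5):
    """A few steps of plain gradient descent on squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Meta-training: nudge a shared initialisation towards each task's solution.
meta_w = np.zeros(2)
meta_lr = 0.1
for _ in range(1000):
    X, y = sample_task()
    task_w = sgd_steps(meta_w.copy(), X, y)
    meta_w += meta_lr * (task_w - meta_w)      # Reptile-style outer update

# At test time, a new small dataset needs only a handful of gradient steps
# from meta_w, rather than training from scratch on a large dataset.
X_new, y_new = sample_task()
adapted_w = sgd_steps(meta_w.copy(), X_new, y_new)
print("meta-learned init:", meta_w, "adapted weights:", adapted_w)
```

The contrast with ordinary fine-tuning is that the initialisation itself is optimised across many small related datasets, so each new task starts from a point that adapts quickly instead of relying on one large dataset.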

Mat Kelcey

Machine Learning Principal

ThoughtWorks

Australia

Mat is a research engineer who is currently a principal consultant for machine learning at ThoughtWorks. He previously worked on joint Google Brain/X projects covering both reinforcement learning for robotics and a number of natural language understanding tasks. Prior to Google he worked at Wavii and at Amazon Web Services on very large data processing systems. During his 20 years as a software engineer he has gathered broad experience covering everything from front-end development to building petabyte-scale data pipelines, working in a mix of startups and large corporations.