
Self-supervised learning & making use of unlabelled data

YOW! Data 2020

The general supervised learning problem starts with a labelled dataset, but it's common to also have a large collection of unlabelled data. Self-supervision techniques are a way to make use of this data to boost performance. In this talk we'll review some contrastive learning techniques that can either be used to provide weakly labelled data or to act as a form of pre-training for few-shot learning.
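As a rough illustration only (not material from the talk), the sketch below shows one common family of contrastive objectives: a simplified, one-directional InfoNCE / NT-Xent style loss over paired embeddings of two augmented views of the same examples. The function names, temperature value, and toy data here are assumptions for illustration.

```python
import numpy as np

def l2_normalise(x):
    # project embeddings onto the unit sphere so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_loss(z_a, z_b, temperature=0.1):
    """Simplified contrastive (InfoNCE / NT-Xent style) loss.

    z_a, z_b: (batch, dim) embeddings of two augmented views of the same
    examples; row i of z_a and row i of z_b form the positive pair, and all
    other rows of z_b act as negatives for row i of z_a.
    """
    z_a, z_b = l2_normalise(z_a), l2_normalise(z_b)
    logits = z_a @ z_b.T / temperature           # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: each row's "correct class" is its own index
    return -np.mean(np.diag(log_probs))

# toy usage: random embeddings standing in for the outputs of an encoder
# applied to two augmentations of the same batch
rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 32))
z_b = z_a + 0.05 * rng.normal(size=(8, 32))
print(contrastive_loss(z_a, z_b))
```

In practice the embeddings come from a trainable encoder and the loss is usually made symmetric across the two views; the encoder trained this way can then be fine-tuned on the small labelled set, which is the pre-training-for-few-shot use mentioned in the abstract.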

Mat Kelcey

Machine Learning Principal

ThoughtWorks

Australia

Mat is a research engineer who is currently a principal consultant for machine learning at ThoughtWorks. He previously worked on joint Google Brain/X projects in the areas of reinforcement learning for robotics and a number of natural language understanding tasks. Prior to Google he worked at Wavii and at Amazon Web Services on very large data processing systems. During his 20 years as a software engineer he has gathered broad experience covering everything from front-end development to building petabyte-scale data pipelines, working in a mix of startups and large corporations.