Title: Self-Supervised Learning — Can One Really Do Away with Data Labeling?
Abstract:
Self-supervised learning is a hot topic, spurred by two landmark papers from giants of AI and machine learning, Geoff Hinton and Yann LeCun, respectively. The implied claim is that one can train a model without using any labels and later develop a classifier that performs well using only very limited labeled data. We will discuss this exciting area, along with the current state of the narrative, empirical findings, and theoretical research.
This is joint work with XY Han (University of Chicago) and Vardan Papyan (University of Toronto).
Bio:
David Donoho has studied the exploitation of sparse signals in signal recovery, including denoising, super-resolution, and the solution of underdetermined systems of equations. His research with collaborators showed that ℓ1 penalization was an effective and even optimal way to exploit the sparsity of the object to be recovered. He coined the notion of compressed sensing, which has impacted many scientific and technical fields, including magnetic resonance imaging in medicine, where it has been implemented in FDA-approved medical imaging protocols and is already used in millions of patient MRIs.
In recent years, David and his postdocs and students have been studying large-scale covariance matrix estimation, large-scale matrix denoising, detection of rare and weak signals among many pure-noise non-signals, compressed sensing and related scientific imaging problems, and, most recently, empirical deep learning.