Lectures

  1. Lecture 1: Course Overview Slides. We gave a high-level overview of the course and began studying marginal consistency: fixing marginal mean inconsistency reduces squared error (see the first sketch after this list), and we proved generalization bounds. Video here: https://www.youtube.com/watch?v=EBZEJO1RebE
  2. Lecture 2: Introduced quantiles and pinball loss. Defined marginal quantile consistency. Fixing marginal quantile consistency reduces pinball loss, by a quantifiable amount if the distribution is Lipschitz (see the second sketch after this list). Proved generalization bounds with DKW, and began studying marginal guarantees in sequential settings. Video here: https://www.youtube.com/watch?v=uuodIrCb4uc
  3. Lecture 3: Sequential prediction with marginal quantile consistency guarantees. Offline-to-online reductions for mean and marginal quantile consistency. Began calibration: defined measures of calibration error and related them, and gave (but did not yet analyze) our first algorithm for calibrating a function. Calibrating a predictor reduces its squared error by exactly its original squared calibration error (see the third sketch after this list). Video here: https://www.youtube.com/watch?v=_7MRy4OurR0
  4. Lecture 4: Mean and quantile calibration in the batch setting: we gave algorithms to post-process arbitrary functions f so that they become mean or quantile calibrated, while reducing their error (as measured by squared loss or pinball loss respectively); the last sketch after this list illustrates the quantile case. Video here: https://www.youtube.com/watch?v=ih8NBK_b-mQ
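
The sketches below are minimal numerical illustrations of the claims above, on synthetic data and with naming of my own choosing; they are not the course's code. First, the Lecture 1 claim that fixing marginal mean inconsistency reduces squared error: subtracting a predictor's marginal bias b = mean(f(x) - y) lowers its mean squared error by exactly b^2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.normal(loc=2.0, scale=1.0, size=n)       # outcomes
f = y + 0.5 + rng.normal(scale=0.3, size=n)      # a noisy predictor with bias ~0.5

bias = np.mean(f - y)                            # marginal mean inconsistency
f_fixed = f - bias                               # fix marginal mean consistency

mse_before = np.mean((f - y) ** 2)
mse_after = np.mean((f_fixed - y) ** 2)
print(mse_before - mse_after, bias ** 2)         # equal (up to float error)
```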
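Second, a sketch of the pinball loss from Lecture 2 and of fixing marginal quantile consistency with a constant shift; the `pinball` helper and the exponential toy distribution are assumptions made here for illustration.

```python
import numpy as np

def pinball(y, q, tau):
    """Pinball loss: tau*(y-q) when y >= q, else (1-tau)*(q-y)."""
    return np.where(y >= q, tau * (y - q), (1 - tau) * (q - y))

rng = np.random.default_rng(1)
tau = 0.9
y = rng.exponential(scale=1.0, size=100_000)     # outcomes
q = np.full_like(y, 1.5)                         # a constant, miscalibrated 90% "quantile"

# Fix marginal quantile consistency: shift q so that Pr[y <= q + shift] = tau empirically.
shift = np.quantile(y - q, tau)
q_fixed = q + shift

print(np.mean(y <= q_fixed))                                        # roughly tau
print(pinball(y, q, tau).mean(), pinball(y, q_fixed, tau).mean())   # loss drops here
```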
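Third, a numerical check of the Lecture 3 identity, using one natural notion of squared calibration error, E[(f(x) - E[y | f(x)])^2]: replacing each prediction value by the average outcome on its level set lowers squared error by exactly that amount.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.uniform(size=n)
y = (rng.uniform(size=n) < x).astype(float)      # Pr[y = 1 | x] = x
f = np.round(0.7 * x + 0.1, 1)                   # a coarse, miscalibrated predictor

# Calibrate: on each level set {x : f(x) = v}, predict the empirical mean of y.
levels, inv = np.unique(f, return_inverse=True)
level_means = np.bincount(inv, weights=y) / np.bincount(inv)
f_cal = level_means[inv]

sq_calib_err = np.mean((f - f_cal) ** 2)         # squared calibration error of f
print(np.mean((f - y) ** 2) - np.mean((f_cal - y) ** 2), sq_calib_err)   # equal
```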
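Finally, a sketch in the spirit of the Lecture 4 batch post-processing for quantiles (not the lecture's algorithm verbatim): patching each level set of f with the empirical tau-quantile of y on that set does not increase the average pinball loss, and here reduces it.

```python
import numpy as np

def pinball(y, q, tau):
    return np.where(y >= q, tau * (y - q), (1 - tau) * (q - y))

rng = np.random.default_rng(3)
n, tau = 200_000, 0.8
x = rng.uniform(size=n)
y = rng.exponential(scale=1.0 + x)               # heteroskedastic outcomes
f = np.round(1.0 + x, 1)                         # a coarse, miscalibrated quantile predictor

# Quantile-calibrate in batch: on each level set of f, predict the empirical tau-quantile of y.
levels, inv = np.unique(f, return_inverse=True)
f_cal = f.copy()
for i in range(len(levels)):
    mask = inv == i
    f_cal[mask] = np.quantile(y[mask], tau)

print(pinball(y, f, tau).mean(), pinball(y, f_cal, tau).mean())   # pinball loss drops
```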