Prediction problems can be formulated as repeated games, without probabilistic assumptions. In each round of the prediction game, a strategy makes a decision, then observes an outcome and pays a loss. The aim is to minimize the regret, which is the amount by which the total loss incurred exceeds the total loss of the best decision in hindsight. We are interested in the minimax optimal strategy, which minimizes the regret against the worst-case outcome sequence. We focus on two cases where the optimal strategy is simple to compute. The first involves prediction with log loss (a formulation of sequential probability density estimation, equivalent to sequential compression, coding, gambling, and investment problems): we present a characterization of problems for which the optimal strategy corresponds to Bayesian prediction. The second is the sequential least squares game, where decisions and outcomes lie in a subset of a Hilbert space and the loss is squared distance. When the outcomes are the vertices of a simplex, this is the 'Brier game,' studied for the calibration of sequential probability forecasts; when the outcome set is convex, the game is related to sequential Gaussian density estimation. We show that the value of the game depends only on the radius of the smallest ball containing the convex outcome set, and that the minimax optimal strategy is a simple shrinkage strategy.
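To make the regret notion concrete, here is a minimal sketch of a one-dimensional sequential least squares game. The "follow the leader" strategy used below (predict the mean of past outcomes) is purely illustrative and is not the minimax shrinkage strategy discussed above; the outcome distribution, horizon, and initial prediction are all arbitrary choices for the example.

```python
import random

# Sequential least squares game in one dimension (illustrative sketch):
# each round, the learner predicts, observes an outcome in [0, 1], and
# pays squared loss. Regret is the total loss incurred minus the loss of
# the best fixed decision in hindsight (here, the empirical mean).
random.seed(0)
outcomes = [random.random() for _ in range(100)]

total_loss = 0.0
seen = []
for y in outcomes:
    # "Follow the leader": predict the mean of past outcomes
    # (0.5 is an arbitrary first-round prediction).
    pred = sum(seen) / len(seen) if seen else 0.5
    total_loss += (pred - y) ** 2
    seen.append(y)

# Best fixed decision in hindsight for squared loss is the overall mean.
best_fixed = sum(outcomes) / len(outcomes)
best_loss = sum((best_fixed - y) ** 2 for y in outcomes)
regret = total_loss - best_loss
print(round(regret, 3))
```

A minimax strategy would instead choose each prediction to make the worst-case value of this regret as small as possible.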
Based on joint work with Fares Hedayati, and with Wouter Koolen and Alan Malek.