# Recent Publications

### A Self-supervised Approach to Hierarchical Forecasting with Applications to Groupwise Synthetic Controls

When forecasting time series with a hierarchical structure, the existing state of the art is to forecast each time series …

### MISO is Making a Comeback With Better Proofs and Rates

MISO, also known as Finito, was one of the first stochastic variance reduced methods discovered, yet its popularity is fairly low. Its …

### DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate

In this paper, we consider distributed algorithms for solving the empirical risk minimization problem under the master/worker …

### A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions

We consider the problem of minimizing the sum of three convex functions: i) a smooth function $f$ in the form of an expectation or a …

We consider a new extension of the extragradient method that is motivated by approximating implicit updates. Since in a recent work …

### Stochastic Distributed Learning with Gradient Quantization and Variance Reduction

We consider distributed optimization where the objective function is spread among different devices, each sending incremental model …

### 99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it

Many popular distributed optimization methods for training machine learning models fit the following template: a local gradient …

### Distributed Learning with Compressed Gradient Differences

Training very large machine learning models requires a distributed computing approach, with communication of the model updates often …

### A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints

The last decade witnessed a rise in the importance of supervised learning applications involving big data and big models. Big data …

### SEGA: Variance Reduction via Gradient Sketching

We propose a randomized first-order optimization method, SEGA (SkEtched GrAdient method), which progressively throughout its …

# Recent Posts

### I am at Simons Institute until 18 July

From 15 to 18 July I'm attending the Frontiers of Deep Learning workshop at the Simons Institute.

### I visited Matthias Ehrhardt at the University of Bath

From 17 to 28 June I visited Matthias Ehrhardt at the University of Bath.

### I was at ICML 2019 presenting the work I did at Amazon

Our work on time series was accepted as a poster to the Time Series Workshop at ICML and I presented it together with Federico Vaggi.

### I am a member of the program committee for NeurIPS and UAI 2019

After a successful round of reviews for ICML, I was invited to serve on the program committees of two more major ML conferences.

### I'm visiting Martin Jaggi from 18 February to 15 March

I will be at EPFL, visiting the Machine Learning and Optimization Laboratory led by Martin Jaggi.