Publications

MISO is Making a Comeback With Better Proofs and Rates. 2019.

DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate. 2019.

Revisiting Stochastic Extragradient. 2019.

Stochastic Distributed Learning with Gradient Quantization and Variance Reduction. 2019.

Distributed Learning with Compressed Gradient Differences. 2019.

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it. 2019.

SEGA: Variance Reduction via Gradient Sketching. In Advances in Neural Information Processing Systems, 2018.

A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm. 2018.

A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning. In International Conference on Machine Learning, 2018.