Konstantin Mishchenko
Latest
Random Reshuffling: Simple Analysis with Vast Improvements
Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
Adaptive Gradient Descent without Descent
Sinkhorn Algorithm as a Special Case of Stochastic Mirror Descent
First Analysis of Local GD on Heterogeneous Data
Tighter Theory for Local SGD on Identical and Heterogeneous Data
A Self-supervised Approach to Hierarchical Forecasting with Applications to Groupwise Synthetic Controls
MISO is Making a Comeback With Better Proofs and Rates
DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate
A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions
Revisiting Stochastic Extragradient
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction
99% of Worker-Master Communication in Distributed Optimization Is Not Needed
Distributed Learning with Compressed Gradient Differences
A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints
SEGA: Variance Reduction via Gradient Sketching
A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm