Konstantin Mishchenko
Peter Richtárik
Latest
Random Reshuffling: Simple Analysis with Vast Improvements
Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
First Analysis of Local GD on Heterogeneous Data
Tighter Theory for Local SGD on Identical and Heterogeneous Data
MISO is Making a Comeback With Better Proofs and Rates
A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions
Revisiting Stochastic Extragradient
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction
99% of Worker-Master Communication in Distributed Optimization Is Not Needed
Distributed Learning with Compressed Gradient Differences
A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints