Recent Publications

We consider distributed optimization where the objective function is spread among different devices, each sending incremental model …
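The abstract above is truncated, so purely as an illustration of the setting it describes (and not of the paper's actual algorithm), here is a minimal NumPy sketch of a synchronous scheme in which each device sends only an incremental model update (a delta) to a server. The least-squares objective, learning rate, and all names are illustrative assumptions.

```python
# Illustrative only: devices send incremental model updates (deltas) to a server.
# This is NOT the algorithm from the paper; the problem and step size are toy choices.
import numpy as np

rng = np.random.default_rng(0)
n_devices, d, m = 4, 10, 50
A = [rng.standard_normal((m, d)) for _ in range(n_devices)]  # local data on each device
b = [rng.standard_normal(m) for _ in range(n_devices)]       # local targets

x = np.zeros(d)   # global model held by the server
lr = 1e-3

for step in range(200):
    increments = []
    for Ai, bi in zip(A, b):
        grad_i = Ai.T @ (Ai @ x - bi) / m   # local gradient of (1/2m)||Ai x - bi||^2
        increments.append(-lr * grad_i)     # the device transmits only this delta
    x += np.mean(increments, axis=0)        # server aggregates the incremental updates
```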

It is well known that many optimization methods, including SGD, SAGA, and Accelerated SGD for over-parameterized models, do not scale …

Training very large machine learning models requires a distributed computing approach, with communication of the model updates often …
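One common way to cut the cost of communicating model updates is to compress them with an unbiased random-sparsification ("rand-k") operator. The sketch below shows such a compressor; it is a generic illustration under that assumption, not necessarily the scheme analyzed in the paper, and the function name and sizes are invented for the example.

```python
# Minimal sketch of an unbiased rand-k compressor for model updates (illustrative).
import numpy as np

def rand_k(v, k, rng):
    """Keep k random coordinates of v, rescaled by d/k so that E[rand_k(v)] = v."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

rng = np.random.default_rng(1)
update = rng.standard_normal(1000)          # a dense model update
compressed = rand_k(update, k=50, rng=rng)  # 20x fewer nonzeros to transmit
print(np.count_nonzero(compressed))         # -> 50
```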

The last decade witnessed a rise in the importance of supervised learning applications involving big data and big models. Big data …

We propose a randomized first-order optimization method, SEGA (SkEtched GrAdient method), which progressively throughout its …
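For intuition, here is a minimal sketch of the coordinate-sketch special case of SEGA on a toy quadratic: a running estimate h of the gradient is updated from one coordinate per iteration, and an unbiased estimator g built from it drives the step. The problem data, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
# Coordinate-sketch SEGA on f(x) = (1/2)||Ax - b||^2 (toy example, illustrative step size).
import numpy as np

rng = np.random.default_rng(2)
m, d = 40, 20
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
grad = lambda x: A.T @ (A @ x - b)   # in practice only coordinate i would be computed

x = np.zeros(d)
h = np.zeros(d)                      # progressively built gradient estimate
alpha = 1.0 / (d * np.linalg.norm(A, 2) ** 2)

for k in range(5000):
    i = rng.integers(d)              # random coordinate "sketch"
    gi = grad(x)[i]                  # the only gradient information used this step
    g = h.copy()
    g[i] += d * (gi - h[i])          # unbiased estimator: E[g] = grad(x)
    h[i] = gi                        # update the running estimate
    x -= alpha * g
```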

We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective is written as a sum of smooth …
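The paper's asynchronous algorithm is not reproduced here; the snippet below is only a toy simulation of the setting, where each worker's gradient of its smooth summand may be evaluated at a stale copy of the model. The bounded delay, data, and step size are illustrative assumptions.

```python
# Toy simulation of asynchronous updates on f(x) = sum_i f_i(x) with stale reads (illustrative).
import numpy as np
from collections import deque

rng = np.random.default_rng(3)
n_workers, d, m = 4, 10, 30
A = [rng.standard_normal((m, d)) for _ in range(n_workers)]
b = [rng.standard_normal(m) for _ in range(n_workers)]
grad_i = lambda i, x: A[i].T @ (A[i] @ x - b[i]) / m   # gradient of smooth f_i

x = np.zeros(d)
lr = 1e-2
history = deque([x.copy()], maxlen=5)    # recent iterates, to emulate bounded staleness

for step in range(2000):
    i = rng.integers(n_workers)                      # a worker finishes "out of order"
    stale_x = history[rng.integers(len(history))]    # it read an older model copy
    x = x - lr * grad_i(i, stale_x)                  # server applies the (stale) gradient
    history.append(x.copy())
```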

Distributed learning aims at computing high-quality models by training over scattered data. This covers a diversity of scenarios, …

Recent Posts


After a successful round of reviews for ICML, I was invited to serve on the program committees of two more major ML conferences.

I will be at EPFL, visiting the Machine Learning and Optimization Laboratory led by Martin Jaggi.

Excluding visitors, we have 6 PhD students and 3 postdocs. The photo was taken just a couple of days ago at KAUST.

I was asked to be a reviewer for next year's ICML, and I was happy to accept.

On 20 October, Alibek Sailanbayev and I were ranked 71st out of more than 4,000 teams worldwide in the IEEEXtreme competition.

Contact