Katyusha: The First Direct Acceleration of Stochastic Gradient Methods, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp.1200-1205, 2017.
Variance Reduction for Faster Non-Convex Optimization, Proceedings of the 33rd International Conference on Machine Learning, vol.48, pp.699-707, 2016.
UCI Machine Learning Repository, 2007.
LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology (TIST), vol.2, p.27, 2011.
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, Advances in Neural Information Processing Systems 27, pp.1646-1654, 2014. URL: https://hal.archives-ouvertes.fr/hal-01016843
Optimal mini-batch and step sizes for SAGA, International Conference on Machine Learning, 2019. URL: https://hal.archives-ouvertes.fr/hal-02005431
SGD: General analysis and improved rates. URL: https://hal.archives-ouvertes.fr/hal-02365318
Stochastic Quasi-Gradient Methods: Variance Reduction via Jacobian Sketching, 2018.
Stop Wasting My Gradients: Practical SVRG, Advances in Neural Information Processing Systems 28, pp.2251-2259, 2015.
Nonconvex Variance Reduced Optimization with Arbitrary Sampling.
Accelerating stochastic gradient descent using predictive variance reduction, Advances in Neural Information Processing Systems, pp.315-323, 2013.
Semi-stochastic gradient descent methods, Frontiers in Applied Mathematics and Statistics, vol.3, p.9, 2017.
Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting, IEEE Journal of Selected Topics in Signal Processing, vol.10, no.2, pp.242-255, 2016.
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop, 2019.
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization, Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), pp.608-617, 2017.
Introductory Lectures on Convex Optimization: A Basic Course, vol.87, 2013.
Confidence level solutions for stochastic programming, Automatica, vol.44, pp.1559-1568, 2008.
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.2613-2621, 2017.
Stochastic Proximal Gradient Descent with Acceleration Techniques, Advances in Neural Information Processing Systems 27, pp.1574-1582, 2014.
Scikit-learn: Machine Learning in Python, Journal of Machine Learning Research, vol.12, pp.2825-2830, 2011. URL: https://hal.archives-ouvertes.fr/hal-00650905
Stochastic Variance Reduction for Nonconvex Optimization, Proceedings of the 33rd International Conference on Machine Learning, vol.48, pp.314-323, 2016.
A convergence theorem for nonnegative almost supermartingales and some applications, pp.111-135, 1985.
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets, Advances in Neural Information Processing Systems 25, pp.2663-2671, 2012. URL: https://hal.archives-ouvertes.fr/hal-00674995
Stochastic dual coordinate ascent methods for regularized loss minimization, Journal of Machine Learning Research, vol.14, pp.567-599, 2013.