Statistical query (SQ) algorithms are the class of learning algorithms that can be implemented using only approximate expectations of functions of the input distribution, rather than direct access to i.i.d. samples. This computational model has a number of applications, ranging from noise-tolerant learning to differential privacy, and it has been used to obtain unconditional lower bounds for conjecturally hard problems over distributions. In this talk I will give an introduction to the theory of SQ algorithms and present some recent developments in the setting of stochastic convex optimization. Our main contribution is establishing nearly optimal SQ algorithms for mean vector estimation (which includes stochastic linear programming); these serve as a basis for obtaining SQ versions of various gradient-type and polynomial-time algorithms for stochastic convex optimization. Time permitting, I will show some consequences of our results for learning halfspaces, differential privacy, and proving unconditional lower bounds on the power of convex relaxations for random constraint satisfaction problems. This talk is based on joint work with Vitaly Feldman and Santosh Vempala, to appear in SODA 2017.
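To make the model concrete, here is a minimal sketch (not from the talk) of a simulated statistical query oracle and its use for coordinate-wise mean estimation. In the formal SQ model, queries are bounded functions and the oracle STAT(τ) may return any value within additive tolerance τ of the true expectation; this toy version simply perturbs the empirical average by at most τ, and the names `stat_oracle` and `sq_mean` are illustrative, not from any library.

```python
import random

def stat_oracle(samples, phi, tau):
    """Simulated STAT(tau) oracle: returns the expectation of phi over the
    input distribution up to additive error tau. Formally the oracle's answer
    is adversarial within the tolerance; here we model that slack with
    uniform noise around the empirical average."""
    est = sum(phi(x) for x in samples) / len(samples)
    return est + random.uniform(-tau, tau)

def sq_mean(samples, d, tau):
    """Estimate the mean vector with d statistical queries, one per
    coordinate. Each answer is within tau of the empirical coordinate mean."""
    return [stat_oracle(samples, lambda x, i=i: x[i], tau) for i in range(d)]

# Each coordinate of the returned vector is within tau of the empirical mean:
points = [[1.0, 2.0], [3.0, 4.0]]   # empirical mean is [2.0, 3.0]
estimate = sq_mean(points, 2, 0.1)
```

An SQ algorithm never touches individual samples, only the oracle's answers, which is what makes the model amenable to noise-tolerance and privacy arguments.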