Generative models have achieved remarkable success across applications, from image and video generation to music composition and language modeling. Yet theory has lagged behind practice: we still lack a solid theoretical understanding of the capabilities and limitations of generative models, a gap that can seriously affect how they are developed and used down the line.
One of the main challenges is sampling efficiently from complex data distributions, since traditional sampling methods often struggle with the high-dimensional, intricate data commonly encountered in modern AI applications.
Now, a team of scientists led by Florent Krzakala and Lenka Zdeborová at EPFL has investigated the efficiency of modern neural network-based generative models. The study, published in PNAS, compares these contemporary methods with traditional sampling techniques, focusing on a class of probability distributions related to spin glasses and statistical inference problems.