Online Seminar: Random Matrix Theory and Applications

Introduction:

This seminar covers topics in both the theoretical and applied aspects of random matrix theory and related fields.

The meetings are held on Zoom, typically monthly on Thursdays; all times are given in Beijing Time (UTC/GMT +8 hours).

To join the mailing list, please send an email to rmta-seminar+subscribe AT googlegroups DOT com.
You will be asked to log in to your Google account to complete the subscription.
If you do not have a Google account, please contact rmta.seminar AT gmail DOT com instead.

Organizers:

Zhigang Bao (University of Hong Kong)
Zhenyu Liao (Huazhong University of Science & Technology)
Yuanyuan Xu (AMSS, Chinese Academy of Sciences)
Lun Zhang (Fudan University)

Upcoming seminars:

Year 2025:

  • Date: 16:00-17:00 (Beijing Time), October 2, 2025
    Speaker: Yan Fyodorov (King's College London)

    Title: Kac-Rice inspired approach to non-Hermitian random matrices
    Abstract: We will discuss a method of analyzing the joint probability density (JPD) of an eigenvalue $z$ and the associated right eigenvector ${\bf v}$ (normalized with ${\bf v}^*{\bf v}=1$) for non-Hermitian random matrices of a given size $N\times N$. To illustrate the utility of the general method, I will derive and analyze the JPD for two particular examples: (i) a one-parameter family of matrices interpolating between the complex and real Ginibre ensembles, and (ii) a complex Ginibre matrix additively perturbed by a fixed matrix. In particular, in the former case I will discuss the formation of an excess of eigenvalues in the vicinity of the real axis on approaching the real Ginibre limit, which eventually gives rise to a new scaling regime of "weak non-reality" as $N\to \infty$. In the second case, after providing the general JPD, I will briefly discuss the particular case of the non-Hermitian Rosenzweig-Porter model, which has recently attracted considerable interest in the physics literature. If time allows, I will discuss a generalization of the proposed method which is expected to be suitable for the analysis of JPDs involving both left and right eigenvectors.
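    A minimal numerical sketch (illustrative only; the interpolation below is a generic stand-in for the family in the talk, with tau=1 giving the complex Ginibre ensemble and tau=0 the real one), showing the build-up of eigenvalues near the real axis:

        import numpy as np

        def ginibre_interp(n, tau, rng):
            # tau = 1: complex Ginibre; tau = 0: real Ginibre.
            # Entries are scaled so the spectral radius stays near 1.
            x = rng.standard_normal((n, n))
            y = rng.standard_normal((n, n))
            return (x + 1j * tau * y) / np.sqrt(n * (1 + tau**2))

        rng = np.random.default_rng(0)
        n = 400
        for tau in (1.0, 0.3, 0.05):
            evs = np.linalg.eigvals(ginibre_interp(n, tau, rng))
            frac = np.mean(np.abs(evs.imag) < 1.0 / np.sqrt(n))
            print(f"tau={tau:4.2f}: fraction of eigenvalues within 1/sqrt(n) of the real axis = {frac:.3f}")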




  • Date: 17:00-18:00 (Beijing Time), November 6, 2025
    Speaker: Anirban Basak (Tata Institute of Fundamental Research)

    Title: TBA
    Abstract: TBA




Past seminars:

Year 2025:

  • Date: 9:00-10:00 am (Beijing Time), April 17, 2025
    Speaker: Lucas Benigni (Université de Montréal)

    Title: Spectrum of the Neural Tangent Kernel in a quadratic scaling
    Abstract: Despite their surplus of parameters, modern deep learning models often generalize well, a phenomenon exemplified by the "double descent curve." While this behavior is theoretically well understood for problems such as ridge regression under linear scaling of dimensions, intriguing phenomena emerge under quadratic scaling, where sample size equals parameter count. In this presentation, we study the eigenvalues of the Neural Tangent Kernel, a matrix model pertinent to wide neural networks trained via gradient descent, within this quadratic regime.
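    A minimal sketch of the empirical spectrum in a quadratic regime (not from the talk; it uses the standard closed-form infinite-width NTK of a one-hidden-layer ReLU network, with sample size n proportional to d^2 as a stand-in for the scaling studied there):

        import numpy as np

        def ntk_relu(u):
            # Infinite-width NTK of a one-hidden-layer ReLU network for inputs
            # on the unit sphere, as a function of the inner product u = <x_i, x_j>.
            u = np.clip(u, -1.0, 1.0)
            k0 = (np.pi - np.arccos(u)) / np.pi
            k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi
            return u * k0 + k1

        rng = np.random.default_rng(0)
        d = 40
        n = d * (d + 1) // 2                            # quadratic scaling: n ~ d^2 / 2
        X = rng.standard_normal((n, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)   # place data on the sphere
        evals = np.linalg.eigvalsh(ntk_relu(X @ X.T))
        print("n =", n, "| five largest NTK eigenvalues:", np.round(evals[-5:], 3))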




  • Date: 9:00-10:00 am (Beijing Time), April 24, 2025
    Speaker: Jiaoyang Huang (University of Pennsylvania)

    Title: Ramanujan Property and Edge Universality of Random Regular Graphs
    Abstract: Extremal eigenvalues of graphs are of particular interest in theoretical computer science and combinatorics. Specifically, the spectral gap—the difference between the largest and second-largest eigenvalues—measures the expansion properties of a graph. In this talk, I will focus on random d-regular graphs.
    I will begin by providing background on the eigenvalues of random d-regular graphs and their connections to random matrix theory. In the second part of the talk, I will discuss our recent results on eigenvalue rigidity and edge universality for these graphs. Eigenvalue rigidity asserts that, with high probability, each eigenvalue concentrates around its classical location as predicted by the Kesten-McKay distribution. Edge universality states that the second-largest eigenvalue and the smallest eigenvalue of random d-regular graphs converge to the Tracy-Widom distribution from the Gaussian Orthogonal Ensemble. Consequently, approximately 69% of d-regular graphs are Ramanujan graphs. This is based on joint work with Theo McKenzie and Horng-Tzer Yau.
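    A quick empirical check of the Ramanujan property (illustrative only; for fixed small n the observed fraction need not match the asymptotic ~69%), assuming the networkx package is available:

        import numpy as np
        import networkx as nx

        d, n, trials = 3, 500, 20
        bound = 2 * np.sqrt(d - 1)            # Ramanujan bound 2*sqrt(d-1)
        ramanujan = 0
        for seed in range(trials):
            G = nx.random_regular_graph(d, n, seed=seed)
            evs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
            # evs[-1] = d is the trivial eigenvalue; the graph is Ramanujan if
            # all other eigenvalues lie in [-2*sqrt(d-1), 2*sqrt(d-1)].
            if max(evs[-2], -evs[0]) <= bound:
                ramanujan += 1
        print(f"{ramanujan}/{trials} sampled {d}-regular graphs on {n} vertices were Ramanujan")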




  • Date: 9:00-10:00 am (Beijing Time), May 22, 2025
    Speaker: Zhou Fan (Yale University)

    Title: Kronecker-product random matrices and a matrix least squares problem
    Abstract: We study the eigenvalue distribution and resolvent of a Kronecker-product random matrix model, which has a mean-field structure in each Kronecker factor but not a global mean-field structure over all variables. Our main results are a quantitative approximation for the Stieltjes transform, a deterministic equivalent approximation for the resolvent, and sharp estimates for entries and blocks of the resolvent on global spectral scales. Our study is motivated by consideration of a matrix-valued least-squares optimization problem, where the dimension of the optimization variable is comparable to the dimensions of the random input matrices of the problem. Our analyses imply an asymptotic characterization of the optimal solution and its associated optimal objective value. This is joint work with Renyuan Ma.
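    A toy numerical sketch (the ensemble below is a generic Kronecker-product model chosen for illustration, not necessarily the one analyzed in the talk), estimating the Stieltjes transform from the empirical spectrum:

        import numpy as np

        def goe(n, rng):
            # Real symmetric Gaussian (GOE-type) matrix, spectrum roughly in [-2, 2]
            g = rng.standard_normal((n, n))
            return (g + g.T) / np.sqrt(2 * n)

        rng = np.random.default_rng(1)
        n = 40                                # factors are n x n, so H is n^2 x n^2
        H = sum(np.kron(goe(n, rng), goe(n, rng)) for _ in range(3)) / np.sqrt(3)
        evs = np.linalg.eigvalsh(H)
        z = 0.5 + 0.1j                        # spectral parameter in the upper half-plane
        print("empirical Stieltjes transform m(z) ~", np.round(np.mean(1.0 / (evs - z)), 4))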




  • Date: 9:30-10:30 am (Beijing Time), June 12, 2025
    Speaker: Elliot Paquette (McGill University)

    Title: From magic squares, through random matrices, and to the multiplicative chaos
    Abstract: In 2004, motivated by connections of random matrix theory to number theory, Diaconis and Gamburd showed a fascinating connection between the enumeration problem of magic squares (squares filled with integers subject to row and column sum constraints) and the moments of the ‘secular coefficients’ of random matrices, when the size of the matrix tends to infinity. These are the coefficients in the monomial expansion of the characteristic polynomial, or equivalently, the elementary symmetric polynomials of the eigenvalues of the random matrix. It turns out that this characteristic polynomial has a limit as the matrix size tends to infinity: it converges to a random fractal, the holomorphic multiplicative chaos. We describe this process on the unit circle, show how it can be connected even more strongly to random matrices, and explain how magic square combinatorics are a type of ‘signature’ of this holomorphic multiplicative chaos. We’ll review some open questions about these objects, and discuss some links between this and other stochastic processes such as the Gaussian multiplicative chaos, the circular beta-ensemble and random multiplicative functions.
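    A small Monte Carlo sketch of secular coefficients (illustrative; it samples Haar unitaries via the standard QR-with-phase-fix construction and estimates a second moment, which the Diaconis-Gamburd moment formula predicts to equal 1 for k <= N):

        import numpy as np

        def haar_unitary(n, rng):
            # Haar-distributed unitary: QR of a complex Ginibre matrix, phases fixed
            z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            d = np.diag(r)
            return q * (d / np.abs(d))

        rng = np.random.default_rng(2)
        n, trials, k = 30, 1000, 3
        acc = 0.0
        for _ in range(trials):
            evs = np.linalg.eigvals(haar_unitary(n, rng))
            c = np.poly(evs)                  # c[k] = (-1)^k e_k(eigenvalues)
            acc += abs(c[k]) ** 2             # |k-th secular coefficient|^2
        print(f"E|c_{k}|^2 ~ {acc / trials:.3f}")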




  • Date: 9:00-10:00 am (Beijing Time), July 31, 2025
    Speaker: Vadim Gorin (UC Berkeley)

    Title: How weak are weak factors? Uniform inference for signal strength in signal plus noise models
    Abstract: We discuss four classical signal-plus-noise models: the sum of a Wigner matrix and a low-rank perturbation, spiked sample covariance matrices, the factor model, and canonical correlation analysis with low-rank dependencies. Our objective is to construct confidence intervals for the signal strength that are uniformly valid across all regimes: strong, weak, and critical signals. We demonstrate that traditional Gaussian approximations fail in the critical regime. Instead, we introduce a universal transitional distribution that enables valid inference across the entire spectrum of signal strengths. A crucial role is played by the (stochastic) Airy-Green function, which we will define and examine.
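    A classical illustration of why the critical regime is delicate (not from the talk): the BBP-type transition for a rank-one spiked GOE matrix, whose top eigenvalue separates from the bulk edge only when the signal strength theta exceeds 1:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1000
        v = np.ones(n) / np.sqrt(n)               # unit-norm spike direction
        g = rng.standard_normal((n, n))
        W = (g + g.T) / np.sqrt(2 * n)            # GOE matrix, bulk edge near 2
        for theta in (0.5, 1.0, 2.0):
            top = np.linalg.eigvalsh(W + theta * np.outer(v, v))[-1]
            pred = theta + 1.0 / theta if theta > 1 else 2.0   # BBP prediction
            print(f"theta={theta}: top eigenvalue {top:.3f}, prediction {pred:.3f}")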




  • Date: 9:00-10:00 am (Beijing Time), September 18, 2025
    Speaker: Courtney Paquette (McGill University)

    Title: High-dimensional Optimization in Machine Learning with Applications to Scaling Limits and Compute-Optimal Neural Scaling Laws
    Abstract: Given the massive scale of modern ML models, we now only get a single shot to train them effectively. This restricts our ability to test multiple architectures and hyper-parameter configurations. Instead, we need to understand how these models scale, allowing us to experiment with smaller problems and then apply those insights to larger-scale models. In this talk, I will present a framework for analyzing scaling laws in stochastic learning algorithms using a power-law random features model, leveraging high-dimensional probability and random matrix theory. I will then use this scaling law to address the compute-optimal question: How should we choose model size and hyper-parameters to achieve the best possible performance in the most compute-efficient manner? Additionally, I will introduce a scaling limit commonly seen in ML optimization algorithms, which has origins in statistical physics, and I will highlight several promising research directions in scaling laws that remain underexplored but offer significant potential.
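    A toy power-law random features experiment (all names and scalings here are illustrative stand-ins, not the model from the talk), tracking test risk as the feature count p varies relative to the sample size n:

        import numpy as np

        rng = np.random.default_rng(4)
        d, alpha, n = 2000, 1.5, 400
        lam = np.arange(1, d + 1, dtype=float) ** -alpha   # power-law spectrum
        w_star = rng.standard_normal(d) * np.sqrt(lam)     # target aligned with it

        def test_risk(p):
            # Fit min-norm least squares on p random features; report test risk
            X = rng.standard_normal((n, d)) * np.sqrt(lam)
            F = rng.standard_normal((d, p)) / np.sqrt(d)   # random feature map
            a = np.linalg.lstsq(X @ F, X @ w_star, rcond=None)[0]
            Xt = rng.standard_normal((4000, d)) * np.sqrt(lam)
            return np.mean((Xt @ (F @ a - w_star)) ** 2)

        for p in (50, 200, 400, 800):
            print(f"p={p:4d}: test risk ~ {test_risk(p):.4f}")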



