Monday, December 23, 2024

How AI improves simulations with smarter sampling techniques

Imagine that you are tasked with sending a team of soccer players onto a field to assess the condition of the grass (a likely task for them, of course). If their positions are chosen at random, they may cluster in certain areas while completely neglecting others. But if you give them a strategy, such as spreading out evenly across the whole field, you will get a much more accurate picture of the grass's condition.

Now imagine needing to spread out not just in two dimensions, but in dozens or even hundreds. That is the challenge facing researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). They have developed an AI-driven approach to "low-discrepancy sampling," a method that improves simulation accuracy by distributing data points more evenly across a space.

The key innovation is the use of graph neural networks (GNNs), which allow points to "communicate" and self-optimize for better uniformity. Their approach marks a significant improvement for simulations in fields such as robotics, finance, and computational science, particularly for the complex, high-dimensional problems critical to accurate simulations and numerical computations.

“For many problems, the more evenly you can distribute the points, the more accurately you can simulate complex systems,” says T. Konstantin Rusch, lead author of the new paper and a postdoc at MIT CSAIL. “We have developed a method called Message-Passing Monte Carlo (MPMC) that can generate uniformly spaced points using geometric deep learning techniques. This further allows us to generate points that emphasize dimensions that are particularly important for a given problem, a property that matters in many applications. The graph neural networks underlying the model allow points to ‘talk’ to each other, achieving much better uniformity than previous methods.”

Their work was published in September.

Take me to Monte Carlo

The idea of Monte Carlo methods is to learn about a system by simulating it with random sampling. Sampling is the selection of a subset of a population to estimate the characteristics of the whole population. Historically, the method was used as early as the 18th century, when the mathematician Pierre-Simon Laplace used it to estimate the population of France without having to count every individual.
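
To make the idea concrete, here is a minimal, generic illustration (not code from the paper): a plain Monte Carlo estimate of pi, obtained by randomly sampling points in the unit square and counting how many land inside a quarter circle. The accuracy of the estimate depends entirely on how well the samples cover the square.

```python
# Minimal illustration (not from the paper): plain Monte Carlo estimation.
# We approximate pi by sampling random points in the unit square and counting
# how many land inside the quarter circle of radius 1.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
points = rng.random((n, 2))                    # uniform random samples in [0, 1)^2
inside = (points ** 2).sum(axis=1) <= 1.0      # points inside the quarter circle
pi_estimate = 4.0 * inside.mean()              # area ratio times 4 approximates pi
print(f"Monte Carlo estimate of pi with {n} samples: {pi_estimate:.4f}")
```

With purely random points, the error shrinks only as roughly one over the square root of the number of samples, which is exactly what smarter sampling strategies try to improve on.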

Low-discrepancy sequences, that is, sequences with high uniformity, such as the Sobol’, Halton, and Niederreiter sequences, have long been the gold standard for quasi-random sampling, which replaces random sampling with low-discrepancy sampling. They are widely used in fields such as computer graphics and computational finance, from pricing options to risk assessment, where filling a space evenly with points can lead to more accurate results.
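
The difference is easy to check with off-the-shelf tools. The snippet below, a generic illustration using SciPy's quasi-Monte Carlo module rather than the authors' code, draws the same number of 2D points with plain random sampling and with a scrambled Sobol' sequence, then compares a discrepancy measure for each set; lower values mean more even coverage.

```python
# A quick comparison using SciPy (not the paper's code): 256 points in 2D from
# plain random sampling versus a scrambled Sobol' sequence, measured with
# SciPy's built-in discrepancy function.
import numpy as np
from scipy.stats import qmc

n, d = 256, 2
rng = np.random.default_rng(0)

random_points = rng.random((n, d))                              # independent uniform samples
sobol_points = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # low-discrepancy samples

# Lower discrepancy means the points cover the unit square more uniformly.
print("random discrepancy:", qmc.discrepancy(random_points))
print("Sobol' discrepancy:", qmc.discrepancy(sobol_points))
```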

The team’s proposed MPMC framework transforms random samples into highly uniform points. It does this by processing the random samples with a GNN that minimizes a specific measure of discrepancy.
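
The sketch below illustrates the general flavor of that idea, but it is not the authors' architecture: each point gathers messages from its nearest neighbors through a small neural network and uses them to propose a new position. The layer sizes, the k-nearest-neighbor graph construction, and the sigmoid squashing back into the unit cube are all illustrative assumptions.

```python
# A minimal sketch of the general idea behind MPMC, not the authors' code:
# points exchange messages with their nearest neighbors through a small
# neural network, and the aggregated messages propose new point positions.
import torch
import torch.nn as nn

class PointMessagePassing(nn.Module):
    def __init__(self, dim: int, k: int = 8, hidden: int = 64):
        super().__init__()
        self.k = k
        # message network: takes a point and one neighbor, produces a message
        self.msg = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # update network: turns aggregated messages into new coordinates
        self.upd = nn.Sequential(nn.Linear(dim + hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) points in the unit cube
        dists = torch.cdist(x, x)                                    # pairwise distances
        idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]   # k nearest neighbors (drop self)
        neighbors = x[idx]                                           # (n, k, dim)
        center = x.unsqueeze(1).expand_as(neighbors)                 # (n, k, dim)
        messages = self.msg(torch.cat([center, neighbors], dim=-1))  # (n, k, hidden)
        aggregated = messages.mean(dim=1)                            # (n, hidden)
        out = self.upd(torch.cat([x, aggregated], dim=-1))           # proposed update
        return torch.sigmoid(out)                                    # keep points in [0, 1]^dim

# Transform a batch of random points into new candidate positions.
layer = PointMessagePassing(dim=2)
random_points = torch.rand(128, 2)
new_points = layer(random_points)  # would be trained to minimize a discrepancy loss
```

In MPMC, a network of this general kind is trained so that the output points minimize a discrepancy measure, such as the one described next.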

One of the major challenges in using AI to generate highly uniform points is that the usual way of measuring point uniformity is very slow to compute and difficult to work with. To address this, the team switched to a faster, more flexible uniformity measure called L2-discrepancy. For high-dimensional problems where this measure alone is not sufficient, they use a novel technique that focuses on important low-dimensional projections of the points. In this way they can create point sets that are better suited to specific applications.
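
The star L2-discrepancy is attractive precisely because it has a closed-form expression, known as Warnock's formula, that can be evaluated (and differentiated) directly from the point coordinates. The function below is a generic NumPy implementation of that textbook formula, not code from the paper, and it omits the projection-based refinement the team uses for high-dimensional problems.

```python
# Warnock's closed-form expression for the star L2-discrepancy of a point set
# in the unit cube, implemented generically in NumPy (standard quasi-Monte
# Carlo formula, not code from the paper).
import numpy as np

def l2_star_discrepancy(points: np.ndarray) -> float:
    """points: (n, d) array with coordinates in [0, 1]. Returns the star L2-discrepancy."""
    n, d = points.shape
    # term 1: (1/3)^d
    term1 = (1.0 / 3.0) ** d
    # term 2: -(2/n) * sum_i prod_k (1 - x_ik^2) / 2
    term2 = -(2.0 / n) * np.prod((1.0 - points ** 2) / 2.0, axis=1).sum()
    # term 3: (1/n^2) * sum_{i,j} prod_k (1 - max(x_ik, x_jk))
    pairwise_max = np.maximum(points[:, None, :], points[None, :, :])  # (n, n, d)
    term3 = np.prod(1.0 - pairwise_max, axis=2).sum() / n ** 2
    return float(np.sqrt(term1 + term2 + term3))

rng = np.random.default_rng(0)
grid = (np.arange(16)[:, None] + 0.5) / 16.0   # evenly spaced points in 1D
print("random points:       ", l2_star_discrepancy(rng.random((16, 1))))
print("evenly spaced points:", l2_star_discrepancy(grid))
```

Because the formula is a smooth function of the coordinates, it can serve directly as a training target for a neural network that moves points toward more uniform configurations.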

The team says the implications reach far beyond academia. In computational finance, for example, simulations rely heavily on the quality of the sampling points. “Random points are often inefficient with these kinds of methods, but our GNN-generated low-discrepancy points lead to higher precision,” Rusch says. “For example, we considered a classic computational finance problem in 32 dimensions, where our MPMC points outperform previous state-of-the-art quasi-random sampling methods by a factor of four to 24.”

Robots in Monte Carlo

In robotics, path and motion planning often rely on sampling-based algorithms that guide robots through real-time decision-making. The greater uniformity of MPMC points could lead to more efficient robotic navigation and real-time adaptation for applications such as autonomous driving and drone technology. “In a recent preprint, we showed that our MPMC points achieve a fourfold improvement over previous low-discrepancy methods when applied to real-world motion-planning problems in robotics,” says Rusch.

“Traditional low-discrepancy sequences were a huge advance in their time, but the world has become more complex, and the problems we now solve often exist in 10-, 20-, or even 100-dimensional spaces,” says Daniela Rus, CSAIL director and MIT professor of electrical engineering and computer science. “We needed something smarter, something that adapts as dimensionality grows. GNNs represent a paradigm shift in how low-discrepancy point sets are generated. Unlike traditional methods, where points are generated independently, GNNs allow points to ‘talk’ to each other, so the network learns to place points in a way that reduces clustering and gaps, common problems with typical approaches.”

Looking ahead, the team plans to make MPMC points even more accessible, removing the current limitation of having to train a new GNN for every fixed number of points and dimensions.

“Most applied mathematics uses continuously varying quantities, but computation typically lets us use only a finite number of points,” says Art B. Owen, professor of statistics at Stanford University, who was not involved in the research. “The century-old field of discrepancy theory uses abstract algebra and number theory to define effective sampling points. This paper uses graph neural networks to find input points with low discrepancy relative to a continuous distribution. That approach already comes very close to the best-known low-discrepancy point sets for small problems and shows great promise for a 32-dimensional integral from computational finance. We can expect this to be the first of many efforts to use neural methods to find good input points for numerical computation.”

Rusch and Rus wrote the paper with University of Waterloo researcher Nathan Kirk, DeepMind Professor of Artificial Intelligence at the University of Oxford and former CSAIL collaborator Michael Bronstein, and University of Waterloo professor of statistics and actuarial science Christiane Lemieux. Their research was supported, in part, by the AI2050 program at Schmidt Futures, Boeing, the U.S. Air Force Research Laboratory and the U.S. Air Force Artificial Intelligence Accelerator, the Swiss National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and an EPSRC Turing AI World-Leading Research Fellowship.
