Skidmore Computational Physics & ML Lab

A computational physics and machine-learning research group at Skidmore College

Blog

Here you’ll find some musings on physics, mathematics and programming.

Mathematica tips for numerical linear algebra (4)

So now we know how to multiply matrices using Mathematica’s Dot function. This also works for higher-rank tensors (more indices) such as $m_{abc} n_{cde} = (mn)_{abde}$, which we can implement with a single call to Dot: it contracts the last index of the first tensor with the first index of the second.
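For readers who want to check the index structure outside Mathematica, the same contraction can be sketched in NumPy with `tensordot` (the shapes below are illustrative, not from the post):

```python
import numpy as np

# Illustrative shapes; the contracted index c must have the same length in both.
m = np.random.rand(2, 3, 4)   # m_{abc}
n = np.random.rand(4, 5, 6)   # n_{cde}

# Contract the last index of m with the first index of n, mirroring
# what Mathematica's Dot does for higher-rank tensors.
result = np.tensordot(m, n, axes=1)

print(result.shape)  # (2, 3, 5, 6), i.e. the free indices a, b, d, e
```

The same contraction can be written explicitly as `np.einsum('abc,cde->abde', m, n)`, which makes the index bookkeeping visible at the cost of some speed.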

Mathematica tips for numerical linear algebra (3)

The take-away from this is that many clever people have worked for a very long time on optimising Mathematica’s built-in functions, such as Dot. In particular, when Dot is called, it hands the calculation over to a set of heavily optimised low-level libraries for numerical linear algebra known as the Intel MKL.
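The same lesson holds in any environment backed by an optimised BLAS. As a rough NumPy sketch (not the post's Mathematica benchmark), compare a hand-written triple loop with the library routine:

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Hand-written triple loop: correct, but runs entirely in the interpreter."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.random((100, 100))
b = rng.random((100, 100))

t0 = time.perf_counter()
slow = naive_matmul(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b           # dispatched to the installed BLAS, as Dot is to the MKL
t_blas = time.perf_counter() - t0

assert np.allclose(slow, fast)
print(f"loops: {t_loop:.3f}s, BLAS: {t_blas:.5f}s")
```

The two results agree to machine precision; only the time to compute them differs, typically by several orders of magnitude.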

Mathematica tips for numerical linear algebra (2)

The bulk of the numerical calculations that I need are basically linear algebra – matrix-matrix, vector-matrix and more exotic multiplications. All of the entries in these tensors are “machine precision”, which roughly translates to a C++ double. Mathematica can store these numbers as “packed arrays” – working with machine precision numbers rather than “exact” quantities greatly speeds up calculations (if they are written in the right way).
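Mathematica’s packed arrays have a close analogue in NumPy: a contiguous `float64` array versus an array of boxed Python objects. A minimal sketch of the distinction (the storage claim, eight bytes per entry, is what “machine precision” amounts to):

```python
import numpy as np

packed = np.arange(1000, dtype=np.float64)   # contiguous block of machine doubles
unpacked = np.arange(1000).astype(object)    # each entry is a boxed Python object

print(packed.itemsize)                # 8 bytes per entry, like a C++ double
print(packed.flags['C_CONTIGUOUS'])   # True: one flat buffer in memory

# Arithmetic on the packed array runs in compiled loops over raw doubles;
# on the object array every element goes back through the interpreter.
```

This is the same trade-off as in Mathematica: keep the numbers packed and the fast compiled paths are used; mix in an exact or symbolic quantity and the array silently unpacks.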

Mathematica tips for numerical linear algebra (1)

Calabi-Yau manifolds are special as they are Kähler manifolds which admit Ricci-flat metrics. As of the end of 2020, there are still no complete analytic expressions for such metrics on compact manifolds, other than in the somewhat trivial cases of tori. Instead, the best we can do is construct approximate numerical metrics. I became interested in this a couple of years ago when trying to learn some of the machine-learning techniques that are all the rage at the moment. But before I could apply all this fancy new technology, I first needed some data to work with. And so I found myself needing to understand how to generate these numerical metrics.