Introduction to Einstein Notation¶
This notation makes it possible to express an extraordinary number of operations concisely. This section is not necessary for what follows, but it is an interesting intellectual exercise.
The basic idea is to sum over an index whenever it appears twice in a term and is not defined elsewhere. Thus:
$$ A_{i,i} \quad \textrm{means} \quad \sum_{i=1}^N A_{i,i} \; \textrm{(the trace of the matrix)}$$
If we look at the matrix product $A\, B$, for each entry $(i,j)$ of the result we have:
$$ C_{i,j} = A_{i,k} \, B_{k,j} \quad \textrm{i.e.} \quad C_{i,j} = \sum_{k=1}^N A_{i,k} \, B_{k,j} $$
Since the full name of the Einstein notation is the Einstein summation convention, the Numpy function is called einsum
. Here is how it works for our first two examples:
import numpy as np
A = np.arange(9).reshape(3,3)
print("Trace 'ii' : ", np.einsum('ii', A), '\n') # 0 + 4 + 8 = 12
print("Matrix multiplication A A 'ij,jk->ik' :")
print(np.einsum('ij,jk->ik', A, A)) # note that I named the indices differently
Trace 'ii' :  12 

Matrix multiplication A A 'ij,jk->ik' :
[[ 15  18  21]
 [ 42  54  66]
 [ 69  90 111]]
A.dot(A) # we check
array([[ 15,  18,  21],
       [ 42,  54,  66],
       [ 69,  90, 111]])
Note that the arguments of einsum
are:
- the summation rule, as a string, with a comma separating each component
- the components to which the rule applies
We can go a little further with Numpy. Here are all the rules that einsum
uses:
Basic and additional rules¶
- a repeated index implies summation over that index, unless that index also appears in the result (see the example of the diagonal of A below for the exception)
- an index repeated from one component to another implies that the referenced elements are multiplied together (see the example of the matrix product)
- a letter omitted from the result (after `->`) implies summation over that index (see the example of summing the elements of a vector below)
- if you leave out the arrow, einsum inserts it for you, with all the non-repeated indices on the right, arranged in alphabetical order (see the example of the transpose below)
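To make these rules concrete, here is a small sketch checking the exception and the last two rules (A as above, plus a vector v introduced here for illustration):

```python
import numpy as np

A = np.arange(9).reshape(3, 3)
v = np.arange(4)

# exception to rule 1: 'ii->i' repeats i but keeps it in the result,
# so there is no summation -> the diagonal of A
print(np.einsum('ii->i', A))       # [0 4 8]

# rule 3: a letter omitted after -> is summed over
print(np.einsum('i->', v))         # 6, same as v.sum()

# rule 4: no arrow -> the free indices go to the right in alphabetical
# order, so 'ji' is read as 'ji->ij', i.e. the transpose
print(np.einsum('ji', A))          # same as A.T
```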
Here is a list of operations taken from the blog of Dr. Goulu:
| einsum Signature | Numpy Equivalent | Description |
|---|---|---|
| `('i->', v)` | `sum(v)` | sum of the values of vector v |
| `('i,i->i', u, v)` | `u * v` | element-wise multiplication of vectors u and v |
| `('i,i', u, v)` | `inner(u, v)` | dot product of u and v |
| `('i,j', u, v)` | `outer(u, v)` | dyadic (outer) product of u and v |
| `('ij', A)` | `A` | returns matrix A |
| `('ji', A)` | `A.T` | transpose of A |
| `('ii->i', A)` | `diag(A)` | diagonal of A |
| `('ii', A)` | `trace(A)` | sum of the diagonal of A |
| `('ij->', A)` | `sum(A)` | sum of the values of A |
| `('ij->j', A)` | `sum(A, axis=0)` | sum of the columns of A |
| `('ij->i', A)` | `sum(A, axis=1)` | sum of the rows of A |
| `('ij,ij->ij', A, B)` | `A * B` | element-wise multiplication of A and B |
| `('ij,jk', A, B)` | `dot(A, B)` | matrix product of A and B |
| `('ij,kj->ik', A, B)` | `inner(A, B)` | inner product of A and B (rows of A with rows of B) |
| `('ij,jk->ijk', A, B)` | `A[:, :, None] * B` | each element of a row of A multiplied by the corresponding row of B |
| `('ij,kl->ijkl', A, B)` | `A[:, :, None, None] * B` | each value of A multiplied by B |
The `None`
in the last two lines is a way to reshape an array. So `np.arange(6)`
is a 1-dimensional array, `np.arange(6)[:]`
is the same 1-dimensional array, while `np.arange(6)[:, None]`
is a 2-dimensional array of shape 6 × 1,
and `np.arange(6)[None, :, None]`
has 3 dimensions: 1 × 6 × 1.
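As a sketch of how `None` interacts with broadcasting, here is the last table row computed both ways (the small arrays are made up for illustration):

```python
import numpy as np

u = np.arange(6)
print(u.shape)                  # (6,)
print(u[:, None].shape)         # (6, 1)
print(u[None, :, None].shape)   # (1, 6, 1)

# last table row: every value of A multiplied by B
A = np.arange(4).reshape(2, 2)
B = np.arange(6).reshape(2, 3)
C = A[:, :, None, None] * B     # (2, 2, 1, 1) * (2, 3) broadcasts to (2, 2, 2, 3)
print(C.shape)                  # (2, 2, 2, 3)
print(np.array_equal(C, np.einsum('ij,kl->ijkl', A, B)))  # True
```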
Practical Application¶
We will compare the performance of einsum
with that of the corresponding Numpy functions. To do this, compute
- the cube of each element of a vector
- the cube of a square matrix, $A^3$
with einsum
and without. Compare the execution speed in both cases with 10,000 elements.
# your turn
Solution:
u = np.random.random(10000)
%timeit u*u*u
%timeit np.einsum('i,i,i->i', u,u,u)
11.5 µs ± 112 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
15.2 µs ± 131 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
A = u.reshape(100,100)
%timeit A.dot(A).dot(A)
%timeit np.einsum('ij,jk,kl->il', A, A, A)
138 µs ± 9.48 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
134 ms ± 1.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
We observe that einsum
is slower (*). But looking around the web, we see that this has not always been the case and that it depends on the version of the Numpy library. In conclusion, if we want performance, we should benchmark our code beforehand to choose the fastest method.
(*) Slightly slower for the vector computation, but 1000 times slower on my laptop for the matrix product: all my processor cores run at 100% for A.dot(A)
, while the einsum
computation uses only one core. The A.dot(A)
version is much faster thanks to the MKL library used by Numpy on my machine.
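One mitigation worth knowing: since version 1.12, einsum accepts an `optimize` parameter that lets it choose a pairwise contraction order and delegate the heavy lifting to BLAS-backed routines. Whether this closes the gap with `dot` depends on your Numpy version and machine, but the result is the same either way; a minimal correctness check:

```python
import numpy as np

A = np.random.random((100, 100))

# default einsum: a single naive loop over all four indices i, j, k, l
C_naive = np.einsum('ij,jk,kl->il', A, A, A)

# optimize=True: einsum contracts pairwise, which can use BLAS internally
C_opt = np.einsum('ij,jk,kl->il', A, A, A, optimize=True)

# both agree with the dot-based computation (up to floating-point rounding)
print(np.allclose(C_naive, A.dot(A).dot(A)))  # True
print(np.allclose(C_opt, A.dot(A).dot(A)))    # True
```

Timing the `optimize=True` version with `%timeit` on your own machine is the only reliable way to know which variant wins.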