---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.18.0-dev
kernelspec:
  display_name: Python 3 (ipykernel)
  language: python
  name: python3
---

(optimizing-code-chapter)=

# Optimizing code

:::{sidebar} Donald Knuth
_“Premature optimization is the root of all evil”_
:::

**Author**: _Gaël Varoquaux_

This chapter deals with strategies to make Python code go faster.

:::{admonition} Prerequisites

- [line_profiler](https://pypi.org/project/line-profiler/)
  :::

+++

## Optimization workflow

1. Make it work: write the code in a simple, **legible** way.
2. Make it work reliably: write automated test cases, make really sure
   that your algorithm is right and that if you break it, the tests will
   capture the breakage.
3. Optimize the code by profiling simple use cases to find the
   bottlenecks and speeding up these bottlenecks, by finding a better
   algorithm or implementation. Keep in mind that a trade-off should be
   found between profiling on a realistic example and the simplicity and
   speed of execution of the code. For efficient work, it is best to work
   with profiling runs lasting around 10 s.

## Profiling Python code

:::{admonition} **No optimization without measuring!**

- **Measure:** profiling, timing
- You'll have surprises: the fastest code is not always what you think
  :::

### Timeit

In Jupyter or IPython, use the `%timeit` magic (based on
<https://docs.python.org/3/library/timeit.html>) to time elementary
operations:

```{code-cell}
import numpy as np

a = np.arange(1000)

%timeit a ** 2
```

```{code-cell}
%timeit a ** 2.1
```

```{code-cell}
%timeit a * a
```

Use this to guide your choice between strategies.

:::{note}
For long-running calls, use `%time` instead of `%timeit`; it is
less precise but faster.
:::

### Profiler

Useful when you have a large program to profile, for example the
{download}`following file <demo.py>`:

```{literalinclude} demo.py

```

:::{note}
This is a combination of two unsupervised learning techniques, principal
component analysis ([PCA](https://en.wikipedia.org/wiki/Principal_component_analysis)) and
independent component analysis
([ICA](https://en.wikipedia.org/wiki/Independent_component_analysis)). PCA
is a technique for dimensionality reduction, i.e. an algorithm to explain
the observed variance in your data using fewer dimensions. ICA is a source
separation technique, for example to unmix multiple signals that have been
recorded through multiple sensors. Doing a PCA first and then an ICA can be
useful if you have more sensors than signals. For more information see:
[the FastICA example from scikit-learn](https://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html).
:::

To run it, you also need to download the {download}`ica module <ica.py>`.
In IPython we can time the script:

```python
In [1]: %run -t demo.py
IPython CPU timings (estimated):
    User  :    14.3929 s.
    System:   0.256016 s.
```

and profile it:

```python
In [2]: %run -p demo.py
        916 function calls in 14.551 CPU seconds
Ordered by: internal time
ncalls  tottime  percall  cumtime  percall filename:lineno (function)
    1   14.457   14.457   14.479   14.479 decomp.py:849 (svd)
    1    0.054    0.054    0.054    0.054 {method 'random_sample' of 'mtrand.RandomState' objects}
    1    0.017    0.017    0.021    0.021 function_base.py:645 (asarray_chkfinite)
    54    0.011    0.000    0.011    0.000 {numpy.core._dotblas.dot}
    2    0.005    0.002    0.005    0.002 {method 'any' of 'numpy.ndarray' objects}
    6    0.001    0.000    0.001    0.000 ica.py:195 (gprime)
    6    0.001    0.000    0.001    0.000 ica.py:192 (g)
    14    0.001    0.000    0.001    0.000 {numpy.linalg.lapack_lite.dsyevd}
    19    0.001    0.000    0.001    0.000 twodim_base.py:204 (diag)
    1    0.001    0.001    0.008    0.008 ica.py:69 (_ica_par)
    1    0.001    0.001   14.551   14.551 {execfile}
    107    0.000    0.000    0.001    0.000 defmatrix.py:239 (__array_finalize__)
    7    0.000    0.000    0.004    0.001 ica.py:58 (_sym_decorrelation)
    7    0.000    0.000    0.002    0.000 linalg.py:841 (eigh)
    172    0.000    0.000    0.000    0.000 {isinstance}
    1    0.000    0.000   14.551   14.551 demo.py:1 (<module>)
    29    0.000    0.000    0.000    0.000 numeric.py:180 (asarray)
    35    0.000    0.000    0.000    0.000 defmatrix.py:193 (__new__)
    35    0.000    0.000    0.001    0.000 defmatrix.py:43 (asmatrix)
    21    0.000    0.000    0.001    0.000 defmatrix.py:287 (__mul__)
    41    0.000    0.000    0.000    0.000 {numpy.core.multiarray.zeros}
    28    0.000    0.000    0.000    0.000 {method 'transpose' of 'numpy.ndarray' objects}
    1    0.000    0.000    0.008    0.008 ica.py:97 (fastica)
    ...
```

Clearly the `svd` (in `decomp.py`) is what takes most of our time, a.k.a. the
bottleneck. We have to find a way to make this step go faster, or to avoid this
step (algorithmic optimization). Spending time on the rest of the code is
useless.

:::{admonition} **Profiling outside of IPython, running `cProfile`**
Similar profiling can be done outside of IPython, simply calling the
built-in [Python profilers](https://docs.python.org/3/library/profile.html) `cProfile` and
`profile`.

```console
$ python -m cProfile -o demo.prof demo.py
```

Using the `-o` switch will output the profiler results to the file
`demo.prof` to view with an external tool. This can be useful if
you wish to process the profiler output with a visualization tool.
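
For instance, the saved profile can be inspected with the standard-library
`pstats` module (a minimal sketch):

```python
import pstats

# Load the profile written by "python -m cProfile -o demo.prof demo.py"
stats = pstats.Stats("demo.prof")

# Sort by internal time and print the 10 most expensive functions
stats.sort_stats("tottime").print_stats(10)
```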
:::

### Line-profiler

The profiler tells us which function takes most of the time, but not
where, inside that function, the time is spent.

For this, we use the
[line_profiler](https://pypi.org/project/line-profiler/): in the
source file, we decorate a few functions that we want to inspect with
`@profile` (no need to import it)

```python
@profile
def test():
    rng = np.random.default_rng()
    data = rng.random((5000, 100))
    u, s, v = sp.linalg.svd(data)
    pca = u[:, :10].T @ data
    results = fastica(pca.T, whiten=False)
```

Then we run the script using the [kernprof](https://pypi.org/project/line-profiler/) command, with switches `-l, --line-by-line` and `-v, --view` to use the line-by-line profiler and view the results in addition to saving them:

```console
$ kernprof -l -v demo.py

Wrote profile results to demo.py.lprof
Timer unit: 1e-06 s

Total time: 1.27874 s
File: demo.py
Function: test at line 9

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     9                                           @profile
    10                                           def test():
    11         1         69.0     69.0      0.0      rng = np.random.default_rng()
    12         1       2453.0   2453.0      0.2      data = rng.random((5000, 100))
    13         1    1274715.0 1274715.0     99.7      u, s, v = sp.linalg.svd(data)
    14         1        413.0    413.0      0.0      pca = u[:, :10].T @ data
    15         1       1094.0   1094.0      0.1      results = fastica(pca.T, whiten=False)
```

**The SVD is taking all the time.** We need to optimize this line.

## Making code go faster

Once we have identified the bottlenecks, we need to make the
corresponding code go faster.

### Algorithmic optimization

The first thing to look for is algorithmic optimization: are there ways
to compute less, or better?

For a high-level view of the problem, a good understanding of the maths
behind the algorithm helps. However, it is not uncommon to find simple
changes, like **moving computation or memory allocation outside a for
loop**, that bring in big gains.
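
For instance, a minimal sketch (the loop and arrays below are purely
illustrative, not taken from the demo script) of hoisting an invariant
computation and a memory allocation out of a for loop:

```{code-cell}
n_iter, size = 100, 10_000
x = np.linspace(0, 1, size)

def slow():
    for _ in range(n_iter):
        weights = np.exp(-x)   # recomputed at every iteration
        out = np.empty(size)   # reallocated at every iteration
        out[:] = weights * x
    return out

def fast():
    weights = np.exp(-x)       # computed once, outside the loop
    out = np.empty(size)       # allocated once, outside the loop
    for _ in range(n_iter):
        out[:] = weights * x
    return out

%timeit slow()
```

```{code-cell}
%timeit fast()
```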

#### Example of the SVD

In both examples above, the SVD
([Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition))
is what takes most of the time. Indeed, the computational cost of this
algorithm is roughly $n^3$ in the size of the input matrix.

However, in both of these examples, we are not using all the output of
the SVD, but only the first few columns of its first return argument. If
we use the `svd` implementation of SciPy, we can ask for a reduced
version of the SVD by passing `full_matrices=False`. Note that
implementations of linear algebra in SciPy are richer than those in NumPy
and should be preferred.

```python
In [3]: %timeit np.linalg.svd(data)
1 loops, best of 3: 14.5 s per loop

In [4]: import scipy as sp

In [5]: %timeit sp.linalg.svd(data)
1 loops, best of 3: 14.2 s per loop

In [6]: %timeit sp.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 295 ms per loop

In [7]: %timeit np.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 293 ms per loop
```

We can then use this insight to {download}`optimize the previous code <demo_opt.py>`:

```{literalinclude} demo_opt.py
:pyobject: test
```

```python
In [1]: import demo

In [2]: %timeit demo.test()
ica.py:65: RuntimeWarning: invalid value encountered in sqrt
  W = (u * np.diag(1.0/np.sqrt(s)) * u.T) * W  # W = (W * W.T) ^{-1/2} * W
1 loops, best of 3: 17.5 s per loop

In [3]: import demo_opt

In [4]: %timeit demo_opt.test()
1 loops, best of 3: 208 ms per loop
```

Real truncated SVDs, e.g. computing only the first 10 singular vectors, can
be computed with ARPACK, available in `scipy.sparse.linalg.svds`.
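
A minimal sketch (reusing the `data` array from the demo script; `svds`
relies on ARPACK under the hood):

```python
from scipy.sparse.linalg import svds

# Compute only the 10 largest singular triplets.
# Unlike a full SVD, the singular values are not guaranteed
# to be returned in descending order.
u, s, vt = svds(data, k=10)
```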

:::{admonition} Computational linear algebra
For certain algorithms, many of the bottlenecks will be linear
algebra computations. In this case, using the right function to solve
the right problem is key. For instance, an eigenvalue problem with a
symmetric matrix is easier to solve than with a general matrix. Also,
most often, you can avoid inverting a matrix and use a less costly
(and more numerically stable) operation.

Know your computational linear algebra. When in doubt, explore
`scipy.linalg`, and use `%timeit` to try out different alternatives
on your data.
:::
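
For instance, a quick comparison (on an arbitrary symmetric matrix, not
related to the demo script) of the symmetric and general eigenvalue solvers:

```{code-cell}
import scipy as sp

rng = np.random.default_rng()
m = rng.random((500, 500))
sym = m + m.T  # a symmetric matrix

# eigh exploits the symmetry of the matrix
%timeit sp.linalg.eigh(sym)
```

```{code-cell}
# eig treats the same matrix as a general one
%timeit sp.linalg.eig(sym)
```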

## Writing faster numerical code

A complete discussion of advanced use of NumPy is found in chapter
{ref}`advanced-numpy`, or in the article [The NumPy array: a structure for
efficient numerical computation](https://hal.inria.fr/inria-00564007/en)
by van der Walt _et al._ Here we discuss only some commonly encountered tricks
to make code faster.

+++

### Vectorizing for loops

Find tricks to avoid for loops using NumPy arrays. For this, masks and
index arrays can be useful.
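
A small sketch (the array and functions are illustrative only), replacing
an explicit Python loop by a boolean mask:

```{code-cell}
rng = np.random.default_rng()
values = rng.standard_normal(100_000)

def clip_loop(x):
    # Explicit Python loop: one interpreted iteration per element
    out = x.copy()
    for i in range(len(out)):
        if out[i] < 0:
            out[i] = 0
    return out

def clip_mask(x):
    # Vectorized: a boolean mask selects all negative entries at once
    out = x.copy()
    out[out < 0] = 0
    return out

%timeit clip_loop(values)
```

```{code-cell}
%timeit clip_mask(values)
```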

### Broadcasting

Use {ref}`broadcasting <broadcasting>` to do operations on arrays as
small as possible before combining them.
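
For instance (a toy illustration), all pairwise differences of a 1-D array
can be computed by broadcasting it against a column view of itself, so that
only the final result is a large array:

```{code-cell}
x = np.arange(5)

# x[:, np.newaxis] has shape (5, 1) and x has shape (5,);
# broadcasting combines them into a (5, 5) result without
# materializing two (5, 5) inputs.
x[:, np.newaxis] - x
```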

<!---
XXX: complement broadcasting in the NumPy chapter with the example of
the 3D grid
-->

### In place operations

```{code-cell}
a = np.zeros(10_000_000)

%timeit global a ; a = 0*a
```

```{code-cell}
%timeit global a ; a *= 0
```

:::{note}
We need `global a` in the `%timeit` statements so that they work as
expected: the timed code assigns to `a`, which would otherwise make `a` a
local variable, undefined when the statement runs.
:::

### Be easy on the memory: use views, and not copies

Copying big arrays is as costly as making simple numerical operations
on them:

```{code-cell}
a = np.zeros(10_000_000)

%timeit a.copy()
```

```{code-cell}
%timeit a + 1
```
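
For instance, slicing returns a view that shares memory with the original
array, so no data is copied (`np.shares_memory` lets us check this):

```{code-cell}
b = a[::2]  # a view on every other element: no data is copied

np.shares_memory(a, b)
```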

### Beware of cache effects

Memory access is cheaper when it is grouped: accessing a big array in a
contiguous way is much faster than random access. This implies amongst
other things that **smaller strides are faster** (see
{ref}`cache-effects`):

```{code-cell}
c = np.zeros((5000, 5000), order='C')

# With axis=0, each median is computed over a column: for C ordering,
# the elements of a column are far apart in memory.
%timeit np.median(c, axis=0)
```

```{code-cell}
# With axis=1, each median is computed over a row: for C ordering,
# the elements of a row are contiguous in memory.
%timeit np.median(c, axis=1)
```

```{code-cell}
c.strides
```

This is the reason why Fortran ordering or C ordering may make a big
difference in the speed of operations:

```{code-cell}
rng = np.random.default_rng()

a = rng.random((20, 2**18))

b = rng.random((20, 2**18))

%timeit b @ a.T
```

```{code-cell}
c = np.ascontiguousarray(a.T)

%timeit b @ c
```

Note that copying the data to work around this effect may not be worth it:

```{code-cell}
%timeit c = np.ascontiguousarray(a.T)
```

Using [numexpr](https://github.com/pydata/numexpr) can be useful to
automatically optimize code for such effects.
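
For instance, a minimal sketch (assuming the optional `numexpr` package is
installed):

```python
import numpy as np
import numexpr as ne

a = np.zeros(10_000_000)
b = np.ones(10_000_000)

# The expression string is compiled and evaluated block by block,
# so intermediate results stay in cache instead of being allocated
# as full temporary arrays.
result = ne.evaluate("2*a + 3*b")
```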

### Use compiled code

The last resort, once you are sure that all the high-level optimizations have
been explored, is to transfer the hot spots, i.e. the few lines or functions
in which most of the time is spent, to compiled code. For compiled code, the
preferred option is to use [Cython](https://www.cython.org): it is easy to
transform existing Python code into compiled code, and with a good use of the
[NumPy support](https://docs.cython.org/en/latest/src/tutorial/numpy.html)
it yields efficient code on NumPy arrays, for instance by unrolling loops.

:::{warning}
For all the above: profile and time your choices. Don't base your
optimization on theoretical considerations.
:::

## Additional Links

- If you need to profile memory usage, you could try the
  [memory_profiler](https://pypi.org/project/memory-profiler)
- If you need to profile down into C extensions, you could try using
  [gperftools](https://github.com/gperftools/gperftools) from Python with
  [yep](https://pypi.org/project/yep).
- If you would like to track performance of your code across time, i.e. as you
  make new commits to your repository, you could try:
  [asv](https://asv.readthedocs.io/en/stable/)
- If you need some interactive visualization, why not try
  [RunSnakeRun](https://www.vrplumber.com/programming/runsnakerun/)?
