# Speedup scipy griddata for multiple interpolations between two irregular grids

There are several things going on every time you make a call to scipy.interpolate.griddata:

1. First, a call to scipy.spatial.Delaunay is made to triangulate the irregular grid coordinates.
2. Then, for each point in the new grid, the triangulation is searched to find in which triangle (actually, in which simplex, which in your 3D case will be in which tetrahedron) it lies.
3. The barycentric coordinates of each new grid point with respect to the vertices of the enclosing simplex are computed.
4. An interpolated value is computed for that grid point, using the barycentric coordinates and the values of the function at the vertices of the enclosing simplex.
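
Steps 1–3 can be seen in isolation on a toy example (a minimal 2D sketch for brevity; the 3D case is identical with d = 3):

```python
import numpy as np
from scipy.spatial import Delaunay

# Step 1: triangulate the four corners of the unit square
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(points)

# Step 2: locate the simplex (here, triangle) enclosing each query point
query = np.array([[0.25, 0.25]])
s = tri.find_simplex(query)          # index of the enclosing triangle
verts = tri.simplices[s]             # vertex indices of that triangle

# Step 3: barycentric coordinates via the precomputed affine transform
d = 2
T = tri.transform[s]                 # shape (1, d + 1, d)
b = np.einsum('njk,nk->nj', T[:, :d, :], query - T[:, d])
bary = np.hstack((b, 1 - b.sum(axis=1, keepdims=True)))
# for an interior point, bary holds nonnegative weights that sum to 1
```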

The first three steps are identical for all your interpolations, so if you could store, for each new grid point, the indices of the vertices of the enclosing simplex and the weights for the interpolation, you would avoid repeating most of the work. Unfortunately this is not easy to do directly with the functionality available, although it is indeed possible:

    import itertools

    import numpy as np
    import scipy.interpolate as spint
    from scipy.spatial import Delaunay  # the public name for qhull.Delaunay

    def interp_weights(xyz, uvw):
        d = xyz.shape[1]
        tri = Delaunay(xyz)
        simplex = tri.find_simplex(uvw)
        vertices = np.take(tri.simplices, simplex, axis=0)
        temp = np.take(tri.transform, simplex, axis=0)
        delta = uvw - temp[:, d]
        bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta)
        return vertices, np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True)))

    def interpolate(values, vtx, wts):
        return np.einsum('nj,nj->n', np.take(values, vtx), wts)


The function interp_weights does the calculations for the first three steps I listed above. Then the function interpolate uses those calculated values to do step 4 very fast:

    m, n, d = 35000, 3000, 3
    # include the corners of the unit cube, so that no query point
    # falls outside the convex hull and has to be extrapolated
    bounding_cube = np.array(list(itertools.product([0, 1], repeat=d)))
    xyz = np.vstack((bounding_cube,
                     np.random.rand(m - len(bounding_cube), d)))
    f = np.random.rand(m)
    g = np.random.rand(m)
    uvw = np.random.rand(n, d)

    In : vtx, wts = interp_weights(xyz, uvw)

    In : np.allclose(interpolate(f, vtx, wts), spint.griddata(xyz, f, uvw))
    Out: True

    In : %timeit spint.griddata(xyz, f, uvw)
    1 loops, best of 3: 2.81 s per loop

    In : %timeit interp_weights(xyz, uvw)
    1 loops, best of 3: 2.79 s per loop

    In : %timeit interpolate(f, vtx, wts)
    10000 loops, best of 3: 66.4 us per loop

    In : %timeit interpolate(g, vtx, wts)
    10000 loops, best of 3: 67 us per loop


So first, it does the same as griddata, which is good. Second, setting up the interpolation, i.e. computing vtx and wts, takes roughly the same time as a call to griddata. But third, you can now interpolate different sets of values on the same grids in virtually no time.
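
The same cached weights also work for interpolating many fields at once. A possible batched variant (interpolate_many is my name for it, not part of the answer above) replaces the per-field einsum with a single einsum over a (n_fields, m) array; interp_weights is repeated here only to keep the sketch self-contained:

```python
import itertools

import numpy as np
from scipy.spatial import Delaunay

# interp_weights as defined earlier in this answer
def interp_weights(xyz, uvw):
    d = xyz.shape[1]
    tri = Delaunay(xyz)
    simplex = tri.find_simplex(uvw)
    vertices = np.take(tri.simplices, simplex, axis=0)
    temp = np.take(tri.transform, simplex, axis=0)
    delta = uvw - temp[:, d]
    bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta)
    return vertices, np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True)))

# Batched step 4: one einsum over all fields simultaneously
def interpolate_many(values2d, vtx, wts):
    return np.einsum('fnj,nj->fn', values2d[:, vtx], wts)

rng = np.random.default_rng(0)
d = 3
cube = np.array(list(itertools.product([0, 1], repeat=d)))
xyz = np.vstack((cube, rng.random((100, d))))
uvw = rng.random((50, d)) * 0.8 + 0.1      # queries safely inside the hull
fields = rng.random((10, len(xyz)))        # 10 fields on the same grid

vtx, wts = interp_weights(xyz, uvw)
out = interpolate_many(fields, vtx, wts)   # shape (10, 50)
```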

The only thing that griddata does that is not handled here is assigning fill_value to points that would require extrapolation. You could do that by checking for points for which at least one of the weights is negative, e.g.:

    def interpolate(values, vtx, wts, fill_value=np.nan):
        ret = np.einsum('nj,nj->n', np.take(values, vtx), wts)
        ret[np.any(wts < 0, axis=1)] = fill_value
        return ret
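
A quick sanity check of the masking logic, using hand-built vertex indices and weights rather than real triangulation output (a hypothetical example): the second query simulates a point outside the hull, whose weights include a negative entry, so it gets fill_value:

```python
import numpy as np

# the fill_value-aware variant from above
def interpolate(values, vtx, wts, fill_value=np.nan):
    ret = np.einsum('nj,nj->n', np.take(values, vtx), wts)
    ret[np.any(wts < 0, axis=1)] = fill_value
    return ret

values = np.array([10.0, 20.0, 30.0])
vtx = np.array([[0, 1, 2],            # interior point: all weights >= 0
                [0, 1, 2]])           # "exterior" point: a negative weight
wts = np.array([[0.5, 0.25, 0.25],
                [1.5, -0.25, -0.25]])

result = interpolate(values, vtx, wts)
# result[0] is 0.5*10 + 0.25*20 + 0.25*30 = 17.5; result[1] is nan
```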