Many problems in pure and applied mathematics entail high dimensionality, typically due to a large number of parameters or coordinates. For gridded data in high dimensions, the number of grid points grows exponentially with the dimension, a situation called “the curse of dimensionality.” One successful approach to ameliorating this situation is the use of “sparse grids,” which are constructed from a collection of “component grids” that have high resolution in some, but not all, dimensions. Sparse grid-based approaches to high-dimensional problems therefore require not only the representation of high-dimensional grids, but also large collections of such grids at varying resolutions, all of which must be combined to compute a solution on the sparse grid.

Computation in high dimensions can be painful when it relies on a frequently used strategy called virtual linearisation: mapping a multidimensional index space onto a single index and expressing multidimensional relationships as strided accesses on that index. Operations such as identifying neighbouring grid points, iterating over indices, and interpolation, with its concomitant data dependencies, are tedious to implement under virtual linearisation. Elements of Python’s software commons, most notably NumPy, SciPy interpolation, and itertools, obviate the need for such roundabout programming.

In this short talk I will use my work on implementing high-dimensional, multi-resolution solver infrastructure in Python as a case study. I will describe how a Python-centric programming model has not only made my work easier, but arguably has made it possible at all given my project’s timeframe. I will show code examples and pictures from my implementation of multidimensional grids and gridded data fields, multi-resolution grid complexes, and sparse grids.
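By way of illustration only (this is a minimal sketch, not code from the talk, and the 4-D grid shape is an arbitrary example), the following contrasts neighbour identification under virtual linearisation with the same operation expressed through NumPy’s multi-index helpers and itertools:

    import itertools

    import numpy as np

    # A hypothetical 4-D grid shape, used purely for illustration.
    shape = (5, 5, 9, 3)

    # --- Virtual linearisation: the grid lives in one flat array, and every
    # multidimensional relationship becomes stride arithmetic on a single index.
    strides = np.cumprod((1,) + shape[:0:-1])[::-1]  # row-major strides, in points

    def neighbours_linear(flat_index):
        """Flat indices of the +/-1 neighbours along each axis (no bounds checks)."""
        return [flat_index + step * s for s in strides for step in (-1, +1)]

    # --- The same relationship with NumPy's multi-index helpers: unravel the
    # flat index, perturb one coordinate at a time, and ravel it back.
    def neighbours_numpy(flat_index):
        idx = np.array(np.unravel_index(flat_index, shape))
        result = []
        for axis, step in itertools.product(range(len(shape)), (-1, +1)):
            nbr = idx.copy()
            nbr[axis] += step
            if 0 <= nbr[axis] < shape[axis]:  # the bounds check is now explicit
                result.append(int(np.ravel_multi_index(tuple(nbr), shape)))
        return result

    # Iterating over every grid point needs no index arithmetic at all:
    for multi_index in itertools.product(*map(range, shape)):
        pass  # visit each point of the 5 x 5 x 9 x 3 grid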
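In the same hedged spirit, the sketch below suggests what a collection of component grids might look like in Python. It assumes one common choice of component grids (level multi-indices with bounded level sum) and a toy field on the unit cube; scipy.interpolate.RegularGridInterpolator stands in for the interpolation machinery, and the step of combining component-grid results into the sparse-grid solution is deliberately left out.

    import itertools

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def component_grid_levels(dim, max_level):
        """Level multi-indices l with sum(l) <= max_level, one common way to
        choose the component grids of a sparse grid (other index sets exist)."""
        for levels in itertools.product(range(max_level + 1), repeat=dim):
            if sum(levels) <= max_level:
                yield levels

    def sample_component_grid(levels, f):
        """Evaluate f on a component grid with 2**l + 1 points along each axis."""
        axes = tuple(np.linspace(0.0, 1.0, 2**l + 1) for l in levels)
        mesh = np.meshgrid(*axes, indexing="ij")
        return axes, f(*mesh)

    # A toy field on the unit cube; each component grid is fine in some
    # dimensions and coarse in the others, so no single grid is huge.
    f = lambda x, y, z: np.sin(np.pi * x) * y + z**2

    interpolants = []
    for levels in component_grid_levels(dim=3, max_level=4):
        axes, values = sample_component_grid(levels, f)
        interpolants.append(RegularGridInterpolator(axes, values))

    # Each component grid can now be evaluated at arbitrary points; combining
    # these values into the sparse-grid solution is what the solver
    # infrastructure described in the talk handles.
    point = np.array([[0.3, 0.7, 0.5]])
    values_at_point = [ip(point) for ip in interpolants]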