cartopy: contourf method significantly slower than basemap

Description

I’m migrating my code from Basemap to Cartopy. In general Cartopy has been faster, but I find that the contourf method can take several seconds per call. Here’s my example code.

import time

import cartopy
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage as sn
from mpl_toolkits.basemap import Basemap


#-------------------
#  Create data
#-------------------
x, y = np.mgrid[60:140:0.25, 0:45:0.25]
r = np.random.randint(0, 255, size=(320, 180), dtype='uint8')
r = sn.filters.gaussian_filter(r, (3, 3), mode='constant')

#-------------------
#  Cartopy
#-------------------
ax = plt.axes(projection=cartopy.crs.LambertConformal(
    central_longitude=100.0, central_latitude=35.0))
ax.set_extent((60,140,0,45))
for step in (20, 10, 5):
    tic = time.time()
    col = ax.contourf(x, y, r, transform=cartopy.crs.PlateCarree(),
        levels=np.arange(100,161,step))
    toc = time.time()
    points = sum(len(e.get_paths()) for e in col.collections)
    print('[Cartopy] Step: {} Points: {}  Time: {:.0f}ms'.format(
        step, points, (toc - tic) * 1000))


#-------------------
#  Basemap
#-------------------
_map = Basemap(projection='lcc', resolution=None,
    lon_0=100.0, lat_0=35.0, llcrnrlat=0,
    urcrnrlat=45, llcrnrlon=60, urcrnrlon=140)
for step in (20, 10, 5):
    tic = time.time()
    col = _map.contourf(x, y, r, levels=np.arange(100,161,step))
    toc = time.time()
    points = sum(len(e.get_paths()) for e in col.collections)
    print('[Basemap] Step: {} Points: {}  Time: {:.0f}ms'.format(
        step, points, (toc - tic) * 1000))

The result:

[Cartopy] Step: 20 Points: 123  Time: 729ms
[Cartopy] Step: 10 Points: 273  Time: 1433ms
[Cartopy] Step: 5 Points: 528  Time: 2887ms
[Basemap] Step: 20 Points: 123  Time: 118ms
[Basemap] Step: 10 Points: 273  Time: 151ms
[Basemap] Step: 5 Points: 528  Time: 179ms

which means Cartopy can be up to 16x slower on my old laptop. After some profiling I found the culprit: https://github.com/SciTools/cartopy/blob/master/lib/cartopy/mpl/geoaxes.py#L1505. This step appears to recalculate the data limits and autoscale the extent. That is as far as I got, since I don’t have enough time to dig deeper.
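
For reference, here is a minimal profiling sketch showing one way such a hotspot can be located; it is not my exact profiling session, it reuses the ax, x, y and r objects from the script above, and it only relies on the standard-library cProfile/pstats modules.

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
ax.contourf(x, y, r, transform=cartopy.crs.PlateCarree(),
            levels=np.arange(100, 161, 5))
profiler.disable()

# Print the 15 most expensive calls by cumulative time; if the datalim /
# autoscale step in cartopy/mpl/geoaxes.py is the bottleneck, it should
# appear near the top of this listing.
pstats.Stats(profiler).sort_stats('cumulative').print_stats(15)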

  • Why does this step take so much time?
  • As a quick workaround, could we check whether the extent has already been set (which is the most common scenario, imo) before autoscaling? (A rough sketch of what I mean is below.)
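
To illustrate the second point, here is a rough, untested sketch of the kind of user-side workaround I have in mind: fix the extent, then switch autoscaling off via the standard matplotlib Axes API before calling contourf. Whether this actually lets Cartopy skip the expensive datalim step depends on its internals, so treat it as an assumption rather than a verified fix. It reuses x, r and the imports from the script above.

# Untested workaround sketch, assuming the costly step is only needed for
# autoscaling: fix the extent first, then disable autoscaling.
plt.figure()
ax2 = plt.axes(projection=cartopy.crs.LambertConformal(
    central_longitude=100.0, central_latitude=35.0))
ax2.set_extent((60, 140, 0, 45))
ax2.set_autoscale_on(False)  # plain matplotlib Axes API; extent is already fixed
col = ax2.contourf(x, y, r, transform=cartopy.crs.PlateCarree(),
                   levels=np.arange(100, 161, 5))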

Environment

  • OS: Windows 10
  • cartopy: 0.17.0
  • basemap: 1.2.0
  • matplotlib: 3.0.2
  • shapely: 1.6.4.post1

Thanks!

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 5
  • Comments: 30 (10 by maintainers)

Most upvoted comments

Greg is an absolute treasure to this group and goodness knows how many people he has saved in the more “operational” world. I mean, you’re all treasures really…

On a serious note, I must add yet another two cents: this speed issue is critical for the many users who have been told to migrate from Basemap to Cartopy. I appreciate the lack of time and resources. I wonder if there are any ideas or desire around writing a grant proposal to get some real labour devoted to optimisation of Cartopy. I’ve only had experience leading proposals in academia, strictly on weather science and not things like this, but the people volunteering their free time on here deserve support! And more help! (I’m running at 100% myself, so it’s frustrating that I can’t offer more help more often.)

John

On Sep 16, 2021, at 06:17, wxninja wrote:

@greglucas Thanks! I’m up and running now. Turns out I just kept getting the syntax wrong when trying to pull 1690. Duh.

Just wanted to chime in and say thank you for engineering this potential solution. I think it’ll be a great help, and expand the ‘usability’ of Cartopy for those who need to make large amounts of graphics, or make graphics quickly.
