xarray: Many methods are broken (e.g., concat/stack/sortby) when using repeated dimensions

Concatenating DataArrays with repeated dimensions does not work.

import xarray as xr  # xarray 0.8.2
from numpy import eye

A = xr.DataArray(eye(3), dims=['dim0', 'dim0'])
xr.concat([A, A], 'newdim')

fails with

[...]
ValueError: axes don't match array

Most upvoted comments

I cannot see a use case in which repeated dims actually make sense.

I use repeated dimensions to store a covariance matrix. The data variable containing the covariance matrix has 4 dimensions, of which the last 2 are repeated. For example, I have a data variable with dimensions (channel, scanline, element, element), storing an element-element covariance matrix for every scanline in satellite data.

This is valid NetCDF and should be valid in xarray. It would be a significant problem for me if they became disallowed.
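
For illustration, such a variable can be constructed directly in xarray today (the sizes below are made up):

import numpy as np
import xarray as xr

n_channel, n_scanline, n_element = 19, 1000, 56
cov = np.zeros((n_channel, n_scanline, n_element, n_element))

# One element-element covariance matrix per channel and scanline;
# the last two dimensions are deliberately repeated.
da = xr.DataArray(cov, dims=('channel', 'scanline', 'element', 'element'))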

I’m not too fond of having multiple dimensions with the same name because, whenever you need to operate on one but not the other, you have little to no choice but to revert to positional indexing.

Consider also how many methods expect either **kwargs or a dict-like parameter with the dimension or variable names as the keys. I would not be surprised to find that many API design choices fall apart in the face of this use case.
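
For example, with a repeated dimension there is no way to refer to "the second dim0" by name; the only unambiguous escape hatch is positional indexing on the underlying array (a minimal illustration):

import numpy as np
import xarray as xr

A = xr.DataArray(np.eye(3), dims=['dim0', 'dim0'])

# By name, the axis is ambiguous:
# A.sum(dim='dim0')  # which of the two axes is meant?

# The escape hatch is positional (axis-based) indexing on the raw array:
row_sums = A.data.sum(axis=1)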

Also, having two non-positional (as it should always be in xarray!) dimensions with the same name only makes sense when modelling symmetric N:N relationships. Two good examples are covariance matrices and the edge weights for Dijkstra’s algorithm.

The problems start when the object represents an asymmetric relationship, e.g.:

  • Cost (for the purpose of graph resolution, so time/money/other) of transportation via river, where going from A->B (downstream) is cheaper than going back from B->A (upstream)
  • Currency conversion, where EUR->USD is not identical to 1/(USD->EUR) because of arbitrage and illiquidity
  • In financial Monte Carlo simulations, I had to deal with credit rating transition matrices, which define the probability that a company’s credit rating changes. In unfavourable market conditions, the chance of being downgraded from AAA to AA is higher than that of being upgraded from AA to AAA.

I could easily come up with many other cases. In the case of asymmetric N:N relationships, it is highly desirable to share the same index across multiple dimensions with different names (names that would typically convey the direction of the relationship, e.g. “from” and “to”).

What if, instead of allowing for duplicate dimensions, we allowed sharing an index across different dimensions?

Something like

river_transport = Dataset(
    coords={
        'station': ['Kingston', 'Montreal'],
        'station_from': ('station', ),
        'station_to': ('station', ),
    },
    data_vars={
        'cost': (('station_from', 'station_to'), [[0, 20], [15, 0]]),
    },
)

or, for DataArrays:

river_transport = DataArray(
    [[0, 20], [15, 0]],
    dims=('station_from', 'station_to'),
    coords={
        'station': ['Kingston', 'Montreal'],
        'station_from': ('station', ),
        'station_to': ('station', ),
    },
)

Note how this syntax doesn’t exist as of today:

        'station_from': ('station', ),
        'station_to': ('station', ),

From an implementation point of view, I think it could be implemented fairly easily by keeping track of a map of aliases plus some __getitem__ magic. More effort would be needed to convince DataArrays to accept (and not accidentally drop) a coordinate whose dims don’t match any of the data variable’s.
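
A minimal sketch of the alias-map idea (hypothetical names, not xarray API):

class AliasedCoords:
    """Resolve dimension aliases to a shared backing coordinate."""

    def __init__(self, coords, aliases):
        self._coords = coords    # real coordinates, keyed by name
        self._aliases = aliases  # e.g. {'station_from': 'station'}

    def __getitem__(self, name):
        # Look through the alias map before fetching the coordinate.
        return self._coords[self._aliases.get(name, name)]

coords = AliasedCoords(
    {'station': ['Kingston', 'Montreal']},
    {'station_from': 'station', 'station_to': 'station'},
)
assert coords['station_from'] is coords['station_to']  # shared data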

This design would not resolve the issue of compatibility with NetCDF though. I’d be surprised if the NetCDF designers never came across this - maybe it’s a good idea to have a chat with them?

I cannot see a use case in which repeated dims actually make sense.

In my case this situation originates from h5 files which indeed contain repeated dimensions (variables(dimensions): uint16 B0(phony_dim_0,phony_dim_0), ..., uint8 VAA(phony_dim_1,phony_dim_1)), so xarray is not to blame here. These are “dummy” dimensions, not associated with physical values. What we do to circumvent this problem is to “re-dimension” all variables. A safe approach might be for open_dataset to raise a warning by default when encountering such variables, possibly with an option to perform automatic or custom dimension renaming to avoid repeated dims. I also agree with @shoyer that failing loudly when operating on such DataArrays, instead of producing confusing results, would be an improvement.
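
A sketch of that re-dimensioning workaround (the file and variable names are hypothetical, matching the pattern above): load the raw array and assign distinct dimension names before handing it to xarray.

import h5py
import xarray as xr

# Hypothetical file/variable names for illustration.
with h5py.File('granule.h5', 'r') as f:
    vaa = f['VAA'][...]  # stored as VAA(phony_dim_1, phony_dim_1)

# Re-dimension: give each axis a distinct name so that all
# xarray operations behave normally afterwards.
da = xr.DataArray(vaa, dims=('phony_dim_1a', 'phony_dim_1b'))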

Some more food for thought.

Instead of extending the Xarray data model we could now leverage Xarray flexible indexes and provide some utility methods like this:

da = DataArray(
    [[0, 20], [15, 0]],
    dims=('station', 'station'),
    coords={'station': ['Kingston', 'Montreal']},
)

da
# <xarray.DataArray (station: 2)>
# array([[ 0, 20],
#        [15,  0]])
# Coordinates:
#   * station  (station) <U8 'Kingston' 'Montreal'

da_split = da.split_repeated_dims(station=('station_from', 'station_to'))
da_split
# <xarray.DataArray (station_from: 2, station_to: 2)>
# array([[ 0, 20],
#        [15,  0]])
# Coordinates:
#   * station_from    (station_from) <U8 'Kingston' 'Montreal'
#   * station_to      (station_to) <U8 'Kingston' 'Montreal'
#   * station         (station_from, station_to) object ...
# Indexes:
#   ┌ station_from    RepeatedIndex
#   │ station_to
#   └ station

da_merged = da_split.merge_repeated_index('station')
# <xarray.DataArray (station: 2)>
# array([[ 0, 20],
#        [15,  0]])
# Coordinates:
#   * station  (station) <U8 'Kingston' 'Montreal'

Where RepeatedIndex is a lightweight Xarray index wrapper that would encapsulate the index assigned to the “station” coordinate in the original DataArray such that it is reused for all coordinates “station”, “station_from” and “station_to” (all sharing the same underlying data).

The coordinate da_split.station would be some sort of n-dimensional lazy coordinate holding tuple values; not really useful per se, but it would still allow label-based selection for all repeated dimensions at once (a bit similar, although not exactly the same, as how we represent the dimension coordinate of a PandasMultiIndex):

da_split.station.data
# some custom lazy duck-array object

da_split.station.values
# [[('Kingston', 'Kingston') ('Kingston', 'Montreal')]
#  [('Montreal', 'Kingston') ('Montreal', 'Montreal')]]

So RepeatedIndex could support three kinds of selection (maybe more):

da_split.sel(station="Kingston")   # shorthand for station=('Kingston', 'Kingston')
# <xarray.DataArray ()>
# array(0)
# Coordinates:
#     station  <U8 'Kingston'

da_split.sel(station_from="Kingston")
# <xarray.DataArray (station_to: 2)>
# array([ 0, 20])
# Coordinates:
#   * station_to     (station_to) <U8 'Kingston' 'Montreal'
#     station_from   <U8 'Kingston'

da_split.sel(station_to="Kingston")
# <xarray.DataArray (station_from: 2)>
# array([ 0, 15])
# Coordinates:
#   * station_from    (station_from) <U8 'Kingston' 'Montreal'
#     station_to      <U8 'Kingston'

Now regarding the concat example mentioned in the top comment of this issue, there’s an extra step required:

da_split = da.split_repeated_dims(station=('station_from', 'station_to'))
da_concat = xr.concat([da_split, da_split], 'newdim')
da_result = da_concat.merge_repeated_index('station')

Overall this would be a pretty nice and explicit way to solve the issue of repeated dimensions, IMHO. It is 100% compatible with the current Xarray data model, which might be preferred for such a special case.

EDIT: sorry for the multiple edits!

This also affects the stack method.
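
A minimal repro with the same array as in the top comment (the exact error may vary by version):

import numpy as np
import xarray as xr

A = xr.DataArray(np.eye(3), dims=['dim0', 'dim0'])
A.stack(flat=('dim0',))  # errors out, since 'dim0' is repeated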

@jhamman Ok, good to hear it’s not slated to be removed. I would love to work on this, I wish I had the time! I’ll keep it in mind if I do find some spare time.

I cannot see a use case in which repeated dims actually make sense.

Agreed. I would have disallowed them entirely, but sometimes it’s useful to allow loading variables with duplicate dimensions, even if the only valid operation you can then do is to de-duplicate them.
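
For example, de-duplication can be done by rebuilding the array with distinct names (the new name here is arbitrary):

import numpy as np
import xarray as xr

A = xr.DataArray(np.eye(3), dims=['dim0', 'dim0'])

# Rebuild with unique dimension names; after this, every
# name-based operation works again.
A_unique = xr.DataArray(A.data, dims=('dim0', 'dim0_'))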

Every routine that looks up dimensions by name should go through the get_axis_num method. That would be a good place to add a check for uniqueness.
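
A minimal sketch of that check (illustrative, not xarray’s actual implementation):

def get_axis_num(dims, dim):
    # Refuse to guess when a dimension name is ambiguous.
    matches = [i for i, d in enumerate(dims) if d == dim]
    if len(matches) > 1:
        raise ValueError(f"dimension {dim!r} appears multiple times in {dims}")
    if not matches:
        raise ValueError(f"dimension {dim!r} not found in {dims}")
    return matches[0]

assert get_axis_num(('x', 'y'), 'y') == 1
# get_axis_num(('dim0', 'dim0'), 'dim0')  # -> ValueError, fails loudly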