
Switch our lazy array classes to use Dask instead? #1725

Open
fujiisoup opened this issue Nov 17, 2017 · 9 comments

fujiisoup (Member) commented Nov 17, 2017

Ported from #1724, comment by @shoyer

In the long term, it would be nice to get rid of these uses of _data, maybe by switching entirely from our lazy array classes to Dask.

The subtleties of checking _data vs data are undesirable, e.g., consider the bug on these lines:

elif isinstance(var.data, dask_array_type):
values_str = short_dask_repr(var, show_dtype=False)
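
For context on the bug: assuming the usual Variable semantics (var.data coerces anything that is not a dask array into NumPy, while var._data exposes the underlying array unchanged), the isinstance check on var.data loads a lazily decoded array from disk in full just to build a repr. A hedged sketch of the distinction:

    # checking the public attribute can materialize a LazilyIndexedArray as a side effect
    elif isinstance(var.data, dask_array_type):
        values_str = short_dask_repr(var, show_dtype=False)

    # checking the private attribute avoids the accidental load
    elif isinstance(var._data, dask_array_type):
        values_str = short_dask_repr(var, show_dtype=False)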

shoyer changed the title from "repr in backend arrays" to "Switch our lazy array classes to use Dask instead" on Nov 17, 2017
shoyer (Member) commented Nov 17, 2017

This comment has the full context: #1372 (comment). To repeat myself:


You might ask why this separate lazy compute machinery exists. The answer is that dask fails to optimize element-wise operations like (scale * array)[subset] -> scale * array[subset], which is a critical optimization for lazy decoding of large datasets.
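
For illustration, a small runnable sketch (array size and chunking are arbitrary) of what the missing rewrite means in practice: if (scale * array)[subset] is not turned into scale * array[subset], the arithmetic runs over whole chunks before the subset is taken.

    import numpy as np
    import dask.array as da

    raw = da.from_array(np.arange(1_000_000), chunks=100_000)
    scale = 0.01
    decoded = scale * raw   # CF-style lazy decoding as an element-wise operation
    subset = decoded[:3]    # the slice sits *after* the multiply in the task graph
    subset.compute()        # scales the whole first 100_000-element chunk,
                            # even though only 3 decoded values are needed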

See dask/dask#746 for discussion and links to PRs about this. jcrist had a solution that worked, but it slowed down every dask array operation by 20%, which wasn't a great trade-off.

I wonder if this is worth revisiting with a simpler, less general optimization pass that doesn't bother with broadcasting. See the subclasses of NDArrayMixin in xarray/conventions.py for examples of the sorts of functionality we need (a rough dask sketch follows the list):

  • Casting (e.g., array.astype(bool)).
  • Chained arithmetic with scalars (e.g., 0.5 + 0.5 * array).
  • Custom element-wise operations (e.g., map_blocks(convert_to_datetime64, array, dtype=np.datetime64))
  • Custom aggregations that drop a dimension (e.g., map_blocks(characters_to_string, array, drop_axis=-1))
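
For reference, a rough runnable sketch of those four kinds of operations expressed as plain dask array calls (the arrays and helper lambdas here are made up for illustration):

    import numpy as np
    import dask.array as da

    raw = da.from_array(np.ones((6, 4), dtype="int16"), chunks=(3, 4))

    cast = raw.astype(bool)                         # casting
    decoded = 0.5 + 0.5 * raw                       # chained arithmetic with scalars
    stamps = raw.map_blocks(                        # custom element-wise operation
        lambda block: block.astype("datetime64[s]"), dtype=np.datetime64)

    chars = da.from_array(np.full((6, 4), b"x"), chunks=(6, 4))
    strings = chars.map_blocks(                     # aggregation that drops a dimension
        lambda block: np.array([b"".join(row) for row in block]),
        drop_axis=1, dtype="S4")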

If we could optimize all these operations (and ideally chain them), then we could drop all the lazy loading stuff from xarray in favor of dask, which would be a real win.


The downside of this switch is that lazy loading of data from disk would now require dask, which would be at least slightly annoying to some users. But it's probably worth the tradeoff from a maintainability perspective, and also to fix issues like #1372.

jhamman (Member) commented Nov 17, 2017

> The downside of this switch is that lazy loading of data from disk would now require dask, which would be at least slightly annoying to some users. But it's probably worth the tradeoff from a maintainability perspective, and also to fix issues like #1372.

I'm really not opposed to this. I also think it would be a good/reasonable thing to do before 1.0. There may be some pure-numpy xarray users out there, but I suspect they could (1) handle having dask as a dependency and (2) wouldn't be all that affected by the change, since the in-memory workflow isn't particularly dependent on lazy evaluation.

shoyer (Member) commented Nov 17, 2017

Yeah, we could solve this by making dask a requirement only if you want to load netCDF files and/or load netCDF files lazily.

Potentially chunks=False in open_dataset could indicate that you're OK loading everything into memory with NumPy. We would then have to choose between making the default use Dask or NumPy.
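
A sketch of how that might read at the call site (the file name is made up, and chunks=False here is the proposal above rather than an existing option):

    import xarray as xr

    ds_lazy = xr.open_dataset("data.nc")                  # lazy; whether this defaults to Dask or NumPy is the open question
    ds_eager = xr.open_dataset("data.nc", chunks=False)   # proposed: load everything into memory with NumPy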

rabernat (Contributor) commented:

I just had to confront and understand how lazy CF decoding worked in order to move forward with #1528. In my initial implementation, I applied chunking to variables directly in ZarrStore. However, I learned that decode_cf_variable did not preserve variable chunks. So I switched the chunking to after the call to decode_cf.

My impression after this exercise is that having two different definitions of "lazy" within xarray leads to developer confusion! So I favor putting dask more central in xarray's data model.

benbovy (Member) commented Nov 17, 2017

I'm more of a numpy-xarray user than a dask-xarray user (since my data most often fits in memory), but I wouldn't mind at all having to install dask as a requirement!

> Potentially chunks=False in open_dataset could indicate that you're OK loading everything into memory with NumPy. We would then have to choose between making the default use Dask or NumPy.

Perhaps like other users who are used to lazy loading, I'm a bit more concerned by this. I find it very handy to be able to open a medium-sized file instantly, quickly inspect its contents, and then work with only a small subset of the variables/data, all without worrying about chunks.

Assuming that numpy loading is the default, new xarray users coming from netcdf4-python who don't know much about dask might find xarray a very inefficient tool when first trying to open a medium-sized netCDF file.

If chunks=False were the default, I can also imagine often forgetting to set it to True when loading a file that is a bit too big to fit in memory... That would be annoying.

By saying "making the default use Dask", do you mean that data from a file will be "loaded" as dask arrays by default? If that's the case, new xarray users, who are probably less familiar with dask than with numpy, will have to learn one or two dask concepts before using xarray. This might not be a big deal, though.

In summary, I'm also really not opposed to using dask to replace all the current lazy-loading machinery, but ideally it should be as transparent as possible with respect to the current "user experience".

rabernat (Contributor) commented:

Since #1532, the repr for dask-backed variables does not show any values (to avoid triggering computations). But numpy-backed lazily-masked-and-scaled data is treated differently: it is shown.

This highlights an important difference between how LazilyIndexedArray and dask arrays work: with dask, either you compute a whole chunk or you compute nothing. With LazilyIndexedArray, you can slice the array however you want and apply mask_and_scale only to the specific items you have selected. This small difference has big performance implications, especially for the "medium-sized" datasets @benbovy refers to. If we changed decode_cf to use dask, you would have to compute a whole chunk in order to see the repr.

So on second thought, maybe the system we have now is better than using dask for "everything lazy."

shoyer (Member) commented Nov 18, 2017

@rabernat actually in #1532 we switched to not displaying a preview of any lazily loaded data on disk -- even if it isn't loaded with Dask. (I was not sure about this change, but I was alone in my reservations.)

I do agree that our lazy arrays serve a useful purpose currently. I would only consider removing them if we can improve Dask so it works just as well for this use case.

shoyer changed the title from "Switch our lazy array classes to use Dask instead" to "Switch our lazy array classes to use Dask instead?" on Nov 20, 2017
fujiisoup mentioned this issue on Feb 9, 2018
shoyer (Member) commented Jul 18, 2018

On a somewhat related note, I am now proposing extending xarray's "lazy array" functionality to include limited support for arithmetic, without necessarily using dask: #2298

rabernat added the topic-arrays (related to flexible array support) label on Jul 12, 2019
dcherian added the topic-lazy array label and removed the topic-arrays (related to flexible array support) label on Apr 19, 2022
jhamman (Member) commented Sep 15, 2023

Should we close this? It seems like we've headed in a different direction with our lazy array implementation.
