Christine Doig - Scale your data, not your process: Welcome to the Blaze ecosystem
Christine Doig - Scale your data, not your process: Welcome to the Blaze ecosystem
[EuroPython 2015] [21 July 2015] [Bilbao, Euskadi, Spain]

NumPy and Pandas have revolutionized data processing and munging in the Python ecosystem. As data and systems grow more complex, moving and querying data become more difficult. Python already has excellent tools for in-memory datasets, but we inevitably want to scale this processing and take advantage of additional hardware. This is where Blaze comes in, providing a uniform interface to a variety of technologies and abstractions for migrating and analyzing data. Supported backends include databases such as Postgres and MongoDB, on-disk storage formats such as PyTables, BColz, and HDF5, and distributed systems such as Hadoop and Spark.

This talk will introduce the Blaze ecosystem, which includes:

- Blaze (data querying): [http://blaze.pydata.org/en/latest/][1]
- Odo (data migration): [http://odo.readthedocs.org/en/latest/][2]
- Dask (task scheduler): [http://dask.pydata.org/en/latest/][3]
- DyND (dynamic, multidimensional arrays): [https://github.com/libdynd/dynd-python][4]
- Datashape (data description): [http://datashape.pydata.org/][5]

Attendees will get the most out of this talk if they are familiar with NumPy and Pandas, have intermediate Python programming skills, and/or experience with large datasets.

[1]: http://blaze.pydata.org/en/latest/
[2]: http://odo.readthedocs.org/en/latest/
[3]: http://dask.pydata.org/en/latest/
[4]: https://github.com/libdynd/dynd-python
[5]: http://datashape.pydata.org/
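To give a flavour of the uniform interface described above, here is a minimal, illustrative sketch (not taken from the talk), assuming a hypothetical `accounts.csv` with `name` and `amount` columns; swapping the URI passed to `Data()` is roughly all that changes when the data lives in Postgres, MongoDB, HDF5, or Spark instead:

```python
# Minimal sketch: Pandas-like querying with Blaze and data migration with Odo.
# 'accounts.csv' and its columns are hypothetical placeholders.
import pandas as pd
from blaze import Data, by
from odo import odo

accounts = Data('accounts.csv')          # wrap a data source behind one interface

# Total amount per name, expressed once and translated to whatever backend holds the data.
totals = by(accounts.name, total=accounts.amount.sum())

# Materialize the result in memory as an ordinary DataFrame.
print(odo(totals, pd.DataFrame))

# Odo can also migrate the raw data between stores, e.g. CSV -> SQLite table.
odo('accounts.csv', 'sqlite:///accounts.db::accounts')
```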
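When the data no longer fits in memory, Dask's task scheduler runs the same style of computation over chunks; a minimal sketch, assuming hypothetical partitioned files matching `accounts-*.csv`:

```python
# Minimal sketch of out-of-core computation with dask.dataframe.
# The glob 'accounts-*.csv' and its columns are hypothetical placeholders.
import dask.dataframe as dd

df = dd.read_csv('accounts-*.csv')         # lazy: builds a task graph, reads nothing yet
result = df.groupby('name').amount.mean()  # still lazy, still Pandas-like
print(result.compute())                    # the scheduler executes the graph chunk by chunk
```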
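Datashape is the small description language the other tools share for talking about the type and shape of data; a minimal sketch:

```python
# Minimal sketch of Datashape: parse a description and infer one from data.
import numpy as np
from datashape import dshape, discover

print(dshape('2 * {name: string, amount: int64}'))  # two records of name/amount
print(discover(np.ones((3, 4))))                    # inferred shape/type, e.g. 3 * 4 * float64
```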