We can create an HDF5 file using the HDFStore class provided by Pandas:
import numpy as np
from pandas import HDFStore, DataFrame

# create (or open) an hdf5 file and open it in append mode
hdf = HDFStore('storage.h5')

Now we can store a dataset into the file we just created:
df = DataFrame(np.random.rand(5,3), columns=('A','B','C'))
# put the dataset in the storage
hdf.put('d1', df, format='table', data_columns=True)

The storage behaves like a dictionary, and we can access our data using the name of the dataset as the key:
print(hdf['d1'].shape)
(5, 3)

The data in the storage can be manipulated. For example, we can append new data to the dataset we just created:
hdf.append('d1', DataFrame(np.random.rand(5,3), columns=('A','B','C')), format='table', data_columns=True)
hdf.close()  # closes the file

There are many ways to open an hdf5 storage: we could use the constructor of the HDFStore class again, but the function read_hdf also lets us query the data:
from pandas import read_hdf
# this query selects the columns A and B
# where the value of A is greater than 0.5
hdf = read_hdf('storage.h5', 'd1', where=['A>.5'], columns=['A','B'])

At this point, we have a storage which contains a single dataset. The structure of the storage can be organized using groups. In the following example we add three different datasets to the hdf5 file, two in the same group and one in a different group:
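The same kind of query can also be run with HDFStore.select, which is convenient when the store is already open. Here is a minimal sketch under the same setup as above (the file name 'storage_demo.h5' is just an illustrative choice):

```python
import numpy as np
from pandas import HDFStore, DataFrame

# build a small table-format store to query
df = DataFrame(np.random.rand(10, 3), columns=('A', 'B', 'C'))
hdf = HDFStore('storage_demo.h5', mode='w')
hdf.put('d1', df, format='table', data_columns=True)

# select rows where A > 0.5, keeping only columns A and B
subset = hdf.select('d1', where='A > 0.5', columns=['A', 'B'])
print(subset.columns.tolist())  # ['A', 'B']
hdf.close()
```

Note that the where clause only works on datasets stored with format='table' and on columns declared in data_columns.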
hdf = HDFStore('storage.h5')
hdf.put('tables/t1', DataFrame(np.random.rand(20,5)))
hdf.put('tables/t2', DataFrame(np.random.rand(10,3)))
hdf.put('new_tables/t1', DataFrame(np.random.rand(15,2)))

Our hdf5 storage now looks like this:
print(hdf)

File path: storage.h5
/d1               frame_table  (typ->appendable,nrows->10,ncols->3,indexers->[index],dc->[A,B,C])
/new_tables/t1    frame        (shape->[15,2])
/tables/t1        frame        (shape->[20,5])
/tables/t2        frame        (shape->[10,3])

On the left we can see the hierarchy of the groups added to the storage, in the middle we have the type of each dataset, and on the right there is the list of attributes attached to the dataset. Attributes are pieces of metadata you can attach to objects in the file; the attributes we see here are automatically created by Pandas in order to describe the information required to recover the data from the hdf5 storage system.
Once opened and the data retrieved, is it necessary to close it?
It's always better to close it.
You can use a context manager as well:

with HDFStore('store.h5') as store:
    store.keys()
Thanks for this great introduction.
I have just one question, though: how can I query and filter specific columns if the column names have a space in them? The string representation of the argument where=['A>0.5'] would seem to be problematic if 'A' were instead something like 'A [meters]'.
Thanks!
Try inserting a forward slash in front of the white space, i.e. 'Distance Meters' -> 'Distance/ Meters'
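If the slash escape does not work with a given pandas version, a simple alternative is to rename the columns to plain identifiers before storing, so the where expression stays valid. A sketch under that assumption (the column name 'A [meters]' and file name 'storage_spaces.h5' are hypothetical):

```python
import numpy as np
from pandas import HDFStore, DataFrame

# hypothetical column name containing spaces and brackets
df = DataFrame(np.random.rand(50, 1), columns=['A [meters]'])

# rename to a query-safe identifier before storing
df = df.rename(columns={'A [meters]': 'A_meters'})

hdf = HDFStore('storage_spaces.h5', mode='w')
hdf.put('d1', df, format='table', data_columns=True)
subset = hdf.select('d1', where='A_meters > 0.5')
hdf.close()

print((subset['A_meters'] > 0.5).all())
```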