nilmtk: FHMM output error
Hello! I am following the tutorial at
https://github.com/nilmtk/nilmtk/blob/master/docs/manual/user_guide/disaggregation_and_metrics.ipynb
I converted the REDD low-frequency data myself, and all steps work fine until I try to disaggregate the test dataset with the trained FHMM.
```
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-25-99fe1197e6f9> in <module>()
      2 output = HDFDataStore(disag_filename, 'w')
      3 # Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
----> 4 fhmm.disaggregate(test_elec.mains(), output, sample_period=60)
      5 output.close()

f:\00 semester project\nilmtk\nilmtk\disaggregate\fhmm_exact.pyc in disaggregate(self, mains, output_datastore, **load_kwargs)
    404                 # Copy mains data to disag output
    405                 output_datastore.append(key=mains_data_location,
--> 406                                         value=pd.DataFrame(chunk, columns=cols))
    407
    408                 if data_is_available:

f:\00 semester project\nilmtk\nilmtk\docinherit.pyc in f(*args, **kwargs)
     44         @wraps(self.mthd, assigned=('__name__', '__module__'))
     45         def f(*args, **kwargs):
---> 46             return self.mthd(obj, *args, **kwargs)
     47
     48         return self.use_parent_doc(f, overridden)

f:\00 semester project\nilmtk\nilmtk\datastore\hdfdatastore.pyc in append(self, key, value)
    160         data in the table, so be careful.
    161         """
--> 162         self.store.append(key=key, value=value)
    163         self.store.flush()
    164

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in append(self, key, value, format, append, columns, dropna, **kwargs)
    917         kwargs = self._validate_format(format, kwargs)
    918         self._write_to_group(key, value, append=append, dropna=dropna,
--> 919                              **kwargs)
    920
    921     def append_to_multiple(self, d, value, selector, data_columns=None,

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in _write_to_group(self, key, value, format, index, append, complib, encoding, **kwargs)
   1262
   1263         # write the object
-> 1264         s.write(obj=value, append=append, complib=complib, **kwargs)
   1265
   1266         if s.is_table and index:

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in write(self, obj, axes, append, complib, complevel, fletcher32, min_itemsize, chunksize, expectedrows, dropna, **kwargs)
   3785         self.create_axes(axes=axes, obj=obj, validate=append,
   3786                          min_itemsize=min_itemsize,
-> 3787                          **kwargs)
   3788
   3789         for a in self.axes:

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in create_axes(self, axes, obj, validate, nan_rep, data_columns, min_itemsize, **kwargs)
   3475         # validate the axes if we have an existing table
   3476         if validate:
-> 3477             self.validate(existing_table)
   3478
   3479     def process_axes(self, obj, columns=None):

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in validate(self, other)
   2921                 raise ValueError(
   2922                     "invalid combinate of [%s] on appending data [%s] "
-> 2923                     "vs current table [%s]" % (c, sax, oax))
   2924
   2925         # should never get here

ValueError: invalid combinate of [values_axes] on appending data [name->values_block_0,cname->values_block_0,dtype->float64,kind->float,shape->(1, 1878)] vs current table [name->values_block_0,cname->values_block_0,dtype->float32,kind->float,shape->None]
```
I then tried to train a Combinatorial Optimisation model. Training works fine, but the output step fails again:
```
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Including vampire_power = 96.8099975586 watts to model...
Estimating power demand for 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])'
Estimating power demand for 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])'
Estimating power demand for 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])'
Estimating power demand for 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])'
Estimating power demand for 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])'
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')
Done loading data all meters for this chunk.
Including vampire_power = 91.1900024414 watts to model...
Estimating power demand for 'ElecMeter(instance=5, building=1, dataset='REDD', appliances=[Appliance(type='fridge', instance=1)])'
Estimating power demand for 'ElecMeter(instance=11, building=1, dataset='REDD', appliances=[Appliance(type='microwave', instance=1)])'
Estimating power demand for 'ElecMeter(instance=8, building=1, dataset='REDD', appliances=[Appliance(type='sockets', instance=2)])'
Estimating power demand for 'ElecMeter(instance=9, building=1, dataset='REDD', appliances=[Appliance(type='light', instance=1)])'
Estimating power demand for 'ElecMeter(instance=6, building=1, dataset='REDD', appliances=[Appliance(type='dish washer', instance=1)])'
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-fecb6a3423fa> in <module>()
      3 output = HDFDataStore(disag_filename, 'w')
      4 # Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
----> 5 co.disaggregate(test_elec.mains(), output, sample_period=60)
      6 output.close()

f:\00 semester project\nilmtk\nilmtk\disaggregate\combinatorial_optimisation.pyc in disaggregate(self, mains, output_datastore, vampire_power, **load_kwargs)
    179             # Copy mains data to disag output
    180             mains_df = pd.DataFrame(chunk, columns=cols)
--> 181             output_datastore.append(key=mains_data_location, value=mains_df)
    182
    183             if data_is_available:

f:\00 semester project\nilmtk\nilmtk\docinherit.pyc in f(*args, **kwargs)
     44         @wraps(self.mthd, assigned=('__name__', '__module__'))
     45         def f(*args, **kwargs):
---> 46             return self.mthd(obj, *args, **kwargs)
     47
     48         return self.use_parent_doc(f, overridden)

f:\00 semester project\nilmtk\nilmtk\datastore\hdfdatastore.pyc in append(self, key, value)
    160         data in the table, so be careful.
    161         """
--> 162         self.store.append(key=key, value=value)
    163         self.store.flush()
    164

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in append(self, key, value, format, append, columns, dropna, **kwargs)
    917         kwargs = self._validate_format(format, kwargs)
    918         self._write_to_group(key, value, append=append, dropna=dropna,
--> 919                              **kwargs)
    920
    921     def append_to_multiple(self, d, value, selector, data_columns=None,

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in _write_to_group(self, key, value, format, index, append, complib, encoding, **kwargs)
   1262
   1263         # write the object
-> 1264         s.write(obj=value, append=append, complib=complib, **kwargs)
   1265
   1266         if s.is_table and index:

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in write(self, obj, axes, append, complib, complevel, fletcher32, min_itemsize, chunksize, expectedrows, dropna, **kwargs)
   3785         self.create_axes(axes=axes, obj=obj, validate=append,
   3786                          min_itemsize=min_itemsize,
-> 3787                          **kwargs)
   3788
   3789         for a in self.axes:

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in create_axes(self, axes, obj, validate, nan_rep, data_columns, min_itemsize, **kwargs)
   3475         # validate the axes if we have an existing table
   3476         if validate:
-> 3477             self.validate(existing_table)
   3478
   3479     def process_axes(self, obj, columns=None):

D:\Program Files\Miniconda\lib\site-packages\pandas\io\pytables.pyc in validate(self, other)
   2921                 raise ValueError(
   2922                     "invalid combinate of [%s] on appending data [%s] "
-> 2923                     "vs current table [%s]" % (c, sax, oax))
   2924
   2925         # should never get here

ValueError: invalid combinate of [values_axes] on appending data [name->values_block_0,cname->values_block_0,dtype->float64,kind->float,shape->(1, 1878)] vs current table [name->values_block_0,cname->values_block_0,dtype->float32,kind->float,shape->None]
```
Please help me!! Thank you!!
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 21 (10 by maintainers)
Commits related to this issue
- Update fhmm_exact.py Trying to solve #493 — committed to nilmtk/nilmtk by nipunbatra 7 years ago
- Update fhmm_exact.py Trying to solve #493 — committed to nilmtk/nilmtk by nipunbatra 7 years ago
- Revert "Fix FHMM output error #493 issue due to newer version of pandas. Tested with pandas 0.19.2." This reverts commit 417ce3ebe8fc03e222c27b797fc2c872aab9edd3. — committed to GeoffreyOnRails/nilmtk by GeoffreyOnRails 7 years ago
I stumbled upon this problem as well while trying to find a quick and easy fix that doesn't require downgrading to pandas 0.17.1. Here is a quick workaround for those trying to get through the guide.
I do not know whether it affects the expected behaviour of the code in other cases.
All I did was change these lines:
https://github.com/nilmtk/nilmtk/blob/ef493f6f62bf226828b79455dc07d5d1dc218f59/nilmtk/disaggregate/fhmm_exact.py#L408-L411
Essentially, the change typecasts the float64 data to float32 before it is appended to the output store, so it matches the dtype of the table already written.
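The idea can be sketched like this (a minimal sketch; the variable names `mains_df` / `'power'` are illustrative, not nilmtk's exact code):

```python
import numpy as np
import pandas as pd

# The mains chunk comes back from the datastore as float64, while the
# appliance tables already written to the HDF5 output are float32, so
# HDFStore.append() rejects the mismatched block.
mains_df = pd.DataFrame({'power': [101.5, 98.2, 102.7]})
assert mains_df['power'].dtype == np.float64  # pandas default dtype

# Downcasting before the append makes the dtypes match the existing table:
mains_df = mains_df.astype(np.float32)
assert mains_df['power'].dtype == np.float32
```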
@HenriqueAPSilva: After investigating, the IndexError looks similar to the one in #471, so I tried the solution of downgrading pandas further, to 0.16.2, and it worked well.
I also pursued @OdysseasKr's path. After applying his modifications manually, I went from the error above, raised on `fhmm.disaggregate(test_elec.mains(), output, sample_period=60)`,
to the following error on the f1_score calculation:
This is caused by a change to the resampling API in pandas 0.18.0 and higher (http://pandas.pydata.org/pandas-docs/version/0.18.0/whatsnew.html#resample-api): `resample()` now produces a `pandas.tseries.resample.DatetimeIndexResampler` instead of a `pandas.core.series.Series`. Since a `DatetimeIndexResampler` is treated by the DataFrame constructor as an array, master_chunk and slave_chunk are constrained to be the same size. Chaining an aggregation such as `.mean()` onto `resample()` produces a plain old Series.
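For illustration, here is a minimal sketch of the API difference (the index and values are made up, not REDD data):

```python
import pandas as pd

# Illustrative data; in nilmtk this would be a chunk of meter readings.
idx = pd.date_range('2011-04-18', periods=6, freq='min')
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=idx)

# Since pandas 0.18, resample() is lazy: it returns a Resampler object,
# not a Series, so code that fed the result straight into a DataFrame breaks.
resampler = s.resample('2min')
assert not isinstance(resampler, pd.Series)

# Chaining an aggregation gives back a plain Series, as before 0.18.
downsampled = s.resample('2min').mean()
assert isinstance(downsampled, pd.Series)
assert list(downsampled.values) == [1.5, 3.5, 5.5]
```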
I've submitted a pull request with these changes (#544).
(By the way, I think master_chunk and slave_chunk should be the same size, but I didn't want to change the behaviour compared to the previous version, even though the difference is only one minute.)