keras: model.fit ValueError: I/O operation on closed file
Any idea what could be causing this error? I’ve been trying to solve this for a week. Thanks in advance.
Train on 60816 samples, validate on 15204 samples
Epoch 1/20
60816/60816 [==============================] - 19s - loss: 0.1665 - acc: 0.9597 - val_loss: 0.1509 - val_acc: 0.9605
Epoch 2/20
31200/60816 [==============>...............] - ETA: 8s - loss: 0.1583 - acc: 0.9600
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-e3590f9fe5e6> in <module>()
16
17 my_model = model.fit(train_x, train_y, batch_size=100, nb_epoch=20,
---> 18 show_accuracy=True, verbose=1, validation_data=(test_x, test_y))
19 score = model.evaluate(test_x, test_y, show_accuracy=True, verbose=0)
20 print('Test loss:', score[0])
/usr/local/lib/python2.7/dist-packages/keras/models.pyc in fit(self, X, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, show_accuracy, class_weight, sample_weight)
644 verbose=verbose, callbacks=callbacks,
645 val_f=val_f, val_ins=val_ins,
--> 646 shuffle=shuffle, metrics=metrics)
647
648 def predict(self, X, batch_size=128, verbose=0):
/usr/local/lib/python2.7/dist-packages/keras/models.pyc in _fit(self, f, ins, out_labels, batch_size, nb_epoch, verbose, callbacks, val_f, val_ins, shuffle, metrics)
284 batch_logs[l] = o
285
--> 286 callbacks.on_batch_end(batch_index, batch_logs)
287
288 epoch_logs = {}
/usr/local/lib/python2.7/dist-packages/keras/callbacks.pyc in on_batch_end(self, batch, logs)
58 t_before_callbacks = time.time()
59 for callback in self.callbacks:
---> 60 callback.on_batch_end(batch, logs)
61 self._delta_ts_batch_end.append(time.time() - t_before_callbacks)
62 delta_t_median = np.median(self._delta_ts_batch_end)
/usr/local/lib/python2.7/dist-packages/keras/callbacks.pyc in on_batch_end(self, batch, logs)
168 # will be handled by on_epoch_end
169 if self.verbose and self.seen < self.params['nb_sample']:
--> 170 self.progbar.update(self.seen, self.log_values)
171
172 def on_epoch_end(self, epoch, logs={}):
/usr/local/lib/python2.7/dist-packages/keras/utils/generic_utils.pyc in update(self, current, values)
59 prev_total_width = self.total_width
60 sys.stdout.write("\b" * prev_total_width)
---> 61 sys.stdout.write("\r")
62
63 numdigits = int(np.floor(np.log10(self.target))) + 1
/usr/local/lib/python2.7/dist-packages/ipykernel/iostream.pyc in write(self, string)
315
316 is_child = (not self._is_master_process())
--> 317 self._buffer.write(string)
318 if is_child:
319 # newlines imply flush in subprocesses
ValueError: I/O operation on closed file
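For context, the final ValueError itself is plain Python behaviour: writing to any stream that has already been closed raises exactly this error. A minimal illustration, with nothing Keras-specific (the in-memory `io.StringIO` just stands in for the notebook's stdout stream):

```python
import io

# Stand-in for the notebook's stdout stream: once a stream is closed,
# any further write raises the same ValueError seen in the traceback.
buf = io.StringIO()
buf.close()

try:
    buf.write("progress bar update")
except ValueError as e:
    print(e)  # -> I/O operation on closed file
```

In the traceback above, the closed stream is IPython's `self._buffer`, and the write comes from the Keras progress bar.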
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 16
- Comments: 34 (2 by maintainers)
Commits related to this issue
- Fixes #2110 "ValueError: I/O operation on closed file" This is a workaround for #2110 where calling `model.fit` with `verbose=1` using IPython can intermittently raise "ValueError: I/O operation on c... — committed to scottlawsonbc/keras by scottlawsonbc 8 years ago
- Revert "Fixes #2110 "ValueError: I/O operation on closed file"" This reverts commit ae6933cbc6dc6b00a366394dd3859b035bd06129. — committed to scottlawsonbc/keras by scottlawsonbc 8 years ago
- Fixes #2110 "ValueError: I/O operation on closed file" This is a workaround for #2110 where calling `model.fit` with verbose=1 using IPython can intermittently raise "ValueError: I/O operation on clo... — committed to scottlawsonbc/keras by scottlawsonbc 8 years ago
- added sleep to avoud python bug referred from: https://github.com/fchollet/keras/issues/2110 — committed to bhandras/kaggle-fish by bhandras 7 years ago
I had the same problem (also with IPython), and I solved it by adding a `time.sleep(0.1)` on the line just after `model.fit`. It seems Python (or just IPython?) needs a moment between two fitting calls.

@fchollet Right, I’m experiencing the same with Spyder (because it uses an IPython kernel too). With each epoch completing in about 2s, it ends in the I/O operation error too. `verbose = 0` does the trick. Thanks.

This looks like an IO bug with IPython; you might want to file a bug with the devs. It seems to be triggered by the Keras progbar’s use of sys.stdout.
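The `time.sleep(0.1)` workaround can be sketched as a small wrapper (a sketch only: `fit_with_pause` is a hypothetical helper name, and the dummy class below merely stands in for a real Keras model so the snippet is self-contained):

```python
import time

def fit_with_pause(model, *args, **kwargs):
    # Call fit as usual, then pause briefly so IPython's stdout
    # machinery can settle before the next write, per the workaround
    # suggested in this thread.
    history = model.fit(*args, **kwargs)
    time.sleep(0.1)
    return history

# Stand-in for a real Keras model, just to make the sketch runnable.
class DummyModel(object):
    def fit(self, *args, **kwargs):
        return "history"

print(fit_with_pause(DummyModel()))  # -> history
```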
ipykernel 4.4 is out, and indeed it fixed this. Run `conda install ipykernel` to upgrade if you are on Anaconda, like me. I think this can be closed.
I can confirm that `verbose=2` fixes my case.

I fixed this problem by adding `%%capture` as the first line of the cell.

Instead of writing to sys.stdout directly, could those calls be wrapped in an auxiliary function with a try/except block, to help prevent the entire training run from breaking down if/when this happens?
Although it is clearly not the best solution, I have to say it is very depressing to find that, after 50 hours of training, your model was lost simply because some text could not be printed 😞
I am glad ModelCheckpoint works very well, though!
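The try/except idea could look something like this (a sketch of the proposed wrapper, not the actual Keras progbar code; `safe_write` is a hypothetical name):

```python
import sys

def safe_write(text):
    # Swallow I/O errors on a closed stream so a failed progress-bar
    # update cannot abort a long training run.
    try:
        sys.stdout.write(text)
        sys.stdout.flush()
    except (ValueError, IOError):
        pass  # stream was closed; silently drop the progress output

safe_write("1000/60816 [>.............] - loss: 0.17\r")
```

A checkpoint callback remains the more robust safety net, since it persists weights regardless of what happens to stdout.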
I experienced this problem too. As a workaround I set verbose=2 in the arguments of model.fit(), and it fixed it. Moreover, verbose=2 at least logs the epoch and training accuracy, whereas verbose=0 doesn’t log anything.
I just ran into this issue with ipykernel 4.4.1. I am working from this example with a 2 GB CSV file. I only get around 2-3 million lines in before I hit the `ValueError: I/O operation on closed file` error.

@rilut Setting verbose = 0 strangely results in my run terminating after a single epoch. Any idea what is happening here? I added time.sleep(0.1) too; it isn’t helping much. I am still trying to find a workaround 😦
@artix41 - Thanks! Adding the `time.sleep(0.1)` worked for me.

I had this problem too, and setting verbose=0 in the argument of model.fit() seems to have fixed it.