keras: Keras Progbar Logger is not working as expected

When using model.fit_generator, the progress bar does not work as expected.

As noted on StackOverflow, the following verbosity levels exist (which should be explained in the documentation):

  • 0: No logging
  • 1: Displaying a progress bar, updated for each batch
  • 2: Displaying one line per epoch

Verbosity modes 0 and 2 work as expected, but mode 1 (the default) produces the following output:

Epoch 2/100
   64/50000 [..............................] - ETA: 323s - loss: 2.3547 - acc: 0.0625
  128/50000 [..............................] - ETA: 181s - loss: 2.3319 - acc: 0.0781
  192/50000 [..............................] - ETA: 133s - loss: 2.3233 - acc: 0.1042
  256/50000 [..............................] - ETA: 110s - loss: 2.3243 - acc: 0.1055
  384/50000 [..............................] - ETA: 86s - loss: 2.3267 - acc: 0.0990 
  512/50000 [..............................] - ETA: 73s - loss: 2.3213 - acc: 0.1094
  640/50000 [..............................] - ETA: 66s - loss: 2.3184 - acc: 0.1109
  704/50000 [..............................] - ETA: 63s - loss: 2.3188 - acc: 0.1080
  832/50000 [..............................] - ETA: 59s - loss: 2.3177 - acc: 0.1058
  960/50000 [..............................] - ETA: 56s - loss: 2.3176 - acc: 0.1031
 1088/50000 [..............................] - ETA: 53s - loss: 2.3146 - acc: 0.1048

I have already identified the problem in the source code: it uses both \b and \r, which does not work as expected on Windows, or on Ubuntu 16.04 with PyCharm 2016.3.3. I was able to fix the issue by simply removing line 257 (sys.stdout.write('\b' * prev_total_width)), because \r is a carriage return to the start of the line anyway. I’ve also tested this on Linux, and removing line 257 does not break existing functionality.

Is there a reason why this additional \b exists? Is it required on Mac? If not, I would file a pull request to remove that line and fix the progress bar output on Windows.
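For illustration, here is a minimal sketch of the proposed behavior (hypothetical code, not the actual Keras implementation), showing that \r alone is enough to redraw a single-line bar without any backspaces:

```python
import io

def draw_bar(stream, current, total, width=30):
    """Redraw a one-line progress bar using only '\r', no '\b' backspaces."""
    filled = int(width * current / total)
    bar = "=" * filled + "." * (width - filled)
    # '\r' moves the cursor back to column 0, so the previous
    # frame is simply overwritten by the next write.
    stream.write("\r%5d/%d [%s]" % (current, total, bar))
    stream.flush()

if __name__ == "__main__":
    out = io.StringIO()
    for step in (64, 128, 192):
        draw_bar(out, step, 50000)
    assert "\b" not in out.getvalue()       # no backspaces needed
    assert out.getvalue().count("\r") == 3  # one rewind per redraw
```

On terminals that honor carriage returns (including cmd.exe), each write simply overwrites the previous frame, which is why the extra \b run looks redundant.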

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 13
  • Comments: 55 (5 by maintainers)

Most upvoted comments

[screenshot: weird_output_keras]

I don’t know if this is related, but I was getting output like this from Keras. I’m currently attempting to upgrade keras, cudatoolkit, and tensorflow to see if that has any effect.

@kotoroshinoto I had the exact same issue. After I removed the tqdm import, everything works fine again.

https://github.com/bstriner/keras-tqdm

^ in case you were wondering what I was talking about

I am getting a similar issue in jupyter notebooks right now

Managed to fix the problem by making the terminal window bigger.

The solution from @ibutenko unfortunately did not work for me; however, I found a fix that does not require modifying any sources: install ipykernel and import it in your code:

    pip install ipykernel

Then add import ipykernel to your code.

In fact, in the Keras generic_utils.py file, one problematic line was (for me):

            if self._dynamic_display:
                sys.stdout.write('\b' * prev_total_width)
                sys.stdout.write('\r')
            else:
                sys.stdout.write('\n')

And the value self._dynamic_display was initialized as:

        self._dynamic_display = ((hasattr(sys.stdout, 'isatty') and
                                  sys.stdout.isatty()) or
                                 'ipykernel' in sys.modules)

So, loading ipykernel added it to sys.modules and fixed the problem for me.
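In other words, the check boils down to something like the following sketch (paraphrased from the thread, not the verbatim Keras source): output is treated as a dynamic display if stdout is a TTY, or if ipykernel has been imported.

```python
import io
import sys

def is_dynamic_display(stdout=None, modules=None):
    """Treat output as dynamic if stdout is a TTY, or if ipykernel is loaded."""
    stdout = sys.stdout if stdout is None else stdout
    modules = sys.modules if modules is None else modules
    return (hasattr(stdout, "isatty") and stdout.isatty()) or "ipykernel" in modules

if __name__ == "__main__":
    fake_stdout = io.StringIO()  # isatty() returns False for StringIO
    assert is_dynamic_display(fake_stdout, modules={}) is False
    # Importing ipykernel puts it in sys.modules, flipping the check:
    assert is_dynamic_display(fake_stdout, modules={"ipykernel": object()}) is True
```

This is why simply running import ipykernel makes Keras take the \r-based dynamic-display path even when stdout is not a real TTY, as in many IDE consoles and notebooks.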

@kotoroshinoto I had the exact same issue. After I removed the tqdm import, everything works fine again.

@QRiner This fixed my issue. I commented out the from tqdm import tqdm line and the progress bar behavior went back to normal.

I got output very similar to @kotoroshinoto fitting models in a Jupyter notebook. I installed keras-tqdm, and it worked perfectly. In your fit function, set verbose=0, and callbacks=[TQDMNotebookCallback()].

My fix is the following:

Comment out lines 302 to 306 of generic_utils.py and replace them with a single write of the \r character, like so:

        #if self._dynamic_display:   
        #    sys.stdout.write('\b' * prev_total_width)
        #    sys.stdout.write('\r')
        #else:
        #    sys.stdout.write('\n')
        sys.stdout.write('\r')    

This produces the expected behavior for me: Running nohup myprogram.py & and then tail -f nohup.out shows the progress bar moving rightward at the bottom line of the terminal, instead of taking up 100s or 1000s of lines as described above.

Getting the same thing with Keras 2.0.8 in a notebook too.

I’ve got this same issue on windows w/ jupyter, except on windows chrome it produces output like this: https://i.imgur.com/Iko46QP.png

On Firefox on Linux the same issue is happening, but Firefox does not render the \b character.

Can confirm that deleting the line @apacha mentions fixes the issue.

For anyone getting this error in newer Keras or in Google Colab, try adding IPython:

import tensorflow as tf
import numpy as np
from tensorflow import keras
import IPython
%matplotlib inline

model = tf.keras.Sequential([
    keras.layers.Dense(units = 1, input_shape=[1])                             
])

model.compile(
    optimizer = 'adam',
    loss = 'mean_squared_error'
)
model.summary()

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)

from tqdm.keras import TqdmCallback
model.fit(xs, ys, epochs=500, verbose=0, callbacks=[TqdmCallback(verbose=0)])

I’m seeing this behaviour with keras 2.0.8 in a jupyter notebook

@soomiles I’m in the same boat. I made the PR so I wouldn’t have to add that line to TF every time I clean-install it; however, it’s not being included in the latest TF releases. Not sure what’s going on, I might open an issue.

Edit: Opened issue here: https://github.com/tensorflow/tensorflow/issues/38883


I have updated Keras generic_utils.py as follows to make it work:

        # sys.stdout.write(bar)   # <<<<<<<<<<<<<< commented out
        sys.stdout.write(bar + info)  # <<<<<<<<<<<<<< added
        # sys.stdout.write(info)  # <<<<<<<<<<<<<< commented out
        sys.stdout.flush()

I don’t know why this is happening though… Odd.

As recommended by @juharris, tqdm actually looks very much like the right tool here; it could replace the current approach, which does not work on every system.

Apparently not… My PR was rejected, and the maintainers have been ignoring this for five months now. 😡

But you can monkey-patch your installation: find generic_utils.py in your Python installation folder and comment out the line as described above.

I had the same problem as @kotoroshinoto, on Spyder 4.1.1 and TF 2.1.0.

    epoch 1/5
    1560/59563 […] - ETA: 3:26 - train_loss: 0.0106 - ETA: 14:17 - train_loss: 0.0518 - ETA: 10:36 - train_loss: 0.0486 - ETA: 6:13 - train_loss: 0.0515 - ETA: 5:48 - train_loss: 0.0507 - ETA: 4:39 - train_loss: 0.0766 - ETA: 4:31 - train_loss: 0.0364 - ETA: 4:04 - train_loss: 0.0206 - ETA: 4:01 - train_loss: 0.0182 - ETA: 3:46 - train_loss: 0.0144 - ETA: 3:44 - train_loss: 0.0060 - ETA: 3:34 - train_loss: 0.0041 - ETA: 3:33 - train_loss: 0.0044 - ETA: 3:27 - train_loss: 0.0169

I solved the problem by increasing the Progbar update interval to ~0.5s.

tf.keras.utils.Progbar(target, stateful_metrics=metrics_names, interval=0.5)
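The interval argument rate-limits how often the bar is redrawn; conceptually it is just a time-based guard, as in this stdlib sketch (a simplified illustration, not the actual Progbar code):

```python
import time

class ThrottledBar:
    """Redraw at most once per `interval` seconds, like Progbar's interval arg."""

    def __init__(self, interval=0.5, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last_draw = float("-inf")
        self.draw_count = 0

    def update(self):
        now = self.clock()
        if now - self.last_draw >= self.interval:
            self.last_draw = now
            self.draw_count += 1  # a real bar would sys.stdout.write('\r...') here

if __name__ == "__main__":
    # Simulate 100 batch updates arriving 0.01 s apart, using a fake clock.
    t = {"now": 0.0}
    bar = ThrottledBar(interval=0.5, clock=lambda: t["now"])
    for _ in range(100):
        bar.update()
        t["now"] += 0.01
    assert bar.draw_count == 2  # one redraw per 0.5 s window, not one per batch
```

With fewer redraws per second, consoles that can’t overwrite the current line at least produce far fewer junk lines.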

I’ve also fixed the issue reported by @kotoroshinoto by removing tqdm import and restarting the kernel.

I was getting a weird progress bar in PyCharm with Keras 2.1.3, but it works fine with the Keras bundled with TensorFlow (version 2.0.8-tf). The weird progress bar prints a new progress bar on a new line for each batch processed, so I was getting 1000+ bars for a single epoch, as I had 1000+ batches of input. I changed line 339 from sys.stdout.write('\n') to sys.stdout.write('\r') and that fixed the issue. Is this a valid fix that works on every platform? Should I send a pull request?

Tried this fix on the newest Keras 2.0.2 and something seems to have changed, because if I remove line 257 now, the output resembles the following:

762/782 [============================>.] - ETA: 0s - loss: 1.4444 - acc: 0.4882769/782 [============================>.] - ETA: 0s - loss: 1.4417 - acc: 0.4894775/782 [============================>.] - ETA: 0s - loss: 1.4387 - acc: 0.4906Epoch 00000: val_acc improved from -inf to 0.57940, saving model to assignment_2.h5
782/782 [==============================] - 9s - loss: 1.4362 - acc: 0.4914 - val_loss: 1.2375 - val_acc: 0.5794
Epoch 2/100
778/782 [============================>.] - ETA: 0s - loss: 1.0792 - acc: 0.6277Epoch 00001: val_acc improved from 0.57940 to 0.63520, saving model to assignment_2.h5
782/782 [==============================] - 7s - loss: 1.0793 - acc: 0.6275 - val_loss: 1.0735 - val_acc: 0.6352
Epoch 3/100
757/782 [============================>.] - ETA: 0s - loss: 0.9665 - acc: 0.6693763/782 [============================>.] - ETA: 0s - loss: 0.9668 - acc: 0.6693769/782 [============================>.] - ETA: 0s - loss: 0.9663 - acc: 0.6693775/782 [============================>.] - ETA: 0s - loss: 0.9659 - acc: 0.6694Epoch 00002: val_acc improved from 0.63520 to 0.67660, saving model to assignment_2.h5
782/782 [==============================] - 7s - loss: 0.9665 - acc: 0.6692 - val_loss: 0.9512 - val_acc: 0.6766
Epoch 4/100
 97/782 [==>...........................] - ETA: 5s - loss: 0.9059 - acc: 0.6862103/782 [==>...........................] - ETA: 5s - loss: 0.9078 - acc: 0.6849110/782 [===>..........................] - ETA: 5s - loss: 0.9109 - acc: 0.6844116/782 [===>..........................] - ETA: 5s - loss: 0.9073 - acc: 0.6856

which is even worse than printing one entry per line.

Update: Hmm… I’m now getting this in Keras 1.2.2 too. I’ll check whether this new behavior might be a PyCharm bug.