keras: AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

Please make sure that the boxes below are checked before you submit your issue. Thank you!

  • Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
  • If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with: pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
  • Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).

I am performing batch learning, and after a few batches I get this error from the following call:

model.fit(Xtrain, Ytrain, batch_size=128, nb_epoch=1,
          verbose=1, validation_split=0.01,
          callbacks=[ModelCheckpoint(weightStr, monitor='val_loss', verbose=0,
                                     save_best_only=True, mode='auto')])

Traceback (most recent call last):

  File "<ipython-input-1-0ab90ed05873>", line 321, in <module>
    callbacks=[ModelCheckpoint(weightStr, monitor='val_loss', verbose=0, save_best_only=True, mode='auto')])

  File "/home/kevin/.local/lib/python2.7/site-packages/keras/models.py", line 620, in fit
    sample_weight=sample_weight)

  File "/home/kevin/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1104, in fit
    callback_metrics=callback_metrics)

  File "/home/kevin/.local/lib/python2.7/site-packages/keras/engine/training.py", line 842, in _fit_loop
    callbacks.on_epoch_end(epoch, epoch_logs)

  File "/home/kevin/.local/lib/python2.7/site-packages/keras/callbacks.py", line 40, in on_epoch_end
    callback.on_epoch_end(epoch, logs)

  File "/home/kevin/.local/lib/python2.7/site-packages/keras/callbacks.py", line 196, in on_epoch_end
    self.progbar.update(self.seen, self.log_values, force=True)

AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

I have no idea why I get this error; it seems to happen randomly. Can anyone point me in the right direction?

Here is the block of code that I am running:

for e in range(numEpoch):
    numOfImgToLoad = 50000  # chunk size; we can tune this
    totalNumberOfImages = len(imagesAndClass)
    runningTotal = 0
    startingPoint = 0
    endingPoint = numOfImgToLoad
    while totalNumberOfImages > 0:
        print "StartingPoint: {}, endingPoint {}".format(startingPoint, endingPoint)
        totalNumberOfImages -= numOfImgToLoad  # subtract the number of images loaded into memory
        if totalNumberOfImages < 0:
            # last, partial chunk
            remainder = totalNumberOfImages + numOfImgToLoad
            (Xtrain, Ytrain) = loadImages(imagesAndClass[startingPoint:remainder])
            Xtrain = np.array(Xtrain).reshape(len(Xtrain), 1, 106, 106)
            runningTotal += remainder
        else:
            (Xtrain, Ytrain) = loadImages(imagesAndClass[startingPoint:endingPoint])
            Xtrain = np.array(Xtrain).reshape(len(Xtrain), 1, 106, 106)
            runningTotal += numOfImgToLoad
            startingPoint = endingPoint + 1
            endingPoint = startingPoint + numOfImgToLoad - 1

        Xtrain = Xtrain.astype('float32')  # cast first so the division below is floating-point
        Xtrain /= 255  # scale pixel values into [0, 1]
        Ytrain = np_utils.to_categorical(Ytrain, len(classes) + 1)
        Ytrain = np.array(Ytrain)
        print "Starting epoch {}".format(e)
        model.fit(Xtrain, Ytrain, batch_size=128, nb_epoch=1,
                  verbose=1, validation_split=0.01,
                  callbacks=[ModelCheckpoint(weightStr, monitor='val_loss', verbose=0,
                                             save_best_only=True, mode='auto')])
        print "Killing Xtrain and resetting"
        del Xtrain
        del Ytrain
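
Tracing the loop's index bookkeeping by hand makes the failure mode visible. The sketch below assumes a hypothetical 120,000-image dataset; with the chunk size of 50,000 above, the final slice comes out empty, so model.fit() receives 0 samples, which matches the condition described in the comments below:

# Hypothetical trace of the chunking arithmetic above (120,000 images assumed):
imagesAndClass = range(120000)
numOfImgToLoad = 50000
totalNumberOfImages = len(imagesAndClass)
startingPoint, endingPoint = 0, numOfImgToLoad

chunkSizes = []
while totalNumberOfImages > 0:
    totalNumberOfImages -= numOfImgToLoad
    if totalNumberOfImages < 0:
        remainder = totalNumberOfImages + numOfImgToLoad
        chunkSizes.append(len(imagesAndClass[startingPoint:remainder]))
    else:
        chunkSizes.append(len(imagesAndClass[startingPoint:endingPoint]))
        startingPoint = endingPoint + 1
        endingPoint = startingPoint + numOfImgToLoad - 1

# The second chunk has 49999 images rather than 50000 because
# `startingPoint = endingPoint + 1` skips an index. The final slice uses
# `remainder` (20000) as its end index after startingPoint has already
# advanced past it (100001), so it is empty.
print(chunkSizes)  # [50000, 49999, 0]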

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 12
  • Comments: 31 (4 by maintainers)

Most upvoted comments

This happens when steps_per_epoch ends up as 0. Make sure your batch size is not greater than the dataset size to avoid it.
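
A minimal sketch of that failure mode and the guard it suggests (hypothetical sizes; the names are illustrative):

# A steps count computed by integer division silently becomes 0 when the
# dataset is smaller than the batch.
num_samples = 100   # e.g. what is left for training after validation_split
batch_size = 128

steps_per_epoch = num_samples // batch_size
print(steps_per_epoch)  # 0 -> no batch ever runs, so ProgbarLogger never
                        # gets the chance to create log_values

if steps_per_epoch == 0:
    raise ValueError("batch_size (%d) is larger than the dataset (%d samples)"
                     % (batch_size, num_samples))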

Have you checked the dimensions of both your x and y for train and test? I had a dimension issue with my y train. Generally the error arises if you try to split data of insufficient size: say the set has only 1 sample and you try to split it, you get this error.

I also encountered this error today and had to set verbose=0 to work around it. Please reopen the issue; it needs to be fixed!

Encountered this error when my training set was an empty array. Using Keras 2.0.9.

A more descriptive error message might be helpful.

For fit(), Keras should throw an exception when validation_split is used and the training set ends up with 0 samples. Example: your last batch in an epoch contains 1 sample and validation_split = 0.3, so the training size is int(1 * 0.7) = 0. When the training set size is 0 you will see this exception. If you look at callbacks.py in Keras, in the ProgbarLogger class the log_values member is created only when a batch starts, so if there is no data in the batch, execution reaches on_epoch_end with log_values undefined.
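
A quick numeric check of that split arithmetic (a simplified sketch of the truncation described above, not the actual Keras source):

# Simplified sketch of how validation_split carves up the data:
# the training portion is truncated toward zero by int().
def split_sizes(num_samples, validation_split):
    split_at = int(num_samples * (1.0 - validation_split))
    return split_at, num_samples - split_at  # (train size, validation size)

print(split_sizes(1, 0.3))    # (0, 1)  -> empty training set, triggers the error
print(split_sizes(100, 0.3))  # (70, 30) -> fine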

Encountered this in 2.1.6 when supplying an empty training set by mistake. IMHO the code should handle this gracefully.

It seems this is not solved yet. I updated Keras yesterday (2/6/2017) and the code still raises this message: AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

Set verbose=0:

model.fit(X, Y, verbose=0)

Or make sure your dataset is bigger than the batch size.

Hi guys, I also hit this issue, and it happened when the sample size was small: e.g., with a sample size of 2 and a model.fit split ratio of 0.2 I got this error, but when I used a sample size > 1000 it disappeared.

Hope this information helps.

Regards, Yifei

If this is a common issue, we need a clear error message to handle it. Please open a PR to add appropriate error handling.
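
A minimal sketch of the kind of up-front check being requested here (a hypothetical helper, not the actual patch that would land in Keras):

def check_validation_split(num_samples, validation_split):
    # Fail fast instead of reaching ProgbarLogger.on_epoch_end with no batches run.
    split_at = int(num_samples * (1.0 - validation_split))
    if split_at == 0:
        raise ValueError(
            "validation_split=%g leaves 0 training samples out of %d; "
            "reduce validation_split or provide more data."
            % (validation_split, num_samples))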

Ran into this issue today. saransh-mehta's suggestion was the solution. I also ran into an issue where the number of samples for the test/train split was not sufficient: train ended up with 0 samples.

Set batch_size to a smaller number. This error occurs when batch_size is set to a value larger than the size of the sample set.

My guess is that the issue is not how big your training set is, but rather the size of the last batch. Example: if the last batch has 3 samples, that gives you 2 samples for the training set and 0 for validation (cast to integer). This is when you would see this exception. So, increase the last batch to a size where the number of validation samples will be > 0. Really surprised that the issue has gone unfixed for such a long time.

I thought of this too, but the problem actually turned out to be more trivial than that. Thanks for your help. In my case, I was referring to the images by a different name, so when the code was supposed to load and split them it couldn't find them under the name I used. So it wasn't a problem in the model, but in a step before it.

Can you post a minimal, short, standalone script to reproduce your issue?