adversarial-robustness-toolbox: ART does not work with Keras Embedding layers
Describe the bug
I am unable to create any instances of art.classifiers.KerasClassifier whenever the underlying Keras model contains an Embedding layer. Using the TensorFlow backend, this invariably leads to a TypeError: Can not convert a NoneType into a Tensor or Operation.
To Reproduce
Steps to reproduce the behavior:
- Create any Keras model with an Embedding layer on the TensorFlow backend.
- Attempt to instantiate art.classifiers.KerasClassifier on it.
- Watch it fail.
Expected behavior
I expected ART to simply return an instance of KerasClassifier as it usually does.
Screenshots
N/A, but here’s a minimal non-working example:
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Embedding, LSTM
from art.classifiers import KerasClassifier

model = Sequential()
model.add(Embedding(100, 128, input_length=50))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['binary_accuracy'])

# Fails here with: TypeError: Can not convert a NoneType into a Tensor or Operation
classifier = KerasClassifier((0, 1), model=model)
classifier.fit(x_train, y_train, nb_epochs=10, batch_size=128)  # x_train/y_train: any integer-token data matching the input shape
System information:
- Ubuntu 18.04 LTS
- Python version 3.6.5
- ART version 0.5.0
- TensorFlow version 1.12.0
- Keras version 2.2.4
About this issue
- State: closed
- Created 5 years ago
- Comments: 22 (10 by maintainers)
Commits related to this issue
- Merge pull request #33 from MATHSINN/dev Optimize FGSM for minimal perturbation (closes #26 ) — committed to imolloy/adversarial-robustness-toolbox by deleted user 6 years ago
Did it (hopefully in the correct way this time).
Btw, going back to the “clip values” point: a colleague of mine (IMHO correctly) suggested that the lower bound for the perturbation should be 0, since it is injected (at least in the example) after a relu, so the perturbation should not introduce negative values. A sketch of this is just below.
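For illustration, a minimal sketch of that clip range, assuming the ART 0.5-era positional clip_values argument used in the repro above; the toy head model and the random features are placeholders, not part of the original discussion:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from art.classifiers import KerasClassifier

# Toy stand-in for a model that consumes post-relu features
# (names and shapes here are illustrative placeholders).
head = Sequential()
head.add(Dense(10, activation='softmax', input_shape=(128,)))
head.compile(loss='categorical_crossentropy', optimizer='rmsprop')

feats = np.random.rand(32, 128).astype('float32')  # relu outputs are >= 0
# Lower clip bound 0 so the attack cannot push the features negative.
classifier = KerasClassifier((0., float(feats.max())), model=head)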
Sorry @ririnicolae, I read this just now. Yes, I can do it.
I think you are totally right about the clip_values. I tried removing it and I confirm that it works. As for the example, I cannot share the very same one because it’s based on private data, but, if this works for you, I can put together an example showing how to adversarially perturb one of the conv layers in the middle of the mnist example (a rough sketch of the idea is below).
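For reference, a hedged sketch of what such an example could look like: split a toy MNIST-style network at an intermediate conv layer, wrap the back half in KerasClassifier, and run FGSM on the intermediate activations. The architecture, layer names, eps value, and placeholder data are all assumptions, not taken from the private example, and the attack API is assumed to be the ART 0.5-era one:

import numpy as np
from keras.models import Sequential, Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from art.classifiers import KerasClassifier
from art.attacks import FastGradientMethod

# Toy MNIST-style network (illustrative; not the private model).
full = Sequential()
full.add(Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1), name='conv1'))
full.add(MaxPooling2D((2, 2)))
full.add(Conv2D(32, (3, 3), activation='relu', name='conv2'))
full.add(Flatten())
full.add(Dense(10, activation='softmax'))
full.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# Front half: input image -> activations entering conv2.
front = Model(inputs=full.input, outputs=full.get_layer('conv2').input)

# Back half: rebuild conv2 onwards as a standalone model over those
# activations (layers are reused, so weights stay shared with `full`).
acts_in = Input(shape=front.output_shape[1:])
x = acts_in
for layer in full.layers[2:]:
    x = layer(x)
back = Model(inputs=acts_in, outputs=x)
back.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# Perturb the intermediate activations; clip to [0, max] per the relu
# point above. The images here are random placeholders for real data.
x_images = np.random.rand(8, 28, 28, 1).astype('float32')
acts = front.predict(x_images)
classifier = KerasClassifier((0., float(acts.max())), model=back)
attack = FastGradientMethod(classifier, eps=0.1)
adv_acts = attack.generate(acts)  # adversarially perturbed mid-layer features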
@step8p The PR for issue #49 will be in tomorrow; that would give you access to the same workaround that I suggested to @cr019283.