Is your feature request related to a problem? Please describe.
I'm looking to run PGD on ART's PyTorch Faster-RCNN using the xView dataset. This dataset contains images of varying shapes, so in order to use batch_size > 1, the inputs are stored as 1D numpy object arrays. For example, if using batch_size=16, the input x has shape (16,) and dtype np.object. x[i] would be of dtype float (or int) and of shape (image_i_height, image_i_width, 3).
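For concreteness, a ragged batch like the one described can be built as a 1D object array (the image shapes below are made up for illustration; xView images simply vary in height and width):

```python
import numpy as np

# Hypothetical per-image shapes -- in xView each image has its own height/width.
shapes = [(300, 400, 3), (512, 512, 3), (256, 640, 3)]

# A 1D object array lets images of different shapes share one batch.
x = np.empty(len(shapes), dtype=object)
for i, shape in enumerate(shapes):
    x[i] = np.random.rand(*shape).astype(np.float32)

print(x.shape)     # (3,)
print(x.dtype)     # object
print(x[0].shape)  # (300, 400, 3)
```

(Recent NumPy versions deprecate the `np.object` alias in favor of the builtin `object`; the check described in this issue is the same either way.)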
Describe the solution you'd like
Based on the testing I've done locally, it appears there are two places where modifications would need to be made:
1. The loss_gradient() of the PyTorch Faster-RCNN. The attack initially crashes here: np.stack() requires that the elements of grad_list all have the same shape, which in my case they do not. A solution could be to check if x.dtype == np.object, and if so, make grads a 1D object array and loop over the i elements of the batch, assigning grads[i] the gradient of image i.
2. The attack next crashes in _apply_perturbation() of fast_gradient.py, here. The same kind of logic as in (1) could be used to keep np.clip() from breaking.
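The object-array branch proposed for (1) might look like the following sketch. The function name `assemble_gradients` is hypothetical; in ART the logic would live inside the estimator's loss_gradient() where grad_list is built:

```python
import numpy as np

def assemble_gradients(grad_list, x):
    """Combine per-image gradients into one array.

    Falls back to a 1D object array when the batch holds images of
    differing shapes, since np.stack() requires uniform shapes.
    """
    if x.dtype == object:
        grads = np.empty(x.shape[0], dtype=object)
        for i in range(x.shape[0]):
            # Each gradient keeps its own (h_i, w_i, 3) shape.
            grads[i] = grad_list[i]
        return grads
    # Homogeneous batch: the existing np.stack() path still applies.
    return np.stack(grad_list)
```

For a ragged batch the result has shape `(batch_size,)` and dtype `object`, mirroring the input `x`; for a uniform batch the behavior is unchanged.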
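The fix proposed for (2) can be sketched the same way. The name `clip_perturbed` is hypothetical; inside _apply_perturbation() the idea is simply to loop when the batch is an object array, since np.clip() cannot broadcast over ragged contents:

```python
import numpy as np

def clip_perturbed(x_adv, clip_min, clip_max):
    """Clip adversarial examples to the valid input range.

    Object arrays (ragged batches) need a per-image loop because
    np.clip() only operates on arrays with a uniform shape.
    """
    if x_adv.dtype == object:
        for i in range(x_adv.shape[0]):
            x_adv[i] = np.clip(x_adv[i], clip_min, clip_max)
        return x_adv
    return np.clip(x_adv, clip_min, clip_max)
```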
lcadalzo changed the title from "Allow PyTorch Faster-RCNN to receive np object arrays as input" to "Enable PGD attack on PyTorch Faster-RCNN using np object arrays as input" on Oct 29, 2020.
Additional context
It appears this issue might've surfaced recently for another scenario (perhaps ASR?), because I see this kind of logic already implemented here and [here](https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/dev_1.4.2/art/attacks/evasion/fast_gradient.py#L374).