By “attack” the authors mean finding ways to trick the network into classifying inputs incorrectly. For example, you might attack a face-recognition network by constructing inputs that a human still recognizes as faces, but the network does not.
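The authors' specific attack isn't described here, but one standard way to construct such inputs is the fast gradient sign method (FGSM): nudge each input dimension in the direction that increases the model's loss, so a tiny perturbation flips the prediction. A minimal sketch on a toy two-dimensional linear classifier (the weights, input, and perturbation budget are all made-up values for illustration, not the authors' method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "face detector"; weights and bias are invented for this sketch.
w = np.array([1.0, -1.0])
b = 0.0

x = np.array([0.5, -0.5])           # input the model classifies as a face (class 1)
clean_pred = int(sigmoid(w @ x + b) > 0.5)

# FGSM: step the input in the sign of the loss gradient w.r.t. the input.
# For logistic loss with true label y = 1, dL/dx = (sigmoid(z) - y) * w.
y = 1.0
grad = (sigmoid(w @ x + b) - y) * w
eps = 0.6                           # perturbation budget (large, since this toy is 2-D)
x_adv = x + eps * np.sign(grad)

adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)
print(clean_pred, adv_pred)         # the perturbed input flips the classification
```

In a real attack on an image classifier the same idea applies pixel-wise, with `eps` kept small enough that the perturbed image still looks like a face to a human.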