Adaptation of Adversaries [1]
- Adversaries are motivated to transform the test data to reduce the learner's effectiveness.
- Spam filter designers
- Attempt to learn good filters by training their algorithms on Spam (and legitimate) email messages received in the recent past.
- Spammers
- Are motivated to reverse-engineer existing Spam filters and use this knowledge to generate messages that differ enough from the (inferred) training data to circumvent the filters.
Solutions
- Increase the robustness of the learning algorithm to generic training/test data differences via standard methods such as regularization or minimization of worst-case loss [1] (see the first sketch after this list)
- However, these techniques do not account for the adversarial nature of the training/test set discrepancies and may be overly conservative.
- Predictive analytics to anticipate and counter the adversaries [1] (see the second sketch after this list)
- For example, predictions can be made using extrapolation or game-theoretic considerations, and can be employed to transform training instances so that they become similar to (future) test data and therefore provide a more appropriate basis for learning.
- Adopt a time-varying defensive posture to increase adversary uncertainty [1] (see the third sketch after this list)
- Pros
- This approach is flexible, scalable, easy to implement, and hard to reverse-engineer.
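The first sketch below illustrates the regularization idea under stated assumptions: the toy bag-of-words data, the choice of scikit-learn's LogisticRegression, and the C=0.1 penalty strength are illustrative, not from [1].

```python
# Minimal sketch: hardening a spam filter against generic
# training/test differences via L2 regularization.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy bag-of-words features: 200 training messages, 50 tokens.
X_train = rng.integers(0, 3, size=(200, 50)).astype(float)
y_train = rng.integers(0, 2, size=200)  # 1 = spam, 0 = legitimate

# Strong L2 regularization (small C) keeps the weights small, so the
# filter degrades more gracefully when test messages drift away from
# the training distribution.
clf = LogisticRegression(C=0.1, penalty="l2", max_iter=1000)
clf.fit(X_train, y_train)

# Simulated test-time drift: token counts perturbed at test time.
X_test = X_train + rng.normal(0, 0.5, size=X_train.shape)
print("accuracy under drift:", clf.score(X_test, y_train))
```

Note that this hardens against *any* drift, which is exactly why the notes above call such methods potentially overly conservative against a deliberate adversary.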
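The second sketch illustrates the extrapolation idea: estimate the feature drift between two past batches of messages and shift the most recent training instances by the same amount, so they better resemble anticipated (future) test data. The two-period split and the linear-drift assumption are illustrative, not the paper's exact method.

```python
# Minimal sketch: transform training instances toward anticipated
# test data by extrapolating observed drift one step forward.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

X_old = rng.normal(0.0, 1.0, size=(150, 20))  # older message batch
X_new = rng.normal(0.3, 1.0, size=(150, 20))  # recent message batch
y = rng.integers(0, 2, size=150)              # labels for X_new

# Per-feature mean drift between the two periods, extrapolated
# linearly one period ahead (a simplifying assumption).
drift = X_new.mean(axis=0) - X_old.mean(axis=0)
X_anticipated = X_new + drift  # training data shifted toward the future

clf = LogisticRegression(max_iter=1000).fit(X_anticipated, y)
```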
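The third sketch illustrates a time-varying posture: keep a small pool of differently configured classifiers and answer each query with a randomly chosen one, so the deployed decision boundary keeps changing and is harder to reverse-engineer. The pool's contents and the per-query randomization schedule are illustrative assumptions, not the exact construction in [1].

```python
# Minimal sketch: a moving-target filter that randomizes which
# classifier from a trained pool handles each incoming message.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels

# Pool of structurally different models trained on the same data.
pool = [
    LogisticRegression(max_iter=1000).fit(X, y),
    GaussianNB().fit(X, y),
    DecisionTreeClassifier(max_depth=5).fit(X, y),
]

def classify(message_features):
    """Answer each query with a randomly chosen pool member,
    so probing responses reveal no single fixed boundary."""
    model = pool[rng.integers(len(pool))]
    return model.predict(message_features.reshape(1, -1))[0]

print(classify(rng.normal(size=20)))
```

Randomizing per query is one simple schedule; rotating the active model per time window is another option with the same moving-target effect.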
Reference
[1] R. Colbaugh and K. Glass, "Moving Target Defense for Adaptive Adversaries," in Proc. IEEE Intelligence and Security Informatics (ISI), 2013.