Despite the impressive (and sometimes even superhuman) accuracy of machine learning on tasks such as object recognition, speech recognition, natural language processing, and playing Go, classifiers perform poorly in the presence of small, imperceptible, yet adversarial (opposing) perturbations of the input sample. Beyond being an intriguing phenomenon, the existence of such “adversarial examples” exposes a severe vulnerability in current machine learning (ML) systems and calls into question the future of ML and Artificial Intelligence (AI). We are witnessing a rising level of hostile behavior in many application domains, including email (spamming), biometric systems, voice assistants, web search, and pay-per-click advertising, to name a few. In this special session, we invite contributions from researchers studying the behavior of machine learning systems susceptible to attacks that disrupt the systems they were intended to benefit, as well as appropriate defense mechanisms that can be incorporated to prevent such attacks.
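To make the phenomenon concrete, the sketch below shows one widely known way such a perturbation can be crafted: a fast gradient sign method (FGSM) style step, written here in PyTorch. It is only an illustrative example under assumed placeholders (the model, inputs, labels, and epsilon budget are hypothetical), not a method prescribed by this session.

```python
# Minimal FGSM-style sketch: nudge an input in the direction that increases the
# classifier's loss, keeping the change small (bounded by epsilon per pixel).
# `model`, `x`, `y`, and `epsilon` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x for classifier `model`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels y
    loss.backward()                       # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()   # small step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach() # keep pixel values in a valid range
```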
Related topics include, but are not limited to:
Dr. M.V. Joshi (mv_joshi@daiict.ac.in) | Dr. Shruti Bhilare (shruti_bhilare@daiict.ac.in)
Dr. Srimanta Mandal (srimanta_mandal@daiict.ac.in) | Dr. Avik Hati (avikhati@nitt.edu)
(After login, choose Special Session II)
Accepted, registered, and presented papers can be published by SPIE and included in the SPIE Digital Library, which is provided to the Web of Science Conference Proceedings Citation Index-Science, Scopus, Ei Compendex, and other indexing services.