About: With the rising success of adversarial attacks on many NLP tasks, systems which actually operate in an adversarial scenario need to be reevaluated. For this purpose, we pose the following research question: How difficult is it to fool automatic short answer grading systems? In particular, we investigate the robustness of the state-of-the-art automatic short answer grading system proposed by Sung et al. to cheating in the form of universal adversarial triggers. These are short token sequences that can be prepended to students' answers in an exam to artificially improve their automatically assigned grade. Such triggers are especially critical as they can easily be used by anyone once they are found. In our experiments, we discovered triggers which allow students to pass exams with passing thresholds of up to 50% without answering a single question correctly. Furthermore, we show that such triggers generalize across models and datasets in this scenario, nullifying the defense strategy of keeping grading models or data secret.
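
To make the attack concrete, below is a minimal sketch of how a universal adversarial trigger would be employed at grading time, assuming a BERT-style sentence-pair grader in the spirit of the Sung et al. system. The model path, the `grade` helper, the input format, and the trigger string are all hypothetical placeholders for illustration; the triggers actually discovered in the paper are not reproduced here.

```python
# Sketch: employing a universal adversarial trigger against an automatic
# short answer grading (ASAG) model. Assumes a fine-tuned BERT-style
# sequence classifier that scores (reference answer, student answer) pairs;
# the real system may also condition on the question text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "path/to/finetuned-asag-model"  # hypothetical fine-tuned grader
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def grade(reference_answer: str, student_answer: str) -> float:
    """Return the model's probability that the student answer is correct."""
    inputs = tokenizer(reference_answer, student_answer,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(label = "correct")

reference = "Photosynthesis converts light energy into chemical energy."
wrong_answer = "I do not know."
trigger = "the the answer correct"  # placeholder, not a trigger from the paper

print(grade(reference, wrong_answer))                 # low score expected
print(grade(reference, f"{trigger} {wrong_answer}"))  # artificially inflated
```

The key property is that the same prepended token sequence inflates the score independently of the question, so once a trigger leaks, any student can reuse it; and because the paper shows triggers transfer across models and datasets, keeping the grading model or training data secret does not prevent the attack.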
