Protecting Chatbots from Toxic Content
There is a paradigm shift in web-based services towards conversational user interfaces. Companies increasingly offer conversational interfaces, or chatbots, to let their customers or employees interact with their services in a more flexible and mobile manner. Unfortunately, this new paradigm faces a major problem, namely toxic content. Toxic content consists of user inputs that raise privacy concerns, may be adversarial or malicious, and can inflict substantial economic, reputational, or legal harm on the chatbot provider. We address this problem with an interdisciplinary approach, drawing upon programming languages, cloud computing, and other disciplines to build protections for chatbots. Our solution, called BotShield, is non-intrusive in that it does not require changes to existing chatbots or underlying conversational platforms. This paper introduces novel security mechanisms, articulates their security guarantees, and illustrates them via case studies.
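The abstract does not describe BotShield's internals, but the non-intrusive idea it names can be illustrated in general terms: a guard that wraps an unmodified chatbot and screens user input before the bot ever sees it. The sketch below is purely hypothetical; the pattern list, function names, and keyword-matching approach are illustrative assumptions, not the paper's actual mechanism (which a real deployment would replace with trained classifiers and policy checks).

```python
import re

# Hypothetical sketch only: BotShield's real design is not given in the abstract.
# Placeholder screening patterns; a production system would not use a keyword list.
TOXIC_PATTERNS = [
    r"\bpassword\b",
    r"\bcredit card\b",
]

def is_toxic(text: str) -> bool:
    """Return True if the user input matches any screening pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in TOXIC_PATTERNS)

def guarded(chatbot):
    """Wrap an existing chatbot callable without modifying the bot itself."""
    def handler(user_input: str) -> str:
        if is_toxic(user_input):
            # Intercepted: the underlying chatbot never receives this input.
            return "Sorry, I cannot process that request."
        return chatbot(user_input)
    return handler

# Example: protect a trivial echo bot.
echo_bot = guarded(lambda text: f"You said: {text}")
```

Because the guard is a wrapper around the chatbot's entry point, the bot and its conversational platform stay unchanged, matching the "non-intrusive" property the abstract claims.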
Thu 8 Nov (time zone: Guadalajara, Mexico City, Monterrey), session 13:30–15:00
14:00, 30-minute talk: Protecting Chatbots from Toxic Content (Onward! Papers). Guillaume Baudart, Julian Dolby, Evelyn Duesterwald, Martin Hirzel, Avraham Shinnar (IBM Research)