Protecting Chatbots from Toxic Content
There is a paradigm shift in web-based services towards conversational user interfaces. Companies increasingly offer conversational interfaces, or chatbots, so that their customers and employees can interact with their services in a more flexible and mobile manner. Unfortunately, this new paradigm faces a major problem: toxic content. Toxic content consists of user inputs to chatbots that raise privacy concerns or are adversarial or malicious, and it can inflict substantial economic, reputational, or legal harm on the chatbot provider. We address this problem with an interdisciplinary approach, drawing upon programming languages, cloud computing, and other disciplines to build protections for chatbots. Our solution, called BotShield, is non-intrusive in that it requires no changes to existing chatbots or to the underlying conversational platforms. This paper introduces novel security mechanisms, articulates their security guarantees, and illustrates them via case studies.
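The paper does not include BotShield's implementation, but the non-intrusive design it describes can be illustrated with a minimal sketch: a guard wraps an existing chatbot function unchanged, screening user inputs before they reach the bot. The names (`guard`, `is_toxic`, `BLOCKLIST`) and the keyword heuristic are hypothetical stand-ins for a real toxicity classifier, not the paper's mechanism.

```python
from typing import Callable

# Toy stand-in for a real toxicity/privacy classifier (assumption, not
# BotShield's actual detection logic).
BLOCKLIST = {"password", "ssn"}

def is_toxic(user_input: str) -> bool:
    """Flag inputs containing sensitive or malicious terms (toy heuristic)."""
    return any(term in user_input.lower() for term in BLOCKLIST)

def guard(chatbot: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an existing chatbot without modifying it: toxic inputs are
    intercepted before the chatbot or its platform ever sees them."""
    def guarded(user_input: str) -> str:
        if is_toxic(user_input):
            return "Sorry, I can't process that request."
        return chatbot(user_input)
    return guarded

# An existing chatbot, used as-is; only the call path changes.
echo_bot = lambda text: f"You said: {text}"
safe_bot = guard(echo_bot)
```

The key property sketched here is non-intrusiveness: `echo_bot` is never edited, so the same wrapper could in principle front any deployed chatbot or conversational platform.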
Thu 8 Nov, 13:30 - 15:00 (time zone: Guadalajara, Mexico City, Monterrey)