I (Libby) was quoted in a New Scientist story out today: Troll hunters: the Twitterbots that fight against online abuse. In the story, I discuss our efforts to build tools for automatically detecting harassment on Twitter.
I wasn’t surprised to find my colorful language included in the piece, but I don’t want that to detract from my overall message. Social media, for many people, and especially for people who are marginalized or disenfranchised offline, is a mess. Our research aims to make that less true, and we work to build tools, platforms, and norms that help social media deliver on its democratic potential.
One other comment on the story: I’m credited with saying the best answer is to put bots in the hands of Twitter and Facebook, but really I think the power belongs with users. Not so that users can have power over one another, but so they can have power over their own experiences. For instance, that means providing tools like blockbots and flood controls so users can decide what content they see. I definitely don’t trust Twitter or Facebook, the companies or their users, to effectively police content, and I wouldn’t want them to.
I do think the biggest problem with current harassment-reduction approaches is that they are reactive. Hopefully we’ll make some progress on proactive and preventative techniques; we are actively working on some in the lab. Stay tuned.