Whether in person or via phone, text, email, or other means, most people experience dozens of interactions with others every day. These exchanges can provide moments of humor, excitement, endearment, disappointment, regret, or even anger.
When such interactions end in a dispute, some people turn to a popular subreddit on the website Reddit, posting detailed backstories to help other Redditors figure out who, exactly, is the “jerk” in the situation.
Now, Assistant Professor Alex Williams and his students from the Min H. Kao Department of Electrical Engineering and Computer Science have used machine learning to help answer the question thousands of Redditors have posed.
“What happens in the ‘AITA’ subreddit is that people will upload the context of a conversation or interaction that went sideways, and users will vote on who acted unreasonably—the individual that created the post or another person in the shared narrative,” said Williams. “The model we have built has shown a far better-than-average chance of predicting who the ‘A’ is, or at least who the online community believes it to be.”
Williams’ team collected 13,748 posts and used a variety of machine-learning models to explore whether a post’s outcome could be predicted from its content or from the social media characteristics tied to the person posting it. Their technique achieved a 76 percent prediction success rate, with the most important data coming from an unexpected source.
“Social metadata and features extending from the post were a much bigger indicator of how the vote would go than any of the actual language used in the post or the sentiment of the post,” said Williams. “Knowing this doesn’t just help judge how a post would be received, but also helps inform how to write things or not to write them if you want to avoid being seen as the ‘A.’”
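The idea of predicting a post’s verdict from social metadata rather than its text can be illustrated with a small sketch. The feature names, the synthetic data, and the logistic-regression model below are all illustrative assumptions, not the team’s actual pipeline or dataset.

```python
# Hypothetical sketch: predict a verdict from social metadata alone.
# The features, data, and model choice are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for social metadata: e.g., post score,
# comment count, and poster account age (standardized for simplicity).
X = rng.standard_normal((n, 3))

# Synthetic verdict label (1 = voted "the jerk"), correlated with the
# metadata features only -- no text is used anywhere in this sketch.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.standard_normal(n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

Because the synthetic labels were generated from the metadata, a simple linear model recovers most of the signal; in the real setting, the relative weight of metadata versus text features is exactly what the team measured.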
The UT-based team noted that their work makes a significant number of assumptions, including that posts on Reddit are both truthful and voted on by a community of unbiased judges. Williams noted that, in practice, people tend to be less than truthful on the Internet and bring biases to their decision making, and that the findings reflect only this particular Reddit community.
“Providing feedback about how reasonably a person acted in a given situation is an incredibly human-centered task,” said Williams. “We certainly shouldn’t view our techniques as an oracle for determining irrational actions. Even for people, this can still be pretty challenging! That said, our work establishes a frontier for new technologies that hint at nudging us to reconsider whether we may be in the wrong.”
The students on the team were UT’s Ethan Haworth, Justin Langston, Ankush Patel, and Joseph West, along with Ted Grover from the University of California, Irvine.
Their work was selected for publication by the prestigious Association for the Advancement of Artificial Intelligence’s International Conference on Web and Social Media and will be published in June.