
Spirit AI wants to be your Ally on the fly, a player-centric bot for online gaming abuse

Online harassment or abuse in gaming involves some of the most toxic slurs ever slung in cyberspace. For more sensitive gamers, simply saying, “Don’t take it personally,” doesn’t cut it.

No race, creed, ethnicity, gender, or religious affiliation is exempt from verbal or textual abuse online. Although many gamers see this as mere virtual trash-talking that comes with the territory, a poll from IGN shows that one in three gamers is actually turned off by online abuse.

Read More: Google’s new Perspective API can help you not sound like a jerk while commenting

“While the use of boastful or insulting speech to intimidate or humiliate can have value as a psychological strategy, when the remarks attack someone for their gender, perceived sexual orientation, or race, many would agree that a line has been crossed,” wrote Kaitlyn Williams, who received a Stanford University Boothe Prize Honorable Mention for her essay When Gaming Goes Bad: An Exploration of Videogame Harassment Towards Female Gamers.

In order to evaluate whether a line has been crossed in online gaming interactions, the team at Spirit AI developed a bot called Ally that makes “in-game player communities, chatrooms and online social platforms safer and more inclusive.”

Using machine learning and predictive analytics, Ally takes a player-centric approach to abuse, asking players whether they are OK with an interaction and learning which situations, language, and individuals fall within their “safe zones,” or comfort levels.

What’s cool about the player-centric approach is that Spirit AI’s Ally comes in the form of a virtual character that mimics the game’s style, so it doesn’t seem out of place.

Ally checks in on a potential abuse victim and asks whether another user has been offensive, and the software’s customizable interface allows players to input what they find offensive.
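Spirit AI hasn’t published how Ally stores these preferences, but the idea of a per-player, customizable definition of “offensive” can be sketched in a few lines. Everything below — the class name, fields, and matching logic — is a hypothetical illustration, not Spirit AI’s actual code:

```python
# Hypothetical sketch of a per-player preference filter.
# Not Spirit AI's implementation; names and logic are assumptions.
from dataclasses import dataclass, field

@dataclass
class PlayerPreferences:
    """Each player maintains their own set of terms they find offensive."""
    blocked_terms: set = field(default_factory=set)

    def add_term(self, term: str) -> None:
        self.blocked_terms.add(term.lower())

def flag_message(message: str, prefs: PlayerPreferences) -> bool:
    """Return True if the message contains any term this player has blocked."""
    words = set(message.lower().split())
    return any(term in words for term in prefs.blocked_terms)

# The same message is flagged for one player and not another,
# depending entirely on their stated comfort level.
alice = PlayerPreferences()
alice.add_term("noob")
bob = PlayerPreferences()

msg = "gg, noob"
print(flag_message(msg, alice))  # True  - Alice chose to block "noob"
print(flag_message(msg, bob))    # False - Bob blocked nothing
```

The point of the sketch is the asymmetry: offensiveness is decided per player, not by a single global word list.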

On the surface this may seem tedious and even overkill on political correctness.

However, the Spirit Triage Manager decides how the system responds to each abusive scenario as it is detected. Its context-aware reporting system can create a case file for further analysis by the community team, whether a player proactively reports an instance of abuse or responds to an Ally enquiry.
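The article describes two paths out of the triage step: an automated response, or a case file for the human community team. A minimal sketch of that routing decision might look like this — the function name, severity scale, and thresholds are all assumptions for illustration, not Spirit AI’s actual API:

```python
# Hypothetical triage routing, per the two outcomes the article describes:
# automated handling vs. a case file for human review.
# Severity scale (0-10) and thresholds are illustrative assumptions.

def triage(severity: int, player_reported: bool) -> str:
    """Decide how the system responds once an abusive scenario is detected."""
    if player_reported or severity >= 8:
        return "case_file"     # escalate to the human community team
    if severity >= 4:
        return "ally_checkin"  # Ally checks in and asks if the player is OK
    return "log_only"          # low severity: keep for context only

print(triage(9, False))  # case_file
print(triage(5, False))  # ally_checkin
print(triage(2, True))   # case_file - a player report always escalates
```

Note that a proactive player report escalates regardless of the machine-scored severity, matching the article’s point that human reviewers stay in the loop.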

It’s good to know that there is at least some human element working behind the scenes.

Additionally, as players gain more experience in a game or chat room, or build their own community with whom they feel relaxed, their response to problematic language may change. They’re free to tell Ally whenever their preferences change – or even how they’re feeling on any given day.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
