Categories: Technology

Spirit AI wants to be your Ally on the fly: a player-centric bot for tackling online gaming abuse

Online harassment or abuse in gaming involves some of the most toxic slurs ever slung in cyberspace. For more sensitive gamers, simply saying, “Don’t take it personally,” doesn’t cut it.

No race, creed, ethnicity, gender, or religious affiliation is exempt from verbal or textual abuse online. Although many gamers see this as just virtual trash-talking and part of the territory, a poll from IGN shows that one in three gamers is actually turned off by online abuse.

Read More: Google’s new Perspective API can help you not sound like a jerk while commenting

“While the use of boastful or insulting speech to intimidate or humiliate can have value as a psychological strategy, when the remarks attack someone for their gender, perceived sexual orientation, or race, many would agree that a line has been crossed,” wrote Kaitlyn Williams, who received a Stanford University Boothe Prize Honorable Mention for her essay When Gaming Goes Bad: An Exploration of Videogame Harassment Towards Female Gamers.

To evaluate whether a line has been crossed in online gaming interactions, the team at Spirit AI developed a bot called Ally that makes “in-game player communities, chatrooms and online social platforms safer and more inclusive.”

Using machine learning and predictive analytics, Ally takes a player-centric approach to abuse, asking players whether they are comfortable with their in-game interactions and learning which situations, language, and individuals fall within their “safe zones,” or comfort levels.

What’s cool about the player-centric approach is that Spirit AI’s Ally comes in the form of a virtual character that mimics the game’s style, so it doesn’t seem out of place.

Ally checks in on a potential abuse victim and asks whether another user has been offensive, and the software’s customizable interface lets players specify what they find offensive.
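Spirit AI hasn’t published Ally’s internals, but the player-centric idea described above, where each player keeps their own comfort settings that the system consults before flagging an interaction, can be sketched in a few lines of Python. Every name below is hypothetical, not part of Spirit AI’s actual product.

```python
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    """Hypothetical per-player comfort settings, as the article describes."""
    blocked_terms: set = field(default_factory=set)    # words this player finds offensive
    trusted_players: set = field(default_factory=set)  # the player's "safe zone" of friends

    def is_offensive_to(self, sender: str, message: str) -> bool:
        """Flag a message only if it crosses *this* player's line."""
        if sender in self.trusted_players:
            return False  # banter inside the player's own community is tolerated
        words = set(message.lower().split())
        return bool(words & self.blocked_terms)

# The same message is fine coming from a friend but flagged coming from a stranger.
profile = PlayerProfile(blocked_terms={"noob"}, trusted_players={"old_friend"})
print(profile.is_offensive_to("old_friend", "gg noob"))  # False
print(profile.is_offensive_to("stranger", "gg noob"))    # True
```

The point of the sketch is that offensiveness is evaluated per player, not against a single global blocklist, which is what distinguishes the player-centric approach from conventional chat filters.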

On the surface, this may seem tedious, even like overkill on political correctness.

However, whether a player proactively reports an instance of abuse or responds to an Ally enquiry, the Spirit Triage Manager decides how the system responds to each abusive scenario as it is detected, and its context-aware reporting system can create a case file for further analysis by the community team.
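The reporting flow the article describes, where both a proactive player report and an answer to an Ally check-in funnel into the same case file for human review, might look something like the following. This is an illustrative sketch only; the class and function names are assumptions, not Spirit AI’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CaseFile:
    """Hypothetical record handed off to the human community team."""
    reporter: str
    offender: str
    messages: list       # surrounding chat lines, kept for context-aware review
    source: str          # "player_report" or "ally_enquiry"
    created_at: datetime

def open_case(reporter: str, offender: str, context_messages, source: str) -> CaseFile:
    """Both reporting paths produce the same case file for further analysis."""
    if source not in ("player_report", "ally_enquiry"):
        raise ValueError(f"unknown report source: {source}")
    return CaseFile(reporter, offender, list(context_messages), source,
                    datetime.now(timezone.utc))

# A case opened because the player answered an Ally check-in:
case = open_case("alice", "bob", ["bob: <abusive message>", "alice: stop"], "ally_enquiry")
print(case.source)  # ally_enquiry
```

Keeping the surrounding messages on the case file is what makes the review “context-aware”: the community team sees the exchange, not just a single flagged line.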

It’s good to know that there is at least some human element working behind the scenes.

Additionally, as players gain more experience in a game or chat room, or build their own community with whom they feel relaxed, their response to problematic language may change. They’re free to tell Ally whenever their preferences change – or even how they’re feeling on any given day.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
