
DARPA wants AI to moderate social media groups, mitigate ‘destructive ideas’ during humanitarian efforts

Pentagon seeks ‘generalizable’ technology for commercial platforms & third-party vendors, but will it be used to censor?


DARPA is launching a new research project called Civil Sanctuary to develop multilingual AI moderators to mitigate “destructive ideas” while encouraging “positive behavioral norms.”

With the goal of providing technologies to support humanitarian missions, the Pentagon’s research funding arm is looking to create multilingual AI moderators that will “preserve and promote the positive factors of engagement in online discourse while minimizing the risk of negative social and psychological impacts emerging from violations of platform community guidelines,” according to the project opportunity announcement.

The Defense Advanced Research Projects Agency (DARPA) Civil Sanctuary project aims to provide technologies capable of supporting the Pentagon’s humanitarian assistance efforts “by facilitating online social environments where positive behavioral norms – those linked to the productive sharing of information, particularly during crises – are encouraged locally in user conversations through the use of multilingual AI moderators.”

Is Civil Sanctuary a vehicle for social media censorship?

According to the opportunity announcement, Civil Sanctuary “will exceed current content moderation capabilities by expanding the moderation paradigm from detection/deletion to proactive, cooperative engagement.”

The announcement doesn’t rule out censorship via detection/deletion, but the program’s main focus will be on proactively engaging social media communities with multilingual AI moderators.

Why is DARPA launching this program?

According to the Civil Sanctuary announcement, “social media environments often fall prey to disinformation, bullying, and malicious rhetoric, which may be perpetuated through broader social dynamics linked to toxic and uncritical group conformity.”

In other words, the Pentagon’s research arm sees social media groups as prone to a hive mindset that bullies community members while spreading “disinformation.”

DARPA’s response is, “New technologies are required to preserve and promote the positive factors of engagement in online discourse while minimizing the risk of negative social and psychological impacts emerging from violations of platform community guidelines.”

Who is the target audience?

At present, the target audience is non-English-speaking social media communities that wish to better enforce their community guidelines for humanitarian purposes.

When all is said and done, the idea is to “demonstrate novel and generalizable technologies that commercial platforms may leverage via third-party vendors.”

Right now, Civil Sanctuary is focused on training AI moderators in languages other than English, and the work will proceed in two phases.

Phase 1 will involve the initial prototyping of artificial agents for online mediation in at least one non-English language.

Phase 2 will extend Phase 1 systems to:

  1. Multilingual settings involving two or more non-English languages
  2. Changing community guidelines

Although non-English speakers are the target audience for now, the technology being developed could just as easily be applied to an English-speaking audience.

The Pentagon’s humanitarian response efforts consist of both domestic and overseas operations.

What will Civil Sanctuary look like in action?

On the surface, it looks as though content deemed destructive will be flagged, and a multilingual chatbot will try to convince the person posting that they are wrong while teaching them the error of their ways.

Here’s what DARPA has to say:

“Civil Sanctuary will scale the moderation capability of current platforms, enabling a quicker response to emerging issues and creating a more stable information environment, while simultaneously teaching users more beneficial behaviors that mitigate harmful reactive impulses, including mitigating the uncritical acceptance and amplification of destructive ideas as a means to assert group conformity.”

Additionally, “Extending current research in computational dialogue and cognitive modeling, artificial agents created under this program will learn best practices for online mediation by observing human experts and then employ these skills to interactively guide user groups to adhere to community guidelines.”
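DARPA’s announcement does not describe an architecture, but the “detect, then engage” loop it sketches — flag a guideline violation, then reply cooperatively instead of deleting — can be illustrated with a toy example. Everything below is hypothetical: the guideline labels, trigger keywords, and canned replies are invented for illustration and reflect nothing about DARPA’s actual design.

```python
# Hypothetical sketch of a "detect, then engage" moderation loop.
# A real system would replace the keyword matcher with multilingual
# language models and the canned replies with generated dialogue.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationResult:
    flagged: bool
    guideline: Optional[str]  # which guideline was violated, if any
    reply: Optional[str]      # engagement reply posted instead of deletion

# Toy "community guidelines": a label mapped to trigger keywords and a
# cooperative reply the agent would post in the conversation.
GUIDELINES = {
    "harassment": (
        {"idiot", "stupid"},
        "Please keep the discussion respectful; personal attacks are "
        "against our community guidelines.",
    ),
    "misinformation": (
        {"miracle cure", "hoax"},
        "Could you share a source for that claim? Unverified information "
        "can be harmful during a crisis.",
    ),
}

def moderate(post: str) -> ModerationResult:
    """Flag a post against the guidelines and produce an engagement
    reply rather than deleting the content."""
    text = post.lower()
    for label, (keywords, reply) in GUIDELINES.items():
        if any(kw in text for kw in keywords):
            return ModerationResult(flagged=True, guideline=label, reply=reply)
    return ModerationResult(flagged=False, guideline=None, reply=None)
```

The design point is the return type: a flagged post yields a reply to post back into the conversation, not a deletion, which is the shift from the detection/deletion paradigm to “proactive, cooperative engagement” that the announcement describes.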

Pros and cons?

The multilingual AI moderators may prove useful in making sure accurate information is seen by those who need it most during emergencies.

“During emergency situations and times of turmoil, these [social media] platforms can provide a crucial forum for discussing time-sensitive, potentially life-saving information,” the announcement reads.

“During DoD [Department of Defense] Humanitarian Assistance and Disaster Response (HA/DR) operations, relief efforts would benefit from a stable and constructive information environment that naturally facilitates informative dialogue.”

However, if the “generalizable” technologies are ever repurposed beyond certain non-English-speaking social media communities, they risk being used for disinformation campaigns and censorship.

DARPA has been funding research into monitoring social media and online news sources for a long time now, and big tech companies like Google, Twitter, and Facebook openly embrace similar tactics with every “coordinated inauthentic behavior” removal update they announce.

Back in 2011, DARPA launched the Social Media in Strategic Communication (SMISC) program “to help identify misinformation or deception campaigns and counter them with truthful information” on social media.

More recently, DARPA announced its INfluence Campaign Awareness and Sensemaking (INCAS) program that looks to “exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources” in order to track geopolitical influence campaigns before they become popular.

Taken together, DARPA’s many research programs delving into social media surveillance and intervention are producing technologies that can be used for positive or negative purposes.

It all depends on who is using them, why, and whether they can be trusted.



Tim Hinchliffe
Tim Hinchliffe is the editor of The Sociable. His passions include writing about how technology impacts society and the parallels between Artificial Intelligence and Mythology. Previously, he was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. tim@sociable.co