Dissecting Facebook’s ‘coordinated inauthentic behavior’ removal policy
Facebook has announced the removal of dozens of pages and accounts originating in Iran that were involved in what the social media behemoth describes as ‘coordinated inauthentic behavior’.
“They purported to be located in the US and Europe, used fake accounts to run Pages and Groups, and impersonated legitimate news organizations in the Middle East”
In its latest takedown, Facebook removed 51 accounts, 36 pages, seven groups and three Instagram accounts involved in ‘coordinated inauthentic behavior’ originating in Iran.
Facebook claims that the removal is based on the ‘behavior’ and not the ‘content’ being posted. However, it is not clear whether the decision to remove such activity is influenced by factors other than a desire to stop fake news and social media manipulation.
Defining Coordinated Inauthentic Behavior
While Facebook has defined what it means by ‘coordinated inauthentic behavior’, it has not been clear about how it identifies this behavior in practice.
Nathaniel Gleicher, Facebook’s Head of Cybersecurity Policy, says the term is distinct from ‘fake news’. He defines it as “when groups of pages or people work together to mislead others about who they are or what they’re doing.”
Gleicher goes on to explain that Facebook takes down such accounts because of their deceptive behavior and not because of the content they’re sharing. That means the post itself may not be false and may not go against the community standards.
Facebook also takes a network down when it appears to be run from one part of the world but is actually operated from another for ideological or financial purposes.
In the case of the latest removal, the coordinated inauthentic behavior reportedly originated in Iran. Facebook found that the individuals behind the activity misled people about who they were and what they were doing.
“They purported to be located in the US and Europe, used fake accounts to run Pages and Groups, and impersonated legitimate news organizations in the Middle East. The individuals behind this activity also represented themselves as journalists or other personas and tried to contact policymakers, reporters, academics, Iranian dissidents and other public figures. A number of these account owners also attempted to contact authentic Instagram accounts, some of which later posted content associated with this activity.”
With every removal of a page or post, Facebook reiterates that it is working to detect and stop such activity because “we don’t want our services to be used to manipulate people.”
The decision for removal is based on the ‘behavior’, not the ‘content’.
Facebook’s current announcement explains that the posts from the removed pages and accounts discussed topics such as public figures and politics in the US and the UK, US secessionist movements, Islam, Arab minorities in Iran and the influence of Saudi Arabia in the Middle East.
One of the samples posted by Facebook of a removed page — Ahwaz Saudi Channel — shows Saudi youth writing anti-government slogans on walls.
The company identified this activity during an internal investigation into suspected Iran-linked coordinated inauthentic behavior, acting on a tip shared by FireEye, a US cybersecurity firm. It is also sharing its insights with law enforcement, policymakers, and industry partners.
Looking at the geopolitical context, Ahwaz — a city in southwestern Iran and the capital of Khuzestan province — has long been regarded as a chessboard on which geopolitics is played through the systematic, well-planned use of media channels.
A 2015 study highlighted the significance of social media for Arab Iranians in Ahwaz, who face marginalization because of their Arab origins.
Isn’t it possible, then, that Facebook has been influenced by one side of this geopolitical contest? Is it only targeting accounts and posts that oppose a government or an influential entity such as royalty?
Facebook’s Purge of Accounts
Throughout 2018, Facebook took down several accounts for coordinated inauthentic behavior. In August 2018, it removed Myanmar military officials from the platform and banned 20 Burmese individuals and organizations.
In January 2019, it removed hundreds of Indonesian accounts, pages and groups that were linked to an online group accused of spreading hate speech and fake news.
In April 2019, the social media giant removed pages, groups, and accounts from India and Pakistan for violating its policies on coordinated inauthentic behavior and spam.
However, is the company’s use of its ‘coordinated inauthentic behavior’ rule consistent and fair, or is it simply picking and choosing its targets? Is there political pressure on Facebook that prompts such removals?
For example, among the accounts removed in January was that of Indonesian social media activist Permadi Arya, who, as a result, threatened to file a $71.68 million lawsuit against the tech giant. Facebook restored the account a month later.
Such incidents raise doubts about Facebook’s policies for defining fake news and inauthentic behavior. Is there a pattern to these removals that hints at political or financial influence from powerful parties?
By not explaining precisely how it chooses which pages or groups to remove, the social media company might either be trying to avoid taking sides, or be veiling the sides it is choosing to take.