Washington-based whistleblower advocacy group files complaint against Facebook over artificial intelligence systems generating terror content

This file photo shows Facebook's logo on a broken screen of a mobile phone. The company is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognize as extremist, according to a complaint made public on May 9, 2019. (Photo by AFP)
A Washington-based whistleblower advocacy group has filed a complaint against Facebook, saying its artificial intelligence systems are inadvertently auto-generating content for terror-linked groups.

"Facebook's efforts to stamp out terror content have been weak and ineffectual," read an executive summary of the 48-page complaint made public on Thursday by the National Whistleblowers Center, following its five-month study of the pages of 3,000 members who were connected to groups banned by the US government as “terrorist.”

The center's researchers also found that the Daesh and al-Qaeda terrorist groups were "openly" active on the social network, said an AFP report published on Friday.

More worrying, they observed, was the fact that Facebook's own software was automatically generating "celebration" and "memories" videos for pages of terrorist groups that had received sufficient views or "likes."

"Of even greater concern, Facebook itself has been creating and promoting terror content with its auto-generate technology," the complaint added as quoted in the report.

Survey results shared in the document further indicated that Facebook was not delivering on its claims about eliminating what it deems "extremist" posts or accounts, a move largely influenced by the US government's policies.

According to the report, the center said it filed the complaint with the US Securities and Exchange Commission on behalf of a source who preferred to remain anonymous.

It further cited Facebook as claiming that, after investing heavily in technology, it had been removing terror-linked content "at a far higher success rate than even two years ago."

"We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world," the company underlined.

In March, Facebook announced bans on the social network and its affiliate, Instagram, on praise or support for white nationalism and white separatism, following the brutal massacre of Muslim worshipers at two mosques in Christchurch, New Zealand, which was streamed live on Facebook without interruption.

US-based media reported that copies of the video showing the carnage of over 50 Muslims were still being circulated on Facebook and Instagram nearly seven weeks after the attack.

According to a CNN report, nine videos on Facebook and Instagram showing parts of the terrorist's original live-stream were identified by Eric Feinberg of the Global Intellectual Property Enforcement Center, who tracks terror-related content online.

All nine videos were posted the week the carnage was carried out and have remained on the platforms since.

Facebook and other social media platforms have meanwhile come under fire for not doing enough to curtail messages of hate and violence. They have also been widely criticized for failing to provide equal time for all viewpoints.

Last week, Facebook banned African-American Muslim activist Louis Farrakhan, claiming that the move was an attempt to crack down on "hate content" on its platform.

Farrakhan has been an outspoken critic of the US government's longstanding policies of discrimination against African-Americans, as well as the overreaching influence of pro-Israeli lobby groups on US government agencies and officials.