Facebook algorithms 'may recognize terrorists'

Facebook founder Mark Zuckerberg has outlined a plan to let artificial intelligence (AI) software review content posted on the social network.

In a letter describing the strategy, he said algorithms would be able to detect terrorism and bullying, and help prevent suicide.

He admitted Facebook had previously made mistakes over the content it had removed from the site.

However, he said it might take years for the necessary algorithms to be developed.

Mistakes

"The intricacy of the issues we have seen has outstripped our present procedures for regulating the community," he said.

He highlighted the removal of videos related to the Black Lives Matter movement and the historical napalm girl photograph from Vietnam as "mistakes" in the existing process.

In 2014, Facebook was also criticised following reports that one of the killers of Fusilier Lee Rigby had spoken online about murdering a soldier, months before the attack.

"We are studying systems that can read a text and look at photographs as well as videos to comprehend if anything dangerous may be occurring.

"This is still very early in development, but we have begun to have it look at some content, and it already generates about one-third of all reports to the team that reviews content."

He said AI promised to identify problematic content more quickly than humans and would also "identify risks that nobody would have flagged at all, including terrorists planning attacks using private channels".

"At the moment, we are starting to research approaches to make use of AI to tell the difference between news stories about terrorism and genuine terrorist propaganda."

Personal filtering

Mr Zuckerberg said his ultimate aim was to allow people to post largely whatever they liked, within the law, with algorithms detecting what had been uploaded.

Users would then be able to filter their news feed to remove the types of post they did not want to see.

"Where is the line on nudity? On violence? On graphical content? On profanity? Everything you decide will be your personal settings," he explained.

"For people who do not make a decision, the default will be whatever the bulk of individuals in your region selected, as a referendum.

"It is worth noting that important improvements in AI must comprehend text, pictures as well as videos to judge whether or not they contain hate speech, graphic violence, sexually explicit content, and much more.

"At our current pace of research, we hope to begin handling a few of these cases in 2017, but others Won't be possible for many years."