Facebook Inc’s algorithms demote rather than promote polarising content, its global head of safety told British lawmakers on Thursday, adding that the US company would welcome effective government regulation.
Governments in Europe and the United States are grappling with regulating social media platforms to reduce the spread of harmful content, particularly for young users.
Britain is leading the charge by bringing forward laws that could fine social media companies up to 10% of their turnover if they fail to remove or limit the spread of illegal content.
Secondary legislation that would make company directors liable could be proposed if the measures do not work.
Facebook whistleblower Frances Haugen told the same committee of lawmakers on Monday that Facebook’s algorithms pushed extreme and divisive content to users.
Facebook’s Antigone Davis denied the charge.
“I don’t agree that we are amplifying hate,” Davis told the committee on Thursday, adding: “I think we try to take in signals to ensure that we demote content that is divisive, for example, or polarising.”
She said she could not guarantee a user would not be recommended hateful content, but Facebook was using AI to reduce its prevalence to 0.05%.
“We have zero interest in amplifying hate on our platform and creating a bad experience for people; they won’t come back,” she said. “Our advertisers won’t let it happen either.”
Davis said Facebook, which announced on Thursday it would rebrand as Meta, wanted regulators to help make social media platforms safer, for example by supporting research into eating disorders or body image.
“Many of these are societal issues and we would like a regulator to play a role,” she said, adding Facebook would welcome a regulator with “proportionate and effective enforcement powers”.
“I think criminal liability for directors is a pretty serious step and I’m not sure we need it to take action.”
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)