New Delhi: Facebook has said it has been working on ensuring the safety and security of its users and has invested more than USD 13 billion in teams and technology in this area since 2016.
The social media giant noted that its "advanced AI" has helped block 3 billion fake accounts in the first half of this year.
The comments come after reports stated that the US-based company had failed to fix the platform's flaws despite these being flagged on numerous occasions.
"We firmly believe that ongoing research and candid conversations about our impact are some of the most effective ways to identify emerging issues and get ahead of them.
"This doesn't mean we find and fix every problem right away. But because of this approach, together with other changes, we have made significant progress across a number of important areas, including privacy, safety and security, to name a few," Facebook said in a blogpost on Tuesday.
Facebook admitted that in the past, it didn't address safety and security challenges "early enough in the product development process".
"Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it," it added.
Facebook added that products also have to go through an Integrity Review process, similar to the Privacy Review process, so that it can anticipate potential abuses and build in ways to mitigate them.
"Today, we have 40,000 people working on safety and security, and have invested more than USD 13 billion in teams and technology in this area since 2016. Since 2017, Facebook's security teams have disrupted and removed more than 150 covert influence operations, both foreign and domestic, helping prevent similar abuse," the blog said.
It added that its advanced AI has helped the platform block 3 billion fake accounts in the first half of this year.
"Our AI systems have gotten better at keeping people safer on our platform, for example by proactively removing content that violates our standards on hate speech. We now remove 15X more of such content across Facebook and Instagram than when we first began reporting it in 2017," it said.
Facebook - which counts India among its biggest markets - noted that since 2019, it started using technology that "understands the same concept in multiple languages and applies learnings from one language to improve its performance in others".
It added that the company has changed its approach as a company to protecting people's privacy, including investing in and expanding Privacy Checkup and launching tools like 'Off-Facebook Activity' and 'Why Am I Seeing This?' that show people how their information is used and let them manage settings more easily.
"Misinformation has been a challenge on and off the internet for many decades...At Facebook, we've begun addressing this comprehensively...We remove false and harmful content that violates our Community Standards, including more than 20 million pieces of false COVID-19 and vaccine content," the blog said.
Facebook highlighted that it has changed not just what it builds but how it builds it, so that when new products are launched, "they are more likely to have effective privacy, security and safety protections already built in".
PTI