MADRID, July 15. (Portaltic/EP) –
Meta has published an annual report in which, for the first time, it details the impact its services have on human rights and the policies it has implemented to fulfill its obligations in this area.
In this document, the company covers its learnings and progress from January 1, 2020 to December 31, 2021, based on the United Nations Guiding Principles on Business and Human Rights.
First, the company notes that the report provides more detail on its approach to actions that put human rights at risk in areas it has covered previously.
It also incorporates new areas, such as its handling of the Covid-19 health crisis and data protection in one of its latest launches, the Ray-Ban Stories smart glasses.
In this regard, Meta has highlighted that Artificial Intelligence (AI) is a core component of the technologies it uses to add value to its services and keep its communities safe.
It has also announced the creation of an interdisciplinary team dedicated to responsible AI, to ensure that its services are grounded in respect for human rights, democracy and the rule of law.
This work focuses on five pillars: privacy and security; equity and inclusion; robustness and security; transparency and control; and accountability and governance.
Under these premises, it has worked on issues such as accessibility, human trafficking and the integrity of electoral processes.
PRESIDENTIAL ELECTIONS IN THE UNITED STATES AND MYANMAR
Meta has detailed the actions it took ahead of the 2020 elections in countries such as the United States, Thailand and Myanmar, elections for which the company, along with others such as Google and Twitter, was accused of promoting disinformation.
In this regard, it has indicated that it removed nearly a dozen operations backed by countries such as Russia and China, which used fake accounts to deceive users and spread false news.
Likewise, it removed more than 265,000 pieces of content (articles, images and videos) from Facebook and Instagram to prevent interference in citizens' voting decisions, and worked with law enforcement in the days before and after the elections to ensure that information linking perpetrators to their crimes was available.
Something similar happened in the Myanmar elections, where fact-checking programs were introduced to reduce the spread of disinformation and improve the quality of news, along with analysis and monitoring tools such as CrowdTangle.
The company has acknowledged in this report that it used similar measures in the recent presidential elections in the Philippines, held last May, stating that it created tools in collaboration with independent electoral watchdogs and civil organizations.
Meta has also said that it removed a network of more than 400 accounts, pages and groups in the Philippines that worked to violate and circumvent both its usage policies and its community standards.
COVID-19 CRISIS AND USE OF DATA AT RAY-BAN STORIES
Other sections of this analysis address how Meta has focused its efforts on protecting and preserving the rights of its users in the face of the Covid-19 crisis.
In relation to the pandemic, the company has stated that it "mobilized to support public health, amplify authoritative information, connect users to essential services, and assist relevant agencies in their work to save lives."
It also developed and deployed AI tools to extend fact-checking and to detect users or accounts attempting to violate its platform bans.
To curb misinformation about the virus, its treatments or its vaccines, it implemented a series of measures detailed in its Community Standards section, freely accessible to users of its services.
It also applied warning labels indicating potentially false information to 167 million pieces of content.
In addition to outlining some of the actions it has taken to protect users' rights, the company has insisted that it did not support the promotion of automated contact-tracing applications.
Alongside the measures implemented during the pandemic, Meta has pointed out that it carried out an internal analysis of the potential human rights implications of its latest product, the Ray-Ban Stories smart glasses.
The risks identified included the informed consent of the device's users, the effects of its use on vulnerable groups such as women, children and minorities, and data storage in the cloud.
Following this analysis, Meta determined that the device could have positive consequences, for example for people with disabilities, as it offers various voice-controlled photo and video capabilities.
To prevent discrimination against users with special needs, it developed a specific use policy that takes these people into account, as well as the 'Do not disturb' function and other signals designed for these users.