Digital rights are becoming increasingly relevant in today’s tech-focused world.
We need to ensure that human rights are fully present in the digital space.
The EU’s Digital Services Act (DSA) and the Digital Markets Act (DMA) will be pioneering game-changers in this regard.
Assessing company practices on digital rights is still difficult due to a lack of standardisation.
Facial recognition technology can have an impact on human rights.
While strolling around London, scrolling through your Facebook feed or asking your smart speaker for the weather forecast, your digital rights are at play. These rights are becoming more widely known among both consumers and regulators, putting additional pressure on companies to integrate them into their internal processes and product offerings. Below, we elaborate on the importance and relevance of these rights, upcoming regulatory proposals and the data challenge involved in assessing digital rights.
DIGITAL RIGHTS: A NATURAL EXTENSION OF HUMAN RIGHTS?
Digital rights pertain to all human rights in a digital environment. This definition is quite broad, which is why these rights tend to focus on distinct issues, including, among others, the right to privacy, freedom of expression and the right to internet access.
In December 2013, the UN General Assembly addressed the right to privacy in the digital age, making clear that it should be protected both online and offline. Companies’ activities can impact these rights in numerous ways, from failing to protect the confidentiality of users’ personal data to selling technology equipment to governments with poor human rights records, which then use it to track or monitor individuals’ communications and movements. Conversely, the IT sector can also promote human rights by enabling open communication between people and empowering them to express themselves, bolstering freedom of expression. Nevertheless, human rights can be negatively impacted when these freedoms are restricted in a systematic way.
Recently, a number of IT companies, including Facebook and Google, took a strong stance against the unbridled restriction of freedom of expression by refusing to remove online posts that enabled the mass protests in support of Navalny that took place in Russia earlier this year. On the other hand, unchecked freedom of expression can also cause harm, especially when amplified by platform algorithms designed to increase user engagement and marketability for targeted advertising. Sensationalised inaccuracies or outright lies may be proactively pushed by platforms because they tend to generate more user interaction than objective news reporting. The negative externalities of these processes are illustrated by the widespread misinformation on the Coronavirus and the continuous false reporting on the ‘rigged’ US elections in 2020.
Considering these digital rights and the role that IT companies play, we are convinced that personal data is both an economic driver and a resource for innovation. Such data is a key asset for companies, but it also carries key risks that must be managed in corporate strategy and operations. First, a digital trust deficit can erode reputation and brand value. Companies in the payment industry, for example, attract customers on the basis of consumer trust; a violation of that trust can significantly hurt them. Second, data protection breaches and misuse can lead to heavy fines and penalties that are material to a company’s bottom line. Consider the multiple pending lawsuits against Alphabet seeking more than USD 3 billion in damages over the alleged unlawful collection of children’s data for targeted advertising via YouTube.
STAKEHOLDER PRESSURE AS A CATALYST; REGULATION AS THE TIPPING POINT
Customer and investor pressure to incorporate digital rights into businesses’ due diligence processes is important, but upcoming legislation may prove even more powerful in enabling a paradigm shift.
Applicable since 2018, the EU’s General Data Protection Regulation (GDPR) was one of the first pieces of legislation to strengthen users’ control and rights over their personal data. Two upcoming European legislative proposals, the Digital Services Act (DSA) and the Digital Markets Act (DMA), will cause an even bigger shake-up for social media platforms and Big Tech firms. The DSA and the DMA aim, respectively, to clarify the policing responsibilities of large platforms and to curb their market power. The DSA in particular will generate considerable upheaval, as it will force large players to, among other things, publish the parameters of their content-moderation and targeted-advertising algorithms. The UK follows suit with an Online Safety Bill expected next year to address concerns about the societal impact of Big Tech.
Naturally, the upcoming EU regulation cannot be seen as mere consumer protection. It is also driven by geopolitical considerations: tech sovereignty is a strategic priority of the European Commission (EC), and the new regulations will strongly affect the workings of GAFAM (Google, Amazon, Facebook, Apple and Microsoft). The DMA will probably focus solely on GAFAM, as it has been drafted to limit unfair market practices and curb the market power of the largest IT companies.
A final regulatory element worth discussing is the recently published draft of the EU’s social taxonomy. The goal of this taxonomy is to establish a comprehensive way of defining what a socially sustainable activity or company entails. A similar taxonomy has been developed for environmental activities and is already more advanced than its social counterpart. The preliminary text on the social taxonomy describes how companies will need to internalise their potential negative effects on human rights, including digital rights where these are material to the company’s activities.
ASSESSING CORPORATES’ PRACTICES ON DIGITAL RIGHTS
Of the different ESG risks, social risks tend to be the most difficult to assess. A lack of standardisation has led to the widespread perception that the social performance of companies cannot be measured and that existing data is neither reliable nor comparable. This consistency problem is exacerbated by the low correlation between the social scores assigned by different extra-financial data providers.
Fortunately, several initiatives strive to solve the issues linked to social data, and more specifically data on how corporates manage digital rights. One of these is Ranking Digital Rights, which aims to promote freedom of expression and privacy on the internet by creating global standards and incentives for companies to respect and protect users’ rights. Each year, it produces a ranking of the most powerful digital platforms and telecommunications companies, assessed on the policies and commitments they disclose to their users and the public.
THE PECULIAR CASE OF FACIAL RECOGNITION
Since digital rights can be impacted by a wide range of situations and technologies, we aim to identify the future technologies that may challenge these rights. In this context, we have joined a collaborative investor initiative focused on facial recognition.
The concept of facial recognition relates to ‘the process of identifying or verifying the identity of a person using a picture or a video of their face’.
This initiative addresses the human rights impacts linked to the development and use of facial recognition technology. The goal is first to understand the human rights issues linked to this technology, then to define best practices and disseminate them among companies exposed to this technology.