A standout event in the markets for 2023 will undoubtedly be the rise of Artificial Intelligence (AI) as an essential investment theme. This is evident from Nvidia Corp’s entry into the elite group of companies boasting a market capitalisation exceeding one trillion US dollars. This market enthusiasm mirrors the monumental productivity gains expected from Artificial Intelligence across many sectors. But what social and environmental impacts will this new economic “revolution” bring? Could AI destabilise our economies, societies, and democracies? In short, is AI “ESG compliant”?
THE PROMISES OF AI
Generative Artificial Intelligence (that is, AI that creates text, images, videos, music, code and designs) is set to transform our lives. By emulating and extending neural networks, AI has already mimicked human abilities through “deep learning”. Generative AI goes even further: its capability to think like a human significantly broadens the range of content it can generate. Consequently, generative AI should find applications across a wide variety of fields and all economic sectors. Broadly speaking, AI should make employees far more efficient, especially in content creation, data synthesis, and administrative and support tasks. Employees will spend less time “creating” and more time reviewing AI-generated output. And because AI can also produce fake content, the need for human verification is likely to surge. All told, generative AI might trigger a new “super-cycle of productivity”, potentially boosting global productivity by nearly one percent annually.
However, AI also raises serious ethical questions.
AI COULD LEAD TO INEQUALITIES
AI undoubtedly poses critical ethical challenges. Foremost among these is its impact on the job market. In line with Joseph Schumpeter’s concept of creative destruction, the productivity gains achieved should support long-term economic growth. In the short term, however, those whose jobs are replaced will need to find new roles, which is stirring concern. For instance, Google searches for “is my job at risk” doubled in the early months of the year, and OpenAI researchers estimate that for 80% of US employees, at least 10% of their tasks could be altered by AI. In reality, the effect on employment remains uncertain. In general terms, AI should enhance productivity across all sectors, particularly in services. The nature of jobs will evolve. Both employers and employees would benefit from embracing these changes, especially through training.
AI AND DISCRIMINATION?
A second concern is discrimination. Various articles have highlighted the risk that certain AI algorithms could induce racial, gender, and disability-based discrimination, as well as bias against minorities, in areas such as credit access, recruitment, and insurance. These biases often stem from the datasets feeding the AI, which themselves carry such prejudices. Thankfully, it is possible to train AI not to perpetuate these biases. Regulatory frameworks also have a crucial role to play.
ENSURING AI GETS IT RIGHT
Generative AI is not without its flaws, and it can produce incorrect or even harmful output. If not properly fine-tuned, early AI models could disseminate biased or false information to voters, expose vulnerable groups (children, the elderly) to harmful content, or lead to medical misdiagnoses. This underscores the importance of rigorous quality checks by both AI service providers and their users.
DATA PROTECTION AND CYBERSECURITY
Generative AI also amplifies issues related to the illicit use of personal data, unauthorised reproduction of content, and protection of anonymity. It further enables the creation of highly personalised advertising, information, and pricing for every user. In terms of cybersecurity, AI enables the creation of “deepfakes”, making fraud more convincing and therefore riskier. At the same time, AI should improve the detection of hacking attempts, phishing, and malware.
THE RISK OF LOSING CONTROL
AI also sparks more profound concerns. On 30 May 2023, hundreds of the most renowned AI experts signed a declaration on the risks associated with AI, highlighting the potential risk of human extinction. Notably, Geoffrey Hinton, one of the pioneering scientists of “deep learning” and AI, resigned from Google, publicly expressing his concerns about the rapid development of AI, especially its use by malicious actors. The crux of the problem lies in the fact that AI learns so rapidly that its human creators no longer understand it. AI could set its own objectives without human oversight and make decisions adverse to humans. It is vital to establish extensive control mechanisms over AI’s operations, outcomes, and applications.
REGULATORS ARE ALREADY ON IT
Regulators are already addressing these issues. However, generative AI is developing and being adopted so rapidly that even the most proactive regulators lag behind. The EU was the first to respond, with its “Artificial Intelligence Act” expected to pass by the end of 2023. The EU aims to ensure that AI remains “human-centric”. The legislation is likely to prohibit the riskiest practices, such as manipulating vulnerable individuals or groups, social scoring, biometric identification systems, and the categorisation of individuals. The USA, the UK, China, Canada, India, and others are also updating their regulatory frameworks, each with distinct approaches.
WHAT ABOUT THE ENVIRONMENTAL IMPACT?
The environmental impact of AI is another pressing concern. Major AI players have been reluctant to release data, but it’s known that the energy and water consumption required for AI hardware production, model creation, training, updating, and actual use is substantial. Indeed, the larger the AI model and the more data it incorporates, the heavier its environmental footprint. A single generative AI request is estimated to produce four to five times more greenhouse gas emissions than a conventional search engine query. As tech giants like Google and Microsoft integrate more AI into their search engines, messaging services, and other applications, the growth of AI should logically lead to an increase in their environmental impact. With the “cloud” already accounting for 2.5% to 3.7% of global carbon emissions (more than aviation), these concerns will quickly escalate in significance.
As we’ve seen, the disruptive potential of generative AI means that its associated ESG challenges are immense. It’s too early to predict precisely how generative AI will change our lifestyles and whether its contributions will be overall positive or negative for our societies and the environment. However, it’s clear that we need to channel it to serve humanity and our planet. Sustainable investing has a role to play in supporting the most beneficial models and avoiding risky ones.
We will discuss how to merge AI and sustainable investments in a subsequent article…
DISCLAIMER
Degroof Petercam Asset Management SA/NV | Rue Guimard 18, 1040 Brussels, Belgium | RPM/RPR Brussels | VAT BE 0886 223 276
Marketing communication. Investing incurs risks. Past performance does not guarantee future results.
© Degroof Petercam Asset Management SA/NV, 2022, all rights reserved. This document may not be distributed to retail investors and its use is exclusively restricted to professional investors. This document may not be reproduced, duplicated, disseminated, stored in an automated data file, disclosed, in whole or in part or distributed to other persons, in any form or by any means whatsoever, without the prior written consent of Degroof Petercam Asset Management (DPAM). Having access to this document does not transfer the proprietary rights whatsoever nor does it transfer title and ownership rights. The information in this document, the rights therein and legal protections with respect thereto remain exclusively with DPAM.
DPAM is the author of the present document. Although this document and its content were prepared with due care and are based on sources and/or third-party data providers which DPAM deems reliable, they are provided without any warranty of any kind, either express or implied. Neither DPAM nor its sources and third-party data providers guarantee the correctness, completeness, reliability, timeliness, availability, merchantability, or fitness for a particular purpose of the information.
The information provided herein must be considered as having a general nature and does not, under any circumstances, intend to be tailored to your personal situation. Its content does not represent investment advice, nor does it constitute an offer, solicitation, recommendation or invitation to buy, sell, subscribe to or execute any other transaction in financial instruments, including but not limited to shares, bonds and units in collective investment undertakings. This document is not aimed at investors from a jurisdiction where such an offer, solicitation, recommendation or invitation would be illegal.
Neither does this document constitute independent or objective investment research or financial analysis or any other form of general recommendation on transactions in financial instruments as referred to under Article 2, 2°, 5 of the law of 25 October 2016 relating to the access to the provision of investment services and the status and supervision of portfolio management companies and investment advisors. The information herein should thus not be considered as independent or objective investment research.
Investing incurs risks. Past performance does not guarantee future results. All opinions and financial estimates in this document reflect the situation at the time of issuance and are subject to amendment without notice. Changed market circumstances may render the opinions and statements in this document incorrect.