The Double-Edged Sword: Unmasking the Power and Pitfalls of Generative AI

June 6, 2023

Over the last six months, the debate over whether generative AI will help or harm humanity has been impossible to miss. Companies are rushing to adopt it, capital is pouring into its development, and users are increasingly immersed in it. On the flip side, researchers are calling for a slowdown in development. The reality is that time will tell how artificial intelligence will be governed, but in the meantime, we need to be aware of its known benefits and risks.

Generative AI refers to a category of artificial intelligence (AI) algorithms that generate new outputs based on the data they were trained on. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more.

The benefits are undeniable: generative AI can boost productivity, creativity, and innovation, and it can bring new levels of efficiency to organizations by augmenting individual capabilities. It also offers strong predictive power and can connect more people by making technology more accessible.

On the opposite end of the spectrum, generative AI can have very harmful effects, such as amplifying bias, spreading misinformation and fake news, and infringing intellectual property. It can pose security risks by facilitating impersonation, and the complexity of some AI algorithms leads to a lack of transparency. Generative AI can also produce vast amounts of content, some of it accurate and some of it fabricated (so-called “hallucinations”). This raises the question: are humans prepared to identify which content is fake and handle it accordingly?

In a recent Twitter thread, Tristan Harris, co-founder of the Center for Humane Technology, concluded: “Remember: humanity’s future isn’t yet written. We still have a choice of which future we want with AI. But that will take acting *now*, together, before AI gets so entangled with our societal structures such that it is impossible to later enact adequate safety measures.”

[Image: AI Pope]

Recently, someone misused generative AI to create an image of Pope Francis wearing a puffy coat, generated with Midjourney (an AI image generator). Deepfakes are now commonplace, not only in images but also in video, voice, and audio. Scammers use this technology to replicate loved ones’ voices and defraud families out of thousands of dollars. AI voice-generation software lets them clone a voice from an audio sample of just a few sentences.

According to the Washington Post, impostor scams were the second most popular racket in America in 2022, with over 36,000 reports of people being swindled by criminals pretending to be friends or family, based on data from the Federal Trade Commission. Over 5,100 of those incidents happened over the phone, accounting for over $11 million in losses.

Overall, while generative AI has many exciting potential applications, it is essential to be aware of the risks that come with it and to take steps to mitigate them. Applying it carries a responsibility we all need to be aware of. As Globant CEO Martin Migoya recently said, “Humans will always remain at the center of technology, with our abilities and flaws.” That’s why the Be Kind Tech Fund is part of the growing ethical AI movement, actively seeking out startups working to make AI more responsible.

Click here to learn more about how the Be Kind Tech Fund is working to enable responsible advances in technology.
