Some gen AI apps are cloning you without your consent; EU passes the AI Act; AI helps set up new IT systems; Western countries are pessimistic about AI; why AI agents are the future
OpenAI's CTO speaks to the Wall Street Journal; AI is learning what it means to be alive; AI talent wars heat up in Europe; UK researchers want to cut AI infra costs by 1000x
One Friday evening a few weeks ago, I was in my home country, Romania, visiting family for a funeral, when I found myself thinking: Was it time for me to start teaching my kids how to speak Romanian? For the past 15 years, I have built a life in the U.K., where my kids were born and raised. They love their Romanian grandparents but struggle to communicate with them, and I wanted to do something about it.
So I started looking for solutions. I searched the internet for about an hour but couldn’t find anything useful, so I went back to my evening.
A few days later, I was scrolling through my Instagram feed when an ad appeared for a language learning app. Having worked for a social media company, I knew what had happened: The company had tracked my activity online, saw I was interested in language learning apps, and decided to target me with an ad. And that’s okay: I’ve had similar experiences in the past and even decided to buy products based on this type of targeted advertising.
Over the next few days, I kept getting more and more ads from the same language app. But once I started to pay closer attention, I realized there was something more troubling going on.
While some of the ads featured real people excitedly encouraging me to download the app and try it out "risk-free," other ads looked eerily familiar. They featured people speaking directly to me in French or Chinese, claiming to have mastered a foreign language in mere weeks thanks to the app's miraculous capabilities. What was going on, however, was not miraculous but alarming: the videos had been manipulated with deepfake technology, potentially without the consent of the people featured in them.
While AI-generated media can be used for harmless entertainment, education, or creative expression, deepfakes have the potential to be weaponized for malicious purposes, such as spreading misinformation, fabricating evidence, or, as in this case, perpetrating scams.
Because I’ve been working in AI for almost a decade, I could easily spot that the people in these ads weren’t real, and neither were their language skills. Instead, I learned, thanks to an investigation by Sophia Galer-Smith, that an app had been used to clone real people without their knowledge or permission, eroding their autonomy and potentially damaging their reputations.
A troubling aspect of these deepfake ads was the lack of consent inherent in their creation. The language app likely used the services of a video cloning platform developed by a Chinese generative AI company that has changed its name four times in the last three years. The platform has no measures in place to prevent the unauthorized cloning of people, and no obvious mechanism for removing someone’s likeness from its databases.
This exploitation is not only unethical but also undermines trust in the digital landscape, where authenticity and transparency are already in short supply. Take the example of Olga Loiek, a Ukrainian student who runs a YouTube channel about wellness. She was recently alerted by her followers that videos of her had been appearing in China. On the Chinese internet, Loiek’s likeness had been transformed into an avatar of a Russian woman looking to marry a Chinese man. She found that her YouTube content had been fed into the same platform that was used to generate the scam ads I’d been seeing on Instagram, and an avatar bearing her likeness was now proclaiming love for Chinese men and praising Russia’s military might on Chinese social media apps. Not only was this offensive to Loiek on a personal level because of the war in Ukraine, it was also the kind of content she would never have agreed to participate in had she been given the option of withholding her consent.
I reached out to Loiek to get her thoughts on what had happened to her. Here’s what she had to say: “Manipulating my image to say statements I would never condone violates my personal autonomy and means we need stringent regulations to protect individuals like me from such invasions of identity.”
Consent is a fundamental principle that underpins our interactions in both the physical and digital realms. It is the cornerstone of ethical conduct, affirming individuals' rights to control their own image, voice, and personal data. Without consent, we risk violating people's privacy, dignity, and agency, opening the door to manipulation, exploitation, and harm.
In my job as the head of corporate affairs for an AI company, I’ve worked with a campaign called #MyImageMyChoice, which is trying to raise awareness of how non-consensual images generated with deepfake apps have ruined the lives of thousands of girls and women. In the U.S., one in 12 adults has reported being a victim of image-based abuse. I’ve read harrowing stories from some of these victims, who’ve shared how their lives were destroyed by images or videos generated by AI apps. When they tried to issue DMCA takedown notices to these apps, they received no reply or were told that the companies behind the apps were not subject to any such legislation.
We’re entering an era of the internet where more and more of the content we see will be generated with AI. In this new world, consent takes on heightened importance. As the capabilities of AI continue to advance, so too must our ethical frameworks and regulatory safeguards. We need robust mechanisms to ensure that individuals' consent is obtained and respected in the creation and dissemination of AI-generated content. This includes clear guidelines for the use of facial and voice recognition technology, as well as mechanisms for verifying the authenticity of digital media.
Moreover, we must hold accountable those who seek to exploit deepfake technology for fraudulent or deceptive purposes, as well as those who release deepfake apps with no guardrails in place to prevent misuse. This requires collaboration between technology companies, policymakers, and civil society to develop and enforce regulations that deter malicious actors and protect users from real-world harm, instead of focusing only on imaginary doomsday scenarios from sci-fi movies. For example, we should not allow video or voice cloning companies to release products that create deepfakes of individuals without their consent. And during the process of obtaining consent, perhaps we should also mandate that these companies introduce informational labels that tell users how their likeness will be used, where it will be stored, and for how long. Many consumers might gloss over these labels, but there can be real consequences to having a deepfake of someone stored on servers in countries such as China, Russia, or Belarus, where there is no real recourse for victims of deepfake abuse. Finally, we need to give people mechanisms for opting out of their likeness being used online, especially if they have no control over how it is used. In Loiek’s case, the company that developed the platform used to clone her without her consent did not respond or take any action when reporters approached it for comment.
Until better regulation is in place, we need to build greater public awareness and digital literacy efforts to empower individuals to recognize manipulation and safeguard their biometric data online. We must empower consumers to make more informed decisions about the apps and platforms they use, and to recognize the potential consequences of sharing personal information, especially biometric data, in digital spaces and with companies that are prone to government surveillance or data breaches.
Generative AI apps have an undeniable allure, especially for younger people. But when people upload images or videos containing their likeness to these platforms, they unknowingly expose themselves to a myriad of risks, including privacy violations, identity theft, and potential exploitation.
While I am hopeful that one day my children can communicate with their grandparents with the help of real-time machine translation, I am deeply concerned about the impact of deepfake technology on the next generation, especially when I look at what happened to Taylor Swift, or the victims who’ve shared their stories with #MyImageMyChoice, or countless other women suffering from sexual harassment and abuse who have been forced into silence.
My children are growing up in a world where digital deception is increasingly sophisticated. Teaching them about consent, critical thinking, and media literacy is essential to helping them navigate this complex landscape and safeguard their autonomy and integrity. But that’s not enough: We need to hold the companies developing this technology accountable. We also must push governments to act faster. For example, the U.K. will soon start to enforce the Online Safety Act, which criminalizes the sharing of non-consensual deepfake intimate images and should force tech platforms to take action and remove them. More countries should follow its lead.
And above all, we in the AI industry must be unafraid to speak out and remind our peers that this freewheeling approach to building generative AI technology is not acceptable.
This article originally appeared on Fortune.com.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week – your essential reading from the world of AI
CEO says he tried to hire an AI researcher from Meta and was told to 'come back to me when you have 10,000 H100 GPUs' [Business Insider]
Confused About AI? A Guide to Which Flavor Will Boost Productivity for Business [WSJ]
What will the EU’s proposed act to regulate AI mean for consumers? [The Guardian]
AI Will Transform One of Corporate Tech’s Biggest Cost Areas—Actual Savings TBD [WSJ]
UK researchers seek to slash AI computing costs by a factor of 1,000 [FT]
AI talent war heats up in Europe [Reuters]
AI Can Build a Brighter Urban Future — If We Let People Have a Say [Bloomberg]
A.I. Is Learning What It Means to Be Alive [New York Times]
OpenAI Made AI Videos for Us. These Clips Are Good Enough to Freak Us Out. [WSJ]
⚙️Computer does
AI in the wild: how artificial intelligence is used across industry, from the internet, social media, and retail to transportation, healthcare, banking, and more
An AI that can play Goat Simulator is a step toward more useful machines [MIT Technology Review]
How AI turbocharged the development of a new drug and what it might mean for health care [Fortune]
CBP wants to use AI to scan for fentanyl at the border [The Verge]
Mercedes trials humanlike robots for ‘demanding and repetitive’ tasks [FT]
Emergency dispatchers are using AI and cloud-based tools to help those in need faster [Business Insider]
How AI can help companies better disclose their financial information [Fast Company]
Perplexity brings Yelp data to its chatbot [The Verge]
Apple has begun testing an AI-powered ad product similar to Google's [Business Insider]
Amazon will let sellers paste a link so AI can make a product page [The Verge]
DoorDash’s new AI-powered ‘SafeChat+’ tool automatically detects verbal abuse [TechCrunch]
Performance Max as it looks to supercharge its $7 billion ad business [Business Insider]
Pinterest launched an AI tool that lets users filter search results by body type [The Verge]
New AI tools can record your medical appointment or draft a message from your doctor [AP]
🧑‍🎓Computer learns
Interesting trends and developments from various AI fields, companies and people
An OpenAI spinoff has built an AI model that helps robots learn tasks like humans [MIT Technology Review]
Superfluous people vs AI: what the jobs revolution might look like [FT]
Databricks invests in Mistral and brings its AI models to data intelligence platform [VentureBeat]
As Generative AI Takes Off, Researchers Warn of Data Poisoning [WSJ]
Silicon Valley is pricing academics out of AI research [Washington Post]
AI Is Taking On New Work. But Change Will Be Hard—and Expensive. [WSJ]
Meta’s push to build one AI model to power videos across platforms could be an oversight nightmare, experts warn [Fast Company]
Cohere releases powerful ‘Command-R’ language model for enterprise use [VentureBeat]
No smartphone or internet? No problem; AI-backed phone has the answers [Reuters]
Walmart Shops Its AI Software to Retailers to Diversify Business [Bloomberg]
Nvidia offers developers a peek at new AI chip next week [Reuters]
Software engineers are getting closer to finding out if AI really can make them jobless [Business Insider]
AI Promises Faster Oil Drilling and Even More US Crude Supply [Bloomberg]
Microsoft expands availability of its AI-powered cybersecurity assistant [Reuters]
Top websites block Google from training AI models on their data. Nowhere near as much as OpenAI, though. [Business Insider]
'Hey, Siri:' Inside Apple's speech AI and the technology behind it [Business Insider]
Midjourney debuts feature for generating consistent characters across multiple gen AI images [VentureBeat]
Anthropic releases Claude 3 Haiku, an AI model built for speed and affordability [VentureBeat]
How to train your large language model [The Economist]
Workplace AI, robots and trackers are bad for quality of life, study finds [The Guardian]
DeepMind and Stanford’s new robot control model follow instructions from sketches [VentureBeat]
AWS wants 99% of the AI market [Axios]
Oracle adds generative AI features to finance, supply chain software [Reuters]
Conscious AI Is the Second-Scariest Kind [The Atlantic]
The Saudi humanoid robot incident shows the machines aren't taking over the world anytime soon [Business Insider]
Satya Nadella says Google should've been the 'default winner' of the AI race [Business Insider]
Italian Vespa maker Piaggio launches AI-driven factory robot [Reuters]
AI explores ‘dark genome’ to shed light on cancer growth [FT]
Larry Ellison and Elon Musk teaming up to bring AI to farming [Business Insider]
ChatGPT users to get access to news content from Le Monde, Prisa Media [Reuters]
Pika adds generative AI sound effects to its video maker [VentureBeat]
Microsoft insiders worry the company has become just 'IT for OpenAI' [Business Insider]
New AI and 5G advancements will usher in the era of edge computing on smartphones, autonomous cars, and more [Business Insider]
Thomson Reuters has $8bn war chest for AI-focused deals, says chief [FT]
The great YouTube exodus is coming — leaving AI junk and MrBeast to reign supreme [Business Insider]
How businesses can win over AI skeptics [Fortune]
Elon Musk says he's making his AI chatbot open-source — and takes another swipe at OpenAI [Business Insider]
Subscribe to Computerspeak by Alexandru Voica to keep reading this post and get 7 days of free access to the full post archives.