
Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call, with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.


Take It Down Act nears passage; critics warn Trump could use it against enemies

An anti-deepfake bill is on the verge of becoming US law despite concerns from civil liberties groups that it could be used by President Trump and others to censor speech that has nothing to do with the intent of the bill.

The bill is called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, or Take It Down Act. The Senate version co-sponsored by Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) was approved in the Senate by unanimous consent in February and is nearing passage in the House. The House Committee on Energy and Commerce approved the bill in a 49-1 vote yesterday, sending it to the House floor.

The bill pertains to "nonconsensual intimate visual depictions," including both authentic photos shared without consent and forgeries produced by artificial intelligence or other technological means. Publishing intimate images of adults without consent could be punished by a fine and up to two years of prison. Publishing intimate images of minors under 18 could be punished with a fine or up to three years in prison.


YouTube expands its ‘likeness’ detection technology, which detects AI fakes, to a handful of top creators

9 April 2025 at 09:45
YouTube on Wednesday announced an expansion of its pilot program designed to identify and manage AI-generated content that features the “likeness,” including the face, of creators, artists, and other famous or influential figures. The company is also publicly declaring its support for the legislation known as the NO FAKES ACT, which aims to tackle the […]

NJ teen wins fight to put nudify app users in prison, impose fines up to $30K

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy's one or two-day suspension.

Mani refused to take adults' advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she's won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims "take a stand against deceptive and dangerous deepfakes" by making it a crime to create or share fake AI nudes of minors or non-consenting adults, as well as deepfakes seeking to meddle with elections or damage any individuals' or corporations' reputations.

Under the law, victims targeted by nudify apps like Mani can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these "more severe consequences" will deter kids and adults from creating harmful images, as well as emphasize to schools (whose lax response to fake nudes has been heavily criticized) that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. It imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.


'I Want to Make You Immortal:' How One Woman Confronted Her Deepfakes Harasser

2 April 2025 at 07:35

Content warning: This article contains mentions of self-harm and suicide.

Joanne Chew found deepfakes of herself online the same way many women have found themselves face-swapped into porn: She was searching her own name after a big accomplishment.

“Sometimes I just Google my name to see what comes up,” Chew told me in a phone call in August 2024. “I want to see, like, is it my artwork, or my acting, or my main website that comes up first? And then I saw this, and I thought, ‘Okay, this is weird.’” Someone was posting deepfakes of her with her full name in the video titles, alongside racist slurs, to popular tube sites.

Chew acted in the May 2024 film Dead Wrong and suspects her harasser started ramping up his targeting of her in AI face-swapped porn shortly before it came out.

“At the time, I thought, ‘It's gonna blow over.’ Because this is bound to happen the more you move forward in your career as any sort of public person,” she said. “But then I noticed he was putting up more and more... And then I started wondering, is it somebody that I know?” Although the names changed over the year, all of the deepfake content at that point was coming from the same username, “Ron.” 404 Media isn’t publishing his screen names to avoid amplifying his accounts.

Many targets of deepfake harassment attempt to tackle the barrage themselves by finding and reporting content to sites that are difficult to reach and that rarely respond. This is a time-consuming, traumatizing process. Chew did this for a while. “Initially I thought it was just going to be a few videos, and I had other girlfriends who modeled and acted, with much bigger followings than me, who said unfortunately these things happen as our careers progress,” she said.

She pushed what she saw out of her mind for a few months until she checked again around August. She was horrified, she said, to see how much more had been uploaded in just a few months. “At the height, he had an album of over 2,000 pieces of content, [was posting] on multiple sites, multiple YouTube channels, and then he started making multiple accounts on Facebook and Instagram to direct message me.”

At that point, she enlisted the help of Charles DeBarber, an online investigator who previously helped Girls Do Porn victims reclaim their images online.

“We're seeing a rapid upswing of AI generated art used in harassment. The ease [with which] even a lay person can use an open source tool to create deep fakes is going to only make them increase,” DeBarber told me. “The technology is inevitable, but the way it is used requires careful regulation and consequences for its abuse. We're still struggling to catch up to technology.”

Chew’s harasser only ramped up his efforts as time went on. Ron contacted Chew directly to insult her, obsess over her, or beg for her forgiveness, all while posting more degrading content all over the internet. Nearly a year later, Chew is still dealing with the fallout of becoming a victim of non-consensual, algorithmically generated intimate imagery.

“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.”

When a big-name celebrity like Scarlett Johansson or Taylor Swift is targeted with deepfake harassment, it’s often from a legion of “fans,” people who join group efforts in Telegram channels or make Civitai models of a specific person. It’s been this way from the beginning of deepfakes, with people trading tips and tricks for the best prompts, platforms, and generative AI tools to create whatever explicit material they’re trying to achieve featuring a specific person. But when it’s someone who doesn’t have the same professional or financial power as these mega-celebrities, the harassment can take on a different form: one guy, in Chew’s case, producing what feels like an endless stream of images and videos of his obsession, stolen from pornographers and warped into something that threatens to take over a person’s life.

“Follower of the goddess J.,” Ron’s Instagram account bio said. The account was dedicated to posting photos of Chew, with an AI-generated image of her in a kimono as the profile picture. He was also, it seemed, the one spreading this content all over every popular deepfake repository and tube site.

In August, Chew posted a video explaining the situation to her followers on Instagram. By then, Ron had made hundreds of pieces of deepfaked content of her, and a YouTube channel dedicated to posting it. She filed a complaint to YouTube, and the platform responded, telling her this account was not in violation of its privacy guidelines, which clearly forbid “AI-generated or other synthetic content that looks or sounds like you.”

[Screenshot courtesy of Joanne Chew]

“How is this not a violation? Someone has taken my name, my face, my professional information, against my consent, and is creating horrible, disgusting, degrading content [and] posting it all over the internet. Make this make sense,” she said in the video.

[Screenshot via Instagram]

“I felt like he was watching my social media, so I was kind of just calling him out on stuff to see if he would drop more hints or say more things,” Chew told me.

Later that month, Ron removed all of the content from the YouTube channel.

But in September, Ron started commenting on Chew’s Instagram posts. And for the first time, she engaged with her harasser directly, replying to his comments.


Then, he sent her a barrage of messages on Instagram, pleas for attention and forgiveness mixed in with threats. “Please give my life some meaning,” he wrote. “I dont want to just be the deepfake porn monster I started as. What did you say I was? A deranged monster. People can change. Right? Let me change and be a good person. To me you have the most beautiful face of any asian girl I have ever seen. Please let me be your devoted worshipper. Ok I will put up nice pics of you on my instagram. Until you say otherwise. You mocked my art before. But these will be real art. Inspired by you, Jo.”

He continued sending her long, emotionally charged messages about how he feels worthless and is a monster, how he hated himself and wanted to die. “I just want to say that Im with you on A.I. We got to stop it,” he said. “It hurts women. But it also addicting and does terrible things to the men who use it. Sure it feels good and its exciting. But after the poison is released, there is guilt and shame. I hated myself after every release. Its terrible to be the monster you hate.”

[Illustration: Lindsay Ballant]

He begged her to see him as her biggest fan, and to consider letting him start an OnlyFans on her behalf. He said he made money off of making deepfakes of her. “Men love you. Use them for yourself,” he wrote. “I will stop if you ask me to. If you want me to never look at any of your social media, all you have to do is ask. I am a man of my word. If you ask me to, I will never look you up ever again. I will stop being a fan.”

“He made a point of calling me Jo because I said only people who grew up with me are allowed to call me that, and for a while he was purposely referring to me as ‘Jo’ in some of the titles of his content and while messaging me,” she said.

Chew didn’t engage with any of these direct messages. But on the same day he was sending her these screeds, he uploaded a new video to a tube site: “Hate-Fucking Joanne Chew Some Chinese Whore.”

On Facebook, he sent her more incredibly lengthy messages about his obsession with her.

“I don't want you dead. I am making you immortal,” one message said. He continued:

“You hate me now, but maybe someday you will see things my way. I am not the monster you think I am. I'm just honest with my nature. I'm also sorry about your dad. I lost mine when I was a kid. Yes, it's true. I do love your image. And rest in mind, I'm not anyone from your life. [...] So life isn't that nice, so I've made up your personality and surrounded it with AI flesh. I have a mask of you that I make my tiny Asian girlfriend wear. Lastly, yes, I do have eight inches. It's not the biggest, but it is fine for little Asian girls. I'm good with my life and my love of the girl I have created in my mind with your face and my girlfriend's body. No one loves you as much as I do. You should be flattered that anyone loves you. And yes, my art is of the highest integrity, because it is actually truly honest. It isn't hiding or lying like all the beta males in your life. I am a real man that desires your body and isn't afraid to say so, not your real one, though, that one is bold and faded, but your AI body is forever young, Jo.”

She replied to some of his Facebook messages, trying to goad him into giving more information she could potentially bring to the police. But he never took the bait, instead continuing to send long rants about his sex life, her appearance, and his racist fetishes. (Chew still hasn’t gone directly to the police; she told me she’s had negative experiences going to her local police for assault, something many women report as a systemic issue across police forces.)

By late September, things became quiet. He’d deleted or deactivated his Instagram and Facebook accounts. But another account, under a new username, popped up in October and restarted the harassment, posting more to sites where people seek out deepfake porn. In some videos and images, the bodies he swapped her face onto seemed very young, and were posted alongside videos of children.


In November, Chew found someone posting the same images and videos to another site with her Chinese name. “It’s very sensitive for me as I’ve grown sick and tired of the fetishization of Asian women (that I’ve been exposed to my entire life) and I’ve only been open with my Chinese name in the last decade or so,” she told me in an email. “It looks like it’s all preexisting content. Drives me nuts someone or multiple people are out there freely distributing said content facing no repercussions (and even profiting from it).”

Around the same time, the videos returned to YouTube, posted by two new accounts, where the uploader titled videos with Chew’s full name.

[Screenshot via YouTube]

By December, other users were reposting the same content on porn tube sites, again with her full name in the titles. Around that time, a new username popped up in her Instagram comments, claiming that Ron died by suicide and that she was to blame.

“Initially wasn’t planning on replying, but wanted to see if he would drop any more information (whether or not it’s true is debatable),” Chew told me at the time. “Then he started making excuses for Ron (whether he is him or one of his followers remains to be seen) saying he was mentally challenged and then tried to blame me for his suicide, which also may or may not have happened.”

[Screenshot via YouTube]

Over the course of 10 months, Chew kept finding more accounts posting her image, her full name, and graphic videos and photos alongside degrading titles and descriptions.

As of writing, the harassment has slowed down. In the last year, Chew has sent me dozens of emails with links to hundreds, even thousands, of pieces of content and screenshots showing more deepfakes, comments, and videos on multiple platforms, many more than can be shown in one article. Much of it is gone after DeBarber’s reporting and takedown notices, and searching for her name on Google no longer returns results from porn sites, but some of it is still online.

But she’s still terrified of the long-term effects this harassment could continue to have. Although she’s a working actor, she still relies on working in the corporate world to make ends meet between the more sporadic gigs in the arts, and those jobs often require background checks. And as an actor, it’s made networking and social events harder, as trusting people outside of her closest confidants has become difficult. β€œIt's made me incredibly wary of men, which I know isn't fair, but Ron could literally be anyone,” she said. β€œAnd there are a lot of men out there who don't see the issue, they wonder why we aren't flattered for the attention.”

Deepfakes started as a novel AI-powered explicit-imagery abuse technique seven years ago. The technology went from crude frankenporn among the programming-savvy and morally flippant to producing fakes so realistic they were considered a national security threat within months of its inception. But its most popular use has always been as a mass-harassment tool. The platforms where people spread deepfakes have only expanded in that time, while the methods for making deepfakes have gotten simpler; so simple that schoolchildren do it. The adults in the room, as well as policymakers, continue to fail victims of deepfake harassment. Conversations about deepfakes still leave behind sex workers, who are doubly exploited in this content. AI continues to explode in popularity, while women targeted by this kind of harassment say again and again and again that they believe sexualized online harassment is part of the deal of being a successful woman on the internet: untenable, and yet part of some unwritten contract.

“The Violence Against Women Act Reauthorization Act of 2022 created a federal civil cause of action for victims of non-consensual content,” DeBarber said. “This law allows victims to file a lawsuit against the person who disclosed their intimate images without consent. However, this law doesn't cover ‘deepfakes’ including those created via AI. The focus tends to be on celebrities, influencers, and political figures. This itself is changing rapidly. We feel lawmakers and voters aren't seeing the larger picture; this is an everyone issue.”

Even when proposed legislation takes a new stab at criminalizing deepfakes, as the TAKE IT DOWN Act is currently attempting, it risks being used as a weapon by those who would love to further curb free speech online, rather than being nuanced, effective, and inclusive, or learning from legislative mistakes of the past.

While legislators and platforms continue to fumble around for solutions and police push victims to the side, everyone suffers. There is still no technological solution to deepfakes, and a perfect legal one seems far away, too. But Chew’s experience confronting her harasser gives us a new look into the mind of the people who dole out the abuse and hide behind anonymity, and the exhausting process of reclaiming one's own name.

Has GetReal cracked the code on AI deepfakes? $18M and an impressive client list say yes

26 March 2025 at 09:28
The proliferation of scarily realistic deepfakes is one of the more pernicious by-products of the rise of AI, and falling victim to scams based on these deepfakes is already costing companies millions of dollars, not to mention the implications these could have on national security. A startup that’s built a toolset aimed at governments […]

Deepfake nudes are the perfect issue for Melania Trump to take on: It's bipartisan and everyone agrees it's a problem.

3 March 2025 at 14:19
Melania Trump in a red suit waving
First lady Melania Trump is weighing in on deepfake nudes.

Leon Neal/Getty Images

  • Melania Trump is supporting a bill that aims to stop AI-generated nudes.
  • Deepfake nudes are a growing issue in schools that is affecting young people, especially girls.
  • This is a perfect issue for the first lady to take up: It's bipartisan and relevant.

Melania Trump appeared at a roundtable on Capitol Hill on Monday to support a proposed bill that would tackle the issue of nonconsensual sexual images, especially deepfake nudes.

This is a perfect issue for the first lady to take on. It's bipartisan, and it's something everyone agrees is awful and a scourge to young people.

It also hits a rare sweet spot of retaining some anti-tech and anti-AI feelings without actually impeding Big Tech. (In fact, Big Tech companies like Meta and TikTok have supported the bill.)

Meta supports @SenTedCruz & @SenAmyKlobuchar's TAKE IT DOWN Act, and we appreciate that @FLOTUS is highlighting the issue. Having an intimate image – real or AI-generated - shared without consent can be devastating and Meta developed and backs many efforts to help prevent it. https://t.co/R1OC14p3UV

— Andy Stone (@andymstone) March 3, 2025

Nonconsensual sexual images created with AI, or deepfake nudes, have exploded as an issue in high schools and even in middle schools, where images are created to harass and shame peers. It's especially affected young women, although it's also affected boys.

People have created sexualized images and videos of celebrities for years by using photo-editing software, but AI has made it that much easier. There are "nudify" apps that do this with just a few taps. 404 Media has reported on how Instagram has struggled to take down ads for these kinds of apps, which are against Meta's rules.

On Monday, Thorn, an organization that advocates against online child sexual abuse and exploitation, published a report based on its survey of 13-to-20-year-olds: It said one out of eight young people who responded said they knew someone personally who had been a victim of a deepfake nude image. These images, sickeningly, put a victim's real face on an AI-generated body.

"By closing a key legal gap, this bill criminalizes the knowing distribution of intimate visual depictions of minors, whether real or AI-generated, when shared with intent to harm, harass, or exploit," Emily Slifer, Thorn's director of policy, told Business Insider.

For the people affected, such images can be devastating. The New York Times called the issue an "epidemic" in a story about how AI images have wreaked havoc in suburban high schools and middle schools. During Monday's roundtable, several young women shared their experiences.

In the past, Melania Trump has rarely gotten involved in the weeds of legislation, so putting her weight behind this bill sends a signal. Using her prominent position, she can push for action that actually will make a difference.

The "Take It Down Act," a bill introduced by Republican Sen. Ted Cruz of Texas, along with a bipartisan group of senators, including Democrat Amy Klobuchar of Minnesota and Democratic Sen. Richard Blumenthal of Connecticut, would criminalize publishing these kinds of nonconsensual images and make it easier for victims to get images removed quickly.

During Donald Trump's first presidency, the first lady launched a "Be Best" initiative, which was aimed at stopping online bullying. At the time, the amorphous slogan and the fact that the president himself was no stranger to hurling insults on social media made it a bit of a punchline among some people.

But in the last two years, the effect of social media on teen mental health and the dangers of a "phone-based childhood" have become front-of-mind issues for parents and regulators. The timing for the return of Be Best is perfect.

BE BEST: On my way to The Hill to advocate for the Take It Down Act bill. I urge Congress to pass this important legislation to safeguard our youth. pic.twitter.com/A2qoet0Y2c

— First Lady Melania Trump (@FLOTUS) March 3, 2025

"As first lady, my commitment to the Be Best initiative underscores the importance of online safety," Melania Trump said during the roundtable with Cruz and others. "In an era where digital interactions are integral to daily life, it is imperative that we safeguard children from mean-spirited and hurtful online behavior."

Cyberbullying is actually quite complicated.

For social platforms, there's a tension between banning people for being jerks online and upholding the values of free speech. Elon Musk's X and now Mark Zuckerberg's Meta platforms have both loosened policies around speech that some people, including me, would consider abhorrent, like anti-trans slurs.

But some stuff, of course, should clearly not be allowed, like child sexual abuse material, or sexualized photos that are either fake or used without someone's consent. It makes sense for Melania Trump to say that out loud.

Read the original article on Business Insider

“It’s not actually you”: Teens cope while adults debate harms of fake nudes

Teens increasingly traumatized by deepfake nudes clearly understand that the AI-generated images are harmful.

And apparently so do many of their tormentors, who typically use free or cheap "nudify" apps or web tools to "undress" innocuous pics of victims, then pass the fake nudes around school or to people they know online. A surprising recent Thorn survey suggests there's growing consensus among young people under 20 that making and sharing fake nudes is obviously abusive.

That's a little bit of "good news" on the deepfake nudes front, Thorn's director of research, Melissa Stroebel, told Ars. Last year, The New York Times declared that teens are now confronting an "epidemic" of fake nudes in middle and high schools that are completely unprepared to support or in some cases even acknowledge victims.


UK’s internet watchdog toughens approach to deepfake porn

24 February 2025 at 16:01

Ofcom, the U.K.’s internet safety regulator, has published another new draft guidance as it continues to implement the Online Safety Act (OSA). The latest set of recommendations aims to help in-scope firms meet legal obligations to protect women and girls from online threats like harassment and bullying, misogyny, and intimate image abuse. The […]

© 2024 TechCrunch. All rights reserved. For personal use only.

An AI deepfake of celebs criticizing Kanye West may have picked the wrong star to replicate

12 February 2025 at 11:03
Scarlett Johansson
Scarlett Johansson has been vocal about the risks of AI.

Samir Hussein/Samir Hussein/Getty Images

  • Scarlett Johansson said she's a "victim of AI" after a video featuring a fake her went viral.
  • The video, made by AI creators, features celebrities responding to Kanye West's remarks on X.
  • The creators of the video defended their work, calling it an "artistic and cultural statement."

Scarlett Johansson reiterated her concerns about artificial intelligence after an AI video featuring her and other celebrities went viral.

Johansson, who has been vocal about AI regulation in the past, addressed a video responding to Kanye West's antisemitic remarks over the weekend on X.

On Tuesday, Ori Bejerano, a self-described generative AI expert, uploaded a video of various public figures wearing a white T-shirt featuring a middle finger marked with the Star of David and "Kanye" underneath it. The clip ended with the phrase, "Enough is enough."

As the video circulated online, Johansson said, "We must call out the misuse of AI," and urged the US government to pass legislation to regulate the technology.

"I have unfortunately been a very public victim of AI, but the truth is that the threat of AI affects each and every one of us," Johansson said in a statement to Business Insider.

The faces of several public figures, many of whom identify as Jewish, appeared in the clip, including Drake, Mike Bloomberg, and Natalie Portman.

The Israeli AI entrepreneur Guy Bar and Bejerano told BI that they're the creators of the video. The celebrities featured were chosen because of their Jewish heritage and their proximity to Ye's "social and cultural environment," Bar said.

“We wanted to use their voices, so to speak, to tell Kanye West: ‘Your antisemitism and incitement to violence have crossed every possible line. Enough is enough,’” Bar told BI.

Bar, speaking on behalf of himself and Bejerano, said they “deeply respect” Johansson's concerns about AI but defended their work.

"The video in question was not created for commercial purposes but rather as an artistic and cultural statement aimed at confronting rising antisemitism," Bar said.

BI reached out to representatives for each public figure whose likeness was included in the clip; most did not reply. A spokesperson for David Schwimmer said the actor had no comment at this time.

West, whose legal name is Ye, appeared in an ad during the Super Bowl on Sunday promoting his website Yeezy.com. After the commercial aired, the online store had a T-shirt featuring a swastika as the only product for sale.

The move followed a string of antisemitic X posts Ye put up on Friday and Saturday. The rapper praised Adolf Hitler and described himself as a Nazi. Shopify, which powered the web store, confirmed Tuesday that it shut down the site.

Johansson's previous AI legal battles

Last year, Johansson called out ChatGPT's maker, OpenAI, saying its "Sky" voice assistant sounded "eerily similar" to her own. She said she declined OpenAI's request to voice its virtual assistant before the company announced the technology.

OpenAI denied that the actor who voiced the assistant was hired to imitate Johansson but eventually "paused" Sky to address the issue. As comparisons mounted, Johansson called for legislation to protect "individual rights" regarding AI and deepfakes.

Gen AI has been a point of contention between Silicon Valley and the general public. Companies are ramping up their AI investments, while the US government is taking a more relaxed approach to regulation compared with some other countries.

"We believe that excessive regulation of the AI sector could kill a transformative industry," Vice President JD Vance said Tuesday at the Paris AI summit.

Read Johansson's full statement on Bar and Bejerano's video:

It has been brought to my attention by family members and friends that an AI-generated video featuring my likeness, in response to an antisemitic view, has been circulating online and gaining traction. I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by AI is a far greater threat than any one person who takes accountability for it. We must call out the misuse of AI, no matter its messaging, or we risk losing a hold on reality.
I have unfortunately been a very public victim of AI, but the truth is that the threat of AI affects each and every one of us.
There is a 1000-foot wave coming regarding AI that several progressive countries, not including the United States, have responded to in a responsible manner. It is terrifying that the US government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of AI.
I urge the US government to make the passing of legislation limiting AI use a top priority; it is a bipartisan issue that enormously affects the immediate future of humanity at large.
Read the original article on Business Insider

Deepfake videos are getting shockingly good

4 February 2025 at 08:18

Researchers from TikTok owner ByteDance have demoed a new AI system, OmniHuman-1, that can generate perhaps the most realistic deepfake videos to date. Deepfaking AI is a commodity. There’s no shortage of apps that can insert someone into a photo, or make a person appear to say something they didn’t actually say. But most deepfakes […]


UK confirms plans to criminalize the creation of sexually explicit deepfake content

7 January 2025 at 07:01

The U.K. is forging ahead with plans to make the act of creating sexually explicit “deepfake” images a specific criminal offence. A deepfake refers to manipulated media, often video or audio, created using AI to make someone appear to say or do something they didn’t. The U.K. had already made sharing — and the threat […]

