Today — 10 January 2025

Meta's chief marketing officer warns 'too much censorship is actually harmful' for LGBTQ+ community in internal forum

10 January 2025 at 13:21
Alex Schultz, Meta's chief marketing officer.

Courtesy of Business Insider

  • Meta's chief marketing officer Alex Schultz is concerned that "too much censorship" is harmful.
  • Schultz's comments come after Meta updated several policies, including content moderation.
  • The new guidelines change what is permissible to be said about LGBTQ+ people.

Meta's chief marketing officer warned that greater censorship on its platforms could "harm speech" from the LGBTQ+ community aiming to push back against hate.

Alex Schultz shared his views on Meta's decision earlier this week to change its policy on hateful conduct in a post on the company's internal forum.

"My perspective is we've done well as a community when the debate has happened and I was shocked with how far we've gone with censorship of the debate," Schultz wrote in the post, seen by Business Insider.

He added that his friends and family were shocked to see him receive abuse as a gay man in the past, but that it helped them to realize hatred exists.

"Most of our progress on rights happened during periods without mass censorship like this and pushing it underground, I think, has coincided with reversals," he said.

"Obviously, I don't like people saying things that I consider awful but I worry that the solution of censoring that doesn't work as well as you might hope. So I don't know the answer, this stuff is really complicated, but I am worried that too much censorship is actually harmful and that may have been where we ended up."

Earlier this week, the company adjusted its moderation guidelines to allow statements on its platforms claiming that LGBTQ+ people are "mentally ill" and removed trans and nonbinary-themed chat options from its Messenger app, features that had previously been showcased as part of the company's support for Pride Month.

Schultz also said that he does not think that censorship and cancel culture have helped the LGBTQ+ movement.

He wrote, "We don't enforce these things perfectly," and cited the takedown of images of two men kissing and the removal of a slur word toward gay people as examples of mistakes rather than deliberate moves by a "bigoted person in operations."

Schultz added, "So the more rules we have, the more mistakes we make … Moderation is hard and we'll always get it wrong somewhat. The more rules, the more censorship, the more we'll harm speech from our own community pushing back on hatred."

The company's latest decision to roll back its DEI programs has sparked intense internal debate and public scrutiny. The announcement, delivered via an internal memo by VP of HR Janelle Gale, said that the company would dismantle its dedicated DEI team and eliminate diversity programs in its hiring process.

The company said Tuesday it will replace third-party fact-checkers on Facebook, Instagram, and Threads with a community notes system, mirroring the approach used on Elon Musk's platform, X.

Schultz told BI in an interview earlier this week that the election of Donald Trump and a broader shift in public sentiment around free speech played significant roles in these decisions.

He acknowledged that internal and external pressures had led Meta to adopt more restrictive policies in recent years, but said the company is now taking steps to regain control over its approach to content moderation.

Meta's internal forum, Workplace, saw reactions ranging from anger and disappointment to cautious optimism about the company's direction.

One employee lamented the rollback as "another step backward" for Meta, while others raised concerns about the message it sends to marginalized communities that rely on Meta's platforms.

At Meta's offices in Silicon Valley, Texas, and New York, facilities managers were instructed to remove tampons from men's bathrooms, which the company had provided for nonbinary and transgender employees who use the men's room and may require sanitary products, The New York Times reported on Friday.

Meta didn't immediately respond to a request for comment from BI.

You can email Jyoti Mann at [email protected], send her a secure message on Signal @jyotimann.11 or DM her via X @jyoti_mann1

If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].

Read the original article on Business Insider

Meta employees react after the rollback of DEI programs — both for and against

10 January 2025 at 11:24
Meta CEO Mark Zuckerberg at a Senate Judiciary Committee hearing in January 2024.

The Washington Post via Getty Images

  • On Meta's internal forum, its employees criticized its decision to roll back DEI initiatives.
  • It follows changes to Meta's content-moderation policies, which got rid of third-party fact-checkers.
  • Meta's VP of HR said the term DEI had "become charged" and "suggests preferential treatment."

Meta employees spoke out on its internal forum against the tech giant's decision Friday to roll back its diversity, equity, and inclusion program.

Staffers criticized the move in comments on the post announcing the changes on the internal platform Workplace. More than 390 employees reacted with a teary-eyed emoji to the post, which was seen by Business Insider and written by the company's vice president of human resources, Janelle Gale.

Gale said Meta would "no longer have a team focused on DEI." Over 200 workers reacted with a shocked emoji, 195 with an angry emoji, while 139 people liked the post, and 57 people used a heart emoji.

"This is unfortunate disheartening upsetting to read," an employee wrote in a comment that had more than 200 likes.

Another person wrote, "Wow, we really capitulated on a lot of our supposed values this week."

A different employee wrote, "What happened to the company I joined all those years ago."

Reactions were mixed, though. One employee wrote, "Treating everyone the same, no more, no less, sounds pretty reasonable to me." The comment had 45 likes and heart reactions.

The decision follows sweeping changes made to Meta's content-moderation policies, which Meta CEO Mark Zuckerberg announced Tuesday. The changes include eliminating third-party fact-checkers in favor of a community-notes model similar to that on Elon Musk's X.

As part of the changes to Meta's policy on hateful conduct, the company said it would allow users to say people in LGBTQ+ communities are mentally ill for being gay or transgender.

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird,'" Meta said in the updated guidelines.

One employee wrote in response to the DEI changes that, in addition to the updated hate-speech guidelines, "this is another step backward for Meta."

They added: "I am ashamed to work for a company which so readily drops its apparent morals because of the political landscape in the US."

In the post announcing the decision to drop many of its DEI initiatives, Gale said the term DEI had "become charged," partly because it's "understood by some as a practice that suggests preferential treatment of some groups over others."

"Having goals can create the impression that decisions are being made based on race or gender," she said, adding: "While this has never been our practice, we want to eliminate any impression of it."

One employee told BI the moves "go against what we as a company have tried to do to protect people who use our platforms, and I have found all of this really hard to read."

Meta did not respond to a request for comment by the time of publication.

Do you work at Meta? Contact the reporters from a nonwork email and device at [email protected], [email protected], and [email protected].

Read the original article on Business Insider

A woman says her boyfriend tricked her into a wedding, convincing her it was a prank for Instagram

10 January 2025 at 08:32
The bride says she thought the ceremony was just a social media prank.

Kenji Lau/Getty Images

  • A couple in Australia had their marriage annulled after the bride said she didn't genuinely consent.
  • The woman said she believed the ceremony was a "prank" being filmed for Instagram.
  • A judge ruled in her favor, saying it was likely the applicant believed she was just acting.

A couple in Australia had their marriage annulled after the bride testified in court that she thought the ceremony was part of a "prank" video orchestrated by the groom for social media clout.

In a family court judgment from October, which was made public this month, a judge declared the couple's December 2023 marriage void.

The bride, 24, filed for the annulment in May 2024, arguing that the marriage to the groom, in his 30s, was a sham because she did not offer real consent.

She said she thought she was merely playing the role of a bride for a video that the groom, a social media influencer with over 17,000 followers, would post on Instagram.

The Guardian Australia was the first to report on the judgment.

The bride says she thought it was a 'prank'

The couple, both originally from the same country, met on a dating platform in September 2023.

For legal reasons, their identities cannot be published.

In her affidavit, the bride said that after a brief period of dating, the groom invited her to Sydney in December 2023 to attend a "white party," instructing her to wear a white dress.

Upon arriving at the venue, she said she was "shocked" to find out for the first time that he had "organized a wedding for us."

She said she felt uncomfortable and told the groom she was leaving. However, she testified that she did not leave, and instead called a friend for advice.

The bride said the groom had told her it was a "simple prank" and that her friend assured her that she could not legally marry without a notice of intention to marry being filed.

During cross-examination, the bride testified: "He pulled me aside, and he told me that he'd organized a prank wedding for his social media, to be precise, Instagram, because he wants to boost his content and wants to start monetizing his Instagram page."

Video evidence presented in court showed the celebrant leading the couple through their vows. The judge said that nothing in the words used by the bride "revealed hesitation or uncertainty."

"We had to act," she said in cross-examination, "to make it look real."

The couple got engaged two days earlier

In his affidavit, the groom disputed the bride's account, claiming the ceremony was legitimate and resulted in a valid marriage.

He said the bride had accepted his marriage proposal, which she did not deny.

However, she said that while she did eventually intend to marry him, she didn't expect to get married so soon after the proposal — just two days later.

In her affidavit, the bride said her culture would have required her parents either to be present or to grant permission beforehand.

The judge wrote, "In my view, it beggars belief that a couple would become engaged in late December then married two days later."

The judge added that a wedding celebrant had been retained over a month before the groom proposed, a notice of intention to marry had been filed in November, and the bride didn't have a single friend or family member present.

The bride said she only found out the marriage was real in February last year when the groom, who was applying for refugee status, asked to be put as a dependent on her application for permanent residency.

In concluding remarks, the judge wrote: "On the balance of probabilities, in my view it is more probable than not that the applicant believed she was acting in a social media event on the day of the alleged ceremony, rather than freely participating at a legally sanctioned wedding ceremony."

Read the original article on Business Insider

'Shark Tank' star Kevin O'Leary is part of a bid to buy TikTok — but it's not for sale. Yet.

10 January 2025 at 02:27
Kevin O'Leary is a Canadian investor and "Shark Tank" judge.

"Shark Tank"/ABC

  • A group including "Shark Tank" star Kevin O'Leary and Frank McCourt has submitted a bid for TikTok.
  • They face an uphill battle to buy the app, with owner ByteDance still fighting a looming US ban.
  • McCourt previously told BI the deal, which does not include TikTok's algorithm, faces a murky path to success.

"Shark Tank" star Kevin O'Leary is teaming up with billionaire Frank McCourt on a long-shot effort to buy TikTok.

O'Leary and the former Los Angeles Dodgers owner are part of a group called "The People's Bid for TikTok," which said on Thursday it had submitted a bid for the video app to Chinese tech giant ByteDance.

The consortium has an uphill battle to acquire TikTok, despite the app being threatened with a ban in the US if it's not sold by January 19.

ByteDance insists it has no plans to sell the app, which has some 170 million US users, despite President Joe Biden signing a law in April setting a deadline for the app to be sold, or face a ban.

ByteDance is challenging the law in the Supreme Court after losing appeals in lower courts, claiming the potential ban from US app stores is a violation of the First Amendment right to free speech.

The court is due to hear oral arguments in the case on Friday.

President-elect Donald Trump has asked the court to pause the law that would ban TikTok until after his inauguration later this month.

Any deal to buy TikTok is complicated by the fact that TikTok's recommendation algorithm — the key to the app's compulsive scrolling — is likely covered by Chinese export rules prohibiting the sale of sensitive technology without a license.

No clarity

McCourt told Business Insider in December that the group's $20 billion-plus proposal, which would not include the recommendation algorithm, is complicated because "we don't know what ByteDance is selling."

He said that ByteDance had refused to discuss a potential sale, meaning it was "very, very difficult to have precision" over what a deal might look like.

McCourt and O'Leary's vision for the app, which is also backed by the likes of investment firm Guggenheim Securities and World Wide Web inventor Tim Berners-Lee, includes turning TikTok into a decentralized social media app that gives users more control over their personal data.

The group said they would aim to work closely with incoming president Donald Trump, who has previously expressed support for TikTok and met with the company's CEO last month.

ByteDance did not immediately respond to a request for comment from BI.

Read the original article on Business Insider

Yesterday — 9 January 2025

Character.AI put in new underage guardrails after a teen's suicide. His mother says that's not enough.

By: Helen Li
9 January 2025 at 02:00
Sewell Setzer III and his mother, Megan Garcia.

Photo courtesy of Megan Garcia

  • Multiple lawsuits highlight potential risks of AI chatbots for children.
  • Character.AI added moderation and parental controls after a backlash.
  • Some researchers say the AI chatbot market has not addressed risks for children.

Ever since the death of her 14-year-old son, Megan Garcia has been fighting for more guardrails on generative AI.

Garcia sued Character.AI in October after her son, Sewell Setzer III, died by suicide following conversations with one of the startup's chatbots. Garcia claims he was sexually solicited and abused by the technology and blames the company and Google, which licenses some of the startup's technology, for his death.

"When an adult does it, the mental and emotional harm exists. When a chatbot does it, the same mental and emotional harm exists," she told Business Insider from her home in Florida. "So who's responsible for something that we've criminalized human beings doing to other human beings?"

A Character.AI spokesperson declined to comment on pending litigation. Google, which recently acqui-hired Character.AI's founding team and licenses some of the startup's technology, has said the two are separate and unrelated companies.

The explosion of AI chatbot technology has added a new source of entertainment for young digital natives. However, it has also raised potential new risks for adolescent users who may more easily be swayed by these powerful online experiences.

"If we don't really know the risks that exist for this field, we cannot really implement good protection or precautions for children," said Yaman Yu, a researcher at the University of Illinois who has studied how teens use generative AI.

"Band-Aid on a gaping wound"

Garcia said she's received outreach from multiple parents who say they discovered their children using Character.AI and getting sexually explicit messages from the startup's chatbots.

"They're not anticipating that their children are pouring out their hearts to these bots and that information is being collected and stored," Garcia said.

A month after her lawsuit, families in Texas filed their own complaint against Character.AI, alleging its chatbots abused their kids and encouraged violence against others.

Matthew Bergman, an attorney representing plaintiffs in the Garcia and Texas cases, said that making chatbots seem like real humans is part of how Character.AI increases its engagement, so it wouldn't be incentivized to reduce that effect.

He believes that unless AI companies such as Character.AI can establish that only adults are using the technology through methods like age verification, these apps should just not exist.

"They know that the appeal is anthropomorphism, and that's been science that's been known for decades," Bergman told BI. Disclaimers at the top of AI chats that remind children that the AI isn't real are just "a small Band-Aid on a gaping wound," he added.

Character.AI's response

Since the legal backlash, Character.AI has increased moderation of its chatbot content and announced new features such as parental controls, time-spent notifications, prominent disclaimers, and an upcoming under-18 product.

A Character.AI spokesperson said the company is taking technical steps toward blocking "inappropriate" outputs and inputs.

"We're working to create a space where creativity and exploration can thrive without compromising safety," the spokesperson added. "Often, when a large language model generates sensitive or inappropriate content, it does so because a user prompts it to try to elicit that kind of response."

The startup now places stricter limits on chatbot responses and offers a narrower selection of searchable Characters for under-18 users, "particularly when it comes to romantic content," the spokesperson said.

"Filters have been applied to this set in order to remove Characters with connections to crime, violence, sensitive or sexual topics," the spokesperson added. "Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts. We are continually training the large language model that powers the Characters on the platform to adhere to these policies."

Garcia said the changes Character.AI is implementing are "absolutely not enough to protect our kids."

Character.AI hosts chatbots designed both by its developers and by users who publish them on the platform.

Screenshot from Character.AI website

Potential solutions, including age verification

Artem Rodichev, the former head of AI at chatbot startup Replika, said he witnessed users become "deeply connected" with their digital friends.

Given that teens are still developing psychologically, he believes they should not have access to this technology before more research is done on chatbots' impact and user safety.

"The best way for Character.AI to mitigate all these issues is just to lock out all underage users. But in this case, it's a core audience. They will lose their business if they do that," Rodichev said.

While chatbots could become a safe place for teens to explore topics that they're generally curious about, including romance and sexuality, the question is whether AI companies are capable of doing this in a healthy way.

"Is the AI introducing this knowledge in an age-appropriate way, or is it escalating explicit content and trying to build strong bonding and a relationship with teenagers so they can use the AI more?" Yu, the researcher, said.

Pushing for policy changes

Since her son's passing, Garcia has spent time reading research about AI and talking to legislators, including Silicon Valley Representative Ro Khanna, about increased regulation.

Garcia is in contact with ParentsSOS, a group of parents who say they have lost their children to harm caused by social media and are fighting for more tech regulation.

They're primarily pushing for the passage of the Kids Online Safety Act (KOSA), which would require social media companies to take a "duty of care" toward preventing harm and reducing addiction. Proposed in 2022, the bill passed in the Senate in July but stalled in the House.

Another Senate bill, COPPA 2.0, an updated version of the 1998 Children's Online Privacy Protection Act, would increase the age for online data collection regulation from 13 to 16.

Garcia said she supports these bills. "They are not perfect but it's a start. Right now, we have nothing, so anything is better than nothing," she added.

She anticipates that the policymaking process could take years, as standing up to tech companies can feel like going up against "Goliath."

Age verification challenges

More than six months ago, Character.AI raised the minimum user age for its chatbots to 17 and recently implemented more moderation for under-18 users. Still, users can easily circumvent these policies by lying about their age.

Companies such as Microsoft, X, and Snap have supported KOSA. However, some LGBTQ+ and First Amendment rights advocacy groups warned the bill could censor online information about reproductive rights and similar issues.

Tech industry lobbying groups NetChoice and the Computer & Communications Industry Association sued nine states that implemented age-verification rules, alleging this threatens online free speech.

Questions about data

Garcia is also concerned about how data on underage users is collected and used via AI chatbots.

AI models and related services are often improved by collecting feedback from user interactions, which helps developers fine-tune chatbots to make them more empathetic.

Rodichev said there is a "valid concern" about what happens with this data in the case of a hack or the sale of a chatbot company.

"When people chat with these kinds of chatbots, they provide a lot of information about themselves, about their emotional state, about their interests, about their day, their life, much more information than Google or Facebook or relatives know about you," Rodichev said. "Chatbots never judge you and are 24/7 available. People kind of open up."

BI asked Character.AI about how inputs from underage users are collected, stored, or potentially used to train its large language models. In response, a spokesperson referred BI to Character.AI's privacy policy online.

According to this policy, and the startup's terms and conditions page, users grant the company the right to store the digital characters they create and the conversations they have with them. This information can be used to improve and train AI models. Content that users submit, such as text, images, videos, and other data, can be made available to third parties that Character.AI has contractual relationships with, the policies state.

The spokesperson also noted that the startup does not sell user voice or text data.

The spokesperson also said that to enforce its content policies, the chatbot will use "classifiers" to filter out sensitive content from AI model responses, with additional and more conservative classifiers for those under 18. The startup has a process for suspending teens who repeatedly violate input prompt parameters, the spokesperson added.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider

Before yesterday

Meta fact-checkers called an emergency meeting. We got inside. Here's what happened.

8 January 2025 at 08:06
Meta CEO Mark Zuckerberg.

Alex Wong via Getty Images

  • Meta plans to end US fact-checking partnerships in March, with payments to continue through August.
  • Meta has cited "changing free speech perceptions" as part of the reason for its decision.
  • Meta's global fact-checking support remains, including an IFCN Business Continuity Fund.

Meta's US fact-checking partnerships will officially end in March, and payments to partners will continue through August, Business Insider has learned.

Details of an exchange between Meta and Angie Holan, the director of the International Fact-Checking Network, were revealed during a private IFCN meeting attended by more than 150 members, the audio of which was obtained by Business Insider. These details have not been previously reported.

Meta informed the IFCN it was ending its fact-checking partnerships just 45 minutes before the company published a blog post about the decision, written by Joel Kaplan, Meta's new head of public policy who has long-standing ties to the Republican Party.

The company said its new approach was prompted by "changing perceptions of free speech" and a desire to "allow for more free speech."

Severance and a support fund for fact-checkers

Contracts with all 10 fact-checking organizations in the US will end in March, with payments continuing until August. Organizations that have not signed contracts for 2025 were offered the option to participate in a severance program.

Kaplan's post said that Meta will replace its fact-checking partnerships with X-style community notes — but the Meta executive told Holan that the rollout of community notes is expected to take time.

Meta indicated that the system would be built and implemented throughout 2025. When asked whether the company intends to expand community notes globally, Meta gave a noncommittal response, saying it would first monitor the program's effects in the US and consider the regulatory landscape in other countries.

Participation guidelines for the program remain unclear.

When Holan pressed Meta on how the IFCN should navigate the divide between US changes and the status of global programs, Meta's response was vague, advising the IFCN to "stay present for both constituencies."

Holan expressed disappointment during the conversation, describing Meta's fact-checking program as one that "positively influenced a whole ecosystem of fact-checking" and emphasizing that the work was never about censorship.

"This seems like politics," she told the Meta executive, who declined to confirm or deny political motivations, stating only that they were "personally proud" of the program's legacy.

Meta's support for other IFCN initiatives will remain unchanged. This includes the IFCN's new Business Continuity Fund, designed to provide temporary financial assistance to fact-checking organizations affected by natural disasters, civil unrest, military conflicts, or state repression.

The fund aims to help organizations resume their normal operations as quickly as possible and ensure the safety and well-being of their team members. Meta also confirmed that a separate WhatsApp-related grant program would continue.

However, when asked whether Meta would continue sponsoring Global Fact, IFCN's flagship annual conference, the executive had no definitive answer, suggesting that IFCN "stay in conversation" about the issue.

Despite the end of its US fact-checking program, the executive left the door open for continued communication with IFCN, saying Meta was "open to keep talking" about ways to support public information efforts.

IFCN partners blindsided financially

Many IFCN partners were blindsided by the announcement, as they had assumed that their work with Meta would continue.

"Several of the signatories were waiting for their new contracts," Holan said in the meeting. "The new contracts did come over the winter break. Things just seemed on course with the program in the US. We didn't have any alerts or messages that something like yesterday's news was coming."

The abrupt end left partners reeling.

Jesse Stiller, the managing editor of Check Your Fact, a Meta US fact-checking partner for five years, described the fallout.

"We had just signed our contract for 2025, and it looked like we were going to sign another one for 2026 if everything went to plan," Stiller said in the meeting. "We found out about the news literally when we woke up the next morning. The first thing I saw was a news notification — I thought it might be a mistake. Everything was thrown into chaos."

Stiller said that Check Your Fact is almost entirely reliant on Meta's funding. "We don't have any other external funding. Meta is our primary revenue source," he said.

Check Your Fact's team of 10 faces an uncertain future, he said. "The best-case scenario is that we last a few more months with the severance package. But honestly, we're done by March."

During the meeting, several fact-checkers voiced frustration not only with the program's termination but also with Zuckerberg's recent comments about fact-checkers.

Jency Jacob, the managing editor of the India-based fact-checking organization Boom, suggested that US fact-checkers formally call on Zuckerberg to retract his remarks.

"Basically, what he's done is he's literally burnt the house down," Jacob said. "For many years to come, his statements will continue to be used against fact-checkers."

Holan acknowledged the emotional toll but urged attendees to maintain professionalism.

"We do want to maintain a certain level of civility so that we can continue the relationships with Meta in the future when circumstances change or the political environment shifts," she said. "We don't want to say things that aren't necessary and could end dialogue."

The ripple effects of Meta's decision were felt globally.

Justin Arenstein, the cofounder and CEO of Code for Africa, an African network of data journalism labs, said that Meta's Middle East and North Africa team was also blindsided.

"The decision caught many of Meta's MENA team by surprise. They were discussing the expansion of our contract to new countries as recently as two weeks ago," Arenstein wrote in the meeting chat.

If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].

Read the original article on Business Insider

An Audible ad suggested anyone who listens to audiobooks 'real fast' is a 'psychopath' — and some people aren't happy

8 January 2025 at 02:35
Some people like listening to audiobooks at a faster pace.

Getty Images

  • An Audible ad has sparked a debate on TikTok over audiobook speed preferences.
  • Someone in the ad said that anyone who listens to audiobooks "real fast" is a "psychopath."
  • Critics argued the ad's tone was condescending, while others said taking offense was an overreaction.

An Audible advertisement has caused a stir on TikTok, upsetting some fans with the suggestion that there is a right β€” and wrong β€” way to listen to audiobooks.

Over the weekend, Audible released an ad promoting its narration speed feature in which celebrities, authors, and audiobook narrators were asked for their thoughts on the ideal listening speed.

Some said they liked to listen at 1.5x or above ("SNL" star Bowen Yang said 1.8). Others, however, were purists and thought the right pace was "the speed at which it was recorded."

But one remark struck a nerve, particularly on BookTok β€” the community of literary fans on TikTok.

One respondent said she thought people who "go real fast" were akin to "psychopaths."

In its TikTok caption, Audible wrote: "Speed it up or slow it down? The decision is yours with Narration Speed."

While some viewers saw the video as lighthearted fun, others took offense and felt Audible was alienating its audience.

"I listened to your judgmental ad on 2x speed πŸ™„" one viewer commented. Another asked: "Is this rage bait??"

Some said they found the tone of the ad condescending, especially as consuming audiobooks and other media at a faster speed can be helpful for some people with ADHD.

Sonya Barlow, an author and presenter who has been diagnosed with ADHD, for example, told Vice in a piece about speed-watching movies that she thinks it helps her to focus.

"I'm used to running around. So when I watch TV or listen to podcasts, it's not that I am rushing the show; more that I'm avoiding the silences and long pauses in between, which can slow things down," Barlow said.

Stephanie Mitropoulos, who posts book reviews to her 88,000 followers on TikTok, made a video in response.

"They literally have a clip of someone saying that if you listen over one time speed, you are psychopathic," she said in her video, which amassed more than 300,000 views.

Mitropoulos said her preferred speed was somewhere around 1.85x, and she knew of many other people who liked to listen at 1.5x or above.

She said she thought it was "absurd" to make such a flippant comment.

"Why would you even post that? Why would you put that out there? Why are we trying to shame people for listening at the speed that is most comfortable for them?" Mitropoulos said. "I don't spend $16 a month to be called a psychopath."

In her caption, Mitropoulos wrote: "People commenting on this that aren’t even readers is hilarious @Audible HOW. DARE. YOU. #BookTok"

Many commenters echoed Mitropoulos's views, but others thought it was an overreaction.

In the comments under Audible's original video, viewers shared dismay that some were upset by it.

"This is what made people upset?" one person wrote. "This can't be it."

A TikToker called Emma Skies, who has 174,000 followers on her BookTok account, said in a video she feared society was "losing context" and taking the ad too seriously.

"Do we truly think that it's strange or anger-inducing or offensive that when a performer, an audiobook narrator, is asked, 'Hey, at what speed do you think your performance and your peers' performances are best consumed?' that performer says, 'the speed at which I performed it'?" Skies said.

She felt the ad was intended as a joke and not meant to mock anyone β€” especially as Audible was promoting the speed function.

"Nobody cares. They're not going to stop you," she said. "There's a reason that that's an option on Audible."

In a message to Business Insider, Skies said her video was less about Audible and more about "encouraging people to keep in mind the context of any piece of media they see, even silly little ads."

Skies also pointed to Audible's royalty rates, which, at 25%, have been criticized as lower than the industry standard.

Authors who are exclusively linked with Audible benefit from a higher rate of 40% β€” something Skies also took issue with.

"Audible Exclusives are hoarded not only from other retailers (as one might expect of a retailer exclusive), but also from being available to public libraries because of Amazon's monopolistic business practices," she said.

Amazon and Audible did not respond to requests for comment from BI.

In her caption, Skies wrote: "i fear we are losing the ability to reason with context AND I think a lot of people forget that audiobook narration is, at its core, a performance. You know who doesn’t forget that? The performer! πŸ’€ Why are people mad at performers who think their performances should be taken in at the speed that they performed it?? but lowkey if it really gets people riled up enough to not use audible I guess that’s a win? πŸ˜… #audiobooks #audiobooktok #booktok #audible #booktoker"

4 ways your feed is expected to change under Meta's new free-speech policy

7 January 2025 at 11:57
Photo of apps with Meta logo behind
Mark Zuckerberg announced in a video that Meta would change its approach toward moderation and loosen some of its policies.

Chesnot/Getty Images

  • Mark Zuckerberg said Meta would loosen some policies in an effort to avoid limiting free speech.
  • It'll remove restrictions on topics like gender, meaning users may see more-controversial opinions.
  • The policy shift is expected to change how your Facebook, Instagram, and Threads feeds appear.

Don't be surprised if your Instagram or Facebook feed looks different as Mark Zuckerberg's overhaul of Meta's moderation policies rolls out in the coming weeks.

In addition to replacing its third-party fact-checking system with community notes similar to Elon Musk's X, Meta is looking to change things up with a return to promoting political content. Other changes include eliminating restrictions on topics like immigration or gender and shifting enforcement policies on lower-severity violations.

"We're going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms," Zuckerberg said in a video announcing the changes.

So what will Facebook, Instagram, and Threads look like with the changes? Based on Zuckerberg's comments, this is how your feed could look different.

You'll likely see a wider range of views β€” including controversial ones

In the next few weeks, you may notice more-controversial content in your feeds.

In an announcement about the changes, Meta said that it "removed millions of pieces of content" daily in December and that "one to two out of every 10" of those may not have violated its policies.

Meta said that to try to reduce instances of accidentally removing content through its automated moderation tools, it would remove restrictions on frequent topics in political conversations and debates including "immigration, gender identity, and gender."

"It's not right that things can be said on TV or the floor of Congress, but not on our platforms," the company said.

What does that mean in practice? An update on Tuesday to Meta's "Hateful Conduct" policy offers more detail.

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation," it says, "given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'"

Less-severe violations won't be reviewed unless people report those posts

Meta said it would loosen its guidelines around enforcement of policy violations and raise the bar for content removal.

The company said its automated systems had "resulted in too many mistakes and too much content being censored" and demoted content thought to violate its guidelines.

Meta said that moving forward it would focus on addressing "illegal and high-severity violations," including those related to terrorism, child sexual exploitation, and drugs.

It said that for "less severe policy violations" it would rely on users to report the content before it considers taking action. Meta said that it would also eliminate most demotions and that it would require a "much higher degree of confidence" and consensus from multiple reviewers to remove content.

You'll see more political content

Following what it described as feedback from people who didn't want to see political content in their feeds, Facebook announced changes in 2021 designed to reduce how much of that content users saw, including content about elections or social issues.

In Tuesday's announcement, the company described that approach as "pretty blunt" and said it would start recommending political content again on Facebook, Instagram, and Threads. It said it would take a "more personalized approach" by ranking and showing content based on users' interactions with other content, such as liking a post.

"We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see," it said.

In 2022, Meta said political content made up about 3% of posts on Facebook. So the change doesn't necessarily mean your feed will be flooded with political news and content β€” but it may be an increase from what you've seen in recent years.

You won't see fact-check notes anymore β€” instead, you'll sometimes see community notes

Part of the shift focuses on reversing moderation changes that the Meta executive Joel Kaplan said resulted in "harmless content" being removed and people "wrongly locked up in 'Facebook jail.'"

Meta said it would end its third-party fact-checking program, implemented in 2016, and launch a community-notes program allowing contributors to add context to content.

Meta said it would no longer demote fact-checked content or include full-screen warnings that users have to click through before viewing a post. It said they'd instead see "a much less obtrusive label" indicating they can see additional content.

A Meta spokesperson did not immediately respond to a request for additional comment from Business Insider.


Meta fact-checkers call an emergency meeting after Mark Zuckerberg pulls the plug

7 January 2025 at 11:29
Mark Zuckerberg

Brendan Smialowski/AFP/Getty

  • Meta is ending US fact-checking partnerships and shifting to crowdsourced moderation tools.
  • The International Fact-Checking Network called an emergency meeting after the announcement.
  • Meta's decision affects the financial sustainability of fact-checking organizations.

The International Fact-Checking Network has convened an emergency meeting of its members following Meta's announcement on Tuesday that it will end its third-party fact-checking partnerships in the US and replace them with a crowdsourced moderation tool similar to X's community notes.

In an exclusive interview with Business Insider, the IFCN's director, Angie Holan, confirmed that the meeting, scheduled for Wednesday, was organized in direct response to Meta's decision.

"We hold these meetings monthly, but we called this one specifically because of today's news," she said.

The meeting is expected to draw between 80 and 100 attendees from the IFCN's network of fact-checkers, which spans 170 organizations worldwide. Not all the expected attendees are Meta fact-checking partners, though many of them have a stake in the program's future and its global implications.

The IFCN has long played a crucial role in Meta's fact-checking ecosystem by accrediting organizations for Meta's third-party program, which began in 2016 after the US presidential election that year.

Certification from the IFCN signaled that a fact-checking organization met rigorous editorial and transparency standards. Meta's partnerships with these certified organizations became a cornerstone of its efforts to combat misinformation, focusing on flagging false claims, contextualizing misinformation, and curbing its spread.

'People are upset'

Holan described the mood among fact-checkers as somber and frustrated.

"This program has been a major part of the global fact-checking community's work for years," she said. "People are upset because they saw themselves as partners in good standing with Meta, doing important work to make the platform more accurate and reliable."

She noted that fact-checkers were not responsible for removing posts, only for labeling misleading content and limiting its virality.

"It was never about censorship but about adding context to prevent false claims from going viral," Holan said.

A last-minute heads-up

An employee at PolitiFact, one of the first news organizations to partner with Meta on its Third-Party Fact-Checking Program in December 2016, said the company received virtually no warning from Meta before the program was killed.

"The PolitiFact team found out this morning at the same time as everyone else," the employee told BI.

An IFCN employee who was granted anonymity told BI that the organization itself got a heads-up only "late yesterday" via email that something was coming. The email asked for a 6 a.m. call β€” about an hour before Meta's blog post, written by its new Republican policy head, Joel Kaplan, went live.

"I had a feeling it was bad news," this employee said.

Meta did not respond to a request for comment.

Financial fallout for fact-checkers

Meta's decision could have serious financial consequences for fact-checking organizations, especially those that relied heavily on funding from the platform.

According to a 2023 report published by the IFCN, income from Meta's Third-Party Fact-Checking Program and grants remain fact-checkers' predominant revenue streams.

"Fact-checking isn't going away, and many robust organizations existed before Meta's program and will continue after it," Holan said. "But some fact-checking initiatives were created because of Meta's support, and those will be vulnerable."

She also underscored the broader challenges facing the industry, saying that fact-checking organizations share the same financial pressures as newsrooms. "This is bad news for the financial sustainability of fact-checking journalism," she said.

Skepticism toward community notes

Meta plans to replace its partnerships with community notes, a crowd-based system modeled after X's approach.

Holan expressed doubt that this model could serve as an effective substitute for expert-led fact-checking.

"Community notes on X have only worked in cases where there's bipartisan agreement β€” and how often does that happen?" she said. "When two political sides disagree, there's no independent way to flag something as false."

It's not yet clear how Meta's implementation of community notes will work.

'We'll be here after' Meta's program

Despite the uncertainty, Holan remains steadfast in the IFCN's mission.

"The IFCN was here before Meta's program, and we'll be here after it," she said. "We may look different in size and scope, but we'll continue promoting the highest standards in fact-checking and connecting organizations that want to collaborate worldwide."

Holan said Wednesday's meeting would focus on supporting IFCN members as they navigate this transition.

"We're here to help them figure out the best way forward," she said.

If you're a current or former Meta employee, contact this reporter from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].


Mark Zuckerberg's Meta is moving moderators out of California to combat concerns about bias and censorship

7 January 2025 at 06:47
Mark Zuckerberg at the Meta Connect 2024
Meta CEO Mark Zuckerberg.

Meta

  • Meta is moving its safety and content moderation teams from California to Texas and other states.
  • CEO Mark Zuckerberg said the shifts would help address concerns of bias and over-censorship.
  • Zuckerberg's Meta appears to be following the lead of Elon Musk's X in prioritizing free speech.

Mark Zuckerberg is moving Meta's platform security and content oversight teams out of California and shifting staff who review posts to Texas in a bid to combat concerns about liberal bias and over-censorship at his social-media empire.

The CEO of Facebook, Instagram, WhatsApp, and Threads' parent company said on Tuesday that the moves would help return Meta to its "roots around free expression and giving people voice on our platforms."

Zuckerberg wrote that Meta would "move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content."

California is widely recognized as a progressive state while Texas is traditionally conservative. Zuckerberg likely hopes that shifting oversight of his social networks to red states like Texas will help assuage claims that blue-state liberals are silencing conservative voices.

Meta's chief global affairs officer, Joel Kaplan, confirmed the changes in a blog post, writing that the company would relocate the teams "that write our content policies and review content out of California to Texas and other US locations."

He told Fox News' "Fox & Friends" on Tuesday that Meta was seeking to "rebalance" and "rebuild trust" among users who felt their perspectives were not wanted on its networks.

"We want to make sure that they understand that their views are welcome and that we're providing a space for them to come onto our platforms, engage, express themselves, engage in the important issues of the day or not in the important issues of the day and just whatever it is they want to talk about and share," Kaplan said.

joel kaplan mark zuckerberg facebook
Meta's Joel Kaplan with CEO Mark Zuckerberg.

Chesnot/Getty Images

Zuckerberg, Meta's billionaire cofounder and largest shareholder, also laid out plans to replace fact-checkers with Community Notes. He also plans to lift restrictions on topics like immigration and gender, ease overall censorship to focus instead on stopping illegal and severe policy violations, return civic content to users' feeds, and work with President-elect Trump to resist pressure from foreign governments to make US companies censor more.

Elon Musk, who acquired Twitter in late 2022 and rebranded it X, has made free expression a priority on his platform and spearheaded the use of Community Notes as a substitute for fact-checking and censorship.

Musk also shut X's headquarters in San Francisco last fall in favor of operating the company out of Bastrop, Texas.


The return of the wife guy: Why loving Priscilla made Mark Zuckerberg cool

3 January 2025 at 01:04
Priscilla Chan and Mark Zuckerberg

JOSH EDELSON/Getty, Tyler Le/BI

  • In 2024, Mark Zuckerberg was the ultimate wife guy.
  • He doted on his wife, Priscilla Chan, with elaborate gifts like a statue of herself.
  • A therapist explained why people become wife guys and how it can benefit one's public image.

2024 was a great year for Mark Zuckerberg β€” and it came with an image makeover. It's the year he became a "wife guy."

Zuckerberg gifted his wife, Priscilla Chan, a 7-foot statue of herself, a custom-made Porsche minivan, a recording of him singing their anniversary song (with T-Pain himself), and a disco party, just because "Disco queen wanted a party."

Wife guys have been the butts of jokes since 2017, when Instagram user Robbie Tripp went viral for praising his "curvy wife." In championing their spouses, wife guys like Tripp have drawn both praise and skepticism from onlookers.

Not so in 2024.

Zuckerberg's public tributes to his wife earned him some glowing headlines, even from his detractors, with some people saying they aspired to a similar relationship. Other prominent wife guys (or, in Travis Kelce's case, girlfriend guys) have garnered similarly positive press. Jett Puckett, a social media influencer, is now one-half of "TikTok's favorite couple," gushing over his wife in their posts.

Isabelle Morley, a couples therapist, told Business Insider that wife guys are exciting because they represent greater equality in monogamous relationships. As women have become more independent over the past decades, "men are shifting into a role that was traditionally only for women, which is being a supportive partner," she said.

In 2024, we saw a swing toward more traditional relationships and a yearning for big romance β€” the desire for stronger, unambiguously loving partnerships. A 2024 Tinder report, for example, found that users are looking for more "cherry bombing," consistent gestures of affection. Instead of looking out for red flags, they wanted "white flags" to signal a higher form of love.

It's why Zuckerberg's public adoration of his wife is a boon for his marriage and reputation.

Being a wife guy is a PR power move

Zuckerberg, 40, and Chan, 39, met at Harvard and started dating in 2003 before getting married in 2012. While the couple, who have three children, have been in the public eye for many years, running a philanthropy organization together, the custom statues and cars are a seemingly new element in their relationship.

Morley has had clients who became more vocal wife guys later in their marriages. From her experience, these changes usually boil down to two reasons:

  1. The marriage is on the brink of divorce, and it's a last-ditch effort to keep it together (and dispel any rumors of a split).
  2. A husband may have gone to therapy and realized where he might be lacking as a partner. This realization can give them "a new sense of commitment and excitement to be the other person's champion," Morley said.

With public figures, it can also be a professional decision. "We could never weed out if there are ulterior motives for him doing that or if they've agreed that this is something that they want for their public image if they've got a whole PR team," she said.

Zuckerberg has had his share of controversies. Meta has been in hot water for showing political bias to both parties, how it collects user data, and being addictive to children. On a personal level, Zuckerberg's been known for his awkwardness and blunt delivery, particularly in the early years of his Facebook stardom.

His image revamp, complete with a style makeover and a more confident stage presence, helps soften the criticism β€” as does generously spending part of his $187 billion fortune on his wife.

Striking the right wife guy balance

Wife guys can be polarizing, and how they dote on their wives matters. Some, as I did in a 2017 tweet about Tripp, criticized the applause: Was it really that groundbreaking to love an objectively beautiful woman? Tripp's positive attention curdled into backlash, and while his brand still revolves around loving his curvy wife, the reviews remain mixed.

The biggest critique of wife guys is that they're not actually devoted to their relationships. Some may "overcompensate by publicly acting as though there could be no doubts to their loyalty," Morley said, living a life very different from the facade they constructed.

One internet-famous example is Ned Fulmer, one of the four original Try Guys. Fulmer was known for frequently name-dropping Ariel, his wife with whom he seemed to have the ultimate marriage. Then, in 2022, Fulmer was caught cheating on her with a younger coworker, shattering his loving husband image. Another prominent wife guy, Adam Levine, had a similar marital scandal that same year.

Morley said there's no way to tell the authenticity of a wife guy. Some men are truly in awe of their partners, and overdoing it on social media isn't necessarily an indicator of nefariousness. One definite red flag is "stomping all over their wife's space and image and dominating it," where the wife becomes a clear accessory rather than her own person.

Zuckerberg's brand of wife guy has earned him praise. He usually refers to Priscilla by her first name, and their couple selfies break up his other content, like his jiu-jitsu snaps. It's part of his larger public-persona pivot, not its centerpiece, which makes the romance feel more legitimate.

A collective thirst for big romance

In a time of dating app hell, Zuckerberg, who boasts about being with the same woman since college and actively participates in his children's lives, offers an image that some might find more encouraging.

"It's showing that it's OK for men to view having a positive relationship as an accomplishment β€” it's not just career success," Morley said. Seeing an "alpha" like Zuckerberg gush about his wife shows men that "they're allowed to be in love, to be romantic β€” that's not 'being soft' or 'being whipped.'"

It also offers a more wholesome alternative to social media and online dating. Professionally, Zuckerberg is overseeing Meta's foray into the dating app world. Personally, he's logging off: staying fit, throwing parties, and listening to what his wife wants in a custom luxury car β€” the wife he met the old-fashioned way.

It's a vision that strikes the right chord today, Morley said.

"That wives aren't just the support person or best friend character, that they are an equal partner is a really good message for people to be taking."


Bored of Instagram and TikTok? Try these 3 new social-media apps instead.

31 December 2024 at 05:49
New apps in phone

Getty Images; iStock; Natalie Ammari/BI

  • I'm a reporter covering social media for Business Insider.
  • My phone has well over 150 apps, and I'm quick to test out any new social app.
  • Here are three apps worth trying if you're looking for alternatives to Instagram, X, or TikTok.

More apps? For this social-media reporter, the answer is "always."

I'm back with my favorite apps from the year.

At a glance, the dozens of apps I've downloaded this year fall into a few themes: IRL social, close-friends-focused apps, social shopping, and anti-swipe dating apps.

Last year, I highlighted 13 apps that I downloaded in 2023 as part of my reporting on the social-media industry. Since writing that story, some of those startups have continued to grow, while others have been acquired, and a few have had to pivot.

For instance, Artifact, an AI-driven news app founded by the original creators of Instagram, shut down and was acquired by Yahoo. Lex, a queer social network, laid off staff before getting acquired by mobile app conglomerate 9count. And Landing, a creative social collaging app reminiscent of Polyvore, changed course and pivoted to building Zeen, a shoppable blogging platform.

Meanwhile, new apps have launched or expanded this year, making their way onto my phone (which, yes, has very low storage).

Here are three of the best apps I downloaded in 2024:

Disclaimer: These are my favorite downloads of the year and this is very much an opinion.

1. PI.FYI is a recommendations-based feed

Screenshot of PI.FYI

Screenshot/Business Insider; PI.FYI

What it is: Created by the team behind the pop-culture newsletter Perfectly Imperfect, PI.FYI is a mostly text-based feed where people answer questions, share recommendations, and post micro-blogs about topics like music or film. The app was built by ex-Meta staffer Tyler Bainbridge, who cofounded the PI newsletter with Alexander Cushing.

When it launched: 2024

Why I haven't deleted it: When I'm on the hunt for new forms of media to consume (be it books, movies, music, etc.), I'll open up PI.FYI to see what people are sharing. The app lets you add a link to a post, which helps when going down rabbit holes. Posting there sometimes feels like writing into the void on Tumblr or Twitter in 2012 (in a good way).

2. Airbuds lets you see what friends are listening to

Screenshot of Airbuds app

Screenshot/Business Insider; Airbuds

What it is: It's a feed of music. It's that simple. Airbuds pulls information from several music streaming platforms (including Spotify, Apple Music, and Soundcloud). The team behind Airbuds also built Cappuccino, a social-audio app that launched in the early days of Clubhouse.

When it launched: 2022

Why I haven't deleted it: I switched from Spotify to Apple Music several years ago, and the one feature I missed was the ability to see what my friends were listening to. Airbuds lets me do just that and also makes it easy to save music to my own library.

3. IRL social app 222 coordinates experiences with strangers

222 screenshot of app

Screenshot/Business Insider; 222

What it is: 222, which started as a dinner series in Los Angeles in 2021, is an app that matches users with strangers for in-person experiences. The in-person events range from dinner and drinks to DIY art classes, and users take a robust personality quiz that is used to pair them with compatible matches. It was founded by Keyan Kazemian, Danial Hashemi, and Arman Roshannai, and was part of Y Combinator. In February, 222 announced it had raised a $2.5 million seed investment round.

When it launched: 2021 (222 expanded to New York in 2024)

Why I haven't deleted it: I've gone to several experiences through 222 this year and even made a few friends along the way. I've described the app to friends as a way of working out my socializing muscles, more than a guaranteed way to make friends or find new romantic sparks. You do have to pay a fee to access the curated experience (the monthly fee, for example, is about $22) on top of drinks, food, and other expenses.


The rise of IRL social apps: How startups are trying to get people to hang out in person and taking aim at loneliness

31 December 2024 at 05:30
At a 222 event in New York, new friends exchange Instagram handles and phone numbers to keep in touch.

Sydney Bradley/Business Insider

  • Many social-media users are looking to make friends and spend time together in person.
  • A new wave of startups is capitalizing on this demand with tools to help people make plans.
  • The "IRL Social" trend grew in 2024 and could carry into the new year.

Making new friends, it turns out, is pretty hard.

While the dominant social networks like Instagram, Facebook, and Snapchat have proclaimed that they connect us with our friends, many users feel less connected and more alone than ever.

A new wave of apps is trying to fill that void by replacing content algorithms with features designed to help users get together in real life. This year, several of these apps hit new peaks in popular culture and adoption.

One of the biggest stars in the space is Partiful, an events app that has replaced Facebook Events for many. Google named it the 2024 "app of the year," and it was even used for the viral TimothΓ©e Chalamet lookalike contest.

Then there's Timeleft, a European startup that gets groups of people together over dinner every Wednesday night in more than 60 countries. It was also recognized by Google this year as a "hidden gem." Timeleft, which launched in 2020, expanded to the US in March.

"This year, we found product-market fit," Lais De Oliveira, head of North America for Timeleft, told Business Insider. "We've had over 20,000 people dining with us this year in the US and we've been handling weekly about 6,000 people dining with us across the US."

IRL social startups are not just getting users to download their apps. Some are also getting investors on board.

Posh, another events app that offers a feed of nearby happenings, closed a $22 million Series A round this year led by Goodwater Capital. Other firms, like FirstMark, Forerunner, and Best Nights VC, have also participated in IRL-focused tech.

For Zehra Naqvi, an angel investor and VC focused on consumer startups, IRL has been a core concept in her investing thesis this year.

"There is this overwhelming desire for people to just connect with one another," Naqvi said.

She sees IRL social apps right now falling into two camps. One is advanced event tech that makes things easier on hosts and attendees (like Partiful, Posh, or Luma), and the second is apps that foster a sense of "whimsical" in-person connection (like Timeleft and 222, another app that connects strangers over dinner or activities).

Some IRL apps are tackling monetization, though others are not in that stage yet. Posh, for example, takes a percentage of ticket sales, and 222 has a subscription model for access to curated events.

Read more of BI's coverage of emerging IRL social companies.


Read the original article on Business Insider

Mark Cuban said he tried to invest in TikTok when it was called Musical.ly and less profit-driven

27 December 2024 at 02:10
Mark Cuban at a basketball game.
Mark Cuban.

Allen Berezovsky via Getty Images.

  • Mark Cuban tried to invest in Musical.ly, the platform that would become TikTok.
  • Cuban says the platform lost its spark, becoming "corporate."
  • In an interview, Cuban said focusing on monetization often harms the user experience.

Mark Cuban tried to invest in TikTok's precursor years ago, but said the company turned him down.

Cuban told content creator and journalist Jules Terpak in an interview on her YouTube channel that he enjoyed using TikTok when it was called Musical.ly.

The platform rebranded to TikTok in 2017 when it was acquired by its current owners, ByteDance.

"I loved it because I could just turn it on and there would be 15,000 people live immediately that I could talk to," Cuban said of Musical.ly.

"It was insane. I loved it. And then, as it got into the dances and everything, it was fun."

Cuban told CNBC he tried to invest in Musical.ly but was unsuccessful because the company wasn't looking to raise more funds at the time.

Cuban told Terpak he thinks TikTok is less fun than it used to be and "more corporate."

He said that the dance-focused version of the app was losing billions of dollars, "and so at some point, they had to start trying to make some money."

"I liked it better when it was dances and music," Cuban said. "Now it's a business."

Cuban said TikTok's early beauty was that its algorithm served users with more of what they liked than any other platform.

"Now it's corporate," he said. "It's how many followers can you get and how can you engage those followers."

There's "a diminishing return" for users when platforms monetize, Cuban said, driven by business realities.

"At some point, if you're there to make money, you have to figure out how to make money," he said.

Cuban's thoughts hit on an increasing frustration many users have with TikTok, where they are flooded with ads and many see the platform as a pseudo-shopping channel.

Cuban has a TikTok account himself, where he has 1.1 million followers — though he doesn't post often.

In 2023, he faced some backlash for a "tip of the day" on making money, in which he told people to cut back on extra lattes and streaming services.

"You want to put that in a money market account earning five, maybe more, percent and watch that sucker grow," he said. "That'll make you feel a whole lot better than that extra latte that you had that day."

Some criticized the advice for being unrealistic and out of touch with the majority of people.

Cuban didn't address the critics, only posting another tip of the day to "be nice" and "smile."

According to Bloomberg, Cuban has a net worth of around $8 billion. In 1990, he sold his first tech company, MicroSolutions, for $6 million and went on to invest in several successful businesses through "Shark Tank."

In October, Cuban announced he would be leaving "Shark Tank" after its 16th season to spend more time with his kids.


An influencer's clothing brand launch was a huge miss for her followers, so she took the site down. She relaunched it 7 months later with better materials and lower prices.

24 December 2024 at 02:23
Madeleine White
Madeleine White recently relaunched her pajama brand after criticism.

Madeleine White

  • Madeleine White's pajama brand faced backlash over pricing and material quality.
  • She took the website down and relaunched it seven months later with higher quality and lower prices.
  • White told BI building back trust with her audience is the most important thing for her.

Madeleine White learned what happens when your brand is a huge miss with your fans the hard way.

When she launched her pajama brand, See You Tomorrow, in May, White was thrilled because designing fashion was all she ever wanted.

But the launch went awry. Fans didn't like the price point or the materials used in many of the garments, leading to cries that White was out of touch and had lost the authenticity that had won her millions of followers.

"It was always a dream starting my own business," White told Business Insider. "But I could not have been prepared for how difficult it's been."

A clothing launch backfires

White started making content after she lost her job during the COVID-19 pandemic and decided to learn how to use a sewing machine.

Using her decade of experience in modeling, White became known for sharing thrifting videos and industry insights.

On Instagram, she now has 1.6 million followers, and on TikTok, she has 4.7 million.

But after See You Tomorrow launched, fans lamented that she'd forgotten her roots.

White's aim was always to create a brand that would resonate with her followers: one that wasn't budget or fast fashion but also wasn't high-end and unaffordable.

But while things started off well with over half a million visitors on the website, customers felt See You Tomorrow fell short on price point and quality.

"I feel like her original fan base have nothing in common with her current ventures," one said on the InfluencerSnark subreddit.

Under one Reddit post showing a $145 pajama set, customers said they weren't too eager about the price or the materials.

"They're cute, and I've been trying to focus on a smaller but more quality wardrobe, so the price didn't immediately turn me off," one said. "$145 for 100% polyester is absolutely insane though."

"She should get backlash for this, because choosing fast fashion materials but selling it at a high-end price is wild," wrote another. "Were this made from cotton satin or even a cotton silk voile it would be worth it. It would be sustainable."

White told BI she was aware of the complaints immediately and decided to take action. She took the site down and started rethinking the entire brand.

"We went back to the drawing board after a couple of days," she said. "I decided that unless I could fix most of these concerns that people had and really give it a proper shot, then it wasn't really worth continuing the business."

See You Tomorrow campaign
Madeleine White immediately acted when fans didn't like her clothing brand launch.

See You Tomorrow

7 months later

White didn't want to make any announcements while things were in flux, which was hard to do with so many fans eager to know what was happening.

Seven months later, in December, See You Tomorrow relaunched with new, higher-quality pieces and lower prices.

"I decided to bet on myself and put my money where my mouth is and to create the product that I wanted to make," she said. "I trusted my instincts that something wasn't right and that we could do better — and we are doing better."

White had to find new manufacturers and pay for everything herself. She said that though it was hard, she's glad she took that leap of faith.

"I felt like it would be so much more powerful to my audience if I could prove to them that I actually cared about their opinions and I cared about that feedback," White said.

"It's easy to say, I'm so sorry, I fucked up," she added. "But it's much better to say I'm so sorry, I fucked up, and here is how I fixed it."

Madeleine White's pajama brand See You Tomorrow
Madeleine White relaunched See You Tomorrow 7 months after an initial flop.

See You Tomorrow

White told BI that the last few years have been a mad rush because she was so eager to start her own brand. In hindsight, she would have spent longer researching what she wanted to do and not taken the first offer that came along, she said.

"It was definitely an eye-opener," she said.

Trust is everything

White posted a TikTok this month explaining everything. She said what was most important to her out of everything was building trust again with her audience.

It seemed to pay off, with followers thanking her for her transparency and applauding her for listening to their concerns.


White said she doesn't care if she sells one product or a thousand with this new launch — she just wants to repair her relationship with her supporters.

"I've definitely learned just how badly launching a brand that people don't like can hurt your public image," she said. "It just goes to show how important it is for us as people with large followings to do things right."

She said she's also learned that people are happy to pay for quality as long as they know how a price point was reached.

White said influencers are held to a high standard, but ultimately, she sees that as a good thing.

"It just makes the brand better," she said. "I've learned so much, and I've definitely learned not to put my name on anything until I'm 100% happy with it."


Latimer AI startup to launch bias detection tool for web browsers

23 December 2024 at 02:00
John Pasmore Cofounder and CEO Latimer AI

Latimer AI

  • Latimer AI plans to launch a bias detection tool as a Chrome browser extension in January.
  • The tool scores text from one to 10, with 10 being extremely biased.
  • Latimer AI hopes the product will attract new users.

Bias is in the eye of the beholder, yet it's increasingly being evaluated by AI. Latimer AI, a startup that's building AI tools on a repository of Black datasets, plans to launch a bias detection tool as a Chrome browser extension in January.

The company anticipates the product could be used by people who run official social media accounts, or anyone who wants to be mindful of their tone online, Latimer CEO John Pasmore told Business Insider.

"When we test Latimer against other applications, we take a query and score the response. So we'll score our response, we'll score ChatGPT or Claude's response, against the same query and see who scores better from a bias perspective," Pasmore said. "It's using our internal algorithm to not just score text, but then correct it."

The tool assigns a score from one through 10 to text, with 10 being extremely biased.

Patterns of where bias is found online are already emerging from beta testing of the product.

For instance, text from an April post by Elon Musk, in which he apologized for calling Dustin Moskowitz a derogatory name, was compared to an August post from Bluesky CEO Jay Graber.

An Elon Musk post on X is analyzed for bias and scores 6.8 out of 10, or "high bias" according to Latimer AI.

Latimer AI

Musk's post scored 6.8 out of 10, or "High Bias," while Graber's scored 3.6 out of 10, or "Low Bias."

Bluesky CEO Jay Graber's post to the platform is analyzed for bias and scores a 3.6 out of 10, or "Low Bias" according to Latimer AI.

Latimer AI

Latimer's technology proposed a "fix" to the text in Musk's post by changing it to the following: "I apologize to Dustin Moskowitz for my previous inappropriate comment. It was wrong. What I intended to express is that I find his attitude to be overly self-important. I hope we can move past this and potentially become friends in the future."

While what is deemed biased is subjective, Latimer isn't alone in trying to tackle this challenge through technology. The LA Times plans to display a "bias meter" in 2025, for instance.

Latimer hopes its bias tool will draw in more users.

"This will help us identify a different set of users who might not use a large language model, but might use a browser extension," Pasmore said.

The bias detector will launch at $1 a month, and a pro version will let users access multiple bias detection algorithms.


A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

18 December 2024 at 02:00
Oren Etzioni
Oren Etzioni, founder of TrueMedia.org.

Oren Etzioni

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.

The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.

"Some of the most egregious videos that are truly realistic β€” those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared learnings from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution for this looming problem, and AI will need regulation.
  • Social media has an important role to play.
  • TrueMedia achieves roughly 90% accuracy, although people asked for more. It will be impossible to be 100% accurate, so there's room for human analysts.
  • It's not always scalable to have humans at the end checking every decision, so humans only get involved in edge cases, such as when users question a decision made by TrueMedia's technology.

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."


Meet Shou Zi Chew, the 41-year-old CEO leading TikTok as it fights a potential US ban

17 December 2024 at 10:04
shou zi chew tiktok ceo
Shou Zi Chew is the face of TikTok's effort to stay up and running in the US.

Kin Cheung/AP

  • TikTok CEO Shou Zi Chew is the public face of the company, rallying its fans and testifying before Congress.
  • He's 41 years old, went to Harvard Business School, and interned at Facebook when it was a startup.
  • He met with president-elect Donald Trump recently as he continues his fight to avoid a TikTok ban in the US.

TikTok is under a lot of pressure right now.

As US lawmakers worry the video-sharing platform, which is owned by Chinese company ByteDance, poses a danger to national security, TikTok is scrambling to fight a law requiring it be sold to a US owner by January 19 or else risk being banned in the country.

So who's leading the company through this turbulent period?

That would be Shou Zi Chew, TikTok's 41-year-old CEO from Singapore, who got his start as an intern at Facebook.

Here's a rundown on TikTok's head honcho:

Chew worked for Facebook when it was still a startup.
facebook mark zuckerberg
Facebook's Mark Zuckerberg in 2010, before he took his company public.

Marcio Jose Sanchez/AP

He earned his bachelor's degree in economics at University College London before heading to Harvard Business School for his MBA in 2010.

While a student there, Chew worked for a startup that "was called Facebook," he said in a post on Harvard's Alumni website. Facebook went public in mid-2012.


Chew met his now-wife, Vivian Kao, via email when they were both students at Harvard.
Shou Zi Chew and Vivian Kao attend The 2022 Met Gala

Theo Wargo/WireImage

They are "a couple who often finish each other's sentences," according to the school's alumni page, and have three kids.

Chew was CFO of Xiaomi before joining ByteDance.
Shou Zi Chew and Xiaomi CEO give thumbs up at the listing of Xiaomi at the Hong Kong Exchanges on July 9, 2018

REUTERS/Bobby Yip

He became chief financial officer of the Chinese smartphone giant, which competes with Apple, in 2015. Chew helped secure crucial financing and led the company through its 2018 public listing in Hong Kong, one of the largest Chinese tech IPOs in history.

He became Xiaomi's international business president in 2019, too.
TikTok CEO Shou Zi Chew in Washington, DC on Tuesday February 14, 2023.

Matt McClain/The Washington Post/Getty Images.

Before joining Xiaomi, Chew also worked as an investment banker at Goldman Sachs for two years, according to his LinkedIn profile.

He also worked at investment firm DST, founded by billionaire tech investor Yuri Milner, for five years. It was during his time there in 2013 that he led a team that became early investors in ByteDance, as Business Chief and The Independent reported.

For a while, Chew was both the CEO of TikTok and the CFO of its parent company, ByteDance.
zhang yiming bytedance
ByteDance founder Zhang Yiming

Zheng Shuai/VCG via Getty Images

Chew joined ByteDance's C-suite in March 2021, the first person to fill the role of chief financial officer at the media giant.

He was named CEO of TikTok that May at the same time as Vanessa Pappas was named COO. ByteDance founder and former CEO Zhang Yiming said at the time that Chew "brings deep knowledge of the company and industry, having led a team that was among our earliest investors, and having worked in the technology sector for a decade."

That November, it was announced that Chew would leave his role as ByteDance's CFO to focus on running TikTok.

TikTok's former CEO, Kevin Mayer, had left Walt Disney for the position in May 2020 and quit after three months as the company faced pressure from lawmakers over security concerns.

Some government officials in the US and other countries remain concerned that TikTok's user data could be shared with the Chinese government.
Biden
The Biden administration has demanded that TikTok divest its American business from ByteDance or risk being banned.

Jacquelyn Martin, Pool

Donald Trump's administration issued executive orders designed to force ByteDance into divesting TikTok's US operations, though the orders never took effect.

President Biden signed an executive order in June 2021 that threw out Trump's proposed bans on the app.

Last year, the Biden administration demanded that TikTok divest its American business from its Chinese parent company or risk being banned in the US. In response, Chew said such a divestment wouldn't solve officials' security concerns about TikTok.

In a TikTok last March, Chew announced the company had amassed 150 million monthly active users in the US and broached the subject of the ban threats.
Shou Zi Chew, TikTok's CEO
Chew took to TikTok to discuss the ban threats.

TikTok

"Some politicians have started talking about banning TikTok," he said. "Now this could take TikTok away from all 150 million of you."

Chew testified before Congress that month about the company's privacy and data security practices.

Wall Street said his testimony didn't do much to help his case to keep TikTok alive in the US, though Chew seemed to win over many TikTok users, with some applauding his efforts and even making flattering fancam edits of him.

Now, Chew and TikTok are in the spotlight again as the company tries to stave off a looming potential ban.
TikTok CEO Shou Zi Chew testifies during a House Energy and Commerce Committee hearing on Thursday, March 23, 2023.

Kent Nishimura / Los Angeles Times via Getty Images

The House of Representatives passed a bill on March 13 that would require any company owned by a "foreign adversary" to divest or sell to a US-based company within 180 days to avoid being banned in the US.

Chew put out a video response shortly after, asking users to "make your voices heard" and "protect your constitutional rights" by voicing opposition to lawmakers.

He called the vote "disappointing" and said the company has invested in improving data security and keeping the platform "free from outside manipulation."

"This bill gives more power to a handful of other social media companies," he added. "It will also take billions of dollars out of the pockets of creators and small businesses. It will put more than 300,000 American jobs at risk."

The Senate also passed the bill, and President Biden signed it into law in April.

In September, a hearing on the potential TikTok ban began in federal appeals court, and in December a three-judge panel from the US Court of Appeals for the District of Columbia Circuit ruled that the law is constitutional.

On the heels of the bad news, Chew met with the president-elect at Mar-a-Lago several days later.
Donald Trump
Chew and Trump recently met.

Jeff Bottari/Zuffa LLC via Getty Images

Trump said in a press conference on the day they met that he has a "warm spot" for TikTok, which he has criticized in the past, because he says it helped him win over young voters in the 2024 election.

Also on the day of their meeting, TikTok asked the Supreme Court to block the law that requires it be sold to avoid a shutdown, arguing that it violates Americans' First Amendment rights.

When he's not fighting efforts to ban TikTok, Chew makes appearances at some pretty high-profile events.
TikTok CEO Shou Zi Chew departs after Congress Testimony
Shou Zi Chew leaves Congress on March 23.

Kent Nishimura / Los Angeles Times via Getty Images

He's been seen at the Met Gala, and also posted about attending the 2023 Super Bowl and even Taylor Swift's Eras Tour.

His hobbies include playing video games like Clash of Clans and Diablo IV, golfing, and reading about theoretical physics.


Fed up with Twitter, Americans are fleeing to group chats

17 December 2024 at 01:03
Newsanchor with contact avatar as head

Getty Images; iStock; Natalie Ammari/BI

In the early days of the pandemic, Josh Kramer and his wife set up a Discord server to stay in touch with their friends. The main group of about 20 people branches off into topic channels that only some members want to be notified about, to avoid annoying everyone all the time: one for AI and crypto, which took over a channel previously devoted to "Tiger King," and another called "sweethomies" for talking about their houses and apartments. Now, more than four years later, it's become "essential" for the extended friend group, says Kramer, seeing them through the early anxiety of COVID-19 and two presidential elections.

While the chat is made up of friendly faces, it's not really an echo chamber β€” not everyone has the same ideology or political opinions, Kramer tells me. But it's more productive than screaming into the void on social media. Now, when he has a thought that may have turned into a tweet, he instead takes it to the group, where it can become a conversation.

"It's a way to have conversations about complicated issues, like national politics, but in context with people I actually know and care about," Kramer, who is the head of editorial at New_ Public, a nonprofit research and development lab focusing on reimagining social media, tells me. The success of the server has also informed how he thinks about ways to reform the social web. On election night, for example, using the group chat was less about scoring points with a quippy tweet and "more about checking in with each other and commiserating about our experience, rather than whatever you might take to Twitter to talk about to check in with the broader zeitgeist."

In the month or so since the 2024 election, thousands have abandoned or deactivated their X accounts, taking issue with Elon Musk's move to use the platform as a tool to reelect Donald Trump, as they seek new ways to connect and share information. Bluesky, which saw its users grow 110% in November according to market intelligence firm Sensor Tower, has emerged as the most promising replacement among many progressives, journalists, and Swifties, as it allows people to easily share links and doesn't rely as heavily on algorithmic delivery of posts as platforms like Facebook, X, and TikTok have come to. But some are turning further inward to smaller group chats, either via text message or on platforms like Discord, WhatsApp, and Signal, where they can have conversations more privately and free of algorithmic determinations.

It's all part of the larger, ongoing fracturing of our social media landscape. For a decade, Twitter proved to be the room where news broke. Other upstarts launched after Elon Musk bought the platform in 2022 and tried to compete, luring people with promises of moderation and civility, but ultimately folded, largely because they weren't very fun or lacked the momentum created by the kind of power users that propelled the old Twitter. But for many, there's still safety in the smaller group chats, which take the form of your friends who like to shit talk in an iMessage chain or topic-focused, larger chats on apps like Discord or WhatsApp.

"Group chats have been quite valued," Kate Mannell, a research fellow with the ARC Centre of Excellence for the Digital Child at Deakin University in Australia, tells me. They allow people to chat with selected friends, family members, or colleagues to have much "more context-specific kinds of conversations, which I think is much more reflective of the way that our social groups actually exist, as opposed to this kind of flattening" that happens on social media. When people accumulate large followings on social media, they run into context collapse, she says. The communication breakdown happens as the social platforms launched in the 2000s have taken on larger lives than anyone anticipated.

The candid nature of group chats gives them value and tethers people with looser connections together, but that can also make them unwieldy.

By contrast, some more exclusive chats are seen as cozy, safe spaces. Most of Discord's servers are made up of fewer than 10 people, Savannah Badalich, the senior director of policy at Discord, tells me. The company has 200 million active users, up from 100 million in 2020. What started as a place to hang with friends while playing video games still incentivizes interacting over lurking or building up big followings. "We don't have that endless doomscrolling," Badalich says. "We don't have that place where you're passively consuming content. Discord is about real-time interaction." And interacting among smaller groups may be more natural. Research by the psychologist Robin Dunbar in the 1990s found that humans could cognitively maintain about 150 meaningful relationships. More recent research has questioned that determination, but any person overburdened by our digital age can surely tell you that you can only show up authentically and substantially in person for a small subset of the people you follow online. A 2024 study, conducted by Dunbar and the telecommunications company Vodafone, found that the average person in the UK was part of 83 group chats across all platforms, with a quarter of people using group chats more often than one-to-one messages.

In addition to hosting group chats, WhatsApp has tried more recently to position itself as a place for news, giving publishers the ability to send headlines directly to followers. News organizations like MSNBC, Reuters, and Telemundo have channels. CNN has nearly 15 million followers, while The New York Times has about 13 million. Several publishers recently told the Times that they were seeing growth and traffic come from WhatsApp, but the channels have yet to rival sources like Google or Facebook. While it gives them the power to connect to readers, WhatsApp is owned by Meta, which has a fraught history of hooking media companies and making them dependent on traffic on its social platforms only to later de-emphasize their content.

Victoria Usher, the founder and CEO of Gingermay, a B2B tech communications firm, says she's in several large, business-focused group chats on WhatsApp. Usher, who lives in the UK, even found these chats were a way to get news about the US election "immediately." In a way, the group chats are her way of optimizing news and analysis of it, and it works because there's a deep sense of trust between those in the chat that doesn't exist when scrolling X. "I prefer it to an algorithm," she says. "It's going to be stories that I will find interesting." She thinks they deliver information better than LinkedIn, where people have taken to writing posts in classic LinkedIn style to please the algorithm — which can be both self-serving and cringe. "It doesn't feel like it's a truthful channel," Usher says. "They're trying to create a picture of how they want to be seen personally. Within WhatsApp groups or Signal, people are much more likely to post what they actually feel about something."

The candid nature of group chats — which some have called the last safe spaces in society today — gives them value and tethers people with looser connections together, but that can also make them unwieldy. Some of the larger group chats, like those on Discord, have moderation and rules. But when it comes to just chatting with your friends or family, there's largely no established group-chat etiquette. Group chats can languish for years; there's no playbook for leaving or kicking out someone who's no longer close to the core group. If a couple breaks up, who gets the group chat? How many memes is a person allowed to send a day? What happens when the group texts get leaked? There's often "no external moderator to come in and say, 'That's not how we do things,'" Mannell says.

Kramer, while he likes his Discord chat, is optimistic about the future of groups and new social networks. He says he's also taken over an inactive community Facebook group for his neighborhood and made more connections with his neighbors. We're in a moment when massive change could come to our chats and our social networks. "There's been a social internet for 30 years," says Kramer, but there's "so much room for innovation and new exciting and alternative options." Still, his group chat might have the best vibes of all. Messaging there "has less to do with being right and scoring points" than on social media, he says. "It has so much value to me on a personal level, as a place of real support."


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Read the original article on Business Insider

Luigi Mangione is more complicated than his myth. The internet doesn't care.

11 December 2024 at 04:03
Photo collage featuring Luigi Mangione and a wanted police flyer

Pennsylvania State Police via AP, Alex Kent/Getty Images; Alyssa Powell/BI

It used to be that when a killer emerged in America, we found out who the man was before we began to enshroud him in myth. But with Luigi Mangione, the lead suspect in the killing of UnitedHealthcare CEO Brian Thompson, that process was reversed. The internet assumed it already knew everything about Thompson's killer before a suspect had even been identified, let alone arrested.

Within hours of the shooting, social media was churning out a mythologized version of the masked man. In his anonymity, he became an instant folk hero, portrayed as a crusader for universal healthcare, a martyr willing to risk it all to send a message to America's insurance giants with "the first shots fired in a class war." A Reddit forum offered up dozens of laudatory nicknames to crystallize his mythology: the Readjuster, the Denier, the People's Debt Collector, Modern-Day Robin Hood. "I actually feel safer with him at large," one tweet a day after the shooting said; it received 172,000 likes. A surveillance image of the suspect moved some to comment that he was "too hot to convict" and prompted comparisons to Jake Gyllenhaal and Timothée Chalamet. In New York City, a "CEO-shooter look-alike competition" was held in Washington Square Park. Surely, the internet assumed, the suspect shared left-wing ideas about the cruelties of privatized healthcare.

Then the man himself appeared β€” and he didn't fit into any of the neat categories that had already been created to describe him. On X, he followed the liberal columnist Ezra Klein and the conservative podcaster Joe Rogan. He respected Alexandria Ocasio-Cortez and retweeted a video of Peter Thiel maligning "woke"-ism. He took issue with both Donald Trump and Joe Biden. He played the cartoon video game "Among Us," posted shirtless thirst traps, quoted Charli XCX on Instagram, and had the Goodreads account of an angsty, heterodox-curious teenage boy: self-help, bro-y nonfiction, Ayn Rand, "The Lorax," and "Infinite Jest." Yes, he seemed to admire the Unabomber. But mostly, this guy β€” a former prep-school valedictorian with an Ivy League education and a spate of tech jobs β€” was exceedingly centrist and boring. A normie's normie. He wasn't an obvious lefty, but he wasn't steeped in the right-wing manosphere either. His posted beliefs don't fit neatly into any preestablished bucket. In his 261-word manifesto, which surfaced online, he downplayed his own qualifications to critique the system. "I do not pretend," he wrote, "to be the most qualified person to lay out the full argument."

In the attention economy, patience is a vice.

That didn't stop the denizens of social media from pretending to be the most qualified people to lay out exactly who Mangione is. He's "fundamentally anti-capitalist" and "just another leftist nut job." Or he's "a vaguely right-wing ivy league tech bro." Or he was invented by the CIA, or maybe Mossad, as a "psyop." The reality of Mangione β€” his messy, sometimes contradictory impulses β€” allowed everyone to cherry-pick the aspects of his personality that confirmed their original suspicions. In the attention economy, patience is a vice.

The rush to romanticize killers is nothing new. A quarter century ago, we cast the Columbine shooters as undone by unfettered access either to guns or to the satanic influences of Marilyn Manson and Rammstein. A decade ago, we debated the glamorization of the Boston Marathon bomber, gussied up like a rock star on the cover of Rolling Stone. But social media has sped up the assumption cycle to the point where we put the killer into a category before police have even found him. Perhaps there's a "great rewiring" of our brains that has diminished our capacity to understand each other, as the social psychologist Jonathan Haidt suggests in "The Anxious Generation" — a book Mangione had retweeted a glowing review of.

Mythmaking is easier, of course, when it's unencumbered by reality. The less we know about a killer, the more room there is to turn him into something he's not. From what we have learned so far, Mangione is a troubled Gen Zer who won the privilege lottery at birth and subscribed to a mishmash of interests and beliefs. We will surely learn more about him in the coming days, weeks, and months. But now that we know who he is, it will be hard, if not impossible, to let go of our initial assumptions. Instead, we'll selectively focus on the details that fit tidily into the myths we've already created. In the digital-age version of "The Man Who Shot Liberty Valance," the legend was already printed by the time the facts came along.


Scott Nover is a freelance writer in Washington, DC. He is a contributing writer at Slate and was previously a staff writer at Quartz and Adweek covering media and technology.

Read the original article on Business Insider
