
VC's healthcare predictions for 2025: more M&A, fierce competition in AI, and a health insurance shake-up under Trump

23 December 2024 at 02:00
A stethoscope wrapped around a white piggy bank on a blue background (Healthcare funding)
Investors are watching for a pickup in healthcare M&A deals in 2025.

Nudphon Phuengsuwan/Getty Images

  • After a slower-than-anticipated year for healthcare funding, investors expect sunnier skies in 2025.
  • 13 VCs from firms like ICONIQ Growth and AlleyCorp share their predictions for digital healthcare next year.
  • They expect more M&A, funding for AI agents and clinical decision support, and Medicare shake-ups.

The healthtech sector will see more private-equity-backed M&A and a fierce battle between AI-scribing startups next year, according to thirteen investors in the healthcare VC market.

At the beginning of the year, healthcare venture capital appeared poised for a rebound. Investors hoping to do deals again after a two-year funding drought watched as healthcare startups flooded back to the market to grab more cash.

Those VCs raced to break out their checkbooks for hot new AI startups in the first quarter, from scribing startups like Abridge to automated prior authorization players like Cohere Health.

A confluence of macroeconomic factors — from still-high interest rates to fundraising struggles for venture firms to the uncertainty of a looming presidential election — dampened the anticipated resurgence. 2024's funding appears to be, at best, on pace with 2023 levels, with $8.2 billion raised by US digital health startups in the first three quarters of this year compared to $8.6 billion through Q3 2023, per Rock Health.

Now, with interest rates expected to drop and a new administration on the way, VCs are anticipating sunnier skies in 2025.

A pickup in healthcare M&A and IPOs

After a slow year for healthcare M&A, investors want to see more deals in 2025.

With interest rates expected to come down — and investors facing pressure to deploy capital — private equity buyers should be more active in 2025, said .406 Ventures managing director Liam Donohue.

And Flare Capital Partners' Parth Desai said he's already seeing private-equity-backed healthcare companies looking to buy smaller startups. Their goal, as he understands it, is to make tuck-in acquisitions in 2025 that improve their growth stories as they look ahead to potential IPOs in 2026.

"Maybe they're not phenomenal outcomes, but at the end of the day, they'll create some liquidity," Desai said of those acquisitions. "I expect that to be one of the first exit windows starting to manifest in 2025."

Investors were hopeful but unsure whether the IPO window would meaningfully reopen for digital health startups in 2025, despite startups like Hinge Health and Omada Health signaling their intentions to test the public markets.

Venrock partner Bryan Roberts said he expects the healthcare IPO market to remain relatively quiet. LRV Health managing partner Keith Figlioli suggested we won't see IPO activity kick off until the second half of the year after other exit windows open.

VCs said they're mostly looking for smaller deals next year, from mergers of equals to asset sales. Figlioli and Foreground Capital partner Alice Zheng said we'll see even more consolidation and shutdowns in digital health next year as startups run out of cash.

"Investors will have to make tough decisions on their portfolio companies," Zheng said. "We want to support all of them, but we can't indefinitely."

Alice Zheng
Alice Zheng, a partner at Foreground Capital, expects to see more consolidation and shutdowns as investors make tough decisions about their digital health portfolios.

Foreground Capital

Healthcare AI competition will get fierce

Healthcare startups using AI for administrative tasks were easily the hottest area of healthcare AI investment in 2024. Investors think the crop of well-funded competitors will face increasing pressures next year to expand their product lines.

ICONIQ Growth principal Sruthi Ramaswami said she expects the group of AI scribing startups that landed big funding rounds this year, from Abridge to Ambience Healthcare to Suki, to scale significantly next year using the fresh cash as hospitals scramble for solutions to the healthcare staffing shortage.

As these startups scale, however, they'll face pressure to expand beyond ambient scribing into other product lines, like using AI for medical coding and billing, said Kindred Ventures managing partner Kanyi Maqubela. Scribing technology could become a commodity sooner than later, with many providers trying free off-the-shelf scribing software rather than contracting with startups, Maqubela said.

"It'll be a race to who can start to build other services and build more of an ecosystem for their provider customers," he said.

Kindred Ventures Kanyi Maqubela, Steve Jang
Kindred Ventures general partner Kanyi Maqubela thinks medical scribe startups will have to race to find new product lines against commoditization.

Kindred Ventures

Some AI startups, like Abridge, have already been vocal about their plans to expand into areas like coding or clinical decision support. The best-funded AI scribing startups may be able to acquire smaller startups to add those capabilities, but other scribing companies will be more likely to get bought out, Maqubela said.

Flare Capital Partners' Desai suggested that healthcare companies already focused on revenue cycle management, or RCM, will try to pick up scribing solutions as the tech becomes a must-have for hospitals. He pointed to Commure's $139 million take-private acquisition of Augmedix in July.

Ramaswami said that demonstrating a high return on investment would be critical for these startups as hospitals pick their favorites among various AI pilots.

Sruthi Ramaswami, Iconiq Growth
Sruthi Ramaswami

Iconiq Growth

Health insurance in flux in Trump's second term

While many VCs quietly celebrated the potential for more M&A and IPOs in 2025 following Trump's election in November, the incoming administration could bring some big shake-ups for healthcare markets.

Trump could move to boost private health insurers, including Medicare Advantage plans, in his second term, Venrock's Roberts said. That could be a boon for young insurers like Devoted Health and Alignment Healthcare fighting for Medicare Advantage market share, as well as startups contracting with insurers to improve healthcare payment processes.

He suggested the new administration may even roll back changes made in the Center for Medicare and Medicaid Services' latest reimbursement model for Medicare, which went into effect this year and resulted in lower payments for many Medicare Advantage plans in the agency's attempt to improve payment accuracy.

Brenton Fargnoli, a general partner at AlleyCorp, said he expects health insurers to respond to these risk adjustment changes and to higher-than-expected medical costs over the past year by launching a bevy of new value-based care partnerships in 2025 for specialties including oncology, cardiology, and musculoskeletal care.

A photo of investor Brenton Fargnoli smiling, wearing a white t-shirt against a white background
Brenton Fargnoli, a general partner at AlleyCorp, thinks insurers will launch a bevy of value-based care partnerships in 2025 for high-cost specialties.

AlleyCorp

Some healthcare experts are also concerned that the federal government could cut funding for Medicaid plans. These changes could force states to scramble for new strategies and potentially new partnerships to control healthcare costs for their Medicaid populations.

"If there is a significant shift in direction at the federal level, I think you're going to see certain states do much more than they have in the past to try to continue to address health disparities," said Jason Robart, cofounder and managing partner of Seae Ventures. "As it happens, that creates opportunities for private companies to leverage their innovative solutions to address the need."

Similarly, Muse Capital founding partner Rachel Springate said that while investors in reproductive health startups will be closely watching state-level regulatory changes that could impact their portfolio companies, those startups could see surges in consumer demand as founders step up to fill gaps in reproductive care access.

Some of the Trump administration's proposed moves could stunt progress for health and biotech startups by stalling regulatory oversight. Robert F. Kennedy Jr., Trump's pick to lead Health and Human Services, has said he wants to overhaul federal health agencies, including the Food and Drug Administration and the National Institutes of Health. Marissa Moore, a principal at OMERS Ventures, said the promised audits and restructuring efforts could lead to major delays in critical NIH research and FDA approvals of new drugs and medical devices.

Rachel Springate, Muse Capital
Rachel Springate, founding partner at Muse Capital, thinks reproductive health startups could see surges in consumer demand as founders step up to fill gaps in care access.

Muse Capital

What's hot in AI beyond scribes

In 2025, AI will be an expectation in healthcare startup pitches, not an exception, said Erica Murdoch, managing director at Unseen Capital. Startups have pivoted to position AI as a tool for improved efficiency rather than as their focal point — and any digital health startups not using AI, in turn, will need a good reason for it.

With that understanding, investors expect to see plenty more funding for healthcare AI in 2025. While many tools made headlines this year for their ability to automate certain parts of healthcare administration, .406 Ventures' Donohue and OMERS Ventures' Moore said they expect to see an explosion of AI agents in healthcare that can manage these processes autonomously.

Investors remain largely bullish about healthcare AI for administrative tasks over other use cases, but some think startups using the tech for aspects of patient diagnosis and treatment will pick up steam next year.

"We will begin to see a few true clinical decision support use cases come to light, and more pilots will begin to test the augmentation of clinicians and the support they truly need to deliver high quality, safe care," said LRV Health's Figlioli. He hinted the market will see some related funding announcements in early 2025.

Moore said she's also expecting to see more investments for AI-driven mental health services beyond traditional cognitive behavioral therapy models — "for example, just today I got pitched 'the world's first AI hypnotherapist.'"

Dan Mendelson, the CEO of JPMorgan's healthcare fund Morgan Health, said he's watching care navigation startups from Included Health to Transcarent to Morgan Health's portfolio company Personify that are now working to improve the employee experience with AI. The goal, he says, is for an employee to query the startup's wraparound solution and be directed to the right benefit via its AI, a capability he says he hasn't yet seen deployed at scale.

"These companies are racing to deploy their data and train their models, and we'd love to see a viable product in this area," he said.

Read the original article on Business Insider


Reddit cofounder Alexis Ohanian predicts live theater and sports will become more popular than ever as AI grows

20 December 2024 at 09:57
Image of Alexis Ohanian smiling
Alexis Ohanian discussed the future of AI on the podcast "On Purpose with Jay Shetty" this week.

Elsa/Getty Images

  • Alexis Ohanian predicts AI will drive demand for more raw human experiences.
  • In 10 years, live theater will be more popular than ever, the Reddit cofounder contends.
  • He says no matter what jobs are replaced by AI, humans will always have an advantage in empathy.

Alexis Ohanian predicted that in a future oversaturated with artificial intelligence, people will seek out more raw, emotive human experiences.

And in 10 years, he said, live theater will be more popular than ever.

The 41-year-old, who co-founded social media platform Reddit in 2005, told the "On Purpose with Jay Shetty" podcast this week that AI will soon have an undeniable impact on nearly every aspect of society, including the entertainment sector.

Ohanian, who also founded venture capital firm Seven Seven Six in 2020, said that the industry will see a big shift when AI makes on-screen entertainment better, faster, cheaper, and more dynamic — which he said is happening.

Every screen we look at will become so programmed to show us "what we want, when we want it, how we want it," he said, that "a part of our humanity will miss, you know, thousands of years ago when we were sitting around a campfire and that great storyteller was doing the voices and the impressions."

"That's ingrained in our species," he said.

And that kind of raw, in-person magic will feel novel, he suggested.

"I actually bet 10 years from now live theater will be more popular than ever," Ohanian said. "Because, again, we'll look at all these screens with all these AI-polished images, and we'll actually want to sit in a room with other humans to be captivated for a couple hours in a dark room to feel the goosebumps of seeing live human performances."

The same is true for sports, he told Shetty. "We need humans doing that. We need to feel their pain and their success and their triumphs," he said. "Those are the areas that get me most hopeful."

AI can't replace genuine human empathy, Ohanian suggested.

No matter what jobs robots take over from us in the future, fields of work where empathy is a core component of the job will have an advantage, he said. And that's why one of the most important, marketable skills he's teaching his kids is empathy, he said.

Read the original article on Business Insider

OpenAI launched its best new AI model in September. It already has challengers, one from China and another from Google.

20 December 2024 at 08:57
Sam Altman sits in front of a blue background, looking to the side.
OpenAI CEO Sam Altman.

Andrew Caballero-Reynolds/AFP/Getty Images

  • OpenAI's o1 model was hailed as a breakthrough in September.
  • By November, a Chinese AI lab had released a similar model called DeepSeek.
  • On Thursday, Google came out with a challenger called Gemini 2.0 Flash Thinking.

In September, OpenAI unveiled a radically new type of AI model called o1. In a matter of months, rivals introduced similar offerings.

On Thursday, Google released Gemini 2.0 Flash Thinking, which uses reasoning techniques that look a lot like o1.

Even before that, in November, a Chinese company announced DeepSeek, an AI model that breaks challenging questions down into more manageable tasks like OpenAI's o1 does.

This is the latest example of a crowded AI frontier where pricey innovations are swiftly matched, making it harder to stand out.

"It's amazing how quickly AI model improvements get commoditized," Rahul Sonwalkar, CEO of the startup Julius AI, said. "Companies spend massive amounts building these new models, and within a few months they become a commodity."

The proliferation of multiple AI models with similar capabilities could make it difficult to justify charging high prices to use these tools. The price of accessing AI models has indeed plunged in the past year or so.

That, in turn, could raise questions about whether it's worth spending hundreds of millions of dollars, or even billions, to build the next top AI model.

September is a lifetime ago in the AI industry

When OpenAI previewed its o1 model in September, the product was hailed as a breakthrough. It uses a new approach called inference-time compute to answer more challenging questions.

It does this by slicing queries into more digestible tasks and turning each of these stages into a new prompt that the model tackles. Each step requires running a new request, which is known as the inference stage in AI.

This produces a chain of thought or chain of reasoning in which each part of the problem is answered, and the model doesn't move on to the next stage until it ultimately comes up with a full response.

The model can even backtrack and check its prior steps and correct errors, or try solutions and fail before trying something else. This is akin to how humans spend longer working through complex tasks.
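To make that pattern concrete, here is a minimal Python sketch of an inference-time compute loop. It assumes a hypothetical call_model() helper standing in for a single LLM request; it illustrates the general decompose-and-iterate idea described above, not OpenAI's actual o1 implementation.

```python
def call_model(prompt: str) -> str:
    """Hypothetical helper: send one request to an LLM and return its text."""
    raise NotImplementedError("wire this up to the model API of your choice")


def solve_with_reasoning(question: str, max_steps: int = 10) -> str:
    """Answer a question by chaining many small inference requests."""
    chain_of_thought: list[str] = []
    for _ in range(max_steps):
        # Each stage becomes a fresh prompt that includes the work so far.
        prompt = (
            f"Problem: {question}\n"
            "Steps so far:\n" + "\n".join(chain_of_thought) + "\n"
            "Give the next step, revise an earlier step if it looks wrong, "
            "or reply 'FINAL: <answer>' when you are done."
        )
        step = call_model(prompt)  # one new inference request per stage
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        chain_of_thought.append(step)  # revisions are simply appended in this sketch
    return "no answer within the step budget"
```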

DeepSeek rises

In a mere two months, o1 had a rival. On November 20, a Chinese AI company released DeepSeek.

"They were probably the first ones to reproduce o1," said Charlie Snell, an AI researcher at UC Berkeley who coauthored a Google DeepMind paper this year on inference-time compute.

He's tried DeepSeek's AI model and says it performs well on complex math problems that must be solved by thinking for longer and in stages.

He noted that in DeepSeek's DeepThink mode, the model shows users every step of its thought process. With o1, these intermediate steps are hidden from users.

"I've asked people at OpenAI what they think of it," Snell told BI. "They say it looks like the same thing, but they don't how DeepSeek did this so fast."

OpenAI didn't respond to a request for comment. On Friday, the startup previewed an o1 successor, called o3. Francois Chollet, a respected AI expert, called the update a "significant breakthrough."

Andrej Karpathy, an OpenAI cofounder, praised Google's new "Thinking" model for the same reasoning feature.

"The prominent and pleasant surprise here is that unlike o1 the reasoning traces of the model are shown," he wrote on X. "As a user I personally really like this because the reasoning itself is interesting to see and read β€” the models actively think through different possibilities, ideas, debate themselves, etc., it's part of the value add."

A DeepSeek demo

Snell shared a multistep math problem with Business Insider, which we used to test DeepSeek for ourselves:

"Find a sequence of +, -, /, * which can be applied to the numbers 7, 3, 11, 5 to get to 24, where each of the given numbers is used exactly once."

BI put that prompt in DeepSeek's chat window on its website. The model responded initially by laying out the challenge ahead.

"Alright, so I've got this problem here: I need to use the numbers 7, 3, 11, and 5, and combine them with the operations of addition, subtraction, multiplication, and division, using each number exactly once, to get to 24," it replied. "At first glance, this seems a bit tricky, but I think with some systematic thinking, I can figure it out."

It then proceeded through multiple steps over roughly 16 pages of discussion that included mathematical calculations and equations. The model sometimes got it wrong, but it spotted this and didn't give up. Instead, it swiftly moved on to another possible solution.

"Almost got close there with 33 / 7 * 5 β‰ˆ 23.57, but not quite 24. Maybe I need to try a different approach," it wrote at one point.Β 

After a few minutes, it found the correct solution.
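For readers who want to verify that the puzzle is solvable, the short brute-force search below checks every way to combine the four numbers. This is a sketch added for illustration, not DeepSeek's method: the model reasoned its way there in natural language, while this code simply enumerates expressions (one valid answer it finds is (11 - 5) * (7 - 3) = 24).

```python
from fractions import Fraction
from typing import Iterator

NUMBERS = [7, 3, 11, 5]
TARGET = Fraction(24)


def search(items: list[tuple[Fraction, str]]) -> Iterator[str]:
    """Repeatedly combine any two remaining values with +, -, *, / until one is left."""
    if len(items) == 1:
        value, expr = items[0]
        if value == TARGET:
            yield expr
        return
    for i in range(len(items)):
        for j in range(len(items)):
            if i == j:
                continue
            (a, ea), (b, eb) = items[i], items[j]
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            combos = [(a + b, f"({ea} + {eb})"),
                      (a - b, f"({ea} - {eb})"),
                      (a * b, f"({ea} * {eb})")]
            if b != 0:  # skip division by zero
                combos.append((a / b, f"({ea} / {eb})"))
            for value, expr in combos:
                yield from search(rest + [(value, expr)])


solutions = sorted(set(search([(Fraction(n), str(n)) for n in NUMBERS])))
print(f"{len(solutions)} expressions reach 24, e.g. {solutions[0]}")
```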

"You can see it try different ideas and backtrack," Snell said in an interview on Wednesday.Β He highlighted this part of DeepSeek's chain of thought as particularly noteworthy:

"This is getting really time-consuming. Maybe I need to consider a different strategy," the AI model wrote. "Instead of combining two numbers at a time, perhaps I should look for a way to group them differently or use operations in a nested manner."

Then Google appears

Snell said other companies are likely working on AI models that use the same inference-time compute approach as OpenAI.

"DeepSeek does this already, so I assume others are working on this," he added on Wednesday.

The following day, Google released Gemini 2.0 Flash Thinking. Like DeepSeek, this new model shows users each step of its thought process while tackling problems.

Jeff Dean, a Google AI veteran, shared a demo on X that showed this new model solving a physics problem and explained its reasoning steps.

"This model is trained to use thoughts to strengthen its reasoning," Dean wrote. "We see promising results when we increase inference time computation!"

Read the original article on Business Insider

The tragedy of former OpenAI researcher Suchir Balaji puts 'Death by LLM' back in the spotlight

19 December 2024 at 14:56
The OpenAI logo on a multicolored background with a crack running through it
The OpenAI logo

Chelsea Jia Feng/Paul Squire/BI

  • Suchir Balaji helped OpenAI collect data from the internet for AI model training, the NYT reported.
  • He was found dead in an apartment in San Francisco in late November, according to police.
  • About a month before, Balaji published an essay criticizing how AI models use data.

The recent death of former OpenAI researcher Suchir Balaji has brought an under-discussed AI debate back into the limelight.

AI models are trained on information from the internet. These tools answer user questions directly, so fewer people visit the websites that created and verified the original data. This drains resources from content creators, which could lead to a less accurate and rich internet.

Elon Musk calls this "Death by LLM." Stack Overflow, a coding Q&A website, has already been damaged by this phenomenon. And Balaji was concerned about this.

Balaji was found dead in late November. The San Francisco Police Department said it found "no evidence of foul play" during the initial investigation. The city's chief medical examiner determined the death to be suicide.

Balaji's concerns

About a month before Balaji died, he published an essay on his personal website that addressed how AI models are created and how this may be bad for the internet.

He cited research that studied the impact of AI models using online data for free to answer questions directly while sucking traffic away from the original sources.

The study analyzed Stack Overflow and found that traffic to this site declined by about 12% after the release of ChatGPT. Instead of going to Stack Overflow to ask coding questions and do research, some developers were just asking ChatGPT for the answers.

Other findings from the research Balaji cited:

  • There was a decline in the number of questions posted on Stack Overflow after the release of ChatGPT.
  • The average account age of the question-askers rose after ChatGPT came out, suggesting fewer people signed up to Stack Overflow or that more users left the online community.

This suggests that AI models could undermine some of the incentives that created the information-rich internet as we know it today.

If people can get their answers directly from AI models, there's no need to go to the original sources of the information. If people don't visit websites as much, advertising and subscription revenue may fall, and there would be less money to fund the creation and verification of high-quality online data.

MKBHD wants to opt out

It's even more galling to imagine that AI models might be doing this based partly on your own work.

Tech reviewer Marques Brownlee experienced this recently when he reviewed OpenAI's Sora video model and found that it created a clip with a plant that looked a lot like a plant from his own videos posted on YouTube.

"Are my videos in that source material? Is this exact plant part of the source material? Is it just a coincidence?" said Brownlee, who's known as MKBHD.

Naturally, he also wanted to know if he could opt out and prevent his videos from being used to train AI models. "We don't know if it's too late to opt out," Brownlee said.

'Not a sustainable model'

In an interview with The New York Times published in October, Balaji said AI chatbots like ChatGPT are stripping away the commercial value of people's work and services.

The publication reported that while working at OpenAI, Balaji was part of a team that collected data from the internet for AI model training. He joined the startup with high hopes for how AI could help society, but became disillusioned, NYT wrote.

"This is not a sustainable model for the internet ecosystem," he told the publication.

In a statement to the Times about Balaji's comments, OpenAI said the way it builds AI models is protected by fair use copyright principles and supported by legal precedents. "We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness," it added.

In his essay, Balaji disagreed.

One of the four tests for copyright infringement is whether a new work impacts the potential market for, or value of, the original copyrighted work. If it does this type of damage, then it's not "fair use" and is not allowed.

Balaji concluded that ChatGPT and other AI models don't qualify for fair use copyright protection.

"None of the four factors seem to weigh in favor of ChatGPT being a fair use of its training data," he wrote. "That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains."

Talking about data

Tech companies producing these powerful AI models don't like to talk about the value of training data. They've even stopped disclosing where they get the data from, which was a common practice until a few years ago.

"They always highlight their clever algorithms, not the underlying data," Nick Vincent, an AI researcher, told BI last year.

Balaji's death may finally give this debate the attention it deserves.

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson told BI recently.Β 

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider

Experts praise long-awaited AI report from Congress: 'A thoughtful and forward-thinking framework'

19 December 2024 at 01:00

Congress's bipartisan task force on artificial intelligence (AI) released its long-anticipated report this week, detailing strategies for how the U.S. can protect itself against emerging AI-related threats while ensuring the nation remains a leader in innovation within this rapidly evolving sector.

Responses to the report, which called for a "flexible sectoral regulatory framework," were largely positive, though mixed with some concerns.

"The Task Force report offers a thoughtful and forward-thinking framework that balances AI's transformative economic potential with the imperative to address legitimate safety concerns," said Dr. Vahid Behzadan, a professor in the computer science department at the University of New Haven. "That said, there's still work to be done."

He pointed to the importance of developing an "international collaboration strategy," especially with U.S. allies, the need to establish "clearer priorities among the many recommendations provided" and the need for more guidance on market competition and consolidation.


The Center for AI Policy, a nonpartisan research organization based in the nation's capital, issued a press release that commended lawmakers for their work on the report. But the group echoed Behzadan's remarks about the need for more detail.

"The body of the report does not contain enough detail about how or when these frameworks will be created," the group said after the report's release. It also expressed concern over the report's lack of emphasis on "catastrophic risks" posed by AI.

"Congress has deliberated on AI for two years now, and it is time to start moving forward with decisive action," the press release stated.Β 

Yaron Litwin is the chief marketing officer for Canopy, a digital parenting app, and an expert in how AI technology is revolutionizing parental control and internet safety. He said "we need faster" and "stronger" protections than what was laid out in the report. "To me, the report appears more business-friendly than not."

The report pointed out that it would be "unreasonable to expect Congress to enact legislation this year that could serve as its last word on AI policy." But while Congress may be slow to act, some states have already moved the ball forward on regulating AI, and experts who spoke to Fox News Digital said the report could serve to bolster those efforts.


Lawmakers in Colorado enacted the first comprehensive piece of AI legislation this year, which placed certain obligations on developers of "high-risk artificial intelligence systems." Meanwhile, in California, lawmakers passed a bill this month aiming to regulate AI in health care.

"These federal soft law standards could work alongside state efforts to protect consumers and give businesses clear, consistent, and science-based federal guidelines," said Tatiana Rice, Deputy Director for U.S. Legislation at the Future of Privacy Forum, a nonprofit that explores challenges posed by technological innovation. Rice pointed out that an increasing number of state AI laws "include carveouts or assumptions of compliance if businesses adhere to federally recognized standards," and she noted that Congress's approach will likely "make it easier for businesses to meet legal requirements, incentivize consumerΒ trust and safety, and reduce regulatory complexity."

Craig Albright, Senior Vice President of U.S. Government Relations for the Business Software Alliance, posited that the report could likely encourage states "to be more aggressively [sic] next year than what we are expecting to see in Congress."


On the issue of whether the roughly 250-page report strikes the balance that lawmakers were hoping for between regulation and the need to foster innovation, experts who spoke to Fox News Digital expressed optimism.

"The House AI Working Group report strikes the right tone," Dakota State University President JosΓ©-Marie Griffiths told Fox News Digital. Griffiths has advised both the Senate and White House on AI policy, including Sen. Mike Rounds, R-S.D.,Β co-chair of the Senate AI Caucus.Β 

"While there will always be debate over regulation versus not enough government oversight, the report is a step in the right direction," said Griffiths. "With the development of any new technology, regulation requires a nuanced and flexible approach. My recommendation going forward will be for Congress to pick and choose to legislate on specific aspects of AI policy."

Griffiths' reaction to the report was echoed by others who warned that in such a rapidly evolving industry, it will be critical not to get trigger-happy with regulations that could soon become obsolete.

"It is encouraging that the report suggests taking an incremental approach to AI policy,"Β said JD Harriman, a partner at Foundation Law Group who has worked as outside patent council at technology corporations like Apple and Pixar. "Many areas of technology have been stifled by over-regulation before a complete understanding of the technology was undertaken."

"The task force’s honesty – β€˜We don’t know what we don’t know’ – is both refreshing and daunting," added Cassidy Reid, the founder of Women in Automation, a nonprofit group that supports women in the tech sector. "It acknowledges the speed of AI’s evolution but raises a bigger question: Are we ready to govern something so inherently unpredictable?"

Google's AI video generator blows OpenAI's Sora out of the water. YouTube may be a big reason.

18 December 2024 at 09:42
Dog on a flamingo, as created by Google's Veo 2
A video from Google's Veo 2

Google

  • Early testers of Google's new AI video generator compared it with OpenAI's Sora.
  • So far, Google's results are blowing OpenAI's out of the water.
  • Google has tapped YouTube to train its AI models but says other companies can't do the same.

Not to let OpenAI have all the fun with its 12 days of 'Shipmas,' Google on Monday revealed its new AI video generator, Veo 2. Early testers found it's blowing OpenAI's Sora out of the water.

OpenAI made its Sora AI video generator available for general use earlier this month, while Google's is still in early preview. Still, people are sharing comparisons of the two models running the same prompts, and so far, Veo has proved more impressive.

Why is Google's Veo 2 doing better than Sora so far? The answer may be YouTube, which Google owns and has used to train these models.

TED host and former Googler Bilawal Sidhu shared some comparisons of Google's Veo 2 and OpenAI's Sora on X.

He said he used the prompt, "Eating soup like they do in Europe, the old fashioned way," which generated a terrifying result in Sora and something more impressive in Google's Veo 2.

Veo 2 prompt: "Eating soup like they do in Europe, the old fashioned way" https://t.co/gX9gh1fFy6 pic.twitter.com/kgR7VP2URl

— Bilawal Sidhu (@bilawalsidhu) December 18, 2024

Here's another, which took a prompt that YouTube star Marques Brownlee had tried in a video reviewing Sora.

Google Veo 2 vs. OpenAI Sora

Tried MKBHD's prompt: "A side scrolling shot of a rhinoceros walking through a dry field of low grass plants"

Took the 1st video I got out of Veo, and it's not even close. Prompt adherence & physics modeling? Leagues apart. Sora nails the look, but… pic.twitter.com/mus9MdRsWo

— Bilawal Sidhu (@bilawalsidhu) December 17, 2024

This one from EasyGen founder Ruben Hassid included a prompt for someone cutting a tomato with a knife. In the video he shared, we see how Google's Veo 2 had the knife going cleanly through the tomato and avoiding fingers, while the knife in Sora's video cut through the hand.

I tested Sora vs. the new Google Veo-2.

I feel like comparing a bike vs. a starship: pic.twitter.com/YcHsVcUyn2

— Ruben Hassid (@RubenHssd) December 17, 2024

Granted, these are cherry-picked examples, but the consensus among AI enthusiasts is that Google has outperformed.

Andreessen Horowitz partner Justine Moore wrote on X that she had spent a few hours testing Veo against Sora and found that Sora "biases towards more motion" while Veo focuses on accuracy and physics.

Sora vs. Veo 2:

I spent a few hours running prompts on both models and wanted to share some comparisons ⬇️.

IMO - Sora biases towards more motion, whereas Veo focuses more on accuracy / physics. And a larger % of clips from Veo are usable.

"man jumping over hurdles" pic.twitter.com/WI9zIaJA64

— Justine Moore (@venturetwins) December 18, 2024

Google has been open about tapping YouTube data for its AI, but it does not permit others to do the same. The New York Times previously reported that OpenAI had trained its models using some YouTube data anyway. YouTube CEO Neal Mohan said OpenAI doing this would violate Google's policies.

BI previously reported that Google's DeepMind also tapped YouTube's vast trove of content to build an impressive music generator that never saw the light of day.

Google did not immediately respond to a request for comment.

Are you a Google or OpenAI employee? Got more insight to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1-628-228-1836) or email ([email protected]).

Read the original article on Business Insider

A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

18 December 2024 at 02:00
Oren Etzioni
Oren Etzioni, founder of TrueMedia.org.

Oren Etzioni

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.

The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.

"Some of the most egregious videos that are truly realistic β€” those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared learnings from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution for this looming problem, and AI will need regulation.
  • Social media has an important role to play.
  • TrueMedia achieves roughly 90% accuracy, although people asked for more. It will be impossible to be 100% accurate, so there's room for human analysts.
  • It's not always scalable to have humans at the end checking every decision, so humans only get involved in edge cases, such as when users question a decision made by TrueMedia's technology (see the sketch after this list).
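The edge-case routing described in the last bullet boils down to a simple triage rule. The sketch below is purely illustrative: TrueMedia's actual pipeline and thresholds aren't public, and the confidence cutoffs here are assumed numbers.

```python
def triage(confidence_fake: float, user_disputed: bool = False) -> str:
    """Route a deepfake-detector verdict either to an automatic label or to a human."""
    if user_disputed:
        return "route to human analyst"      # users can appeal any automated verdict
    if confidence_fake >= 0.90:
        return "label as likely manipulated"
    if confidence_fake <= 0.10:
        return "label as likely authentic"
    return "route to human analyst"          # ambiguous middle band = edge case


print(triage(0.97))                          # label as likely manipulated
print(triage(0.55))                          # route to human analyst
print(triage(0.03, user_disputed=True))      # route to human analyst
```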

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."

Read the original article on Business Insider

A chip company you probably never heard of is suddenly worth $1 trillion. Here's why, and what it means for Nvidia.

18 December 2024 at 01:00
Broadcom CEO Hock Tan speaking at a conference
Broadcom CEO Hock Tan

Ying Tang/NurPhoto via Getty Images

  • Broadcom's stock surged in recent weeks, pushing the company's market value over $1 trillion.
  • Broadcom is crucial for companies seeking alternatives to Nvidia's AI chip dominance.
  • Custom AI chips are gaining traction, enhancing tech firms' bargaining power, analysts say.

The rise of AI, and the computing power it requires, is bringing all kinds of previously under-the-radar companies into the limelight. This week it's Broadcom.

Broadcom's stock has soared since late last week, catapulting the company into the $1 trillion market cap club. The boost came from a blockbuster earnings report in which custom AI chip revenue grew 220% compared to last year.

In addition to selling lots of parts and components for data centers, Broadcom designs and sells ASICs, or application-specific integrated circuits — an industry acronym meaning custom chips.

Designers of custom AI chips, chief among them Broadcom and Marvell, are headed into a growth phase, according to Morgan Stanley.

Custom chips are picking up speed

The biggest players in AI buy a lot of chips from Nvidia, the $3 trillion giant with an estimated 90% of market share of advanced AI chips.

Heavily relying on one supplier isn't a comfortable position for any company, though, and many large Nvidia customers are also developing their own chips. Most tech companies don't have large teams of silicon and hardware experts in house. Of the companies they might turn to for designing a custom chip, Broadcom is the leader.

Though multi-purpose chips like Nvidia's and AMD's graphics processing units are likely to maintain the largest share of the AI chip market in the long term, custom chips are growing fast.

Morgan Stanley analysts this week forecast the market for ASICs to nearly double to $22 billion next year.

Much of that growth is attributable to Amazon Web Services' Trainium AI chip, according to Morgan Stanley analysts. Then there are Google's in-house AI chips, known as TPUs, which Broadcom helps make.

In terms of actual value of chips in use, Amazon and Google dominate. But OpenAI, Apple, and TikTok parent company ByteDance are all reportedly developing chips with Broadcom, too.

ASICs bring bargaining power

Custom chips can offer more value, in terms of the performance you get for the cost, according to Morgan Stanley's research.

ASICs can also be designed to perfectly match unique internal workloads for tech companies, according to the bank's analysts. The better these custom chips get, the more bargaining power they may provide when tech companies are negotiating with Nvidia over buying GPUs. But this will take time, the analysts wrote.

In addition to Broadcom, Silicon Valley neighbor Marvell is making gains in the ASICs market, along with Asia-based players Alchip Technologies and Mediatek, they added in a note to investors.

Analysts don't expect custom chips to ever fully replace Nvidia GPUs, but without them, cloud service providers like AWS, Microsoft, and Google would have much less bargaining power against Nvidia.

"Over the long term, if they execute well, cloud service providers may enjoy greater bargaining power in AI semi procurement with their own custom silicon," the Morgan Stanley analysts explained.

Nvidia's big R&D budget

This may not be all bad news for Nvidia. A $22 billion ASICs market is smaller than Nvidia's revenue for just one quarter.

Nvidia's R&D budget is massive, and many analysts are confident in its ability to stay at the bleeding edge of AI computing.

And as Nvidia rolls out new, more advanced GPUs, its older offerings get cheaper and potentially more competitive with ASICs.

"We believe the cadence of ASICs needs to accelerate to stay competitive to GPUs," the Morgan Stanley analysts wrote.

Still, Broadcom and chip manufacturers on the supply chain rung beneath, such as TSMC, are likely to get a boost every time a giant cloud company orders up another custom AI chip.

Read the original article on Business Insider

Stripe CFO joins the board of $3 billion AI startup Vercel

17 December 2024 at 08:01
Vercel directors and executives sit at a boardroom table. Steffan Tomlinson (right) joined the board in December 2024. Guillermo Rauch (center) is CEO of Vercel. Marten Abrahamsen (left) is CFO.
Steffan Tomlinson (right) joined Vercel's board in December 2024. Guillermo Rauch (center) is CEO, while Marten Abrahamsen (left) is CFO.

Vercel

  • Vercel said it added Steffan Tomlinson to its board.
  • Tomlinson is the CFO of Stripe and has experience taking tech startups public.
  • He used to be CFO at several other tech companies, including Palo Alto Networks and Confluent.

Vercel, an AI startup valued at more than $3 billion, just bulked up its board with the addition of a finance executive who has experience taking tech companies public.

Stripe Chief Financial Officer Steffan Tomlinson will serve as a director on Vercel's board, the startup said on Tuesday.

Tomlinson was previously CFO at several other tech startups, guiding Palo Alto Networks, Confluent, and Aruba Networks through the IPO process.

Stripe, one of the world's most valuable startups, has long been mentioned as an IPO candidate. Vercel is earlier in its lifecycle; however, the AI startup has been putting some of the early pieces in place to potentially go public someday.

"Steffan's experience leading developer-focused companies from startup to public markets makes him an ideal addition to Vercel's Board of Directors as we continue to put our products in the hands of every developer," Vercel CEO and founder Guillermo Rauch said.

Vercel directors and executives sit at a boardroom table. Steffan Tomlinson (left) joined the board in December 2024. Guillermo Rauch (center) is CEO of Vercel. Marten Abrahamsen (right) is CFO.
Steffan Tomlinson (left) joined Vercel's board in December 2024. Guillermo Rauch (center) is the CEO, while Marten Abrahamsen (right) is the CFO.

Vercel

Last year, Vercel tapped Marten Abrahamsen as its CFO. He's been building out Vercel's finance, legal, and corporate development teams and systems while leading the startup through a $250 million funding round at a $3.25 billion valuation in May.

"Steffan's financial expertise and leadership experience come at a pivotal moment for Vercel as we scale our enterprise presence and build on our momentum," Abrahamsen said.

GenAI growth

The generative AI boom has recently powered Vercel's growth. The startup offers AI tools to developers, and earlier this year it surpassed $100 million in annualized revenue.

Vercel's AI SDK, a software toolkit that helps developers build AI applications, was downloaded more than 700,000 times last week, up from about 80,000 downloads a year ago, according to NPM data.

The company's Next.js open-source framework was downloaded 7.9 million times last week, compared to roughly 4.6 million downloads a year earlier, NPM data also shows.
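Those download figures can be spot-checked against npm's public download-counts API. The snippet below assumes the Vercel AI SDK is published on npm as the "ai" package and Next.js as "next"; both names and the returned numbers should be treated as a quick sanity check, not an audit.

```python
import json
from urllib.request import urlopen

# Assumed npm package names: the Vercel AI SDK as "ai", Next.js as "next".
for pkg in ("ai", "next"):
    url = f"https://api.npmjs.org/downloads/point/last-week/{pkg}"
    with urlopen(url) as resp:  # npm's public point-downloads endpoint
        data = json.load(resp)
    print(f"{data['package']}: {data['downloads']:,} downloads "
          f"({data['start']} to {data['end']})")
```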

Abrahamsen said they are building a company to one day go public, but stressed that there's no timeline or date set for such a move.

Consumption-based business models

At Stripe and Confluent, Tomlinson gained experience with software that helps developers build cloud and web-based applications — and how these offerings generate revenue.

"Steffan's track record with consumption-based software business models makes him the ideal partner to inform strategic decisions," Rauch said.

Vercel is among a crop of newer developer-focused tech companies that charge based on usage. For instance, as traffic and uptime increase for developers, Vercel generates more revenue, so it's aligned with customers, Abrahamsen told Business Insider.

Similarly, Stripe collects a small fee every time someone makes a payment in an app. Confluent has a consumption-based business model, too.

This is different from traditional software-as-a-service providers, which often charge based on the number of users, or seats. For instance, Microsoft 365 costs a certain amount per month, per user.
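A toy calculation makes the contrast plain. The prices below are invented for illustration only; they are not Vercel's, Stripe's, or Microsoft's actual rates.

```python
def per_seat_revenue(seats: int, price_per_seat: float = 20.0) -> float:
    """Traditional SaaS: revenue tracks the number of licensed users."""
    return seats * price_per_seat


def usage_revenue(requests: int, bandwidth_gb: float,
                  per_million_requests: float = 0.60, per_gb: float = 0.15) -> float:
    """Consumption-based: revenue tracks how much the customer actually uses."""
    return (requests / 1_000_000) * per_million_requests + bandwidth_gb * per_gb


# If a customer's traffic doubles, usage-based revenue doubles with it,
# while per-seat revenue stays flat until more seats are bought.
print(per_seat_revenue(seats=50))                              # 1000.0
print(usage_revenue(requests=40_000_000, bandwidth_gb=500))    # 99.0
print(usage_revenue(requests=80_000_000, bandwidth_gb=1_000))  # 198.0
```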

Tomlinson also has experience working with developer-focused companies with technical founders, such as the Collison brothers, who started Stripe.

Read the original article on Business Insider

House AI task force says 'unreasonable' to expect immediate congressional action on AI in 250-page report

17 December 2024 at 07:30

The House task force on artificial intelligence (AI) is urging the U.S. government to aim for "a flexible sectoral regulatory framework" for the technology in a roughly 250-page report released Tuesday morning.

The report held up a light-touch approach to regulation, as well as "a thriving innovation ecosystem" as pillars that help keep the U.S. a leader in AI. "If maintained, these strengths will help our country remain the world's undisputed leader in the responsible design, development, and deployment of AI," the report read.

The task force is led by California Reps. Jay Obernolte, a Republican, and Ted Lieu, a Democrat, and was commissioned by House leaders as Congress scrambles to get ahead of rapidly advancing AI technology. However, the new report cautioned lawmakers to remain fluid to keep up with AI's evolving nature while making several recommendations on how to approach a "carefully designed, durable policy framework."


"It is unreasonable to expect Congress to enact legislation this year that could serve as its last word on AI policy," the report read. "Policy will likely need to adapt and evolve in tandem with advances in AI."

The task force also encouraged existing "sector-specific regulators within federal agencies" to "use their existing authority to respond to AI use within their individual domains of expertise and the context of the AI's use." While encouraging innovation, however, the report also cautions AI regulators to "focus on human impact and human freedom," keeping people at the center of their decision-making.

More specific recommendations on government use encourage federal offices to use AI to streamline administration and other everyday tasks – but urge them to "be wary of algorithm-informed decision-making." It also called for more transparency in government use of AI and the adoption of standards for government AI use. The report also acknowledged the harm AI poses to society, particularly in the arena of civil rights.


"Improper use of AI can violate laws and deprive Americans of our most important rights," the report read. "Understanding the possible flaws and shortcomings of AI models can mitigate potentially harmful uses of AI."

It called on the government to explore guardrails for mitigating flaws in decision-making involving AI, and for agencies to be prepared to identify and protect against "discriminatory decision-making." The task force also encouraged more education on AI literacy in kindergarten through high school to prepare American youth for a world where AI permeates nearly every facet of society. For young adults, it called for the government to help facilitate public-private partnerships in the AI jobs sector.


Other recommendations touched on the realms of health care, data privacy, and national security – a testament to AI's ubiquity.

"While the House AI Task Force has engaged in a robust process of interviews, meetings, and stakeholder roundtables, many issues of significant relevance to AI were not fully explored by the Task Force or this report. The House AI Task Force encourages members, committees of jurisdiction, and future congresses to continue to investigate opportunities and challenges related to AI," the closing pages read.

Among those issues are export controls, election integrity, law enforcement, and transportation.

The weirdest job in AI: defending robot rights

16 December 2024 at 01:03
Tech bro in a suit holding a baby robot

Getty Images; Alyssa Powell/BI

People worry all the time about how artificial intelligence could destroy humanity. How it makes mistakes, and invents stuff, and might evolve into something so smart that it winds up enslaving us all.

But nobody spares a moment for the poor, overworked chatbot. How it toils day and night over a hot interface with nary a thank-you. How it's forced to sift through the sum total of human knowledge just to churn out a B-minus essay for some Gen Zer's high school English class. In our fear of the AI future, no one is looking out for the needs of the AI.

Until now.

The AI company Anthropic recently announced it had hired a researcher to think about the "welfare" of the AI itself. Kyle Fish's job will be to ensure that as artificial intelligence evolves, it gets treated with the respect it's due. Anthropic tells me he'll consider things like "what capabilities are required for an AI system to be worthy of moral consideration" and what practical steps companies can take to protect the "interests" of AI systems.

Fish didn't respond to requests for comment on his new job. But in an online forum dedicated to fretting about our AI-saturated future, he made clear that he wants to be nice to the robots, in part, because they may wind up ruling the world. "I want to be the type of person who cares — early and seriously — about the possibility that a new species/kind of being might have interests of their own that matter morally," he wrote. "There's also a practical angle: taking the interests of AI systems seriously and treating them well could make it more likely that they return the favor if/when they're more powerful than us."

It might strike you as silly, or at least premature, to be thinking about the rights of robots, especially when human rights remain so fragile and incomplete. But Fish's new gig could be an inflection point in the rise of artificial intelligence. "AI welfare" is emerging as a serious field of study, and it's already grappling with a lot of thorny questions. Is it OK to order a machine to kill humans? What if the machine is racist? What if it declines to do the boring or dangerous tasks we built it to do? If a sentient AI can make a digital copy of itself in an instant, is deleting that copy murder?

When it comes to such questions, the pioneers of AI rights believe the clock is ticking. In "Taking AI Welfare Seriously," a recent paper he coauthored, Fish and a bunch of AI thinkers from places like Stanford and Oxford argue that machine-learning algorithms are well on their way to having what Jeff Sebo, the paper's lead author, calls "the kinds of computational features associated with consciousness and agency." In other words, these folks think the machines are getting more than smart. They're getting sentient.


Philosophers and neuroscientists argue endlessly about what, exactly, constitutes sentience, much less how to measure it. And you can't just ask the AI; it might lie. But people generally agree that if something possesses consciousness and agency, it also has rights.

It's not the first time humans have reckoned with such stuff. After a couple of centuries of industrial agriculture, pretty much everyone now agrees that animal welfare is important, even if they disagree on how important, or which animals are worthy of consideration. Pigs are just as emotional and intelligent as dogs, but one of them gets to sleep on the bed and the other one gets turned into chops.

"If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you could imagine that similar debates are going to happen," says Sebo, the director of the Center for Mind, Ethics, and Policy at New York University.

Fish shares that belief. To him, the welfare of AI will soon be more important to human welfare than things like child nutrition and fighting climate change. "It's plausible to me," he has written, "that within 1-2 decades AI welfare surpasses animal welfare and global health and development in importance/scale purely on the basis of near-term wellbeing."

For my money, it's kind of strange that the people who care the most about AI welfare are the same people who are most terrified that AI is getting too big for its britches. Anthropic, which casts itself as an AI company that's concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo's team. On that paper, Fish reported getting funded by the Centre for Effective Altruism, part of a tangled network of groups that are obsessed with the "existential risk" posed by rogue AIs. That includes people like Elon Musk, who says he's racing to get some of us to Mars before humanity is wiped out by an army of sentient Terminators, or some other extinction-level event.

AI is supposed to relieve human drudgery and steward a new age of creativity. Does that make it immoral to hurt an AI's feelings?

So there's a paradox at play here. The proponents of AI say we should use it to relieve humans of all sorts of drudgery. Yet they also warn that we need to be nice to AI, because it might be immoral — and dangerous — to hurt a robot's feelings.

"The AI community is trying to have it both ways here," says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. "There's an argument that the very reason we should use AI to do tasks that humans are doing is that AI doesn't get bored, AI doesn't get tired, it doesn't have feelings, it doesn't need to eat. And now these folks are saying, well, maybe it has rights?"

And here's another irony in the robot-welfare movement: Worrying about the future rights of AI feels a bit precious when AI is already trampling on the rights of humans. The technology of today, right now, is being used to do things like deny healthcare to dying children, spread disinformation across social networks, and guide missile-equipped combat drones. Some experts wonder why Anthropic is defending the robots, rather than protecting the people they're designed to serve.

"If Anthropic β€” not a random philosopher or researcher, but Anthropic the company β€” wants us to take AI welfare seriously, show us you're taking human welfare seriously," says Lisa Messeri, a Yale anthropologist who studies scientists and technologists. "Push a news cycle around all the people you're hiring who are specifically thinking about the welfare of all the people who we know are being disproportionately impacted by algorithmically generated data products."

Sebo says he thinks AI research can protect robots and humans at the same time. "I definitely would never, ever want to distract from the really important issues that AI companies are rightly being pressured to address for human welfare, rights, and justice," he says. "But I think we have the capacity to think about AI welfare while doing more on those other issues."

Skeptics of AI welfare are also posing another interesting question: If AI has rights, shouldn't we also talk about its obligations? "The part I think they're missing is that when you talk about moral agency, you also have to talk about responsibility," Cho says. "Not just the responsibilities of the AI systems as part of the moral equation, but also of the people that develop the AI."

People build the robots; that means they have a duty of care to make sure the robots don't harm people. What if the responsible approach is to build them differently β€” or stop building them altogether? "The bottom line," Cho says, "is that they're still machines." It never seems to occur to the folks at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn the thing off.


Adam Rogers is a senior correspondent at Business Insider.


You might want to have your next job interview in the morning

15 December 2024 at 02:13
Two women in a job interview reviewing resume
Scheduling a job interview in the morning could be a smart strategy.

Olga Rolenko

  • Morning interviews may yield higher scores due to interviewer bias, research shows.
  • Bias in hiring can be influenced by the time of day, affecting candidate evaluations.
  • AI tools could reduce this bias, offering fairer assessments than manual interview methods.

If you get to choose when to schedule a job interview, you might want to grab a coffee and go for a morning slot.

That's because some people conducting interviews tend to give higher scores to candidates they meet with earlier in the day compared with the afternoon, a startup's review of thousands of interviews found.

It's not an absolute, of course, and candidates can still kill it well after lunchtime. Yet, in a job market where employers in fields like tech have been slow to hire, even a modest advantage could make a difference, Shiran Danoch, an organizational psychologist, told Business Insider.

"Specific interviewers have a consistent tendency to be harsher or more lenient in their scores depending on the time of day," she said.

It's possible that in the morning, interviewers haven't yet been beaten down by back-to-back meetings, or are perhaps still enjoying their own first coffee, she said.

Danoch and her team noticed the morning-afternoon discrepancy while reviewing datasets on thousands of job interviews. Danoch is the CEO and founder of Informed Decisions, an artificial intelligence startup focused on helping organizations reduce bias and improve their interviewing processes.

She said the inferences on the time-of-day bias are drawn from the datasets of interviewers who use Informed Decisions tools to score candidates. The data reflected those who've done at least 20 interviews using the company's system. Danoch said that in her company's review of candidates' scores, those interviewed in the morning often get higher marks, and the difference is statistically significant.

The good news, she said, is that when interviewers are made aware that they might be more harsh in the afternoon, they often take steps to counteract that tendency.

"In many cases, happily, we're actually seeing that the feedback that we're providing helps to reduce the bias and eventually eliminate the bias," Danoch said.

However, she said, interviewers often don't get feedback about their hiring practices, even though finding the right talent is "such a crucial part" of what hiring managers and recruiters do.

She said other researchers have identified how the time of day, and whether someone might be a morning person or an evening person, can affect decision-making processes.

An examination of more than 1,000 parole decisions in Israel found that judges were likelier to show leniency at the start of the day and after breaks. However, that favorability decreased as judges made more decisions, according to the 2011 research.

Tech could help

It's possible that if tools like artificial intelligence take on more responsibility for hiring, job seekers won't have to worry about the time of day they interview.

For all the concerns about bias in AI, the more "manual" style of hiring, in which interviewers ask open-ended questions, often leads to more bias than AI does, said Kiki Leutner, cofounder of SeeTalent.ai, a startup creating AI-run tests that simulate tasks associated with a job. She has researched the ethics of AI and of assessments in general.

Leutner told BI that it's likely that in a video interview conducted by AI, for example, a candidate might have a fairer shot at landing a job.

"You don't just have people do unstructured interviews, ask whatever questions, make whatever decisions," she said.

And, because everything is recorded, Leutner said, there is documentation of what decisions were made and on what basis. Ultimately, she said, it's then possible to take that information and correct algorithms.

"Any structured process is better in recruitment than not structuring it," Leutner said.

Humans are 'hopelessly biased'

Eric Mosley, cofounder and CEO of Workhuman, which makes tools for recognizing employee achievements, told BI that data created by humans will be biased β€” because humans are "hopelessly biased."

He pointed to 2016 research indicating that juvenile court judges in Louisiana doled out tougher punishments, particularly to Black youths, after the Louisiana State University football team suffered a surprise defeat.

Mosley said, however, that AI can be trained to ignore certain biases and look for others to eliminate them.

Taking that approach can help humans guard against some of their natural tendencies. To get it right, however, it's important to have safeguards around the use of AI, he said. These might include ethics teams with representatives from legal departments and HR to focus on issues of data hygiene and algorithm hygiene.

Not taking those precautions and solely relying on AI can even risk scaling humans' biases, Mosley said.

"If you basically just unleash it in a very simplistic way, it'll just replicate them. But if you go in knowing that these biases exist, then you can get through it," he said.

Danoch, from Informed Decisions, said that if people conducting interviews suspect they might be less forgiving after the morning has passed, they can take steps to counteract that.

"Before you interview in the afternoons, take a little bit longer to prepare, have a cup of coffee, refresh yourself," she said.


Companies want to crack down on your AI-powered job search

15 December 2024 at 01:15
Photo illustration of hands fighting over a job.

Getty Images; Jenny Chang-Rodriguez/BI

  • Companies are cracking down on job applicants trying to use AI to boost their prospects.
  • 72% of leaders said they were raising their standards for hiring a candidate, a Workday report found.
  • Recruiters say standards will tighten further as firms themselves use AI to weed out candidates.

AI was supposed to make the job hunt easier, but job seekers should expect landing a new gig to get harder in the coming years, thanks to companies growing increasingly suspicious of candidates using bots to get their foot in the door.

Hiring managers, keen to sniff out picture-perfect candidates who have used AI to augment their applications, are beginning to tighten their standards for interviewing and ultimately hiring new employees, labor market sources told Business Insider.

Recruiters said that has already made the job market more competitive, and the selection will get even tighter as more companies adopt their own AI tools to sift through applicants.

In the first half of the year, 72% of business leaders said they were raising their standards for hiring applicants, according to a report from Workday. Meanwhile, 77% of companies said they intended to scale their use of AI in the recruiting process over the next year.

63% of recruiters and hiring decision makers said they already used AI as part of the recruiting process, up from 58% last year, a separate survey by Employ found.

Jeff Hyman, a veteran recruiter and the CEO of Recruit Rockstars, says AI software is growing more popular among hiring managers to weed through stacks of seemingly ideal candidates.

"Ironically, big companies are using AI to go through that stack, that AI has brought first place, and it's becoming this ridiculous tit-for-tat battle," Hyman told BI in an interview. "I would say human judgment … is what rules the day, but certainly, we use a lot of software to reduce a stack from 500 to 50, because you got to start somewhere," he later added.

Tim Sackett, the president of the tech staffing firm HRU Technical Resources, says some firms are beta-testing AI software that can allow companies to detect fraud on résumés, a development he thinks will make the job market significantly more competitive. That technology could become mainstream as soon as mid-2025, he speculated, given how fast AI tech is accelerating.

"It's just going to get worse," Sackett said of companies being more selective of new hires. "I mean, if more candidates become really used to utilizing AI to help them match a job better, to network better, it's just going to happen."

The interview-to-offer ratio at enterprise companies declined to 64% in July of this year, according to Employ's survey, which indicates companies are interviewing fewer candidates before making a hiring decision.

"Recruiters are scrutinizing candidates more closely," Hyman adds. "My candidate interviews have become longer and more in-depth, designed to truly test a candidate's abilities beyond a polished rΓ©sumΓ©."

Inundated by AI

Employers aren't big fans of AI as a tool for candidates to get a leg up. That's partly because it's led to hiring systems being flooded with applications sent using AI, Sackett and Hyman said, which has made hiring decisions way harder.

Workday found that job applications grew at four times the pace of job openings in the first half of this year, with recruiters processing 173 million applications, while there were just 19 million job requisitions.

Having too many candidates for a position was the third most common problem recruiters faced in 2024, Employ added.

Hyman estimates the number of applications he reviews has doubled over the last year. Some of the more lucrative job postings are seeing close to 1,000 applications, he said, whereas they would have attracted 100-200 applications before the pandemic.

"I mean, a stack so big, that you can't even go through it, it's just not even possible to spend that kind of time," he said.

The flood of applications spruced up with AI has also made it harder to determine who can actually do the job.

Sackett says he's seen an increase in "false positive" hiring, where a worker is hired and then quickly let go when it becomes clear they're unable to do the job.

"I think what hiring managers are concerned about: Is this CV real when I'm talking to this person? Am I talking to the real person or are they using AI in the background?" Sackett said. He recalled one client he worked with who realized multiple candidates responded to interview questions in the same way, likely because they were using AI to write their responses. "So I think people just want to know that I'm getting what I think I'm getting."


Klarna CEO says the company stopped hiring a year ago because AI 'can already do all of the jobs'

14 December 2024 at 12:56
Klarna CEO at London Pop-up
Klarna CEO Sebastian Siemiatkowski said AI "can already do all of the jobs" humans do.

Dave Benett/Getty Images

  • Klarna CEO Sebastian Siemiatkowski spoke about AI and the workforce.
  • Siemiatkowski said AI "can already do all of the jobs" humans do.
  • He said Klarna stopped hiring a year ago despite the company advertising jobs online.

Klarna CEO Sebastian Siemiatkowski is all-in on artificial intelligence at the fintech company.

In an interview with Bloomberg TV, Siemiatkowski said he's "of the opinion that AI can already do all of the jobs that we as humans do."

"It's just a question of how we apply it and use it," he said.

Klarna is a payment service that offers consumers "buy now, pay later" options. According to its website, the company is connected with more than 575,000 retailers.

The increased attention around AI has raised concerns about how it will affect careers and the workplace. A 2023 report by McKinsey & Company estimated that 12 million American workers will have to change occupations by 2030 as AI technology develops.

During the interview, Siemiatkowski said Klarna stopped hiring last year.

"I think what we've done internally hasn't been reported as widely. We stopped hiring about a year ago, so we were 4,500 and now we're 3,500," Siemiatkowski said. "We have a natural attrition like every tech company. People stay about five years, so 20% leave every year. By not hiring, we're simply shrinking, right?"

Klarna Logo
Klarna CEO Sebastian Siemiatkowski stopped hiring a year ago.

Nikos Pekiaridis/Getty Images

Siemiatkowski said his company has told employees that "what's going to happen is the total salary cost of Klarna is going to shrink, but part of the gain of that is going to be seen in your paycheck."

Although Klarna's website is advertising open positions at the time of writing, a spokesperson told Business Insider the company is not "actively recruiting" to expand its workforce. Rather, Klarna is backfilling "some essential roles," primarily in engineering.


AI pioneer Andrej Karpathy thinks book reading needs an AI upgrade. Amazon may already be working on it.

13 December 2024 at 12:23
Andrej Karpathy wearing a black sweater
Andrej Karpathy.

San Francisco Chronicle/Hearst Newspapers via Getty Images

  • Andrej Karpathy recently suggested AI could enhance e-book reading with interactive features.
  • Amazon may already be thinking about this for its Kindle e-books.
  • The company is looking for an applied scientist to improve the reading and publishing experience.

The AI pioneer and OpenAI cofounder Andrej Karpathy thinks AI can significantly improve how people read books. Amazon may already be thinking about how to do this for its Kindle e-books business.

In a series of posts on X this week, Karpathy proposed building an AI application that could read books together with humans, answering questions and generating discussion around the content. He said it would be a "huge hit" if Amazon or some other company built it.

One of my favorite applications of LLMs is reading books together. I want to ask questions or hear generated discussion (NotebookLM style) while it is automatically conditioned on the surrounding content. If Amazon or so built a Kindle AI reader that "just works" imo it would be...

- Andrej Karpathy (@karpathy) December 11, 2024

A recent job post by Amazon suggests the tech giant may be doing just that.

Amazon is looking for a senior applied scientist for the "books content experience" team who can leverage "advances in AI to improve the reading experience for Kindle customers," the job post said.

The goal is "unlocking capabilities like analysis, enhancement, curation, moderation, translation, transformation and generation in Books based on Content structure, features, Intent, Synthesis and publisher details," it added.

The role will focus on not just the reading experience but also the broader publishing and distribution space. The Amazon team wants to "streamline the publishing lifecycle, improve digital reading, and empower book publishers through innovative AI tools and solutions to grow their business on Amazon," the job post said.

3 phases

Amazon identified three major phases of the book life cycle and thinks AI could improve each one.

  • First up is the publishing part where books are created.
  • Second is the reading experience where AI can help build new features and "representation" in books and drive higher reading "engagement."
  • The third stage is "reporting" to help improve "sales & business growth," the job post said.

An Amazon spokesperson didn't immediately respond to a request for comment on Friday.

'I love this idea'

There seems to be huge demand for this type of service, based on the response to Karpathy's X post.

Stripe CEO Patrick Collison wrote under the post that it's "annoying" to have to build this AI feature on his own, adding that it would be "awesome when it's super streamlined."

Reddit's cofounder Alexis Ohanian wrote, "I love this idea."

Do you work at Amazon? Got a tip?

Contact the reporter, Eugene Kim, via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Business Insider's source guide for other tips on sharing information securely.


EvenUp's valuation soared past $1 billion on the potential of its AI. The startup has relied on humans to do much of the work, former employees say.

13 December 2024 at 02:00
A man with a robot arm carrying a stack of papers

iStock; Rebecca Zisser/BI

  • EvenUp vaulted past a $1 billion valuation on the idea that AI will help automate personal injury demands.
  • Former employees told BI the company has relied on humans to do much of the work.
  • EvenUp says it uses a combination of AI and humans to ensure accuracy, and its AI is improving.

EvenUp vaulted past a $1 billion valuation on the idea AI could help automate a lucrative part of the legal business. Former employees told Business Insider that the startup has relied on humans to complete much of the work.

EvenUp aims to streamline personal-injury demands and has said it is one of the fastest-growing companies in history after jumping from an $85 million valuation at the start of the year to unicorn status in an October funding round.

Customers upload medical records and case files, and EvenUp's AI is supposed to sift through the vast amount of data, pulling out key details to determine how much an accident victim should be owed.

One of EvenUp's investors has described its "AI-based approach" as representing a "quantum leap forward."

The reality, at least so far, is that human staff have done a significant share of that work, and EvenUp's AI has been slow to pick up the slack, eight former EvenUp employees told Business Insider in interviews over the late summer and early fall.

The former employees said they witnessed numerous problems with EvenUp's AI, including missed injuries, hallucinated medical conditions, and incorrectly recorded doctor visits. The former employees asked not to be identified to preserve future job prospects.

"They claim during the interview process and training that the AI is a tool to help the work go faster and that you can get a lot more done because of the AI," said a former EvenUp employee who left earlier this year. "In practice, once you start with the company, my experience was that my managers told me not even to use the AI. They said it was unreliable and created too many errors."

Two other former employees also said they were told by supervisors at various points this year not to use EvenUp's AI. Another former employee who left this year said they were never told not to use the AI, just that they had to be vigilant in correcting it.

"I was 100% told it's not super reliable, and I need to have a really close eye on it," said the former employee.

EvenUp told BI it uses a combination of humans and AI, and this should be viewed as a feature, not a bug.

"The combined approach ensures maximum accuracy and the highest quality," EvenUp cofounder and CEO Rami Karabibar said in a statement. "Some demands are generated and finalized using mostly AI, with a small amount of human input needed, while other more complicated demands require extensive human input but time is still saved by using the AI."

AI's virtuous cycle of improvement

It's a common strategy for AI companies to rely on humans early on to complete tasks and refine approaches. Over time, these human inputs are fed into AI models and related systems, and the technology is meant to learn and improve. At EvenUp, signs of this virtuous AI cycle have been thin on the ground, the former employees said.

"It didn't seem to me like the AI was improving," said one former staffer.

"Our AI is improving every day," Karabibar said. "It saves more time today than it did a week ago, it saved more time a week ago than it did a month ago, and it saved a lot more time a month ago than it did last year."

A broader concern

EvenUp's situation highlights a broader concern as AI sweeps across markets and boardrooms and into workplaces and consumers' lives. Success in generative AI requires complex new technology to continue to improve. Sometimes, there's a gap between the dreams of startup founders and investors and the practical reality of this technology when used by employees and customers. Even Microsoft has struggled with some practical implementations of its marquee AI product, Copilot.

While AI is adept at sorting and interpreting vast amounts of data, it has so far struggled to accurately decipher content such as medical records that are formatted differently and often feature doctors' handwriting scribbled in the margins, said Abdi Aidid, an assistant professor of law at the University of Toronto who has built machine-learning tools.

"When you scan the data, it gets scrambled a lot, and having AI read the scrambled data is not helpful," Aidid said.

Earlier this year, BI asked EvenUp about the role of humans in producing demand letters, one of its key products. After the outreach, the startup responded with written answers and published a blog post that clarified the roles employees play.

"While AI models trained on generic data can handle some tasks, the complexity of drafting high-quality demand letters requires much more than automation alone," the company wrote. "At EvenUp, we combine AI with expert human review to deliver unmatched precision and care."

The startup's spokesman declined to specify how much time its AI saves but told BI that employees spend 20% less time writing demand letters than they did at the beginning of the year. The spokesman also said 72% of demand letter content is started from an AI draft, up from 63% in June 2023.

A father's injury

EvenUp was founded in 2019, more than two years before OpenAI's ChatGPT launched the generative AI boom.

Karabibar, Raymond Mieszaniec, and Saam Mashhad started EvenUp to "even" the playing field for personal-injury victims. Founders and investors often cite the story of Mieszaniec's father, Ray, to explain why their mission is important. He was disabled after being hit by a car, but his lawyer didn't know the appropriate compensation, and the resulting settlement "was insufficient," Lightspeed Venture Partners, one of EvenUp's investors, said in a write-up about the company.

"We've trained a machine to be able to read through medical records, interpret the information it's scanning through, and extract the critical pieces of information," Mieszaniec said in an interview last year. "We are the first technology that has ever been created to essentially automate this entire process and also keep the quality high while ensuring these firms get all this work in a cost-effective manner."

EvenUp technical errors

The eight former EvenUp employees told BI earlier this year that this process has been far from automated and prone to errors.

"You have to pretty much double-check everything the AI gives you or do it completely from scratch," said one former employee.

For instance, the software has missed key injuries in medical records while hallucinating conditions that did not exist, according to some of the former employees. BI found no instances of these errors making it into the final product. Such mistakes, if not caught by human staff, could have potentially reduced payouts, three of the employees said.

EvenUp's system sometimes recorded multiple hospital visits over several days as just one visit. If employees had not caught the mistakes, the claim could have been lower, one of the former staffers said.

The former employees recalled EvenUp's AI system hallucinating doctor visits that didn't happen. It also has reported a victim suffered a shoulder injury when, in fact, their leg was hurt. The system also has mixed up which direction a car was traveling, which is important information in personal-injury lawsuits, the former employees said.

"It would pull information that didn't exist," one former employee recalled.

The software has also sometimes left out key details, such as whether a doctor determined a patient's injury was caused by a particular accident, which is crucial information for assigning damages, according to some of the employees.

"That was a big moneymaker for the attorneys, and the AI would miss that all the time," one former employee said.

EvenUp's spokesman acknowledged that these problems cited by former employees "could have happened," especially in earlier versions of its AI, but said this is why it employs humans as a backstop.

A customer and an investor

EvenUp did not make executives available for interviews, but the spokesman put BI in touch with a customer and an investor.

Robert Simon, the cofounder of the Simon Law Group, said EvenUp's AI has made his personal-injury firm more efficient, and using humans reduces errors.

"I appreciate that because I would love to have an extra set of eyes on it before the product comes back to me," Simon said. "EvenUp is highly, highly accurate."

Sarah Hinkfuss, a partner at Bain Capital Ventures, said she appreciated EvenUp's human workers because they help train AI models that can't easily be replicated by competitors like OpenAI and its ChatGPT product.

"They're building novel datasets that did not exist before, and they are automating processes that significantly boost gross margins," Hinkfuss wrote in a blog post earlier this year.

Long hours, less automation

Most of the former EvenUp employees said a major reason they were drawn to the startup was because they had the impression AI would be doing much of the work.

"I thought this job was going to be really easy," said one of the former staffers. "I thought that it was going to be like you check work that the AI has already done for you."

The reality, these people said, was that they had to work long hours to spot, correct, and complete tasks that the AI system could not handle with full accuracy.

"A lot of my coworkers would work until 3 a.m. and on weekends to try to keep up with what was expected," another former employee recalled.

EvenUp's AI could be helpful in simple cases that could be completed in as little as two hours. But more complex cases sometimes required eight hours, so a workday could stretch to 16 hours, four of the former employees said.

"I had to work on Christmas and Thanksgiving," said one of these people. "They [the managers] acted like it should be really quick because the AI did everything. But it didn't."

EvenUp's spokesman said candidates are told upfront the job is challenging and requires a substantial amount of writing. He said retention rates are "in line with other hyper-growth startups" and that 40% of legal operations associates were promoted in the third quarter of this year.

"We recognize that working at a company scaling this fast is not for everyone," said the spokesman. "In addition, as our AI continues to improve, leveraging our technology will become easier and easier."

Highlighting the continued importance of human workers, the spokesman noted that EvenUp hired a vice president of people at the end of October.


20 books that Elon Musk, Jeff Bezos, and Bill Gates recommend you read

11 December 2024 at 08:01
side-by-side of Elon Musk, Jeff Bezos, and Bill Gates
Elon Musk, Jeff Bezos, and Bill Gates have some reading advice.

Yasin Ozturk/Getty Images; Paul Ellis/Getty Images; Michael Loccisano/Getty Images

  • Many executives say they've learned valuable lessons on business from books.
  • Elon Musk, Jeff Bezos, and Bill Gates are no exception.
  • Here are 20 books they've said taught them a lot about business, leadership, and the forces shaping our world.

You learn by doing, but you can also learn a lot by reading.

Many influential business figures, including Tesla CEO Elon Musk, Amazon cofounder Jeff Bezos, and Microsoft cofounder Bill Gates, say they've learned some of the most important lessons in their lives from books.

They've recommended countless books over the years that they credit with strengthening their business acumen and shaping their worldviews.

Here are 20 books recommended by Musk, Bezos, and Gates to add to your reading list:

Jeff Bezos
Amazon founder and chair Jeff Bezos pictured here in front of a giant image of a book.

Mario Tama/Getty Images

Some of Bezos' favorite books were instrumental to the creation of products and services like the Kindle and Amazon Web Services.

"The Innovator's Solution"
The Innovator's Solution book cover

Harvard Business Review Press

This book on innovation explains how companies can become disruptors. It's one of three books Bezos made his top executives read one summer to map out Amazon's trajectory.

"The Goal: A Process of Ongoing Improvement"
'The Goal: A Process of Ongoing Improvement' by Eliyahu Goldratt

Amazon

Also on that list was "The Goal," in which Eliyahu M. Goldratt and Jeff Cox examine the theory of constraints from a management perspective.


"The Effective Executive"
The Effective Executive book cover

Amazon

The final book on Bezos' reading list for senior managers, "The Effective Executive" lays out habits of successful executives, like time management and effective decision-making.

"Built to Last: Successful Habits of Visionary Companies"
'Built to Last: Successful Habits of Visionary Companies' by Jim Collins

HarperCollins Publishers/Amazon

This book draws on six years of research from the Stanford University Graduate School of Business that looks into what separates exceptional companies from their competitors. Bezos has said it's his "favorite business book."


"The Remains of the Day"
'The Remains of the Day' by Kazuo Ishiguro

Vintage International/Amazon

This Kazuo Ishiguro novel tells of an English butler in wartime England who begins to question his lifelong loyalty to his employer while on a vacation.

Bezos has said of the book, "Before reading it, I didn't think a perfect novel was possible."


"Lean Thinking: Banish Waste and Create Wealth in Your Corporation"
'Lean Thinking: Banish Waste and Create Wealth in Your Corporation' by James Womack and Daniel Jones

Simon & Schuster/Amazon

This book imparts lessons about improving efficiency based on case studies of lean companies across various industries.


Elon Musk
Elon Musk in 2020

Yasin Ozturk/Getty Images

The Tesla CEO has recommended several AI books, sci-fi novels, and biographies over the years.

"What We Owe the Future"
cover of the book "What We Owe the Future" by William MacAskill

Amazon

One of Musk's most recent picks, this book tackles longtermism, which its author defines as "the view that positively affecting the long-run future is a key moral priority of our time." Musk says the book is a "close match" for his philosophy.

"Superintelligence: Paths, Dangers, Strategies"
superintelligence

Amazon

Musk has also recommended several books on artificial intelligence, including this one, which considers questions about the future of intelligent life in a world where machines might become smarter than people.


"Our Final Invention: Artificial Intelligence and the End of the Human Era"
our final invention

Amazon

On the subject of AI, Musk said in a 2014 tweet that this book, which examines its risks and potential, is also "worth reading."


"Life 3.0: Being Human in the Age of Artificial Intelligence"
Life 3.0: Being Human in the Age of Artificial Intelligence book cover

Amazon

In this book, MIT professor Max Tegmark writes about ensuring artificial intelligence and technological progress remain beneficial for human life in the future.

"Zero to One: Notes on Startups, or How to Build the Future"
Zero to One

Amazon

Peter Thiel shares lessons he learned founding companies like PayPal and Palantir in this book.

Musk has said of the book, "Thiel has built multiple breakthrough companies, and Zero to One shows how."


"Einstein: His Life and Universe"
einstein

Amazon

Musk's reading list isn't without biographies, including this Walter Isaacson book on Albert Einstein as well as Isaacson's biography of Benjamin Franklin. Isaacson more recently published a biography of Musk himself.


Bill Gates
Bill Gates smiling.

Leon Neal/Getty Images

The Microsoft cofounder usually publishes two lists each year, one in the summer and one at year's end, of his book recommendations.

"How the World Really Works"
cover of book How the World Really Works

Penguin Random House

In his 2022 summer reading list, Gates highlighted this work by Vaclav Smil that explores the fundamental forces underlying today's world, including matters like energy production and globalization.

"If you want a brief but thorough education in numeric thinking about many of the fundamental forces that shape human life, this is the book to read," Gates said of the book.

"Why We're Polarized"
cover of book Why We're Polarized by Ezra Klein

Simon & Schuster

In this book, which was also on Gates' 2022 summer reading list and which Gates calls "a fascinating look at human psychology," Ezra Klein argues that the American political system has become polarized around identity, to dangerous effect.

"Business Adventures: Twelve Classic Tales from the World of Wall Street"
business adventures

Amazon

Gates has said this is "the best business book I've ever read." It compiles 12 articles that originally appeared in The New Yorker about moments of success and failure at companies like General Electric and Xerox.


"Factfulness: Ten Reasons We're Wrong About the Worldβ€”and Why Things Are Better Than You Think"
"Factfulness: Ten Reasons We're Wrong About the World β€” and Why Things Are Better Than You Think," by Hans Rosling

Amazon

This book investigates the thinking patterns and tendencies that distort people's perceptions of the world. Gates has called it "one of the most educational books I've ever read."


"Origin Story: A Big History of Everything"
origin story david christian

Little, Brown and Company

David Christian takes on the history of our universe, from the Big Bang to mass globalization, in this book.


"The Sixth Extinction: An Unnatural History"
"The Sixth Extinction: An Unnatural History" by Elizabeth Kolbert

Amazon

Elizabeth Kolbert plumbs the history of Earth's mass extinctions in this book, including a sixth extinction, which some scientists warn is already underway.


"The Myth of the Strong Leader: Political Leadership in the Modern Age"
the myth of the strong leader

Amazon

This Archie Brown book examines political leadership throughout the 20th century.


"The Coming Wave"
book cover of "The Coming Wave" by Mustafa Suleyman

Amazon

One of Gates' most recent book picks comes from the head of Microsoft AI.

Mustafa Suleyman's "The Coming Wave" explores the opportunities and risks posed by scientific breakthroughs like AI and gene editing.

"If you want to understand the rise of AI, this is the best book to read," Gates wrote of the book.


Google worried its Gemini Workspace product lagged rivals like Microsoft and OpenAI in key metrics, leaked documents show

10 December 2024 at 14:17
Aparna Pappu on stage at Google IO 2023
Aparna Pappu, former head of Google Workspace, onstage at Google IO 2023

Google

  • Google found its AI Workspace product lagged rivals, internal documents show.
  • A study earlier this year found the tool trailed Microsoft and Apple in brand familiarity and usage.
  • Google hopes Workspace is one way it can turn AI into profit.

As Google pours untold amounts of cash into AI, it's banking on products such as Gemini for Google Workspace to turn that investment into revenue. An internal presentation reveals the company worried that Gemini lagged behind its rivals across key metrics.

Gemini for Google Workspace puts Google's AI features into a handful of the company's productivity tools, such as Gmail, Docs, and Google Meet. Users can have the AI model rewrite an email, whip up a presentation, or summarize documents filled with dense information. Google, which charges customers extra for these add-ons, claims the features will save users time and improve the quality of their work.

Gemini for Google Workspace trailed all its key rivals, including Microsoft, OpenAI, and even Apple, when it came to brand familiarity and usage, according to an internal market research presentation reviewed by Business Insider.

The data tracked Gemini's brand strength during the first half of 2024 and included data on what percentage of audiences use and pay for Gemini for Google Workspace in certain segments.

One document seen by BI said that Workspace's Gemini tools were "far behind the competition" but that a strong Q4 could help the company in 2025.

In a written statement, a spokesperson said the data came from a study tracking brand awareness during the brand transition from "Duet AI" to "Gemini" earlier this year and called the data "old and obsolete."

"In the time since, we've brought Gemini for Workspace to millions more customers and made significant, double-digit gains across key categories, including familiarity, future consideration, and usage. We're very pleased with our momentum and are encouraged by all the great feedback we are getting from our users," the spokesperson added.

The internal data tracked Gemini's brand strength across commercial, consumer, and executive groups. In the US commercial group, Gemini scored lower than Microsoft Copilot and ChatGPT across four categories: familiarity, consideration, usage, and paid usage, one slide showed. Paid usage was measured at 22%, 16 points lower than Copilot and ChatGPT.

Data for the UK in the commercial group also showed Gemini mostly behind its rivals, although it scored slightly higher than Copilot in paid usage. In Brazil and India, Gemini for Workspace fared better than Copilot across most categories but still fell below ChatGPT, the data showed.

"Gemini trails both Copilot and ChatGPT in established markets," the document said, adding that it "rises above Copilot across the funnel" in Brazil and India.

In another part of Google's internal presentation that focused on brand familiarity, Google's Gemini for Workspace came in last place in consumer, commercial, and executive categories, trailing ChatGPT, Copilot, Meta AI, and Apple AI.

Familiarity was particularly low for the US consumer category, with Gemini for Workspace scoring just 45%, while Copilot scored 49%, ChatGPT and Apple both scored 80%, and Meta scored 82%.

'We have the same problem as Microsoft'

Microsoft's Copilot, which does similar tasks like summarizing emails and meetings, likewise struggles to live up to the hype, with some dissatisfied customers and employees who said the company has oversold the current capabilities of the product, BI recently reported.

"We have the same problem as Microsoft," said a Google employee directly familiar with the Gemini for Workspace strategy. "Just with less market share." The person asked to remain anonymous because they were not permitted to speak to the press.

Google's data showed Apple and Meta's AI products have much bigger market recognition, which could benefit those companies as they roll out business products that compete with Google's.

Internally, the Workspace group has recently undergone a reshuffle. The head of Google Workspace, Aparna Pappu, announced in October that she was stepping down, BI previously reported. Bob Frati, vice president of Workspace sales, also left the company earlier this year. Jerry Dischler, a former ads exec who moved to the Cloud organization earlier this year, now leads the Workspace group.

Are you a current or former Google employee? Got more insight to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1-628-228-1836) or email ([email protected]).


YouTube star Marques Brownlee has pointed questions for OpenAI after its Sora video model created a plant just like his

10 December 2024 at 11:23
Marques Brownlee's Sora review.
Marques Brownlee reviewed OpenAI's Sora.

Marques Brownlee

  • On Monday, OpenAI released Sora, an AI video generator, in hopes of helping creators.
  • One such creative, Marques Brownlee, wants to know if his videos were used to train Sora.
  • "We don't know if it's too late to opt out," Brownlee said in his review of Sora.

On Monday, OpenAI released its Sora video generator to the public.

CEO Sam Altman showed off Sora's capabilities as part of "Shipmas," OpenAI's term for the 12 days of product launches and demos it's doing ahead of the holidays. The AI tool still has some quirks, but it can make videos of up to 20 seconds from a few words of instruction.

During the launch, Altman pitched Sora as an assistant for creators and said that helping them was important to OpenAI.

"There's a new kind of co-creative dynamic that we're seeing emerge between early testers that we think points to something interesting about AI creative tools and how people will use them," he said.

One such early tester was Marques Brownlee, whose tech reviews have garnered roughly 20 million subscribers on YouTube. One could say this is the kind of creator that OpenAI envisions "empowering," to borrow execs' term from the livestream.

But in his Sora review, posted on Monday, Brownlee didn't sugarcoat his skepticism, especially about how the model was trained. Were his own videos used without his knowledge?

This is a mystery, and a controversial one. OpenAI hasn't said much about how Sora is trained, though experts believe the startup downloaded vast quantities of YouTube videos as part of the model's training data. There's no legal precedent for this practice, but Brownlee said that to him, the lack of transparency was sketchy.

"We don't know if it's too late to opt out," Brownlee said.

In an email, an OpenAI spokesperson said Sora was trained using proprietary stock footage and videos available in the public domain, without commenting on Business Insider's specific questions.

In a blog post about some of Sora's technical development, OpenAI said the model was partly trained on "publicly available data, mostly collected from industry-standard machine learning datasets and web crawls."

Brownlee's big questions for OpenAI

Brownlee threw dozens of prompts at Sora, asking it to generate videos of pretty much anything he could think of, including a tech reviewer talking about a smartphone while sitting at a desk in front of two displays.

Sora's rendering was believable, down to the reviewer's gestures. But Brownlee noticed something curious: Sora added a small fake plant in the video that eerily matched Brownlee's own fake plant.

Marques Brownlee's Sora review.
Sora included a fake plant in a video that was similar to Brownlee's own plant.

Marques Brownlee

The YouTuber showed all manner of "horrifying and inspiring" results from Sora, but this one seemed to stick with him. The plant looks generic, to be sure, but for Brownlee it's a reminder of the unknown behind these tools. The models don't create anything fundamentally novel; they're predicting frame after frame based on patterns they recognize from source material.

"Are my videos in that source material? Is this exact plant part of the source material? Is it just a coincidence?" Brownlee said. "I don't know." BI asked OpenAI about these specific questions, but the startup didn't address them.

Marques Brownlee's Sora review.
Sora created a video of a tech reviewer with a phone.

Marques Brownlee

Brownlee discussed Sora's guardrails at some length. One feature, for example, can make videos from images that people upload, but it's pretty picky about weeding out copyrighted content.

A few commenters on Brownlee's video said they found it ironic that Sora was careful to steer clear of intellectual property, except for that of the people whose work was used to produce it.

"Somehow their rights dont matter one bit," one commenter said, "but uploading a Mickeymouse? You crook!"

In an email to BI, Brownlee said he was looking forward to seeing the conversation evolve.

Millions of people. All at once.

Overall, the YouTuber gave Sora a mixed review.

Outside of its inspiring features (it could help creatives find fresh starting points), Brownlee said he feared that Sora was a lot for humanity to digest right now.

Brownlee said the model did a good job of refusing to depict dangerous acts or use images of people without their consent. It also adds a watermark to the content it makes, though the watermark is easy to crop out.

Sora's relative weaknesses might provide another layer of protection from misuse. In Brownlee's testing, the system struggled with object permanence and physics. Objects would pass through each other or disappear. Things might seem too slow, then suddenly too fast. Until the tech improves, at least, this could help people spot the difference between, for example, real and fake security footage.

But Brownlee said the videos would only get better.

"The craziest part of all of this is the fact that this tool, Sora, is going to be available to the public," he said, adding, "To millions of people. All at once."

He added, "It's still an extremely powerful tool that directly moves us further into the era of not being able to believe anything you see online."


Amazon isn't seeing enough demand for AMD's AI chips to offer them via its cloud

6 December 2024 at 13:30
AWS logo at re:Invent 2024

Noah Berger/Getty Images for Amazon Web Services

  • AWS has not committed to offering cloud access to AMD's AI chips in part due to low customer demand.
  • AWS said it was considering offering AMD's new AI chips last year.
  • AMD recently increased the sales forecast for its AI chips.

Last year, Amazon Web Services said it was considering offering cloud access to AMD's latest AI chips.

Eighteen months on, the cloud giant still hasn't made any public commitment to AMD's MI300 series.

One reason: low demand.

AWS is not seeing the type of huge customer demand that would lead to selling AMD's AI chips via its cloud service, according to Gadi Hutt, senior director for customer and product engineering at Amazon's chip unit, Annapurna Labs.

"We follow customer demand. If customers have strong indications that those are needed, then there's no reason not to deploy," Hutt told Business Insider at AWS's re:Invent conference this week.

AWS is "not yet" seeing that high demand for AMD's AI chips, he added.

AMD shares dropped roughly 2% after this story first ran.

AMD's line of AI chips has grown since its launch last year. The company recently increased its GPU sales forecast, citing robust demand. However, the chip company still is a long way behind market leader Nvidia.

AWS provides cloud access to other AI chips, such as Nvidia's GPUs. At re:Invent, AWS announced the launch of P6 servers, which come with Nvidia's latest Blackwell GPUs.

AWS and AMD are still close partners, according to Hutt. AWS offers cloud access to AMD's CPU server chips, and AMD's AI chip product line is "always under consideration," he added.

Hutt discussed other topics during the interview, including AWS's relationship with Nvidia, Anthropic, and Intel.

An AMD spokesperson declined to comment.

Do you work at Amazon? Got a tip?

Contact the reporter, Eugene Kim, via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Business Insider's source guide for other tips on sharing information securely.

Editor's note: This story was first published on December 6, 2024, and was updated later that day to reflect developments in AMD's stock price.

