
OpenAI launched its best new AI model in September. It already has challengers, one from China and another from Google.

20 December 2024 at 08:57
OpenAI CEO Sam Altman.

Andrew Caballero-Reynolds/AFP/Getty Images

  • OpenAI's o1 model was hailed as a breakthrough in September.
  • By November, a Chinese AI lab had released a similar model called DeepSeek.
  • On Thursday, Google came out with a challenger called Gemini 2.0 Flash Thinking.

In September, OpenAI unveiled a radically new type of AI model called o1. In a matter of months, rivals introduced similar offerings.

On Thursday, Google released Gemini 2.0 Flash Thinking, which uses reasoning techniques that look a lot like o1.

Even before that, in November, a Chinese company announced DeepSeek, an AI model that breaks challenging questions down into more manageable tasks like OpenAI's o1 does.

This is the latest example of a crowded AI frontier where pricey innovations are swiftly matched, making it harder to stand out.

"It's amazing how quickly AI model improvements get commoditized," Rahul Sonwalkar, CEO of the startup Julius AI, said. "Companies spend massive amounts building these new models, and within a few months they become a commodity."

The proliferation of AI models with similar capabilities could make it difficult to justify charging high prices to use these tools. The price of accessing AI models has indeed plunged in the past year or so.

That, in turn, could raise questions about whether it's worth spending hundreds of millions of dollars, or even billions, to build the next top AI model.

September is a lifetime ago in the AI industry

When OpenAI previewed its o1 model in September, the product was hailed as a breakthrough. It uses a new approach called inference-time compute to answer more challenging questions.

It does this by slicing queries into more digestible tasks and turning each of these stages into a new prompt that the model tackles. Each step requires running a new request, which is known as the inference stage in AI.

This produces a chain of thought, or chain of reasoning, in which each part of the problem is answered in turn. The model doesn't move on to the next stage until the current one is resolved, continuing until it ultimately arrives at a full response.

The model can even backtrack and check its prior steps and correct errors, or try solutions and fail before trying something else. This is akin to how humans spend longer working through complex tasks.
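To make that loop concrete, here is a minimal sketch in Python of how inference-time compute can be orchestrated. The ask_model helper is a hypothetical stand-in for a single request to any chat-style model API; this illustrates the general approach described above, not OpenAI's actual o1 implementation.

```python
# Minimal sketch of an inference-time-compute loop, as described above.
# ask_model is a hypothetical placeholder for one inference request to a
# chat-style LLM API; nothing here reflects OpenAI's actual o1 internals.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model API of choice")

def solve_with_reasoning(question: str, max_steps: int = 10) -> str:
    steps = []  # the growing chain of thought
    for _ in range(max_steps):
        # Each reasoning stage is its own inference request, built from the
        # original question plus everything reasoned so far.
        context = question + "\n\nReasoning so far:\n" + "\n".join(steps)
        step = ask_model("Think through only the next step:\n" + context)

        # Let the model check its own work and backtrack on errors.
        verdict = ask_model("Does this step contain a mistake? yes or no:\n" + step)
        if verdict.strip().lower().startswith("yes"):
            steps.append("(discarded a step after spotting an error)")
            continue

        steps.append(step)
        if "FINAL ANSWER" in step:  # the model signals it is done
            return step

    # Fall back to asking for a final answer from the accumulated chain.
    return ask_model("Give the final answer:\n" + question + "\n" + "\n".join(steps))
```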

DeepSeek rises

In a mere two months, o1 had a rival. On November 20, a Chinese AI company released DeepSeek.

"They were probably the first ones to reproduce o1," said Charlie Snell, an AI researcher at UC Berkeley who coauthored a Google DeepMind paper this year on inference-time compute.

He's tried DeepSeek's AI model and says it performs well on complex math problems that must be solved by thinking for longer and in stages. 

He noted that in DeepSeek's DeepThink mode, the model shows users every step of its thought process. With o1, these intermediate steps are hidden from users. 

"I've asked people at OpenAI what they think of it," Snell told BI. "They say it looks like the same thing, but they don't how DeepSeek did this so fast."

OpenAI didn't respond to a request for comment. On Friday, the startup previewed an o1 successor, called o3. Francois Chollet, a respected AI expert, called the update a "significant breakthrough."

Andrej Karpathy, an OpenAI cofounder, praised Google's new "Thinking" model for the same reasoning feature.

"The prominent and pleasant surprise here is that unlike o1 the reasoning traces of the model are shown," he wrote on X. "As a user I personally really like this because the reasoning itself is interesting to see and read — the models actively think through different possibilities, ideas, debate themselves, etc., it's part of the value add."

A DeepSeek demo

Snell shared a multistep math problem with Business Insider, which we used to test DeepSeek for ourselves: 

"Find a sequence of +, -, /, * which can be applied to the numbers 7, 3, 11, 5 to get to 24, where each of the given numbers is used exactly once."

BI put that prompt in DeepSeek's chat window on its website. The model responded initially by laying out the challenge ahead.

"Alright, so I've got this problem here: I need to use the numbers 7, 3, 11, and 5, and combine them with the operations of addition, subtraction, multiplication, and division, using each number exactly once, to get to 24," it replied. "At first glance, this seems a bit tricky, but I think with some systematic thinking, I can figure it out."

It then proceeded through multiple steps over roughly 16 pages of discussion that included mathematical calculations and equations. The model sometimes got things wrong, but it spotted these errors and didn't give up. Instead, it swiftly moved on to another possible solution. 

"Almost got close there with 33 / 7 * 5 ≈ 23.57, but not quite 24. Maybe I need to try a different approach," it wrote at one point. 

After a few minutes, it found the correct solution. 

"You can see it try different ideas and backtrack," Snell said in an interview on Wednesday. He highlighted this part of DeepSeek's chain of thought as particularly noteworthy:

"This is getting really time-consuming. Maybe I need to consider a different strategy," the AI model wrote. "Instead of combining two numbers at a time, perhaps I should look for a way to group them differently or use operations in a nested manner."

Then Google appears

Snell said other companies are likely working on AI models that use the same inference-time compute approach as OpenAI.

"DeepSeek does this already, so I assume others are working on this," he added on Wednesday.

The following day, Google released Gemini 2.0 Flash Thinking. Like DeepSeek, this new model shows users each step of its thought process while tackling problems. 

Jeff Dean, a Google AI veteran, shared a demo on X that showed this new model solving a physics problem and explained its reasoning steps. 

"This model is trained to use thoughts to strengthen its reasoning," Dean wrote. "We see promising results when we increase inference time computation!"

Read the original article on Business Insider

The tragedy of former OpenAI researcher Suchir Balaji puts 'Death by LLM' back in the spotlight

19 December 2024 at 14:56
The OpenAI logo

Chelsea Jia Feng/Paul Squire/BI

  • Suchir Balaji helped OpenAI collect data from the internet for AI model training, the NYT reported.
  • He was found dead in an apartment in San Francisco in late November, according to police.
  • About a month before, Balaji published an essay criticizing how AI models use data.

The recent death of former OpenAI researcher Suchir Balaji has brought an under-discussed AI debate back into the limelight.

AI models are trained on information from the internet. These tools answer user questions directly, so fewer people visit the websites that created and verified the original data. This drains resources from content creators, which could lead to a less accurate and rich internet.

Elon Musk calls this "Death by LLM." Stack Overflow, a coding Q&A website, has already been damaged by this phenomenon. And Balaji was concerned about this.

Balaji was found dead in late November. The San Francisco Police Department said it found "no evidence of foul play" during the initial investigation. The city's chief medical examiner determined the death to be suicide.

Balaji's concerns

About a month before Balaji died, he published an essay on his personal website that addressed how AI models are created and how this may be bad for the internet. 

He cited research that studied the impact of AI models using online data for free to answer questions directly while sucking traffic away from the original sources.

The study analyzed Stack Overflow and found that traffic to this site declined by about 12% after the release of ChatGPT. Instead of going to Stack Overflow to ask coding questions and do research, some developers were just asking ChatGPT for the answers. 

Other findings from the research Balaji cited: 

  • There was a decline in the number of questions posted on Stack Overflow after the release of ChatGPT.
  • The average account age of the question-askers rose after ChatGPT came out, suggesting that fewer new users signed up for Stack Overflow or that more existing users left the online community.

This suggests that AI models could undermine some of the incentives that created the information-rich internet as we know it today.

If people can get their answers directly from AI models, there's no need to go to the original sources of the information. If people don't visit websites as much, advertising and subscription revenue may fall, and there would be less money to fund the creation and verification of high-quality online data.

MKBHD wants to opt out

It's even more galling to imagine that AI models might be doing this based partly on your own work. 

Tech reviewer Marques Brownlee experienced this recently when he reviewed OpenAI's Sora video model and found that it created a clip with a plant that looked a lot like a plant from his own videos posted on YouTube. 

"Are my videos in that source material? Is this exact plant part of the source material? Is it just a coincidence?" said Brownlee, who's known as MKBHD.

Naturally, he also wanted to know if he could opt out and prevent his videos from being used to train AI models. "We don't know if it's too late to opt out," Brownlee said.

'Not a sustainable model'

In an interview with The New York Times published in October, Balaji said AI chatbots like ChatGPT are stripping away the commercial value of people's work and services.

The publication reported that while working at OpenAI, Balaji was part of a team that collected data from the internet for AI model training. He joined the startup with high hopes for how AI could help society, but became disillusioned, NYT wrote. 

"This is not a sustainable model for the internet ecosystem," he told the publication.

In a statement to the Times about Balaji's comments, OpenAI said the way it builds AI models is protected by fair use copyright principles and supported by legal precedents. "We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness," it added.

In his essay, Balaji disagreed.

One of the four factors in a fair-use analysis is whether a new work affects the potential market for, or value of, the original copyrighted work. If it causes that kind of harm, the use doesn't qualify as "fair use" and isn't allowed. 

Balaji concluded that ChatGPT and other AI models don't qualify for fair use copyright protection. 

"None of the four factors seem to weigh in favor of ChatGPT being a fair use of its training data," he wrote. "That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains."

Talking about data

Tech companies producing these powerful AI models don't like to talk about the value of training data. They've even stopped disclosing where they get the data from, which was a common practice until a few years ago. 

"They always highlight their clever algorithms, not the underlying data," Nick Vincent, an AI researcher, told BI last year.

Balaji's death may finally give this debate the attention it deserves. 

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson told BI recently. 

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider

A tsunami of AI deepfakes was expected this election year. Here's why it didn't happen.

18 December 2024 at 02:00
Oren Etzioni, founder of TrueMedia.org.

Oren Etzioni

  • Generative AI tools have made it easier to create fake images, videos, and audio.
  • That sparked concern that this busy election year would be disrupted by realistic disinformation.
  • The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.

Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.

India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.

"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.

He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.

The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.

In the end, the barrage never came.

"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."

He's still slightly mystified by this, although he has theories.

First, you don't need AI to lie during elections.

"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.

Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos. 

"Some of the most egregious videos that are truly realistic — those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."

One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.

With that in mind, Etzioni shared learnings from TrueMedia's first go-round this year:

  • Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
  • There's no purely technical solution for this looming problem, and AI will need regulation. 
  • Social media has an important role to play. 
  • TrueMedia achieves roughly 90% accuracy, although people asked for more. It will be impossible to be 100% accurate, so there's room for human analysts.
  • It's not always scalable to have humans at the end checking every decision, so humans only get involved in edge cases, such as when users question a decision made by TrueMedia's technology. 

The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals. 

"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."

Read the original article on Business Insider

Stripe CFO joins the board of $3 billion AI startup Vercel

17 December 2024 at 08:01
Steffan Tomlinson (right) joined Vercel's board in December 2024. Guillermo Rauch (center) is CEO, while Marten Abrahamsen (left) is CFO.

Vercel

  • Vercel said it added Steffan Tomlinson to its board.
  • Tomlinson is the CFO of Stripe and has experience taking tech startups public.
  • He used to be CFO at several other tech companies, including Palo Alto Networks and Confluent.

Vercel, an AI startup valued at more than $3 billion, just bulked up its board with the addition of a finance executive who has experience taking tech companies public.

Stripe Chief Financial Officer Steffan Tomlinson will serve as a director on Vercel's board, the startup said on Tuesday.

Tomlinson was previously CFO at several other tech startups, guiding Palo Alto Networks, Confluent, and Aruba Networks through the IPO process.

Stripe, one of the world's most valuable startups, has long been mentioned as an IPO candidate. Vercel is earlier in its lifecycle, but the AI startup has been putting some of the early pieces in place to potentially go public someday.

"Steffan's experience leading developer-focused companies from startup to public markets makes him an ideal addition to Vercel's Board of Directors as we continue to put our products in the hands of every developer," Vercel CEO and founder Guillermo Rauch said.

Steffan Tomlinson (left) joined Vercel's board in December 2024. Guillermo Rauch (center) is the CEO, while Marten Abrahamsen (right) is the CFO.

Vercel

Last year, Vercel tapped Marten Abrahamsen as its CFO. He's been building out Vercel's finance, legal, and corporate development teams and systems while leading the startup through a $250 million funding round at a $3.25 billion valuation in May.

"Steffan's financial expertise and leadership experience come at a pivotal moment for Vercel as we scale our enterprise presence and build on our momentum," Abrahamsen said.

GenAI growth

The generative AI boom has recently powered Vercel's growth. The startup offers AI tools to developers, and earlier this year it surpassed $100 million in annualized revenue.

Vercel's AI SDK, a software toolkit that helps developers build AI applications, was downloaded more than 700,000 times last week, up from about 80,000 downloads a year ago, according to NPM data.

The company's Next.js open-source framework was downloaded 7.9 million times last week, compared to roughly 4.6 million downloads a year earlier, NPM data also shows.

Abrahamsen said Vercel is being built as a company that could one day go public, but he stressed that there's no timeline or date set for such a move. 

Consumption-based business models

At Stripe and Confluent, Tomlinson gained experience with software that helps developers build cloud and web-based applications — and how these offerings generate revenue.

"Steffan's track record with consumption-based software business models makes him the ideal partner to inform strategic decisions," Rauch said.

Vercel is among a crop of newer developer-focused tech companies that charge based on usage. For instance, as traffic and uptime increase for developers, Vercel generates more revenue, so it's aligned with customers, Abrahamsen told Business Insider. 

Similarly, Stripe collects a small fee every time someone makes a payment in an app. Confluent has a consumption-based business model, too.

This is different from traditional software-as-a-service providers, which often charge based on the number of users, or seats. For instance, Microsoft 365 costs a certain amount per month, per user. 
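A toy calculation makes the contrast clear. The rates below are made-up placeholders, not Vercel's, Stripe's, or Microsoft's actual pricing; the point is only how revenue responds to usage versus head count.

```python
# Toy comparison of the two pricing models described above. All rates are
# invented placeholders, not any vendor's real pricing.

def usage_based_bill(requests_served: int, price_per_million: float) -> float:
    """Consumption pricing: the bill scales with customer traffic."""
    return requests_served / 1_000_000 * price_per_million

def per_seat_bill(num_users: int, price_per_seat: float) -> float:
    """Traditional SaaS pricing: the bill scales with head count."""
    return num_users * price_per_seat

# A customer whose traffic doubles pays twice as much under consumption
# pricing, but the same per-seat bill if head count stays flat.
print(usage_based_bill(40_000_000, 2.50))  # 100.0
print(usage_based_bill(80_000_000, 2.50))  # 200.0
print(per_seat_bill(25, 12.0))             # 300.0
```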

Tomlinson also has experience working with developer-focused companies with technical founders, such as the Collison brothers who started Stripe. 

Read the original article on Business Insider

OpenAI whistleblower found dead by apparent suicide

13 December 2024 at 21:02
Suchir Balaji, 26, was an OpenAI researcher of four years. He left the company in August and accused his employer of violating copyright law.

Joan Cros/NurPhoto via Getty Images

  • Suchir Balaji, a former OpenAI researcher, was found dead on Nov. 26 in his apartment, reports say.
  • Balaji, 26, was an OpenAI researcher of four years who left the company in August.
  • He had accused his employer of violating copyright law with its highly popular ChatGPT model.

Suchir Balaji, a former OpenAI researcher who spent four years at the company, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.

Balaji had recently criticized OpenAI over how the startup collects data from the internet to train its AI models. One of his jobs at OpenAI was to gather this information for the development of the company's powerful GPT-4 AI model, and he had become concerned that this practice could undermine how content is created and shared on the internet.

A spokesperson for the San Francisco Police Department told Business Insider that "no evidence of foul play was found during the initial investigation."

David Serrano Sewell, executive director of the city's office of the chief medical examiner, told the San Jose Mercury News that "the manner of death has been determined to be suicide." A spokesperson for the medical examiner's office did not immediately respond to a request for comment from BI.

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson said in a statement to BI.

In October, Balaji published an essay on his personal website that raised questions around what is considered "fair use" and whether it can apply to the training data OpenAI used for its highly popular ChatGPT model.

"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data," Balaji wrote. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use.' Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use."

Balaji argued in his personal essay that training AI models with masses of data copied for free from the internet is potentially damaging online knowledge communities.

He cited a research paper that described the example of Stack Overflow, a coding Q&A website that saw big declines in traffic and user engagement after ChatGPT and AI models such as GPT-4 came out.

Large language models and chatbots answer user questions directly, so there's less need for people to go to the original sources for answers now.

In the case of Stack Overflow, chatbots and LLMs are answering coding questions, so fewer people visit Stack Overflow to ask that community for help. This, in turn, means the coding website generates less new human content.

Elon Musk has warned about this, calling the phenomenon "Death by LLM."

OpenAI faces multiple lawsuits that accuse the company of copyright infringement.

The New York Times sued OpenAI last year, accusing the startup and Microsoft of "unlawful use of The Times's work to create artificial intelligence products that compete with it."

In an interview with the Times that was published in October, Balaji said chatbots like ChatGPT are stripping away the commercial value of people's work and services.

"This is not a sustainable model for the internet ecosystem as a whole," he told the publication.

In a statement to the Times about Balaji's accusations, OpenAI said: "We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness."

Balaji was later named in the Times' lawsuit against OpenAI as a "custodian," an individual who holds documents relevant to the case, according to a letter filed on November 18 that was viewed by BI.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider

In a world of infinite AI, the new luxury item could well be humans

30 November 2024 at 07:08
Residents enjoy a carnival parade on February 6, 2005 in Viareggio, Italy.

Marco Di Lauro/Getty Images

  • Modern factories, supply chains and Amazon have turned 'stuff' into a commodity. 
  • The same inevitable supply-and-demand dynamic could wash over us again with generative AI.
  • The ultimate outcome may be a new limited-edition luxury item: Humans. 

"Live experiences are the new luxury good," Kevin Hartz said in 2013 when Eventbrite, the ticketing startup he cofounded, got a big new funding round.

By that point, modern factories, supply chains, and Amazon had boiled down "stuff" to a commodity. You can now buy an overwhelming variety of tennis shoes, or spatulas, or sweatpants online. This abundance has taken much of the satisfaction away from purchasing physical things. This is why experiences, which by definition are finite, became more valuable.

There are only a few opportunities to see Taylor Swift on stage, versus the ability to purchase more than 20,000 kinds of tennis shoes on Amazon. So the price of Eras Tour tickets soars, and shoes stay cheap.

The same inevitable supply-and-demand dynamic is about to wash over us again with large language models and generative AI.

The ultimate outcome could be a new limited-edition luxury item: Humans. 

Unlimited content vs 'finite resources'

AI models can now automatically generate text, software code, medical diagnoses, images, voices, music, video, and lots more. The barriers to using this technology are falling away quickly. Anyone can fire up ChatGPT, GPT-4, DALL-E and other tools to produce an almost unlimited quantity of content.

This should be a boon to society. Many tasks will be completed more efficiently, making products and services more affordable and accessible, as venture capitalist Marc Andreessen recently explained.

There will be a reaction though: In a world of machine-generated abundance, human-centered services and experiences will become increasingly rare, valuable, and therefore desirable.

"The world's information is being turned into 1s and 0s and all this is being commoditized," Hartz told BI. "What can't be commoditized is finite resources like real estate, travel, seeing the sunset on Mediterranean, or surfing in Fiji. These are the luxury goods of the power elite."

Cooks, tutors, and robo-advisors

The more that AI automates restaurants, the more we'll want personal chefs such as John Barone, who cooks five days a week in the home of a wealthy Silicon Valley couple.

As AI tutor bots proliferate in education, the richest will pay for more exclusive access to the best human tutors for their kids.

The more robo-advisors handle our money, the stronger the urge of the wealthy to recruit savvy human experts to manage their family offices.

A new flood of automated emails

Email marketing is a simple example that some technologists are already worried about.

Generative AI tools are making it much quicker and easier to write marketing copy. The end result will be a flood of new emails that will overwhelm recipients and make them even less likely to open the messages.

"And our own machines will read those AI automated sales emails," Hartz quipped.

So, either your marketing email won't reach the humans you're trying to engage, or another AI bot will open it and you'll never be quite sure who read the message. A hand-typed email from a real human will be, relatively speaking, a rare and beautiful thing (complete with typos).

AI tutors versus small classrooms

AI models are beginning to revolutionize education, according to Sal Khan, the founder of Khan Academy. His organization has been working with OpenAI models to coach students in powerful new ways and help teachers develop class plans.

The gold standard throughout history has always been to have a personal tutor, and AI models can help personalize the education experience to bring some of this curated approach to more students, he explained during a No Priors podcast earlier this year.

"We don't have the resources to give everyone a tutor," he said during the podcast. "A generative AI tutor supporting students. That's going to be mainstream in 3 to 5 years," he added.

Pricey schools and a personal carpenter

And yet, Silicon Valley's top private schools, where many tech execs send their kids, are all about getting access to human teachers in small group settings. 

Castilleja in Palo Alto highlights a student-to-faculty ratio of 7 to 1. Nueva, a Silicon Valley school for gifted kids, promises a similar ratio. The Menlo School in Menlo Park says it has a student-teacher ratio of 10 to 1 in the upper school.

These institutions cost $58,000 to $60,000 a year, and I don't see any drop-off in demand among the tech elite. They're still jostling to get their kids into these bespoke, human-centered learning environments.

One persistent, apocryphal Silicon Valley story illustrates this point. On weekends, one tech billionaire has been known to hire a personal carpenter to hand-make wooden toys for their kids to build and play with.

Who manages the money?

What about when it comes to managing fortunes amassed by successful tech entrepreneurs? The wealthiest rely on talented financial advisors who are hired directly to oversee this money in family offices.

Bill Gates has his own private investment firm, Cascade, which has been run by money manager Michael Larson since 1994. Elon Musk's family office, Excession, has been run by a former Morgan Stanley banker called Jared Birchall for years.

Using AI for trading has been tough so far. AI models are trained on masses of data from the past. When new situations arise, they struggle to adapt quickly enough.

Even quantitative hedge fund firms, which use machine learning and other automated techniques, rely on humans. Two Sigma, a famous quant firm, is for the first time exploring ways to add traders who rely on their human judgment to make money, Bloomberg reported recently.

"The major challenge with using things like reinforcement learning for trading is that it's a non-stationary environment," AI researcher Noam Brown said on the No Priors podcast in April. He's worked on algorithmic trading strategies in the past and was a researcher at Meta before recently joining OpenAI.

"So you can have all this historical data but it's not a stationary system," he explained, referring to how markets respond swiftly to world events and other developments.

Part of the problem relates to what he calls sample efficiency. Humans are good at learning quickly from a small amount of data, while AI models need mountains of information to train on.

"Humans are very good at adapting to novel situations," he added. "And you run into these novel situations pretty frequently in financial markets."

Social media bots vs. martial arts

AI is making social media increasingly machine-driven, too. Soon, human content creators will be vying for attention with content generated by AI models.

Last month, Meta CEO Mark Zuckerberg unveiled more than 25 new AI assistants with different personalities that use celebrities' images. Users will be able to interact with these bots on Meta's platforms in the future.

In a recent podcast, he described this new supply-and-demand situation well, saying human creators can't keep up with demand from followers.

"There are both people who out there who would benefit from being able to talk to an AI version of you," Zuckerberg explained. "You and other creators would benefit from being able to keep your community engaged."

So Meta will make AI versions of celebrities that can post constantly. Again, this supply will be infinite. And actually interacting with the real human celebrity will become rarer and more valuable.

Meanwhile, when Zuckerberg is relaxing outside of work, he spends some of that time pursuing a very human pastime: Rolling around with other humans in martial arts contests.

Medical models and human doctors

AI models, such as Google DeepMind's Med-PaLM 2, are becoming incredibly good at answering medical questions and analyzing x-rays and other health data. But when wealthy parents have really sick children, they will still seek out the smartest doctors in the relevant fields of medicine.

You can see this in Silicon Valley's embrace of medical concierge services that provide special access to doctors and other human health specialists.

One Medical succeeded by offering better access to human doctors, and Amazon ended up buying it for almost $4 billion.

"We're inspired by their human-centered, technology-forward approach," an Amazon executive said when the deal was announced.

'Utility, value and signaling'

Hartz, a venture capitalist who now chairs Eventbrite's board, says successful technologists will continue to spend heavily on human experiences. But he says this depends on the activity and the motivations behind different actions.

He breaks this into "utility, value and signaling."

Many standard, common situations can be handled by software bots or even physical machines. Repetitive tasks at work and some educational functions are examples of these utility-type solutions.

In other situations, users will get more value from having machines handle the work, so humans can focus on more valuable tasks. If you're a well-paid machine-learning engineer, it will be better to have a robot clean your house so you can focus more on your job, he explained.

And then there will still be many situations where humans will want to enjoy their success and signal the fruits of their achievements. These activities will increasingly focus on finite human resources and experiences, Hartz said.

"You can't put on headset and pretend to be in Fiji," he added.

Read the original article on Business Insider

Google's search business is all about distribution. The DOJ wants to take this away, and it's freaking investors out.

21 November 2024 at 13:53

NurPhoto/Getty Images

  • The DOJ proposed banning Google from paying for search distribution deals.
  • Google's search dominance relies on distribution, not just technology.
  • Investors worry Google's market share could drop if distribution deals end.

The online search business is not about technology. It's about distribution.

The US Department of Justice made that clear Wednesday when it proposed fixes for a judge's earth-shaking ruling that Google is an illegal monopolist.

The DOJ's remedies cut to the heart of how Google distributes its search engine and how that broad reach is key to the company's dominance of this crucial and lucrative market.

The government's suggestion that Google be forced to sell Chrome initially grabbed the headlines. But, on Thursday, the potential crackdown on all distribution deals caught investors' attention.

The US government's lawyers said Google should be banned from offering "anything of value for any form" of search distribution. That especially includes Apple, but also covers any other partner or company, with limited exceptions, according to the DOJ's executive summary.

ISI Evercore internet analyst Mark Mahaney called this distribution crackdown "draconian" and said investors were surprised by the severity of the proposals. Google shares dropped 5% on Thursday.

The reason for this concern is that the online search business is not really about the quality of the technology. The edge comes from massive distribution and the huge volume of user queries that come with such a broad reach.

When people use Google to search on the web, the company monitors what results they click on. It feeds these responses back into its search engine, and the product gets constantly better. For instance, if most people click on the third result for a particular query, Google's search engine will likely adjust and rank that result higher in the future.

This self-reinforcing system is very hard to compete against. This is how the DOJ put it on Wednesday:

"Search engines rely on user data to improve search quality — an outcome that drives more users to a search engine. Users attract advertisers, and advertising dollars fund general search engines, creating a perpetual feedback loop that further entrenches Google."

One of the few ways to compete is to get more distribution than Google and pull in the extra queries and click-behavior data.

For many years, Google has paid to lock down most major sources of distribution. The most famous deal is with Apple. Google pays the iPhone maker about $20 billion a year to be the default search engine on Apple's mobile devices.

If the search business were really about the quality of Google's technology, why would the company have to pay Apple $20 billion a year? That question is at the heart of the DOJ's case, and Google has never been able to answer it convincingly. It just keeps paying Apple.

If Google search technology is so great, the company shouldn't have to pay for distribution. People would just flock to its search engine all by themselves.

We could soon see a real-world test of this.

If the judge in this case agrees with the DOJ, then these payments will end — not just with Apple, but with any other third-party source of online distribution for Google's search engine.

This may be what freaked investors out on Thursday. They know that the search business is mainly about distribution, and Google may no longer be allowed to pay for it.

In a worst-case scenario, Google could lose a material slice of the US search market, according to Mahaney.

"We believe Google's default search placements via contractual agreements represent 50%+ of Google's US search queries," he estimated on Thursday.

If half of Google's US search queries go away, that could threaten the self-reinforcing cycle of user click data improving its results.

Suddenly, Google Search may not be so uncatchable.

Google's top lawyer, Kent Walker, said the DOJ's proposals would "break" the company's search engine and "deliberately hobble people's ability to access" the service.

Google gets to propose its own remedies on December 20.

Read the original article on Business Insider
