Yann LeCun, Meta's chief AI scientist, said Europe should be recruiting US-based scientists who face reductions in federal research funding.
Fabrice COFFRINI / AFP via Getty Images.
Meta's Yann LeCun warned there could be an exodus of US-based scientists due to funding cuts.
The Trump administration wants to slash NIH funding, causing concern in the scientific community.
LeCun said Europe should be recruiting them by offering more favorable research conditions.
The United States could soon see an exodus of tech talent, according to Meta's chief AI scientist, Yann LeCun.
"The US seems set on destroying its public research funding system. Many US-based scientists are looking for a Plan B," LeCun wrote in a post on LinkedIn on Saturday.
The Trump administration has issued several executive orders to reduce funding, sparking concern among the US-based scientific community.
It announced drastic cuts to the National Institutes of Health that would effectively end billions in federal funding for biomedical research. A judge on Friday extended a temporary block on the cuts as lawsuits filed by states and universities, which say the cuts are illegal, make their way through the court system.
"A sane government would never do this," former Harvard Medical School Dean Jeffrey Flier said of the funding cuts in a post on X.
Elon Musk's cost-cutting DOGE squad has also been deployed to federal agencies, including the NIH, National Oceanic and Atmospheric Administration, the Environmental Protection Agency, and NASA.
The executive order that Trump signed against diversity, equity, and inclusion mandates has also caused concern that it could threaten scientific research at universities.
"At least one university is telling its researchers to refrain from terms like "biodiversity" to steer clear of detection by AI-based grant review systems, " Scientific American reported.
LeCun, who earned his bachelor's and Ph.D. in France, said the changes in the United States should be a wake-up call for European institutions and companies.
"You may have an opportunity to attract some of the best scientists in the world," he wrote.
He shared seven things he believes talented researchers want to see at any university, company, or public research agency they're joining:
Access to top students and junior collaborators.
Access to research funding with little administrative overhead.
Good compensation (comparable with top universities in the US, Switzerland, Canada).
Freedom to do research on what they think is most promising.
Access to research facilities (e.g. computing infrastructure, etc).
Ability to collaborate/consult with industry and startups.
Moderate teaching and administrative duties.
His message to Europe: "To attract the best scientific and technological talents, make science and technology research professions attractive."
More than 2 in 5 parents use ChatGPT to help their kids with homework, according to a new study.
Catherine Delahaye/Getty Images
Some parents are resorting to ChatGPT to find answers to their children's homework.
Those who spoke with Business Insider said it makes learning more engaging and jump-starts assignments.
The educational use of AI tools like ChatGPT is debated, with some raising concerns about its effect on critical thinking.
Roughly two years ago, Phil Birchenall's 11-year-old daughter, Daisy, was having a hard time with math.
"βShe's a bright girl," Birchenall, an AI consultant based in a suburb of Manchester, England, told Business Insider. Yet her long division skills were stopping her from acing the standardized tests known as SATs, which are required for secondary school in the UK.
Birchenall said he last learned math in the eighties, and problem-solving techniques have changed since then. He could have hired a tutor, but he resorted to what he felt was a more personal and cost-effective approach. He built a GPT, a customizable version of ChatGPT, one evening to help his daughter get back on track.
"βI fed in all of the subject areas that Daisy was falling behind on. I added in that she was in the UK, and she was doing a SAT," he said. To keep her engaged, he gave it the personality of a dog, inspired by his daughter's love for their cocker spaniel. It didn't take more than a few weeks with the "tutor" for Daisy to get up to speed. "βShe smashed her SATs in the end," he said.
Parents in the US share the stress of homework and exam preparation, too. Nearly 60% of parents said they struggle to help their children with homework, according to a September 2024 survey of 1,006 parents of students in kindergarten through eighth grade in the US, conducted by Prodigy, a maker of educational games.
Math may be the most feared subject. Over 80% of parents said they avoid helping their children with it, while 20% of parents spurn science, and 19% steer clear of language arts. And they're turning to generative AI for help: 44% of parents said they use ChatGPT to find answers to their children's homework.
Data shows that students rely heavily on ChatGPT for homework, as visits often spike while school is in session. But the merits of the bot are still up for debate. Educators in support of it say it can make assignments more approachable, helping students get over their writer's block, or coaching them through math problems. Critics worry that it could foster a kind of mental inertia, with students outsourcing too much intellectual work to a chatbot.
New skills for a new learning paradigm
Stephen Salaka, a software engineering director from Florida, and his 14-year-old son both identify as neurodivergent. They excel under clear directions, but tend to struggle with more open-ended, creative work. He said they turn to ChatGPT to work things out through the Socratic method.
"He'll get an assignment, it'll be like, hey, draw a poster about, you know, the Civil War or something. It's very nebulous," he told BI. The bot helps his son get organized, talk through his thoughts, and move forward with the assignment.
As generative AI technology becomes more integrated into students' lives, Salaka encourages parents to help them cultivate new critical thinking skills.
"At some point in time, AI work is going to be distinguishable from human sources, and because of that, there's no way for us to track the provenance of information," Salaka said. "So disinformation, deepfakes, all of these things are going to become much more prevalent as we move forward."
Students, he said, should learn to start asking questions like: "Is that source valid? What is the rationale behind that source to say, hey, this is true? Are there other sources that corroborate?"
For now, AI tools are beginning to display sources in their outputs. Earlier this month, OpenAI launched "deep research", a new agent that conducts extensive research online, synthesizes it, and documents its outputs with "clear citations and a summary of its thinking."
In January, Anthropic launched Citations, an API feature that lets its chatbot, Claude, provide "detailed references to the exact sentences and passages it uses to generate responses." The AI-powered search engine Perplexity also includes footnotes linking to original sources in every answer it generates.
There are still many parents who are apprehensive about tools like ChatGPT, according to Audrey Wisch, cofounder of Curious Cardinals, a tutoring and mentorship network based in San Francisco. Over the past 20 months, Wisch has taught over 75 workshops for parents on how to use AI to optimize their productivity. Before the workshops, she asks parents to fill out a registration form detailing their AI anxieties, among other points, and has collected more than 2,000 responses to date.
"They have this anxiety that they're going to screw up their kids," she said. "So there's just so much fear and there's so much misunderstanding. I think some of the biggest fears are cutting corners β will my kid not know how to write?"
Curious Cardinals pairs students in kindergarten through 12th grade with mentors to help them with schoolwork, pursue passion projects, or provide career guidance, and has incorporated AI education into those services.
Wisch said that a few parents have started asking for AI mentoring, too. "We have two mentors who are teaching moms AI one on one," she said. "What I love is seeing these women become very digitally empowered who otherwise are digitally insecure."
SecurityPal's base in Kathmandu employs close to 200 workers.
SecurityPal
SecurityPal helps major AI companies complete security questionnaires.
The company has been operating a 24/7 command center in Kathmandu, Nepal since 2023.
"There's a lot of technology talent just nestled amongst the foothills," CEO Pukar Hamal said.
At 10:00 p.m. in San Francisco, it's 11:45 the next morning in Kathmandu, Nepal. And for the AI security startup SecurityPal, those nearly fourteen hours are just enough time to stay one step ahead of its customers.
"I was looking at places around the world where I could tap into because my philosophy always was if a customer could send us a questionnaire by 5:00 p.m. and it was back in their inbox at 6:00 or 7 a.m. in the morning, that would be like magic," SecurityPal CEO Pukar Hamal told Business Insider.
The company, which launched in San Francisco in 2020, opened a 24/7 security operations command center in Kathmandu two years ago this month.
It now employs almost 200 workers in Kathmandu. They're largely in their 20s and 30s and have a broad range of expertise, from those who studied technical subjects like cybersecurity and computer science to those with a liberal arts focus in economics or psychology.
SecurityPal has won many of the biggest (and buzziest) clients in the tech industry. It has become a key player behind some of the AI industry's top names, handling security questionnaires for companies like OpenAI, Langchain, and Cursor. Its main focus is making the security review process easier for enterprise companies.
When these companies take on a new customer, they are typically vetted by that customer through a security questionnaire. This complex document covers everything from how the company handles data to how it identifies vulnerabilities in its systems to the physical measures it takes to protect its facilities.
In its early days, SecurityPal's employees manually filled out every questionnaire. "What we learned quickly was that there were ways to automate the steps," Hamal said.
Now, when a company signs up, SecurityPal's analysts first spend time understanding the company's full security and compliance posture. They'll parse, annotate, and add context to historical questionnaires, infrastructure documents, compliance reports, and other relevant information to create "discrete" question and answer pairs, Hamal said. Using AI, the company has built a repository of over 2 million pairs, which can be used for various customer requests.
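The mechanics aren't spelled out here, but the general pattern of matching an incoming questionnaire item against a repository of pre-approved question-and-answer pairs can be sketched in a few lines of Python. Everything below, from the sample pairs to the similarity function, is a hypothetical illustration rather than SecurityPal's actual system.

```python
# Illustrative sketch only -- not SecurityPal's actual pipeline.
# Matches an incoming questionnaire item against stored question/answer pairs
# using simple string similarity from the standard library.
from difflib import SequenceMatcher

# Hypothetical repository of "discrete" question/answer pairs.
qa_repository = [
    {"question": "Do you encrypt customer data at rest?",
     "answer": "Yes, customer data is encrypted at rest using AES-256."},
    {"question": "How often do you run vulnerability scans?",
     "answer": "Automated scans run weekly; penetration tests run annually."},
]

def best_match(incoming_question, repository):
    """Return the stored pair whose question most resembles the incoming one."""
    def similarity(pair):
        return SequenceMatcher(None, incoming_question.lower(),
                               pair["question"].lower()).ratio()
    return max(repository, key=similarity)

match = best_match("Is customer data encrypted when stored?", qa_repository)
print(match["answer"])
```

A system operating on millions of pairs would more likely use semantic search over embeddings rather than raw string similarity, but the retrieval idea is the same.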
As rapid advancements in AI drive companies to adopt new tools, Hamal said he'd noticed more "paranoia" in the questionnaires SecurityPal's clients receive.
"I think companies just have a really hard time digesting what's actually happening with their data," he said. "The questions are different. They're more nuanced." When it comes to large language model providers, for example, "they want to know how the model is used, how the model's trained, where it is hosted," he added.
SecurityPal serves 200 to 300 customers annually. Most employ 500 to 1,000 people or more, and one-fifth of its customer base is publicly traded companies. At its Series A funding round in 2022, it was valued at $105 million. Hamal did not disclose the company's current valuation but said it's north of that now.
From Silicon Valley to Silicon Peaks
SecurityPal's 33-year-old CEO, Pukar Hamal, was born in Nepal.
SecurityPal
Hamal, now 33, left Kathmandu for New York when he was 7. He studied international relations at Stanford and planned to become a diplomat. He ultimately ended up in the tech industry, launching his first startup, Teamable, in 2016. It was acquired in 2020.
With SecurityPal, Hamal didn't initially have plans to launch a second base in his motherland. He was looking at places that were hours ahead of the US, like India and the Philippines, so when it was nighttime for his clients, his team could be hard at work.
"I just was always skeptical about the talent here," he said. Once he realized how many students Nepal was sending to the US, UK, and Australia, though, he began to reconsider.
From 2000 to 2016, the number of Nepali students enrolled in degree programs abroad surged by 835%, reaching 44,255 students by 2017, according to a report from World Education News and Reviews. Nepal's ratio of international students to domestic students is also significantly higher than neighboring countries like India.
When COVID-19 hit, though, many students who went abroad returned to be closer to family and stayed. They were not only fluent in English, but also equipped with technical skills that rivaled those of workers in Silicon Valley, Hamal said. He saw potential in this pool of talent.
"I'm trying to champion the term 'Silicon Peaks' because there's a lot of technology talent just nestled amongst the foothills," he said.
Economic growth in Nepal has steadily increased since 2018, when a new government was formed after years of political turmoil and peace negotiations. Between 2023 and 2024, the services sector, which includes industries like tourism, real estate, and trade, was the greatest driver of Nepal's gross domestic product, according to data from the World Bank. Agriculture also plays a significant role in the economy.
Through the base in Kathmandu, Hamal is hoping to reshape how the world views Nepal's workforce.
"Nepal's always been sort of like this feeder country of high-quality physical labor," he said. He mentioned the Gurkhas, Nepali soldiers that the British East India Company began recruiting in the early nineteenth century as an elite fighting force or the scores of Nepali migrant workers who built stadiums in Qatar for the 2022 FIFA World Cup.
"The talent has always been there, but the perception was that it skewed more on the brawn side," Hamal said. "Now, it's the brain."
The NYSE is moving its Chicago headquarters to Dallas.
f11photo/Getty Images
The New York Stock Exchange is relocating its Chicago headquarters to Dallas.
Texas is attracting businesses due to its favorable taxes and regulations.
Texas is now home to 3.3 million startups and small businesses and 1 in 10 US publicly traded companies.
The Lone Star State is on track to become a business hub, rivaling the likes of Silicon Valley and Wall Street.
The New York Stock Exchange announced Wednesday that it would be relocating its Chicago headquarters to Dallas. The fully electronic exchange will serve as a platform for companies to list and trade securities.
"As the state with the largest number of NYSE listings, representing over $3.7 trillion in market value for our community, Texas is a market leader in fostering a pro-business atmosphere," NYSE Group president Lynn Martin said in a statement.
The NYSE did not respond to Business Insider's request for comment.
Over the past decade, hundreds of businesses, including Charles Schwab, Oracle, and HPE, have established a base in Texas, their leaders citing business-friendly taxes and regulations.
Billionaire Elon Musk has added momentum to the migration. He moved Tesla's headquarters to Austin in 2021. In July, he announced that he would also move X and SpaceX to Texas, and said that other companies would follow suit.
Texas is now home to more than 3.3 million startups and small businesses and 1 in 10 publicly traded companies in the US, according to a fact sheet from Texas Gov. Greg Abbott. At $2.6 trillion, it's the eighth-largest economy in the world, surpassing countries like Russia, Canada, and Italy.
"Texas is the most powerful economy in the nation, and now we will become the financial capital of America," Texas Gov. Greg Abbott said in a statement on Wednesday. "With the launch of NYSE Texas, we will expand our financial might in the United States and cement our great state as an economic powerhouse on the global stage."
The arrival of the centuries-old NYSE signals that Dallas may be emerging as the state's financial capital. In January, the Texas Stock Exchange, a smaller competitor backed by major financial institutions like BlackRock, Citadel Securities, and Charles Schwab, announced that it had filed its Form 1 registration with the US Securities and Exchange Commission and was targeting a 2026 launch.
Anna Kim/Getty, Yana Iskayeva/Getty, bob_bosewell/Getty, Morsa Images/Getty, Antonio Garcia Recena/Getty, Morsa Images/Getty, Anna Moneymaker/Getty, Tyler Le/BI
Three weeks into Donald Trump's second term, the makeup of Elon Musk's DOGE staff is becoming clearer.
BI saw a list of about 30 White House DOGE staffers; nearly all are early- or mid-career professionals.
They include tech advisors, a former Clarence Thomas clerk, and a former McKinsey consultant.
Software developers. Former Supreme Court clerks. An ex-McKinsey consultant. Corporate financiers.
Three weeks into the second Trump administration, the composition of Elon Musk's Department of Government Efficiency team is becoming clearer.
White House records seen by Business Insider show about 30 people now work for the White House's DOGE office. At least four of the names haven't been previously reported. Among them are Kendall Lindemann, 24, who worked for a healthcare firm founded by the senior DOGE official Brad Smith, and Adam Ramada, 35, an investor whose firm took a stake in a SpaceX supplier last year.
Other new names are Kyle Schutt, 37, a tech startup worker who was most recently employed at an AI interviewing software company, and Austin Raynor, 36, a lawyer who clerked for Supreme Court Justice Clarence Thomas. Raynor was interviewed on NTD news in November, outlining how Trump could challenge birthright citizenship.
Since January 20, the DOGE team has moved quickly to dismantle federal agencies, reduce staff, slow down enforcement, and gain access to the digital systems that help shepherd trillions of taxpayer dollars. While the US DOGE Service is part of the White House Office, the White House hasn't released details about its inner workings or staff; the list seen by BI helps shed light on the powerful young squadron tasked with remaking the federal government.
Most of the more than two dozen people on the White House DOGE staff are early-career professionals in their 20s and 30s. They have backgrounds primarily in tech but also in finance, law, and politics. BI confirmed their backgrounds through public records, including social media profiles and legal filings.
The records categorize nearly all of the DOGE staffers as volunteers. Wired earlier reported that the DOGE engineer Luke Farritor, who is also on the list seen by BI, posted to Discord looking for software engineers. He said the position would be "paid" but did not say by whom, according to the outlet.
The records seen by BI don't include the names of some DOGE affiliates who have appeared in legal filings and news reports as working for other agencies, such as the Treasury employees Tom Krause and Marko Elez. Elez resigned from the Treasury Department last week after The Wall Street Journal reported on racist social media posts from an account linked to him; Musk said on X one day later that he would rehire him.
Beyond the gates of 1600 Pennsylvania Avenue, the executive order behind DOGE called for agencies to create their own "DOGE Teams," and people linked to Elon Musk have popped up in employee directories at the Treasury Department, the Consumer Financial Protection Bureau, the Department of Education, and other agencies, news reports say. Some of the people listed as White House DOGE staff are also employees of other agencies.
The White House didn't reply to a request for comment.
Here's a breakdown of the DOGE team.
Tech
Tech workers make up the largest chunk of the DOGE team on the list reviewed by BI.
Some are veteran software engineers. Schutt was most recently the chief technology officer at Kerplunk, an AI interviewing software startup, and has a Ph.D. from Virginia Tech.
Schutt didn't respond to a request for comment.
Others on the list are relatively junior. Edward Coristine is 19, and Farritor, 23, a former SpaceX intern, was a senior at the University of Nebraska when he was named a Thiel Fellow last year.
Two close associates of Musk are also on the list. Steve Davis, who was trained as an aerospace engineer, now leads The Boring Company, Musk's tunneling company; Jehn Balajadia has been described as Musk's assistant. The New York Times reported that she was also listed as a Department of Education employee.
Davis and Balajadia didn't respond to requests for comment.
Finance
Some White House DOGE staff have corporate finance and management backgrounds.
Lindemann graduated from the University of Tennessee's business college in 2022. She worked for McKinsey for about two years, according to her LinkedIn profile, and in 2024 left for Russell Street Ventures, the health industry investment firm run by Smith, the senior DOGE official. Smith worked at the Center for Medicare and Medicaid Services in the first Trump administration. Lindemann worked at Russell Street Ventures as a venture associate, an entry-level job in the venture capital industry.
Lindemann, Davis, McKinsey, and Russell Street Ventures didn't respond to requests for comment.
According to campaign finance records, the White House DOGE staffer Ramada is a Miami venture capitalist who donated more than $1,000 to Republican fundraising committees last year. One of his companies, Spring Tide Capital, invested in Impulse Space, which was founded by a SpaceX employee and has contracted with SpaceX.
Ramada and Spring Tide Capital didn't respond to a comment request.
Law
There are five lawyers listed as part of the DOGE White House staff, and the majority have clerked for conservative Supreme Court justices.
Raynor, a graduate of the University of Virginia's law school, clerked for Justice Clarence Thomas for the Supreme Court term that started in October 2016 and spent time as an associate at Sullivan & Cromwell. During Trump's first term, he served as an assistant to the solicitor general. He has argued in front of the Supreme Court at least eight times. He was most recently a senior attorney and special counsel for the Supreme Court practice at the Pacific Legal Foundation, a libertarian organization.
Raynor's November TV segment and Trump's executive order to end birthright citizenship used similar language, though Raynor wasn't the first person to make such arguments.
Raynor declined to comment when reached by BI.
Other names on the list, which have been previously reported, include Jacob Altik, Keenan Kmiec, and Stephanie Holmes. Altik was selected to clerk for Justice Neil Gorsuch for the term starting in October 2025, while Kmiec clerked for Chief Justice John Roberts, as well as for Samuel Alito while Alito was still a federal circuit judge. James Burnham, who ProPublica described as DOGE's general counsel, clerked for Justice Neil Gorsuch and was previously a partner at the law firm Jones Day.
Politics
Of the list of White House DOGE staffers, only one appears to have previously worked in politics. Chris Young, whom Musk hired as an advisor over the summer to help his get-out-the-vote work, was most recently a senior political advisor at PhRMA, a trade association that advocates on behalf of pharmaceutical companies. He was previously a national field director for the Republican National Committee. He didn't respond to a request for comment.
Have a tip? Know more? Reach Jack Newsham via email ([email protected]) or via Signal (+1-314-971-1627). Do not use a work device.
Sam Altman's World Network surveyed over 90,000 users on AI and dating.
It found that 26% of people flirt with chatbots, knowingly or not.
World Network's new product, World ID Deep Face, aims to verify humans on dating apps and platforms.
When the movie Her debuted in 2013, its plot about a man falling in love with an AI operating system seemed, if not wholly original, a vision of the distant future.
About a decade later, though, relationships between AI chatbots and humans are becoming more commonplace.
Take Replika, a dating app launched in 2017 that lets users create customized romantic chatbots. By 2023, it had about 676,000 daily active users, with the average user spending two hours a day on the app, according to figures from Apptopia.
One in four people admitted to flirting with a chatbot either knowingly or unknowingly, according to a survey conducted by Sam Altman's futuristic project, World, formerly known as Worldcoin. The company surveyed 90,000 of the 25 million people on its network about their feelings on love in the age of AI.
The majority of respondents said they are still wary of interacting with bots. About 90% said they want dating apps to have a system for verifying real humans. About 60% of users said they have either suspected or discovered that they matched with a bot.
To help users combat deepfakes, World launched a product called World ID Deep Face. It relies on World's existing verification system, which takes pictures of humans' irises with a melon-sized orb, to verify on platforms like Google Meet, Zoom, or dating apps that users are communicating with real humans in real-time video or chat interactions. World is in the process of rolling out the system in beta.
"As someone that uses dating apps, all the time I get catfished," Tiago Sada, the chief product officer of Tools for Humanity, the company building World's technology, told Business Insider. "You see profiles that they're just too good to be true. Or you realize this person has six fingers. Why do they have six fingers? Turns out it's AI."
(From left to right) McKinsey's Rodney Zemmel, PwC's Dan Priest, Deloitte's Jim Rowan, and EY's Matt Barrington are at the forefront of AI strategy for clients.
Matt Barrington/EY, Rodney Zemmel/McKinsey, Dan Priest/PwC, Jim Rowan/Deloitte, Elizabeth Fernandez/Getty, Tyler Le/BI
Consulting firms have become a destination for some companies looking to make the most out of AI.
AI leaders at these firms use tools like GPT Enterprise and internal chatbots like McKinsey's Lilli.
BI asked AI leaders at several large companies to share tips for using the technology.
Working with artificial intelligence can sometimes feel more like an art than a science.
That's why many companies are turning to consulting firms for guidance on how to maximize the technology.
Top firms are not only helping companies develop AI tools, upskill their workforces, and identify potential security weaknesses, but they are also creating chatbots and agents to organize their firm's knowledge and streamline routine tasks. As a result, AI leaders at consulting firms tend to have a handle on AI strategies that can work for a broad range of tasks.
Business Insider asked AI execs at five top consulting firms (Deloitte, EY, KPMG, McKinsey, and PwC) to share their best tips for using AI in everyday work.
The AI leaders said they regularly used various AI tools, including models from OpenAI, Google, Microsoft, and Anthropic, as well as tools built internally, like McKinsey's Lilli, EY's EYQ, and ChatPwC, PwC's internal version of ChatGPT.
Here's how they use AI and some of their advice for getting the most out of it. Responses are edited for brevity.
How do you use AI in your work?
Dan Priest, US chief AI officer at PwC: I do a lot of research with it. For instance, I was doing some analysis on labor productivity and how AI will improve labor productivity. The typical search would produce labor statistics. Well, AI, the big powerful foundation models, it'll grab those labor facts and statistics, it'll do analysis, it'll show you trends, discontinuities, or cause analyses. It is much more robust. In terms of research and analysis, it emerges as a thought partner versus just a search engine.
You discover blind spots in your thinking. I was writing a policy and I thought it was pretty comprehensive, and I ran it through GPT Enterprise, and it found two other points in the policy that we should be adding.
Todd Lohr, head of ecosystems at KPMG: Part of my job as a leader is being able to synthesize information. AI is very helpful in that it has allowed me to understand trends and the marketplace and has enabled me to have a broader view as a leader and synthesize and ingest a lot more information.
It has also been helpful for communications in terms of preparing for meetings, follow-up from meetings, as well as correspondence.
Rodney Zemmel, global leader for McKinsey Digital and firmwide AI transformation: I've found it to be excellent at "level one" creativity and coming up with things you generally will not have thought of. It's an excellent aid to brainstorming for our teams. I haven't yet seen it as having true unbounded creativity, i.e., a new way of looking at the world. That won't be far behind, though.
What are your go-to AI prompts or advice you have for writing good prompts?
Dan Priest, PwC: I'll give some context about what I'm trying to do, a short, punchy question, and then ask follow-ups that make them increasingly specific, and then you can adjust based on what you're seeing.
During the week, I travel a lot, and if I get 100 or 200 emails in a day, it's just really hard to keep up with every single one of them. I go into Microsoft Teams, activate Copilot, and ask it to review all messages in Teams and email and find the actions for me. I'll just spend 15, 20 minutes at the end of the day, do the prompt "Identify emails that are addressed to me directly or that have an action for me," and it produces the list. It's not perfect, but it's good at it.
I like to cook, and I don't like to waste food in the refrigerator. So I will prompt, "Create a recipe with these ingredients," and I'll just list the things that I want to get rid of in the refrigerator.
Rodney Zemmel, McKinsey: Too many people are still using it to look something up. The trick is to have a dialogue with it and to get comfortable building agents that can execute simple tasks. Let AI handle the 80% of tasks we're mediocre at, so we can excel at the exciting 20%, as one of my colleagues likes to say.
Matt Barrington, Americas chief technology officer at EY: Context management is paramount. I keep separate AI "workspaces" for different focus areas, such as technical Q&A or drafting client communications.
I also give the AI clear instructions about the style and depth of response I want, like "Provide a concise, bullet-point summary," or "Act as a finance expert," or "cite credible sources or references and provide links."
What challenges are there with using AI?
Dan Priest, PwC: It is changing muscle memory.
I've spent a lot of years developing a certain writing style, a certain research technique, and I had to change that. And I am better for having changed it, but it doesn't happen overnight.
It was just like anything that you learn, you have to be disciplined about learning it, and then it sticks.
Todd Lohr, KPMG: The biggest challenge is connecting all my individual data sources that are disparate. If I want to build my own personal AI, the challenge is having access to the right information and knowledge.
I have been deliberate about addressing this challenge when I took over in my current role. I put everything in one folder and personally curated the content that I agreed with and liked.
Matt Barrington, EY: The main challenge is keeping pace with the innovation. There's a constant flow of new models, tools, and capabilities, and it can be tough to pinpoint which option is best for a given task. I follow newsletters, participate in AI-focused events, and learn from AI practitioners, but in my view, hands-on experimentation is the most effective way to stay informed and find what genuinely works.
Also, it is important to remember that while these models are confident and impressive, they can be wrong. Always validate the information and output before you utilize it.
What do your clients want to know about incorporating AI into their business?
Dan Priest, PwC: The questions have sort of shifted. A year ago, they were asking, "What's the killer use case?" "What's the most industrialized use case?" "What's the use case that's going to produce the most savings or the greatest efficiencies?" Now, the questions we're getting are less about those technical use cases and they're much more about "How do you evolve the business strategy to take advantage of AI capabilities?"
Rodney Zemmel, McKinsey: They want to understand how AI agents can integrate with their workforce, acting like talented interns who need proper training to be effective. We've also seen the conversation move from just productivity to growth and productivity, and from finding ways to do things better and faster than humans to doing things that no human could possibly do.
Do you have something to share about what you're seeing in consulting? Business Insider would like to hear from you. Email our consulting team from a nonwork device at [email protected] with your story, or ask for one of our reporter's Signal numbers.
This isn't the first time Trump has hinted at his interest in creating a sovereign wealth fund. During a campaign stop at the Economic Club of New York in September, Trump called for a state-owned investment fund to finance "great national endeavors."
Discussions about such a fund also surfaced under former President Biden's administration. Biden's top aides circulated plans for a fund to finance national security interests.
It is unclear how an American fund would be supported or how it would operate.
However, lawmakers might have something of a model in The Alaska Permanent Fund, which distributes the money it makes to the state's residents as annual dividends.
"The fund was initially established with revenue from mineral extraction, primarily oil, but within a few years after its initiation, its primary source of revenue is investment returns," Sarah Cowan, executive director of the Cash Transfer Lab, previously told Business Insider. "It diversifies the Alaskan economy because, at this juncture, the revenue from this fund primarily does not come from oil."
Alaska's fund offers benefits that mimic a universal basic income: a no-strings-attached, recurring payment distributed to people regardless of socioeconomic status. But there are some key differences. For one, the dividend doesn't come out of taxes; it's paid only annually and doesn't equate to a livable wage.
Federal lawmakers likely see a sovereign wealth fund serving a different purpose, like supporting industries or financing supply chain initiatives. Creating one at the national level also comes with more legislative hurdles.
"Typically, many countries went through special law making to create a SWF, defining the SWF's source of capital, investment mandate, and supervision system," Winston Ma, an adjunct professor at NYU and the author of The Hunt for Unicorns: How Sovereign Funds Are Reshaping Investment in the Digital Economy, wrote to Business Insider by email. "Therefore, it's not a simple corporate setup. It will involve lots of collaborative work between the executive and legislative branches."
Sovereign wealth funds, like Alaska's or Norway's Government Pension Fund Global, which is the largest in the world, are often funded by wealth generated from state-owned natural resources. The issue is that "natural resources in the US are mostly owned by the states," Ma said. So, consolidating those revenue streams might require some back and forth.
Elon Musk wants to see DOGE workers putting in long hours to meet its cost-cutting goals.
Anna Moneymaker/Getty Images
Elon Musk says the Department of Government Efficiency works 120 hours a week.
Musk is trying to cut $1 trillion or more from the federal budget.
Some executives told BI that Musk's posts are meant to set the tone for the work culture he expects.
It's been just under two weeks since Elon Musk stepped into his role at the Department of Government Efficiency, and he's already bringing his Silicon Valley drive to Washington.
"DOGE is working 120 hours a week. Our bureaucratic opponents optimistically work 40 hours a week. That is why they are losing so fast," Musk posted to X on Sunday.
Less than a day earlier, he had extolled the virtues of weekend work.
"Very few in the bureaucracy actually work the weekend, so it's like the opposing team just leaves the field for 2 days!" he wrote on X.
That's 17 hours and 8 minutes of work a day, including Saturday and Sunday, as one X user noted in the comments.
Musk is known for his relentless work ethic. He's said he works 120-hour weeks and expects his employees to work long hours, too.
When he officially took control of Twitter in October 2022, he immediately mandated 80-hour workweeks. But whether his hard-charging tech executive mentality will work in the more staid realm of government is an open question.
An operational efficiency expert told Business Insider that Musk's approach might be the best way to get DOGE quickly up to speed.
"Musk's tweet underscores his well-known philosophy on work ethic and the inefficiencies of bureaucracy," Shannon Copeland, CEO of SIB, a cost-cutting firm, told BI by text. "While a 120-hour workweek isn't a practical or sustainable solution for most, the principle behind it resonates. Companies that prioritize efficiency, automation, and proactive cost management will always outperform those weighed down by bureaucracy."
Roi Ginat, CEO of Endless AI, which developed a video AI assistant and has raised $100 million, said Musk's posts shouldn't be taken literally.
"Driving a team too hard for too long leads to fatigue and burnout. Many people simply won't function well without enough sleep, and as fatigue sets in, errors increase," he told BI by text. "I believe that Elon's tweet is about an effort, not a new standard at DOGE."
Ginat, who said he regularly works 85 hours a week, added that "my work is on my mind most of my time, and it's an important part of the deal, but great work ideas often come while I hike with my kids."
People who are giving up on dating apps might be turning to old-fashioned social media to find their next partner instead.
Instagram is the most popular place to meet and communicate with a potential partner, at least for the hundreds of thousands of people using Rizz, an AI dating assistant.
Rizz, which is slang for charisma, launched in 2022. It functions like a dating coach, helping users craft witty responses to messages and less cringey pick-up lines.
"Dating coaches charge $30 to $300 an hour. Not everybody can afford paying over $30 an hour. So the moment ChatGPT launched, I already had this idea in the back of my mind," Rizz cofounder and CEO Roman Khaves told Business Insider.
Users upload screenshots of conversations from dating apps, messaging apps, or social media to Rizz, which then generates a response. The app leverages ChatGPT but fine-tunes its responses based on prior responses it generated. So the more you use it, the better and more personalized its responses will be, Khaves said.
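Rizz hasn't published its implementation, but the loop described above (conversation text in, a suggested reply out, conditioned on earlier suggestions) can be sketched with the OpenAI Python client. The model name, prompt, and function below are assumptions for illustration only, and the screenshot-to-text step is assumed to have already happened.

```python
# Minimal sketch of the general pattern described above -- not Rizz's actual code.
# Assumes the conversation text has already been extracted from the screenshot
# (e.g., by an OCR step) and that an OpenAI API key is configured.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_reply(conversation_text, prior_suggestions):
    """Ask a chat model for a reply, conditioning on earlier suggestions."""
    history = "\n".join(prior_suggestions) or "None yet."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a dating coach. Suggest one short, witty reply."},
            {"role": "user",
             "content": f"Conversation so far:\n{conversation_text}\n\n"
                        f"Replies you suggested earlier:\n{history}"},
        ],
    )
    return response.choices[0].message.content

print(suggest_reply("Them: I love hiking. You: ?", prior_suggestions=[]))
```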
Rizz costs $7 a week or $20 a month and has about half a million monthly active users. The platform's user base is 65% male and 35% female, and most users are between 18 and 35. They're largely from English-speaking countries.
Khaves shared data with BI showing that 22% of the screenshots uploaded to Rizz now come from Instagram, an 8% increase from a year ago. About 15% come from iMessage and 11% from WhatsApp. Dating apps comprise a smaller share, with Tinder at 11%, Hinge at 10%, and Bumble at 4%.
Messaging and social media are a growing destination for dating, according to data from Rizz.
Rizz
It's not uncommon for potential partners to meet on a dating app and quickly move their conversation to text or even Instagram. But Rizz sees evidence in the data that Instagram is becoming a first-choice destination for dating.
"Instagram has evolved into this fascinating dual-purpose platform for relationships," Khaves said. "Whether it's sliding into DMs or naturally connecting through shared content, people are both starting and deepening their connections entirely within Instagram."
Dating on Instagram isn't new. Celebrity couples β like Joe Jonas and Sophie Turner, or exes Dua Lipa and Anwar Hadid β and regular couples alike have found love on Instagram. The data from Rizz shows it's only becoming more popular as users give up on regular dating apps.
Rizz's algorithms can track the source of screenshots, distinguishing whether a user uploads them from an Instagram story or a direct message. What's less clear is the history behind the relationship. "The one thing we don't know is how long they know the person, whether they've been friends with them on Instagram for a while or not," Khaves said.
Rizz is already developing features tailored to Instagram, where the first move could be linked to a story or a picture rather than a prompt on Hinge. Instagram also introduced a feature in September that allows users to leave comments under stories, making it easier for mutual followers to interact with each other.
The reason people are moving away from dating apps might be simple: they offer too much data about a person upfront.
"Think about it: in real life, you don't walk up to someone with a resume of your height, job, and dating intentions. Instead, you get to know them gradually through shared moments and mutual interests, which is exactly what Instagram enables," Khaves said.
DeepSeek's impact on the AI industry will likely extend far beyond this week, AI executives say.
Jonathan Raa/NurPhoto
Chinese startup DeepSeek shocked markets this week after releasing a cheaper rival to OpenAI's o1.
Silicon Valley has reacted to DeepSeek's release with a mix of panic and awe.
Some AI startups see an opportunity in DeepSeek's open-source success.
In the tech industry, the tides can turn quickly, especially when it comes to AI.
Last week, OpenAI was the industry leader, developing what many saw as the most advanced AI models on the market, which led to a skyrocketing valuation.
This week, its standing was in question as Silicon Valley eyed a more cost-effective competitor: DeepSeek.
The Chinese company recently released a challenger to OpenAI's o1 reasoning model called R1. Users who've tested both said R1 rivals the capabilities of o1 and comes at a substantially cheaper cost.
The news shocked markets on Monday, leading to a stock sell-off that wiped out almost $1 trillion in market cap. AI insiders said the frenzy is warranted: DeepSeek's methods are a game changer for the industry.
CEOs of startups facilitating the AI boom, whether by supplying hardware, providing security services, or building agents, told Business Insider that DeepSeek's success creates more opportunities for smaller companies to flourish.
Roi Ginat, the cofounder and CEO of EndlessAI, which develops the video AI assistant Lloyd, said DeepSeek's success could widen the pool of who can develop AI technology, and who can access it.
"DeepSeek's success represents a democratization of AI development, where smaller teams with limited resources can meaningfully compete with well-funded tech giants," Ginat wrote by email. "This has catalyzed a wave of innovation from startups and research labs previously considered peripheral to the field."
While OpenAI might not lose its standing in the industry, Ginat said its role could change. "The industry is witnessing a fascinating tension between two competing visions. One focuses on pursuing artificial general intelligence (AGI) through increasingly powerful and comprehensive models. The other emphasizes practical applications through efficient models and methods targeted at specific use cases and benchmarks," he said, comparing OpenAI and DeepSeek. "This tension drives innovation in both directions, and also exists within the big companies."
Pukar Hamal, the CEO of SecurityPal, which helps companies like OpenAI complete security questionnaires, said the industry should temper expectations of immediate change.
"If the DeepSeek team truly can cut training and inference costs by an order of magnitude, it could spark far broader deployment of AI than analysts anticipate," Hamal told Business Insider. "On the flip side, it'll take more than a few tough earnings calls to make the biggest AI players reconsider the staggering GPU investments we're seeing for 2025."
Meta recently committed $60 billion to AI infrastructure investments. President Donald Trump also announced Stargate last month, a joint venture between OpenAI, Oracle, and SoftBank that will invest $500 billion into AI infrastructure across the country.
One of the biggest debates among AI innovators is whether open-source models, which the public can access and modify, are more likely to drive breakthroughs than closed-source models. OpenAI says it keeps its models closed for safety, while DeepSeek's models are open-source.
Satya Nitta, the cofounder and CEO of Emergence AI, a company developing AI agents, said that "DeepSeek R1 is a meaningful advance in broadening access to AI reasoning, spotlighting the power of open source and setting a new benchmark for reasoning."
Hamal said we should still approach open-source development cautiously, even if it'll eventually dominate the industry.
"An 'open source' model of unknown alignment invites serious public safety and regulatory questions. If DeepSeek's mobile app keeps climbing the charts, we could end up with a discussion similar to the recent calls to block TikTok in the US," he said. White House advisor David Sacks also raised concerns about DeepSeek's training methods when he told Fox News that it is 'possible' DeepSeek used OpenAI's models to train its own AI model.
Still, "openness typically wins in the long run," Hamal said. "If DeepSeek helps reset an increasingly closed foundational model market, that can be a net positive β so long as we maintain the guardrails that protect customers and the public at large."
If there's one lesson AI executives are taking away from this week, though, it's that it's possible to do more with fewer resources.
Matthew Putman, CEO of Nanotronics, which designs AI-controlled factories, said, "To me, the competition itself is less significant than the validation of a broader principle: AI models can be built more affordably and applied far beyond large language models."
Yann LeCun, Meta's chief AI scientist, said there were misconceptions about DeepSeek.
Fabrice COFFRINI / AFP via Getty Images.
DeepSeek's success has put Silicon Valley on edge about Chinese competition.
After DeepSeek released its latest model, AI investors panicked.
Meta's chief AI scientist, Yann LeCun, said the market's reaction to DeepSeek was "unjustified."
Silicon Valley is melting down over DeepSeek, an emerging Chinese competitor in AI, but Meta's AI chief says the hysteria is unwarranted.
DeepSeek caused alarm among US AI companies when it released a model last week that, on third-party benchmarks, outperformed models from OpenAI, Meta, and other leading developers. It did so with subpar chips and, it said, vastly less money.
Bernstein Research found that DeepSeek priced its models significantly below equivalent models from OpenAI: DeepSeek's latest reasoning model, R1, cost $0.55 for every 1 million tokens inputted, while OpenAI's o1 reasoning model charged $15 for the same number of tokens. A token is the smallest unit of data an AI model processes.
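Those per-token prices make the gap easy to quantify. As a back-of-the-envelope illustration (the request size below is an arbitrary assumption, not a figure from either company), the input cost of a single prompt follows directly from the rates quoted above:

```python
# Back-of-the-envelope comparison using the input prices quoted above.
# The request size is an arbitrary example, not a benchmark figure.
DEEPSEEK_R1_PER_M = 0.55   # USD per 1 million input tokens
OPENAI_O1_PER_M = 15.00    # USD per 1 million input tokens

tokens_in_request = 2_000  # roughly a 1,500-word prompt

for name, price in [("DeepSeek R1", DEEPSEEK_R1_PER_M),
                    ("OpenAI o1", OPENAI_O1_PER_M)]:
    cost = tokens_in_request / 1_000_000 * price
    print(f"{name}: ${cost:.5f} per request")

# At these list prices, o1's input cost is roughly 27 times R1's.
```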
The news hit the markets Monday, triggering a tech sell-off that wiped out $1 trillion in market cap. Nvidia, known for its premium chips, which can cost at least $30,000, lost almost $600 billion.
Yann LeCun, the chief AI scientist for Facebook AI Research, however, says there's a "major misunderstanding" about how the hundreds of billions of dollars invested in AI will be used. In a Threads post, LeCun said the huge sums of money going into US AI companies were needed primarily for inference, not training AI.
Inference is the process in which AI models apply their training knowledge to new data. It's how popular generative AI chatbots like ChatGPT respond to user requests. More user requests mean more inference is required, and processing costs increase.
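To make the distinction concrete, here is a minimal sketch of inference using the open-source Hugging Face transformers library: the weights are already trained, and each user request simply runs new text through them. The model and prompt are arbitrary choices for illustration.

```python
# Minimal illustration of inference: a pretrained model answering a new prompt,
# with no training involved. Requires the `transformers` package (plus a backend
# such as PyTorch) and downloads a small model on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

# Every user request triggers another forward pass like this one, which is why
# serving a popular chatbot means paying for inference again and again.
result = generator("The capital of France is", max_new_tokens=10)
print(result[0]["generated_text"])
```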
LeCun said that as AI tools become more sophisticated, the cost of inference will rise. "Once you put video understanding, reasoning, large-scale memory, and other capabilities in AI systems, inference costs are going to increase," LeCun said, adding, "So, the market reactions to DeepSeek are woefully unjustified."
Thomas Sohmers, a founder of Positron, a hardware startup for transformer model inference, told Business Insider he agreed with LeCun that inference would account for a larger share of AI infrastructure costs.
"Inference demand and the infrastructure spend for it is going to rise rapidly," he said. "Everyone looking at DeepSeek's training cost improvements and not seeing that is going to insanely drive inference demand, cost, and spend is missing the forest for the trees."
This means that, as its popularity grows, DeepSeek is expected to handle more requests and spend a significant amount on inference.
A growing number of startups are entering the AI inference market, aiming to simplify output generation. With so many providers, some in the AI industry expect the cost of inference to drop eventually.
But this applies only to systems handling inference on a small scale. The Wharton professor Ethan Mollick has said that for models like DeepSeek V3 that provide free answers to a large user base, inference costs are likely to be much higher.
"Frontier model AI inference is only expensive at the scale of large-scale free B2C services (like customer service bots)," Mollick wrote on X in May. "For internal business use, like giving action items after a meeting or providing a first draft of an analysis, the cost of a query is often extremely cheap."
In the past two weeks, leading tech firms have stepped up their investments in AI infrastructure.
Meta CEO Mark Zuckerberg announced more than $60 billion in planned capital expenditures for 2025 as the company ramps up its own AI infrastructure. In a post on Threads, Zuckerberg said the company would be "growing our AI teams significantly" and had "the capital to continue investing in the years ahead." He did not say how much of that would be devoted to inference.
Last week, President Donald Trump announced Stargate, a joint venture between OpenAI, Oracle, and SoftBank set to funnel up to $500 billion into AI infrastructure across the US.
DeepSeek, a small Chinese startup, said it built AI models using less capital and inferior Nvidia chips.
Dado Ruvic/REUTERS
Business leaders are reacting to DeepSeek's rise, which sent US tech stocks tumbling.
Intel's former CEO said DeepSeek would expand the AI market instead of diminishing it.
Meta promised a new "leading state of the art" AI model and pledged more investment.
Tech leaders and their companies have reacted with admiration and insights after AI company DeepSeek launched its flagship large language model, R1.
Just days after DeepSeek launched, the app dethroned ChatGPT with the most downloads on Apple's Top Free Apps chart, rivaling systems by OpenAI, Google, and Meta despite being developed at what the startup said was a fraction of the cost.
The rise of the Chinese AI startup founded by quant hedge fund manager Liang Wenfeng was followed by a sharp sell-off of major AI and chip companies in the US tech markets on Monday.
Nvidia, a leader in AI hardware, saw its stock plunge by over 17%, erasing hundreds of billions from its market cap, amid concern about DeepSeek's ability to achieve competitive results with less advanced and significantly cheaper hardware.
Here's how Silicon Valley leaders have responded to DeepSeek so far.
Satya Nadella
Nadella, Microsoft's CEO, posted on LinkedIn on Monday that "Jevons paradox is at play again," referencing the concept that greater efficiency in production often fuels higher demand. "As AI becomes more efficient and accessible, its adoption will soar, transforming it into an indispensable commodity," he added.
Earlier last week at the World Economic Forum in Davos, Nadella also said that other tech companies "should take the developments out of China very, very seriously."
Marc Andreessen
Andreessen, cofounder of Andreessen Horowitz, praised DeepSeek's R1 model and called it "one of the most amazing and impressive breakthroughs" and "a profound gift to the world" in an X post on Friday. On Sunday, the Silicon Valley venture capitalist, who has been advising President Trump on tech policy, went on to call DeepSeek R1 "AI's Sputnik moment."
Pat Gelsinger
Gelsinger, the former CEO of Intel, challenged the market's reaction to DeepSeek's advancements, particularly the sell-off of AI chip stocks. He said the market is "getting it wrong" and suggested that the company's "dramatically cheaper" AI models could expand the market for AI applications rather than diminish it.
Gelsinger also credited DeepSeek's Chinese engineers, who "had limited resources, and they had to find creative solutions."
Wisdom is learning the lessons we thought we already knew. DeepSeek reminds us of three important learnings from computing history: 1) Computing obeys the gas law. Making it dramatically cheaper will expand the market for it. The markets are getting it wrong, this will make AI…
Yann LeCun
Yann LeCun, chief AI scientist for Meta's Fundamental AI Research division, challenged the perception that China is surpassing the US in AI in a LinkedIn post, arguing that the correct reading is that "open source models are surpassing proprietary ones."
He said that DeepSeek "came up with new ideas and built them on top of other people's work." He also said the hype around DeepSeek's drastically cheaper models is a bit overblown.
In a Threads post, LeCun said there is a "major misunderstanding about AI infrastructure investments," noting that the billions of dollars in investment are largely going toward inference, not training AI.
Inference is the process in which AI models apply their training knowledge to new data. It's how popular generative AI chatbots like ChatGPT respond to user requests. So the more user requests a model receives, the higher the cost to process inference.
LeCun said that as AI tools become more sophisticated, processing costs will rise, too.
"Once you put video understanding, reasoning, large-scale memory, and other capabilities in AI systems, inference costs are going to increase," LeCun said. "So, the market reactions to DeepSeek are woefully unjustified."
Mark Zuckerberg
Though Zuckerberg did not directly respond to DeepSeek's rise, the Meta CEO posted on Facebook on Friday promising that a new version of Meta's open-source AI model family Llama would become "the leading state of the art model" upon release.
Llama is a family of AI models designed for natural language processing tasks like text generation, translation, and summarization, and, like DeepSeek's models, it is promoted as open-source.
Pledging computing power equivalent to more than 1.3 million GPUs by the end of the year, he wrote that Meta is "planning to invest $60-65B in capex this year while also growing our AI teams significantly" and that the company has additional capital to continue investing over the next few years.
Meta did not immediately respond to a request for comment.
Nvidia
In a statement, a spokesperson for Nvidia told Business Insider that DeepSeek is an "excellent AI advancement and a perfect example of Test Time Scaling," illustrating how to leverage "widely available models and compute that is fully export control compliant." The spokesperson added that to make inference work, it "requires significant numbers of NVIDIA GPUs and high-performance networking."
Jensen Huang, Nvidia's CEO, has not directly responded to DeepSeek thus far.
Sam Altman
OpenAI CEO Sam Altman wrote on X on Monday that DeepSeek's R1 model is "impressive," especially because of its price point.
The Chinese AI lab recently rolled out new models that researchers say are just as good as OpenAI's o1 model.
Altman embraced the new competition and said OpenAI will continue to deliver better models. The CEO said "more compute is more important now than ever to succeed" and said he is looking forward to bringing "AGI and beyond" to the world.
"We will obviously deliver much better models and also it's legit invigorating to have a new competitor! We will pull up some releases," the OpenAI boss added.
A report compiled by Management Consulted found starting salaries in consulting have remained stagnant for two years.
Thomas Barwick/Getty Images
Consultant starting salaries have remained flat since 2023, a new report found.
Management Consulted found salaries were largely stagnant at boutique, MBB, and Big Four firms.
The industry has been impacted recently by slowing demand and AI-fueled productivity increases.
Starting salaries for consultants at both top firms and boutique consultancies largely remained flat for the second year in a row, according to a new report from Management Consulted, a company that provides online resources and career coaching to professionals trying to land jobs in consulting.
The report found that starting pay has remained stagnant since 2023 as the consulting industry reels from a slowdown in demand for services, despite some recent signs of improvement. Previously, annual increases of 5 to 10% were standard for the industry, according to Management Consulted.
The company's 2025 Consulting Salaries Report included over 100 firms and was based on submissions and offer letters shared by its readers and clients who work in consulting. Management Consulted said it does not include salary information that it is unable to verify.
The report found that starting total compensation at the Big Four professional services firms (Deloitte, PwC, KPMG, and EY) has not increased since 2023. That was true both for new hires coming out of undergraduate programs and for the higher-paid hires coming out of MBA or PhD programs.
The same was largely true for new hires at the MBB firms (McKinsey & Company, Bain & Company, and Boston Consulting Group), which are widely considered the most prestigious strategy consulting firms and are known for their competitive pay packages.
Management Consulted said it expected salaries to remain flat despite some increase in demand for consulting services in 2024, which came after a couple of years of downturn that saw major firms conduct layoffs or delay start dates for new hires.
The plateau is notable given that consulting compensation surged in 2022 and 2023, according to Management Consulted's 2023 salary trends video. The last major increase before that was in 2019.
In 2023, post-MBA hires at the top-tier firms earned a base salary of $192,000, a performance bonus of up to $60,000, and a signing bonus of $35,000. Pre-MBA hires earned a base salary of $112,000, a performance bonus of up to $30,000, and a signing bonus of around $5,000.
Salaries and performance bonuses rose across the industry that year, with several firms enhancing benefits like profit-sharing, paid leave, and retirement contributions. Boston Consulting Group even overhauled its compensation structure in a bid to attract and retain talent.
One reason salaries remained the same in 2024, according to the report, is productivity advancements sparked by generative AI and remote work. The report also said fewer consultants were leaving the industry due to limited opportunities elsewhere, meaning the stagnant salaries could be another potential side effect of the so-called white-collar recession.
"AI enablement is enabling consulting firms to accomplish more with fewer hires. Productivity gains, combined with slower attrition, reduce the need for new hires and stall salary growth," Namaan Mian, chief operating officer of Management Consulted, said in comments shared with Business Insider.
Mian also said the perception of the value of hiring MBAs, who typically make a higher starting salary than consultants coming out of undergrad, varies widely.
"Firms historically pay MBAs twice as much, but don't get twice the value from them. This doesn't fly in an efficiency oriented environment," Mian said. "This is why we're seeing less hiring from MBA programs and more from undergraduate ones."
Some firms also used changes in their variable compensation, in which pay is partially determined by performance via bonuses, to make their pay packages look more attractive, the report said, adding that only 5 to 10% of consultants typically earn the maximum amount of their bonus.
Management Consulted said it expects hiring to pick up as demand for consulting services and attrition increase in the coming years. However, it said salaries for new hires could remain stagnant.
Do you work in consulting and have insights to share about the industry? Contact this reporter at [email protected] or via the encrypted messaging app Signal at lvaranasi.70.
Mitesh Agrawal has moved on from Lambda Labs to Positron, a new player in the AI hardware space.
Kavita Agrawal
Lambda Labs COO Mitesh Agrawal has left to head AI hardware startup Positron.
Lambda focuses on deploying cloud infrastructure to customers and is valued at over $2 billion.
Positron aims to compete with Nvidia by offering faster, energy-efficient AI hardware.
Lambda Labs, an Nvidia partner, has lost its chief operating officer to a little-known company building hardware for the AI industry.
Lambda COO Mitesh Agrawal told Business Insider he stepped into a new role as CEO of Positron earlier this month. Positron builds hardware for transformer model inference, which is how chatbots like ChatGPT respond to user requests.
Agrawal's departure is significant given his role in shaping Lambda into one of Silicon Valley's best-funded and most valuable startups.
During its Series C round last February, the company was valued at about $1.5 billion. Agrawal declined to share the company's exact valuation but said it has grown to over $2 billion since then.
Agrawal told BI that when he joined Lambda in 2017, the company was focused on building machines for image generation models. This was five years after twin brothers Stephen and Michael Balaban founded it as a company developing facial recognition technology. It wasn't long after Agrawal's arrival, however, that the company shifted its focus, designing infrastructure for full-scale data centers and pivoting into cloud services.
He said Lambda's business now focuses on deploying cloud infrastructure to customers, renting out servers powered by Nvidia's graphics processing units. It also offers the requisite software, including APIs for inference and machine learning libraries for customers.
Agrawal said that his move to Positron comes amid a growing appetite for inference, the capacity for AI models to apply their training to new data.
Between chatbots like ChatGPT and xAI's Grok, and new reasoning models like OpenAI's o1 tackling PhD-level problems, "the curve of technology for inference is just going up, which means the computational requirement is really going up," Agrawal said. So, he said he's thinking a lot about "how to solve and how to run these models with as much efficiency as possible."
He believes Positron is well-positioned to take on that challenge.
Positron was founded in 2023 by Thomas Sohmers, whom Agrawal met in 2015. The two also overlapped at Lambda during Sohmers's stint at the company between 2020 and 2021. Sohmers, who will move into the role of chief technology officer, told BI that, in simplest terms, the company is "building hardware competing against Nvidia."
Positron says its hardware beats Nvidia's H100 and H200 GPUs, the chips that fueled the AI race before Nvidia released its more powerful Blackwell line, on performance, power, and affordability.
Going up against a behemoth like Nvidia, which overtook Apple as the world's most valuable company last week, is no easy task for an up-and-coming company. But by focusing more narrowly on providing hardware for transformer model inference, Sohmers said Positron can differentiate itself from the competition.
Transformer models, neural networks that learn the context and meaning of data in order to generate new data, are behind some of the most popular generative AI applications. Unlike convolutional neural networks, which underpinned previous decades of machine learning advances, transformer models have greater memory demands. Sohmers said he saw an opportunity to capitalize on those demands.
"I would say the whole reason we started Positron is we thought that there was a better way to do things," Sohmers said. "Nvidia, as a large company that also has a lot of other product focuses wasn't going to really optimize and focus on the particular niche that we're focused on, which is transformer model inference."
Agrawal, too, is confident in the performance and energy efficiency of Positron's hardware. Its compatibility with a range of transformer models will also help it attract customers from competitors, he said.
"Nvidia has such a strong ecosystem in the world of AI models. You hear about their CUDA moat, and you heard about the software moat," he said, referring to the software network the company has built between its products to retain customers.
"What Positron really did was completely remove this friction of anything," Agrawal said. That means a company can take a model trained on an Nvidia GPU and "run that model's inference on a Positron card just like you would run on an Nvidia GPU," he said.
Agrawal said the jump from an established player like Lambda to a young startup like Positron presents an "exciting challenge."
"You get to compete against an industry veteran as well as in a field that is just so big," he said.
It's not uncommon to have a stressful dream about work, but it might signify something bigger about your life.
Yasinemir/Getty Images
Over three-fifths of US workers have nightmares about work.
Common nightmares include being late to work, job loss, and romantic dreams about coworkers.
Dreams are often a reflection of the inner self, therapists say.
For many people, work extends well beyond the standard 9-to-5. The pressure from their jobs can disrupt sleep, leading to restless nights and stressful dreams.
In a survey of 1,750 working adults in the US conducted by Each Night, a sleep resource platform, more than three-fifths of workers reported having a nightmare about their jobs.
The most common workplace nightmare is being late to work, according to an analysis of global search data conducted by the job search platform JobLeads. Losing your job, getting a new job, and colleague romances were also commonly reported dreams.
Annie Wright, a psychotherapist who operates boutique trauma therapy centers in California and Florida, told Business Insider that dreams are worth analyzing.
The fear of being late to work can signify a sense of uncertainty, she said. "It doesn't terribly surprise me that that's showing up because, you know, we have that classic dream in college and high school of being late for a test," she said.
Through the lens of gestalt psychotherapy (a therapeutic approach that focuses on understanding a person's present experience), every element of a dream, from the setting to the people, places, and objects, can be viewed as a reflection of the dreamer's inner self.
Wright offered a hypothetical workplace dream in which the dreamer sees their boss, closest colleague, and a challenging client. The boss is yelling at the colleague about their interactions with the client.
Wright said she would ask the dreamer to describe the qualities they associate with their boss. "Critical, demanding, and hostile," they might say, she said. Then, they would describe their colleague. "Supportive, kind, but incompetent sometimes," she said.
She would ask the dreamer to think about all these aspects within their self.
"What does it say that the critical, angry part of you is attacking the, you know, supportive but kind part of you," she said. Perhaps the person would realize that the dream was about something else entirely.
"I cannot turn off this critical voice about my inability to get pregnant," she said, as an example. "When we unfold it from that lens, it can become less about the workplace itself or the workplace figure itself and more about what those different parts symbolized by the workplace or workplace figures represent."
Stressful dreams often reflect a person's sense of vulnerability in the wider world, she said. Whether it's the workplace or the middle school hallway (the most common setting for a stress dream), the setting of a dream is a subject that the dreamer's mental state latches onto. "In other words, the state of vulnerability seeks that out and gloms on to it," she said.
Here's a closer look at the most searched workplace stress dreams, according to JobLeads data.
Being late for work is the most searched dream; it can signify a sense of uncertainty in other parts of your life.
Yann LeCun, Meta's chief AI scientist, speaks at the World Economic Forum in Davos.
FABRICE COFFRINI / AFP via Getty Images
DeepSeek, an open-source Chinese AI company, has riled Silicon Valley with its rapid rise.
Meta's chief AI scientist said DeepSeek has benefited from the open-source community.
Meta's AI program has remained open-source, while OpenAI has shifted to closed-source.
Silicon Valley was on edge this week after DeepSeek, a Chinese AI company, released its R1 model, which outperformed models from leading American AI companies like OpenAI, Meta, and Anthropic in third-party benchmarks.
For Meta's chief AI scientist, Yann LeCun, the biggest takeaway from DeepSeek's success was not the heightened threat posed by Chinese competition but the value of keeping AI models open source so that anyone can benefit.
It's not that China's AI is "surpassing the US," but rather that "open source models are surpassing proprietary ones," LeCun said in a post on Threads.
DeepSeek's R1 is itself open source, as is Meta's Llama. OpenAI, which was originally founded as an open-source AI company with a mission to create technology that benefits all of humanity, has more recently shifted to a closed-source approach.
LeCun said DeepSeek has "profited from open research and open source."
"They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it," LeCun said. "That is the power of open research and open source."
When DeepSeek unveiled R1 on January 20, which it said "demonstrates remarkable reasoning capabilities," the company said it was "pushing the boundaries" of open-source AI.
The announcement took Silicon Valley by surprise and was easily the most talked-about development in the tech industry during a week that included the World Economic Forum, TikTok uncertainty, and President Donald Trump's busy first few days in office.
Days after DeepSeek's announcement, Meta CEO Mark Zuckerberg said Meta planned to spend over $60 billion in 2025 as it doubles down on AI. Zuckerberg has been an outspoken advocate of open-source models.
"Part of my goal for the next 10-15 years, the next generation of platforms, is to build the next generation of open platforms and have the open platforms win," he said in September. "I think that's going to lead to a much more vibrant tech industry."
Those who support open source say it allows technology to develop rapidly and democratically since anyone can modify and redistribute the code. On the other hand, advocates for closed-source models argue that they are more secure because the code is kept private.
OpenAI CEO Sam Altman said the closed-source approach offers his company "an easier way to hit the safety threshold" in an AMA on Reddit last November. He added, however, that he "would like us to open source more stuff in the future."
OpenAI CEO Sam Altman, along with Yash Kumar, Casey Chu, and Reiichiro Nakano, unveiled Operator on Thursday. The AI agent can navigate the web and perform tasks like booking reservations and buying groceries.
OpenAI
OpenAI unveiled Operator, its first AI agent, for ChatGPT Pro subscribers in the US.
It can autonomously complete tasks like booking reservations or buying groceries.
The agent is powered by CUA, a new model built on GPT-4o.
Experts predicted that 2025 would be the year AI agents go mainstream, and OpenAI is delivering on that forecast.
On Thursday, OpenAI unveiled Operator, a system that can use a web browser to do things like book travel reservations and buy products.
While chatbots like OpenAI's popular ChatGPT use generative AI to respond to queries, Operator is an agent designed to perform tasks autonomously.
OpenAI said Operator would be available Thursday in the US for users of ChatGPT Pro, a $200 monthly plan that provides access to its latest models, including o1. In the coming months, the company said, it will also be made available to subscribers of ChatGPT Plus, OpenAI's $20 monthly subscription tier, and to users in other countries.
During a livestream announcing Operator on Thursday, OpenAI CEO Sam Altman called the release an "early research preview," adding that it would be refined over the coming months. He said OpenAI would also have more agents to launch.
The interface is similar to ChatGPT. Users prompt Operator with a request, like "book a dinner reservation at 7 p.m." They can select a specific website through which they want to process the request, such as OpenTable, or send the request through a search engine like Google.
Operator summarizes its reasoning process in a sidebar so users can identify steps where it makes mistakes, which OpenAI says it's still prone to do.
Operator details its reasoning process while booking a restaurant reservation.
OpenAI
Users can also upload a picture of a handwritten grocery list and prompt Operator to purchase the items on the list.
Users can upload a handwritten grocery list and ask Operator to purchase the items on it.
OpenAI
Users can choose a specific site, such as Instacart, for Operator to purchase the groceries from. If no site is selected, it will default to a search engine.
Operator searches Instacart for the items on the grocery list.
OpenAI
Reiichiro Nakano, a member of the company's technical staff, said in the livestream that Operator was powered by CUA, or Computer-Using Agent, a new model built on GPT-4o.
It's "trained to use and control a computer in the same way that humans can, by just looking at the screen and using a mouse and keyboard to control it," he said.
Nakano said the model bypasses the need for APIs, the mechanisms that allow software components to communicate with each other, and "unlocks a whole new range of software we can use that was previously inaccessible."
He added that the model removed "one more bottleneck in our path towards AGI," or artificial general intelligence.
Still, Operator has a way to go before it matches humans' ability to navigate the web.
OpenAI said that in a benchmark measuring how AI agents navigate common operating systems, like the open-source operating system Linux, Operator scored 38.1%, compared with 72.4% for humans. In another benchmark measuring how AI agents navigate common websites, Operator scored 58.1%, compared with 78.2% for humans.
Matthew Fitzpatrick is moving on from his QuantumBlack Labs role to head Invisible Technologies.
FINN Partners
McKinsey executive Matthew Fitzpatrick is leaving the firm to become CEO of Invisible Technologies.
Fitzpatrick led QuantumBlack Labs, developing AI software and tools for companies.
Invisible Technologies, a company focused on AI training, is valued at $500M, the company said.
One of McKinsey & Company's top executives is leaving the firm for Silicon Valley's new promised land: the AI industry.
After 12 years at McKinsey, Matthew Fitzpatrick, senior partner and global head of QuantumBlack Labs (the software and research and development arm of the firm's AI division, QuantumBlack), is stepping down.
During his tenure, Fitzpatrick led teams that helped companies scale AI projects and oversaw the development of tools like Kedro, an open-source analytics and machine learning library that McKinsey said has been downloaded more than 17 million times since its launch in 2019.
In his next role, he will serve as the CEO of Invisible Technologies, a key company in the AI industry that has kept a relatively low profile.
Invisible Technologies specializes in data pre-training, the initial phase of training for large language model developers, and post-training, which helps refine models for companies adopting the technology. Invisible Technologies was valued at $500 million in 2024, according to a press release from the company.
Fitzpatrick said he believes Invisible can help businesses tackle one of the biggest challenges of the moment β effectively integrating AI into their operations.
"Despite the hype around AI, we're at a point where fewer than 10% of AI models reach usage and production because enterprises don't have the experience to evaluate, train, and operationalize them," Fitzpatrick said in the company's press release. "That's where Invisible shines. I'm bullish we can help customers cross the chasm and realize the massive potential of AI."
Fitzpatrick told BI the move came about through conversations with Invisible's founder, Francis Pedraza, whom he met through a networking organization.
"It's the most under-the-radar critical AI company in the US that I've ever seen, and it's been involved in all of the model training for the last five years but has done very little publicity of any kind around that," he said.
He said he sees strong demand from companies for Invisible's services. The challenge, however, is "not growing too fast that we in any way sacrifice quality," he said.
Somesh Khanna, a former colleague of Fitzpatrick's, told BI Fitzpatrick built a reputation for changing McKinsey's talent pool, bringing more engineers and employees with quantitative skills into the fold. McKinsey told BI that it now employs 7,000 people as technologists, designers, and product managers. Fitzpatrick was responsible for overseeing 1,000 of them, a rep for Fitzpatrick told BI.
"The biggest thing is that in the McKinsey model of the past it was very hard for atypical profiles such as PhDs or data engineers or scientists to integrate into the culture of the firm. McKinsey's philosophy centered around acquiring amazing talent and teaching these individuals the McKinsey approach to problem-solving and client service," Khanna, a senior partner who retired from the firm in May after almost three decades, told BI.
"These guys, the new team that Matt was hiring and developing, were even more different β people who basically were hands-on keyboards guys, not power pointers."
Khanna said Fitzpatrick was critical in integrating this new breed of talent with deep technical skills into the firm.
Fitzpatrick will be succeeded at QuantumBlack Labs by McKinsey senior partner Tomás Lajous.
TikTok creators shared their thoughts on how the app's potential shutdown threatens their income.
Jaap Arriens/NurPhoto via Getty Images.
TikTok restored services in the US after 12 hours of downtime, easing some creators' concerns.
Creators rely on TikTok for income, from product sales and ad deals to the app's affiliate program.
With TikTok's future still uncertain, some creators are planning to diversify how they sell online.
TikTok restored services in the US on Sunday, easing the concerns of content creators and entrepreneurs who make their living from the platform, at least for now.
The platform was down for 12 hours starting late Saturday night and was restored following a Truth Social post by President-elect Donald Trump, who said he'd issue an executive order on Monday to delay the ban. TikTok's future remains unclear, as its China-based parent company, ByteDance, has so far refused to divest from the app as required by law, but for now, the economy driven by TikTok can continue to churn.
"My whole livelihood was on the line this weekend," Live shopping host Kimberly Balance told Business Insider. "Never experienced anything like this the entire time that I've been a business owner."
Balance, who goes by KIMMIEBBAGS, sells luxury consignment goods on TikTok, Instagram, and the marketplace platform Whatnot. Last week, she relocated her business from Florida to California to expand her live shopping operations.
Balance was set to host a six-hour live shopping show on TikTok on Saturday as part of a new live shopping partnership she had struck with Reunited Luxury. On Thursday evening, TikTok informed her that her Friday meeting with the platform's luxury sales manager was canceled. Her show on Saturday was canceled soon after, in a blow to her business' revenue.
Since it launched in 2023, TikTok's online marketplace, TikTok Shop, has quickly become a prime source of revenue for creators on the platform. The app also has an affiliate program where creators can earn a commission for sales they help drive by tagging products in videos or live streams. Creators can also package products from different sellers on their profiles for users to search through. TikTok takes a cut of each transaction.
In its April 2024 economic impact report, the company said TikTok "brings tens of billions of dollars to the US economy," including $15 billion in revenue to small businesses that use the app, supporting more than 224,000 jobs. Business Insider could not independently confirm these internal statistics.
Before TikTok "went dark" on Saturday night, some creators on the platform told Business Insider they worried the ban could hurt them financially.
In a press release for the social media app Own, one creator, ChalkDunny, said he made more than 60% of his 2024 income from TikTok. Another creator, izzybizzyspider, said in the release that TikTok is her "biggest source of income and biggest platform."
She warned that creators on the app have to be "prepared to be flexible and adapt quickly."
Nadya Okamoto, founder of menstrual-care brand August, which sells products on TikTok, told Business Insider she is "relieved" that TikTok came back online. However, she said the ongoing volatility over the ban prompted her to develop a contingency plan that reduces her reliance on the app.
"I've been encouraging my followers to connect with me on platforms like Instagram and YouTube for updates," she said. "I'm also exploring other affiliate shopping opportunities, such as YouTube Shop, where I've started adding shoppable productsβparticularly in my skincare-related videos."
Balance said she plans to switch up the platforms where she does business, given TikTok's still-uncertain future.
"We're going to continue probably to lean on the other channels like Instagram and possibly launch a YouTube," she said. "I think this is just an eye opener for all small businesses that we need to have a diverse way to reach our audiences."
TikTok did not immediately return a request for comment from Business Insider for this story.