Invitations to test Manus are limited, due to its launch in closed beta.
VCG/VCG via Getty Images
Manus, a new agentic AI system from a Chinese startup, is starting to generate DeepSeek-like buzz.
The AI agent is currently in closed beta, with limited invitations being distributed.
Access codes are being listed on third-party reseller sites for a wide range of asking prices.
Resellers are testing the waters on how much people are willing to pay to score an invite to Manus, the latest buzzy agentic AI out of China.
Manus is currently in closed beta, which means the public can't try it out without getting an invite code from an existing user. As a result, listings are cropping up on the Chinese reseller site Goofish, also known as Xianyu, claiming to offer invite codes.
Searching for "Manus" on the site brings up fifty pages of listings, both incredibly cheap, at about 1 to 2 Chinese yuan, or a little less than one USD, to the more exorbitant. Sorting from highest to lowest pricing on the site brings up listings in the equivalent of thousands of American dollars.
Listings on Goofish can reach into the thousands, once converted to USD.
Steven Tweedie
The interest has spilled over onto eBay, where an unsold listing for an email and password to access Manus AI starts at $1,000.
It's not clear how many, if any, of the listings on Goofish are actually selling, particularly the higher priced codes. But their presence alone highlights a growing international interest in exploring the AI's purported capabilities.
An energy not unlike the hype that surrounded DeepSeek's initial entry into the marketplace now surrounds Manus: some early users claim it's revolutionary, while others say it falls short of expectations.
Butterfly Effect, the Chinese company behind Manus, claims the agent is capable of a variety of real-world tasks, from analyzing stocks to developing minigames and screenplays.
"This isn't just another chatbot or workflow," said Yichao "Peak" Ji,Manus cofounder, in a Youtube video announcing the AI. "It's a truly autonomous agent that bridges the gap between conception and execution."
The team behind Manus did not immediately respond to a request for comment from Business Insider prior to publication.
Ji went on to add that the program has already proven itself capable of "solving real-world problems" on gig-work platforms like Fiverr and Upwork, and has "proven its capabilities" in Kaggle competitions, which challenge users to solve problems based on data sets.
Manus' developers appear to be positioning it as the model to beat, in direct competition with other "chain-of-thought" offerings such as ChatGPT's "Deep Research" and Claude's "Extended Thinking" mode.
"We see it as the next paradigm of human-machine collaboration, and potentially a glimpse into AGI," Ji said.
Bryce Adelstein Lelbach says Nvidia doesn't employ a strict hierarchy, and has a free-form culture.
Justin Sullivan/Getty Images
Bryce Adelstein Lelbach, a principal architect at Nvidia, discussed the company's culture in a recent podcast interview.
He described it as "organized chaos" and said it best fits self-starters with lots of initiative.
Lelbach said that the kind of flexibility Nvidia's culture has afforded him is "really valuable."
If you're highly driven and independently motivated, you might be a culture fit for Nvidia.
"I sometimes describe it as, it's a little bit of, you know, organized chaos, which I really like," Nvidia's principal architect Bryce Adelstein Lelbach said in a recent podcast interview with TechBytes.
Nvidia has rapidly gained a reputation as one of the companies at the forefront of the AI revolution, and positions at the company are highly coveted, both for the associated prestige and the potential financial upside.
Those who do make the cut should brace themselves: Lelbach says Nvidia assigns new hires real responsibility from day one. He added that it could feel, to some, like being thrown into the deep end.
"I think that for some people it could be quite scary if, you know, you start day one at Nvidia and usually it's like, 'Okay, here's a laptop and like here's a pile of bugs that you're responsible for. Good luck,'" Lelbach said.
Still, he added, things are likely to work out well for those who are used to being self-sufficient.
"There's not necessarily like a lot of structured onboarding. It sort of depends, team by team," Lelbach said. "But if you're a self-starter, if you've got a lot of initiative, it's a great environment for you."
Lelbach says Nvidia has a relatively "flat management structure" β meaning that the company has fewer layers of management separating executives from employees. CEO Jensen Huang has previously said he manages about 50 to 60 direct reports himself.
"I really love the Nvidia culture because Nvidia does not have a lot of strict hierarchy or rules. It's very free-form, very flexible," Lelbach said.
The Nvidia technical leader said employees aren't likely to be told "no" based solely on whether or not they're staying in their proverbial lanes.
"They're not going to be like, 'Oh you can't do that because like that's not your job title,'" Lelbach said. "You're not going to hear that."
Reviews on the company's Glassdoor page appear to largely reflect Lelbach's lived experience, with 96% of posters saying they'd recommend working at the company to a friend. One user listed a "pro" of working at Nvidia as its "unique and empowering culture," while another said a "con" was the environment could prove "a bit fast-paced and too competitive."
When presented with employee accounts calling him "demanding," a "perfectionist," and "not easy to work for" during a "60 Minutes" interview, Huang said those descriptions fit him "perfectly."
"It should be like that," the Nvidia CEO said. "If you want to do extraordinary things, it shouldn't be easy."
For Lelbach, the kind of autonomy that Nvidia affords its employees is a key draw.
"That sort of β that freedom and that flexibility is really valuable to me," he said. "It really works well for my personality type."
Luckily, it's easy to check your settings and set your Google Calendar to private.
First, head on down to the "My calendars" section in the bottom-left corner of your Google Calendar homepage and click the three dots for the calendar you want to edit.
Then, select "Settings and sharing."
Click "Settings and sharing" to navigate to the settings page for the calendar you want to edit.
Sarah Perkel
You'll be taken to the settings page.
Scroll down to the section titled "Access permissions for events," and make sure the box labeled "Make available to public" isn't checked.
Uncheck "Make available to public" if you want your Google Calendar to be private.
Sarah Perkel
If it's a company calendar, you might also have the ability to restrict or allow access to those within your organization.
If you'd like to share your entire calendar only with specific people, you can add them under the "Shared with" tab, which is just beneath "Access permissions for events."
If you're only looking to hide particular events, you can click on the one you'd like to make invisible and select the pencil icon labeled "Edit event." From there, you'll see a dropdown menu that's likely set to "Default visibility," which you can then swap to private.
After that, the event's details won't be visible to anyone who doesn't have, at minimum, permission to "make changes to events" on your calendar.
Now, you won't have to worry if your meeting details are out there for anyone to see.
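For those who'd rather script the change, the same visibility switch is exposed through the Google Calendar API. Here's a minimal sketch, assuming you've already completed Google's OAuth flow and hold valid credentials; the event ID is a placeholder for the event you want to hide.

```python
# Minimal sketch: make a single event private via the Google Calendar API.
# Assumes `creds` came from an OAuth flow you've already completed;
# event_id is a placeholder for the event you want to hide.
from googleapiclient.discovery import build

def make_event_private(creds, event_id: str, calendar_id: str = "primary"):
    service = build("calendar", "v3", credentials=creds)
    # The API's visibility field mirrors the UI dropdown:
    # "default", "public", "private", or "confidential".
    return service.events().patch(
        calendarId=calendar_id,
        eventId=event_id,
        body={"visibility": "private"},
    ).execute()
```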
Eric Schmidt co-authored a policy paper urging the U.S. to avoid a "Manhattan Project" for AI.
Christian Marquardt/Getty
Former Google CEO Eric Schmidt co-authored a paper warning the US about the dangers of an AI Manhattan Project.
In the paper, Schmidt, Dan Hendrycks, and Alexandr Wang push for a more defensive approach.
The authors suggest the US sabotage rival projects, rather than advance the AI frontier alone.
Some of the biggest names in AI tech say an AI "Manhattan Project" could have a destabilizing effect on the US, rather than help safeguard it.
The dire warning came from former Google CEO Eric Schmidt, Center for AI Safety director Dan Hendrycks, and Scale AI CEO Alexandr Wang. They coauthored a policy paper titled "Superintelligence Strategy" published on Wednesday.
In the paper, the tech titans urge the US to stay away from an aggressive push to develop superintelligent AI, or AGI, which the authors say could provoke international retaliation. China, in particular, "would not sit idle" while the US worked to actualize AGI and "risk a loss of control," they write.
The authors write that circumstances similar to the nuclear arms race that birthed the Manhattan Project, the secretive initiative that ended in the creation of the first atom bomb, have developed around the AI frontier.
In November 2024, for example, a bipartisan congressional committee called for a "Manhattan Project-like" program, dedicated to pumping funds into initiatives that could help the US beat out China in the race to AGI. And just a few days before the authors released their paper, US Secretary of Energy Chris Wright said the country is already "at the start of a new Manhattan Project."
"The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
It's not just the government subsidizing AI advancements, either, according to Schmidt, Hendrycks, and Wang: private corporations are developing "Manhattan Projects" of their own. Demis Hassabis, CEO of Google DeepMind, has said he loses sleep over the possibility of ending up like Robert Oppenheimer.
"Currently, a similar urgency is evident in the global effort to lead in AI, with investment in AI training doubling every year for nearly the past decade," the authors say. "Several 'AI Manhattan Projects' aiming to eventually build superintelligence are already underway, financed by many of the most powerful corporations in the world."
The authors argue that the US already finds itself operating under conditions similar to mutually assured destruction, the idea that no nation with nuclear weapons will use its arsenal against another for fear of retribution. They write that a further effort to control the AI space could provoke retaliation from rival global powers.
Instead, the paper suggests the US could benefit from taking a more defensive approach: sabotaging "destabilizing" AI projects via methods like cyberattacks, rather than rushing to perfect its own.
To address "rival states, rogue actors, and the risk of losing control" all at once, the authors put forth a threefold strategy: deterring destabilizing projects via sabotage, restricting the access of "rogue actors" to chips and "weaponizable AI systems," and guaranteeing US access to AI chips through domestic manufacturing.
Overall, Schmidt, Hendrycks, and Wang push for balance, rather than what they call the "move fast and break things" strategy. They argue that the US has an opportunity to take a step back from the urgent rush of the arms race, and shift to a more defensive strategy.
"By methodically constraining the most destabilizing moves, states can guide AI toward unprecedented benefits rather than risk it becoming a catalyst of ruin," the authors write.
Junior programmers are increasingly facing competition from senior, laid-off coders in a shrinking market.
Steve Marcus/Reuters
Entry-level programming roles are increasingly scarce, with junior applicants facing fierce competition.
Bryce Adelstein Lelbach, a principal architect at Nvidia, discussed the state of the job market in a podcast interview.
He said young developers should focus on honing two skills: math and writing.
It's a dog-eat-dog world in the job market for entry-level software engineers β just ask Nvidia principal architect Bryce Adelstein Lelbach.
"I think that it's a very challenging time for young programmers," Lelbach said on a recent episode of the TechBytes podcast.
"We saw, post-Covid, with a little bit of the tech pullback β we saw a lot of tech companies pull back on hiring," he added. "And the reality is that most of the Big Tech companies have the luxury of just hiring senior people, these days."
Mass firings in the wake of the pandemic set loose a flood of mid-to-senior level coders into the job market, who suddenly found themselves competing with junior programmers for positions they were once considered overqualified for.
As AI appears more likely to further shrink the pool of available jobs, Lelbach says there are two skills he'd suggest young programmers prioritize.
The first is writing.
"Especially with the emergence of large language models, it's going to become even more important to be able to communicate your ideas and your thoughts," Lelbach said.
The second: the "timeless field" of pure mathematics.
"While there may be a future where we humans do a lot less programming, the fundamentals that you learn in math are always, I think, going to be relevant," he said. "They're going to be relevant to how we design things and how we build things."
Above all else, Lelbach says practical knowledge remains king.
"I think the best option is to have internships," he said. "If you want to get a job as a software engineer, you need to have internships essentially every year that you're in college."
Despite the temptation to delay entering the workforce by way of a master's degree or Ph.D., Lelbach says that approach can present more problems than solutions. The sheer volume of applicants that now sport post-graduate degrees somewhat dilutes their ability to make anyone stand out, he added.
"I am generally a little bit more skeptical these days of people getting Masters and PhDs because there are so many people who have them now," he said.
For undergraduate seniors who are weighing their options, Lelbach suggests focusing on acquiring as much real-world experience as possible.
An applicant with "time in industry" under their belt might have a fighting chance at slightly higher-level roles, bypassing the entry-level mania entirely.
"If you graduate as a Master's student or a Ph.D. student with no industry experience, you're going to be competing with the pool of people that are looking for more junior positions," Lelbach said. "Versus going and getting, you know, two or four years of industry experience β then you're going to be competing for the more senior jobs."
Anthropic CPO Mike Krieger says software engineers will have to delegate parts of their job to AI.
Chris Saucedo/Getty Images for SXSW
Instagram cofounder Mike Krieger discussed the evolving job of software engineers in a podcast interview.
He said the day-to-day work will change as AI gets better at coding.
Krieger, who now works at Anthropic, predicts software engineers will spend more time reviewing code than writing it.
Software engineers should expect their jobs to meaningfully change in the next three years, according to Instagram's cofounder.
Mike Krieger, who now works as Anthropic's chief product officer, said in a recent podcast interview that developers will be spending more time double-checking AI-generated code than writing it themselves.
"How do we evolve from being mostly code writers to mostly delegators to the models and code reviewers?" Krieger asked on a recent episode of "20VC."
As the act of coding itself increasingly involves artificial intelligence, Krieger expects software developers to tackle the more abstract work that AI models can't handle and learn how to effectively oversee the systems themselves.
"That's what I think the work looks like three years from now," Krieger said. "It's coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale β and that's probably some combination of maybe a comeback of some static analysis or maybe AI-driven analysis tools of what was actually produced."
At certain Big Tech companies, the work of software development has already undergone significant change.
In October, Google CEO Sundar Pichai said over a quarter of new code at the company was already being produced by AI. Besides using it to write some code himself, Krieger said he began the year at Anthropic by determining what parts of the product development process were "Claude-ified," and which others remained best left up to human beings.
"Driving alignment and actually figuring out what to build is still the hardest part, right," Krieger said. "Like that is actually the only thing that is still best resolved by just getting together in a room and talking through the pros and cons, or going off and exploring it in Figma and coming back."
Though AI may speed along certain parts of the process for product development, Krieger doesn't expect it to entirely eliminate the need for software developers, a worry that is top of mind for some computer science majors and recent graduates who previously spoke with Business Insider.
Krieger said AI will instead alter the skills required to remain relevant in a coding-related job.
"I think it becomes multidisciplinary, where it's knowing what to build as much as it is knowing the exact implementation that you want," Krieger said. "I love that about our engineers. Many, maybe even most, of our good product ideas come from our engineers and come from them prototyping, and I think that's what the role ends up looking like for a lot of them."
A spokesperson for Anthropic told Business Insider that the company views itself as a "testbed" for how other workplaces can navigate AI-driven changes to critical roles.
"At Anthropic, we're focused on developing powerful and responsible AI that works with people, not in place of them," the spokesperson said. "As Claude rapidly advances in its coding capabilities for real-world tasks, we're observing developers gradually shifting toward higher-level responsibilities."
Certain jobs, Krieger said, are still most efficient when performed by human hands, at least for the time being.
"And I think alignment: Deciding what to build, solving real user problems, and like figuring out a cohesive product strategy, still very hard," he said. "And probably the models are more than a year away from solving that. That is the constraint."
The director of Carnegie Mellon's undergraduate degree in AI said the program stays flexible.
Katherine Frey/The Washington Post via Getty Images
Universities are increasingly offering courses in artificial intelligence.
Non-STEM students are becoming more interested in AI programs, faculty members say.
Programs are evolving to suit an ever-changing field.
The undergraduate major in Artificial Intelligence at Carnegie Mellon University looks very different now than it did when it started over half a decade ago.
"These large language models, you know, generative AI β have basically taken over, sucked all the oxygen out of the room," said Reid Simmons, the program director for the Bachelor of Science in AI at CMU. "That's what we're focused on mostly, right now β really trying to make sure the students understand the technology."
Carnegie Mellon was one of the first to develop an undergraduate degree in artificial intelligence, enrolling its inaugural class in 2018. Originally, Simmons said, the goal was to provide participants with a fundamental understanding of a broad and rapidly changing field.
"A lot of work in artificial intelligence is focused solely on machine learning, but there are a lot of other aspects, including things like search, knowledge representation, decision making, robotics, computer vision, natural language," Simmons said. "All those fall under the rubric of artificial intelligence."
Now, Simmons said, the number of classes dedicated to machine learning has exploded, going from one or two "basic" courses to "as many as 10" higher-level classes.
As AI only continues to grow in terms of sheer power and possible application, so does interest in learning how to harness it. In particular, Simmons said more students without backgrounds in engineering and computer science are showing interest in an AI education.
"We're starting to look at courses that are more accessible to people without a strong technical background," he said. "So that's kind of the next step that we're looking at, is how to kind of have an AI for all type experience."
AI education for every discipline
A similar evolution is taking place at Johns Hopkins, said Barton Paulhamus, the director of the institution's online master's degree in Artificial Intelligence. As more and more people hear about AI, the program is receiving attention from a "broader audience."
"What can we give them that they can learn about AI without needing to go through 10 courses of prerequisites?" Paulhamus said.
Originally, Johns Hopkins courses were targeted at "students with an undergrad in computer science," Paulhamus said. Yet, "over the year and a half I've been there, we've been offering more courses towards the other extreme."
Some are now geared towards students from non-traditional backgrounds, including "nursing, business, and education," he said.
In addition to building out classes for the relatively uninitiated, Paulhamus said that the school is also working to increase the "breadth and depth" of courses dealing with generative AI.
"So, as fast as we can get the instructors on board and get the material created, there's just an insatiable appetite for that right now," he said.
Despite the mounting interest unlocked by what he called the "AI boom," Paulhamus said the program is still largely focused on what the school has identified as the critical elements of an education in AI.
"It's more fundamental than the latest du jour thing, right?" Paulhamus said.
Demystifying computer science
Leonidas Bachas, the Dean of the College of Arts and Sciences at the University of Miami, said that students from disciplines "further away from STEM" needed a course to figure out "what AI may be for them."
"We have a course that starts with data science and AI for everyone," Bachas said. "The students will come in without any background in computing β they may not even know coding, but they get this teaser as a starter course through which they can become interested in the subject matter and then continue into one of these other programs."
Bachas said the aim is to open up a field that can appear daunting to those without the prerequisite knowledge.
"In other words, don't be scared about the field of computer science," he said. "This is a computer science class for all, and then a data science class for all, an artificial intelligence class for all, to try to bring students slowly from disciplines that may not be accustomed to view computing as friendly."
Mitsunori Ogihara, a professor in the Department of Computer Science at UM who helped design the university's B.S. in Data Science and AI, said he hopes that students come to respect it as a core subject of sorts, much like mathematics. With greater understanding, Ogihara said, comes the lessening of anxiety surrounding potential ramifications.
"Whenever there is this new development that occurs, lay-people's reaction is, 'Oh, I'm totally scared of this. The computer scientists are conspiring to do very, very bad things to society,'" Ogihara said. "We want to remove that. So, the best way to do this is to educate the next generation, the people who are running the society, about how computing works, how computing could be useful."
Dario Amodei believes it's possible to address the risks of AI without foregoing the solutions it affords.
FABRICE COFFRINI/AFP via Getty Images
Anthropic CEO Dario Amodei said that while the benefits of AI are big, so are the risks.
Amodei said on "Hard Fork" that he worries about threats to national security and the misuse of AI.
He believes it is possible to address the risks of AI without foregoing the solutions it affords.
Anthropic CEO Dario Amodei said that people still aren't taking AI seriously enough β but he expects that to change within the next two years.
"I think people will wake up to both the risks and the benefits," Amodei said on an episode of the New York Times' "Hard Fork," adding that he's worried the realization will arrive as a "shock."
"And so the more we can forewarn people β which maybe it's just not possible, but I want to try," Amodei said. "The more we can forewarn people, the higher the likelihood β even if it's still very low β of a sane and rational response."
Those optimistic about the technology expect the advent of powerful AI to bring down the barriers to niche "knowledge work" once performed exclusively by specialized professionals. In theory, the benefits are immense β with applications that could help solve everything from the climate crisis to deadly disease outbreaks. But the corresponding risks, Amodei said, are proportionately big.
"If you look at our responsible scaling policy, it's nothing but AI, autonomy, and CBRN β chemical, biological, radiological, nuclear," Amodei said. "It is about hardcore misuse in AI autonomy that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about."
He said the possibility of "misuse" by bad actors could arrive as soon as "2025 or 2026," though he doesn't know when exactly it may present a "real risk."
"I think it's very important to say this isn't about, 'Oh, did the model give me the sequence for this thing? Did it give me a cookbook for making meth or something?'" Amodei said. "That's easy. You can do that with Google. We don't care about that at all."
"We care about this kind of esoteric, high, uncommon knowledge that, say, only a virology Ph.D. or something has," he added. "How much does it help with that?"
If AI can act as a substitute for niche higher education, Amodei clarified, it "doesn't mean we're all going to die of the plague tomorrow." But it would mean that a new breed of danger had come into play.
"It means that a new risk exists in the world," Amodei said. "A new threat vector exists in the world as if you just made it easier to build a nuclear weapon."
Setting aside individual actors, Amodei expects AI to have massive implications for military technology and national security. In particular, Amodei said he's concerned that "AI could be an engine of autocracy."
"If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers to do," Amodei said. "But if their enforcers are no longer human, that starts painting some very dark possibilities."
Amodei pointed to Russia and China as particular areas of concern and said he believes it's crucial for the US to remain "even with China" in terms of AI development. He added that he wants to ensure that "liberal democracies" retain enough "leverage and enough advantage in the technology" to check abuses of power, and block threats to national security.
So, how can risk be mitigated without kneecapping benefits? Beyond implementing safeguards during the development of the systems themselves and encouraging regulatory oversight, Amodei doesn't have any magic answers, but he does believe it can be done.
"You can actually have both. There are ways to surgically and carefully address the risks without slowing down the benefits very much, if at all," Amodei said. "But they require subtlety, and they require a complex conversation."
AI models are inherently "somewhat difficult to control," Amodei said. But the situation isn't "hopeless."
"We know how to make these," he said. "We have kind of a plan for how to make them safe, but it's not a plan that's going to reliably work yet. Hopefully, we can do better in the future."
A new statement disclosing AI content has been added to the Steam pages for certain Call of Duty titles.
Reuters
The pages for some "Call of Duty" titles on PC games store Steam now include disclosures of AI content.
Publisher Activision included the statements after Steam implemented a policy requiring disclosure of AI.
Some players suspected the use of AI in "CoD" games before it was confirmed.
New disclosures splashed across the Steam pages for certain "Call of Duty" titles confirm what some players have long suspected: the developers are dabbling in AI.
Activision, the publisher of the popular "Call of Duty" games, issued the disclosures, found on the Steam pages for "Call of Duty: Black Ops 6" and "Call of Duty: Warzone," in compliance with Steam's policy requiring developers to disclose the use of AI. The statement reads: "Our team uses generative AI tools to help develop some in-game assets."
Player reactions on X were mixed, but many reflected prior suspicions that "CoD" was making use of AI-generated content.
One user posted a GIF of a puppet making a shocked face, labeled "acting surprised." Another called the use of artificial intelligence "lazy," while a different player criticized the company for putting out what they described as "rushed, unpolished, and imbalanced works," even with the help of AI.
However, another user said they didn't see it as a "problem," particularly if AI was used on "mundane busy repetitive work," like "1000 versions of shrubs."
"The 'mundane busy work' is actually peoples' jobs btw," a different user responded.
It's not clear to what extent artificial intelligence was used in the making of the games. Activision did not immediately respond to a request for comment from Business Insider.
AI is only continuing to improve in sophistication and adoption, and the gaming industry isn't exempt from its growing reach. Microsoft, the owner of Activision, recently unveiled its Muse model, capable of generating "game visuals and controller actions."
Creatives have expressed concerns about being replaced by AI, particularly in the wake of the mass layoffs that swept the gaming industry in 2024. That was the same year that SAG-AFTRA and the Writers Guild of America went on strike, in part to seek improved AI protections for their members. Microsoft cut jobs in its gaming arm at the start of this year, without specifying an exact number.
Artists have also expressed worries that artificial intelligence models could be trained on their artwork without their consent, allowing AI to closely replicate their unique art styles.
Activision hasn't confirmed exactly which assets are AI-generated, or to what extent AI was used, but players have previously pointed to certain graphics as evidence that the company may be using the tech.
For instance, the "Necroclaus" loading screen featured in Black Ops 6's "Zombies" mode in December 2024 depicted what some thought was a six-fingered hand. Other players suggested it was just flesh falling off the zombie's pinky.
Another image, this time used to promote a "Zombies" community event in 2024, appeared to depict a gloved hand with six fingers but no on-screen thumb, implying a total of seven fingers on one hand. An excess number of fingers, toes, and teeth is a hallmark of AI-generated art.
OpenAI researcher Karina Nguyen said creativity and emotional intelligence are some of the hardest things to teach AI.
SOPA Images/Getty Images
Karina Nguyen left engineering for research after watching Claude get better at coding in a previous role at Anthropic.
In a recent podcast interview, she said soft skills will remain important even as work changes.
Nguyen, now at OpenAI, said creativity and emotional intelligence remain some of the hardest things to teach AI.
In a world where certain jobs could one day be rendered obsolete by AI, OpenAI researcher Karina Nguyen said she expects soft skills to endure as highly prized.
She also expects that to be the case in the realm of AI research.
"I just think people in AI field are like β I wish they were a little bit more creative and connecting the dots across different fields," Nguyen said on a recent episode of "Lenny's Podcast."
Nguyen, who previously worked at Anthropic, said that above all else, she expects AI to automate "redundant tasks for people." She added that the models she works with can struggle to grasp skills that often come so naturally to human beings.
"I think it's the dream of any AI research is to automate AI research," Nguyen said. "It's kind of scary, I'd say, which makes me think that people management will stay, you know? It's one of the hardest things to β emotional intelligence, with the models, creativity in itself is one of the hardest things."
At OpenAI, Nguyen said her role is heavy on "management and mentorship," despite originally being passionate about engineering. She said the shift came about during her tenure at Anthropic β after observing Claude's rapidly advancing capabilities, Nguyen came to a realization about her career.
"When I first came to Anthropic, I was like, 'Oh no, I really love front-end engineering,'" Nguyen said. "And then the reason why I switched to research is because I realized, at that time, it's like, 'Oh my god, Claude is getting better at front-end. Claude is getting better at coding.'"
Nguyen and OpenAI did not immediately respond to a request for comment from Business Insider prior to publication.
"It was kind of this meta realization where it's like, 'Oh my god, the world is actually changing,'" she added.
Nguyen said that models are only improving, becoming increasingly cost-efficient as "small models" prove themselves "smarter than large models." As the costs associated with artificial intelligence drop, Nguyen expects the technology to proliferate even further, unlocking work that she considers to have been previously "bottlenecked by intelligence."
"I'm thinking about healthcare, right?" Nguyen said. "Instead of going to a doctor, I can ask ChatGPT or give ChatGPT a list of symptoms and ask me, 'Would I have a cold, flu, something else?'"
Nguyen said she's been spending "a lot" of time thinking about what her future might look like in a working landscape altered by AI. She said that if the models she's helped build eventually automate her current job, she may spend her time writing "short stories, sci-fi stories, novels," or working as a museum conservator.
"I feel like I have a lot of job options," Nguyen said. "I would love to be a writer, I think. I think that would be super cool."
Google DeepMind CEO Demis Hassabis said there's "probably too much" pressure on AI leaders.
World Economic Forum/Gabriel Lado
The CEOs of Google DeepMind and Anthropic spoke about feeling the weight of responsibilities in a recent interview.
The executives advocated for the creation of regulatory bodies to oversee AI projects.
Both AI leaders agree that people should better grasp and prepare for the risks posed by advanced AI.
When asked if he ever worried about "ending up like Robert Oppenheimer," Google DeepMind's CEO Demis Hassabis said that he loses sleep over the idea.
"I worry about those kinds of scenarios all the time. That's why I don't sleep very much," Hassabissaid in an interview alongside Anthropic CEO Dario Amodei with The Economist editor in chief Zanny Minton Beddoes.
"I mean, there's a huge amount of responsibility on the people β probably too much β on the people leading this technology," he added.
Hassabis and Amodei agreed that advanced AI could present destructive potential whenever it becomes viable.
"Almost every decision that I make feels like it's kind of balanced on the edge of a knife β like, you know, if we don't build fast enough, then the authoritarian countries could win," Amodei said. "If we build too fast, then the kinds of risks that Demis is talking about and that we've written about a lot, you know, could prevail."
"Either way, I'll feel that it was my fault that, you know, that we didn't make exactly the right decision," the Anthropic CEO added.
Hassabis said that while AI appears "overhyped" in the short term, he worries that the mid-to-long-term consequences remain underappreciated. He promotes a balanced perspective: recognizing the "incredible opportunities" afforded by AI, particularly in the realms of science and medicine, while becoming more keenly aware of the accompanying risks.
"The two big risks that I talk about are bad actors repurposing this general purpose technology for harmful ends: how do we enable the good actors and restrict access to the bad actors?" Hassabis said. "And then, secondly, is the risk from AGI, or agentic systems themselves, getting out of control, or not having the right values or the right goals. And both of those things are critical to get right, and I think the whole world needs to focus on that."
Both Amodei and Hassabis advocated for a governing body to regulate AI projects, with Hassabis pointing to the International Atomic Energy Agency as one potential model.
"Ideally it would be something like the UN, but given the geopolitical complexities, that doesn't seem very possible," Hassabis said. "So, you know, I worry about all the time, and we just try to do at least, on our side, everything we can in the vicinity and influence that we have."
Hassabis views international cooperation as vital.
"My hope is, you know, I've talked a lot in the past about a kind of a CERN for AGI type setup, where basically an international research collaboration on the last sort of few steps that we need to take towards building the first AGIs," Hassabis said.
Both leaders urged a better understanding of the sheer force for change they expect AI to be, and for societies to begin planning accordingly.
"We're on the eve of something that has great challenges, right? It's going to greatly upend the balance of power," Amodei said. "If someone dropped a new country into the world, 10 million people smarter than any human alive today, you know, you'd ask the question, 'What is their intent? What are they actually going to do in the world, particularly if they're able to act autonomously?'"
Anthropic and Google DeepMind did not immediately respond to requests for comment from Business Insider.
"I also agree with Demis that this idea of, you know, governance structures outside ourselves β I think these kinds of decisions are too big for any one person," Amodei said.
Microsoft CEO Satya Nadella said "knowledge work" could change as AI grows more powerful.
Getty Images
Satya Nadella made a distinction between "knowledge workers" and "knowledge work" in a recent interview.
The Microsoft CEO said the definition of cognitive labor will evolve as AI grows more powerful.
He expects knowledge work to be redefined as AI becomes capable of tasks once performed exclusively by humans.
Satya Nadella isn't dancing around it. AI agents are increasingly poised to take on work that once fell solely to human beings.
However, that doesn't mean knowledge work is going away β he simply expects it to evolve.
"When we think about, even, all these agents, the fundamental thing is there's a new work and workflow," Nadella said in an interview with YouTuber Dwarkesh Patel.
"So, the new workflow for me is: I think with AI and work with my colleagues," he added.
Nadella likened the advent of AI to that of other workplace-altering technologies, using the analogy of a multinational company producing forecasts in the days before "PC, and email, and spreadsheets."
"Faxes went around," Nadella said. "Somebody then got those faxes and then did an interoffice memo that then went around, and people entered numbers, and, you know, then ultimately a forecast came, maybe just in time for the next quarter."
People were eventually able to "take an Excel spreadsheet, put it in email," and "send it around." The fax was largely replaced in the workplace, and the jobs dependent on it meaningfully changed.
"So, the entire forecasting business process changed because the work artifact and the workflow changed," Nadella added. "That is what needs to happen with AI being introduced into knowledge work."
There is no single definition of knowledge work, but it can be generally understood as labor that requires employees to think critically to solve non-routine problems.
Nadella said that the current concept of "cognitive labor" is relatively narrow and will have to expand to include new occupations and tasks. As soon as AI automates one task, he expects humans to invent another.
What we consider to be human-exclusive work today could, in the future, be the domain of a machine, Nadella said, much in the same way that the calculator knocked out the need for mental math, and "computer" once referred to someone whose job it was to manually perform calculations.
"The knowledge work of today could probably be automated," Nadella said. "Who said my life's goal is to triage my email, right? Let an AI agent triage my email. But after having triaged my email, give me a higher-level cognitive labor task of, 'Hey, these are the three drafts I really want you to review.'"
Nadella said that he expects AI agents to require a supervisor, the "knowledge worker" of the future, whether that's a human overseer or a smarter machine.
"So basically, think of it as: There is knowledge work, and there's a knowledge worker, right," Nadella said. "The knowledge work may be done by many, many agents, but you still have a knowledge worker who is dealing with all the knowledge workers. And that, I think, is the interface that one has to build."
Riley Willis said his job in data annotation is "perfect for an introvert."
Getty Images
A computer science student at the University of Florida helps train AI through DataAnnotation Tech.
Riley Willis was drawn to the job after leaving his parents' home and said it supports his lifestyle.
He said he's careful to be frugal as he otherwise couldn't live off data annotation alone.
Riley Willis sits behind his keyboard for 30 hours a week. By the time he clocks out on Saturday, he has completed his shifts without speaking to a soul.
Willis is an AI contractor, working in data annotation through a company called DataAnnotation Tech. He said his job is fully remote and he operates asynchronously, with no set boss, no contact with coworkers, and no daily meetings.
"It's perfect for an introvert," he told Business Insider.
Willis said his days consist of "fact-checking" the output of various AI models, but only because that's what he chooses to take on. He said DataAnnotation Tech offers a variety of projects for contract workers to choose from, including teaching artificial intelligence to improve its creative writing and coding skills.
"You are literally going through and you're labeling the information based on a series of criteria that differ between projects," Willis said. "I personally work on a lot of factual stuff, just because I just find the pay-to-effort ratio is nice. So there's a lot of fact-checking responses, making sure the models aren't lying or hallucinating information."
As companies continue the push to improve their AI models, there's been a corresponding spike in demand for data annotators and labelers. Workers in the field spend their time either "generating data for training" or "labeling data for training," Willis said, completing tasks in programming, writing, and research.
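For a sense of what that labeling work involves, here is a minimal sketch of a fact-checking record of the kind Willis describes. The field names are hypothetical, illustrative rather than DataAnnotation Tech's actual schema.

```python
# Illustrative only: a toy record for the kind of fact-checking task
# Willis describes. Field names are hypothetical, not the platform's schema.
from dataclasses import dataclass

@dataclass
class FactCheckAnnotation:
    prompt: str          # what the model was asked
    model_response: str  # what the model answered
    is_accurate: bool    # did the response hold up against sources?
    notes: str = ""      # e.g., which claim was hallucinated

example = FactCheckAnnotation(
    prompt="When did Apollo 11 land on the Moon?",
    model_response="Apollo 11 landed on the Moon on July 20, 1969.",
    is_accurate=True,
)
```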
Willis said he was drawn to the job when he was looking to move out of his parents' house.
"I wanted a job that wasn't customer service or anything like that," he said. "I was already doing remote school, so I was kind of already comfortable with the remote environment."
When he was onboarded, Willis was making $20 an hour, he said, but now he nets about $25. Hourly rates vary depending on the projects workers take on, and Willis said he'd seen them reach as high as $30.
Despite not taking on a full-time workload, Willis, who is pursuing a bachelor's in computer science through the University of Florida's online program, said he could fully cover his expenses, including tuition and rent in Raleigh, North Carolina.
"I can pay rent, and I can pay my way through college," Willis said. "It's not like I have a lot of spending money in the end, especially if I'm not working 40 hours, full-time kind of stuff, but I have enough where I can live generally."
Willis said his frugal lifestyle allowed him to stretch his pay, adding that he "couldn't support himself" if he were living alone.
"I live poor, honestly," he said. "If I had some extravagant lifestyle β if I went out to, you know, bars and everything and spent a lot online β I probably couldn't do it. But just getting through my day, it completely covers it."
The job does have its drawbacks. Willis said the repetitive nature of the tasks could be "mind-numbing." And unlike a full-time position, Willis gets paid only for the time he's sitting down at his computer actively working on assignments.
"Even in a high-paced job, I feel like, you know, there's paid lunch breaks, stuff like that," he said. "Or, you know, you spend a second or two talking to someone, but you're still on the clock getting paid. With this, you're really not getting paid unless you are doing the work."
Willis said he sometimes spends "six or seven hours" on research but gets paid for only about "five hours of actual work" because of his need to pause occasionally. But Willis said the pros of the job outweigh the cons.
"I will personally say work-life balance is nice, in the sense that, obviously, it's remote and I can work whenever I want," he said. "I can work at 1 o'clock in the morning if I really feel like it."
For someone who's looking for a side hustle to tack on to their full-time role, Willis said working to train AI might be the way to go.
"I think this would be really, really good for someone who has an actual job and is doing this, like, two hours a day," he said. "I think this is really what this kind of setup shines in. If you have an actual full-time job, and you just add the extra hour, hour and a half a day earning an extra $25 to $50 a day, just when you have nothing else to do, is definitely doable."
Bill Gates says younger generations should be "very afraid" of a few things.
Roy Rochlin/Getty Images for Netflix
Bill Gates told Patrick Collison that younger generations should worry about four things.
They are the climate crisis, unchecked AI, nuclear war, and the spread of disease.
Gates said that despite his concerns, he thinks people will be "so much better off" in the future.
Bill Gates says that if he were young again, he'd be afraid of more than just the atom bomb.
"There's, you know, about four or five things that are very scary, and the only one that I really understood and worried about a lot when I was young was nuclear war," Gates said in an interview with Patrick Collison.
Gates, the founder of Microsoft and chair of the Gates Foundation, shared his perspective on the evolving risks facing society.
"Today I think we'd add climate change, bioterrorism/pandemic, and keeping control of AI in some form," Gates said. "So, you know, now we have four footnotes."
Gates also described social polarization as a problem, later adding, "The younger generation has to be very afraid of those things."
This isn't the first time Gates has identified these areas of concern. In a blog post in 2023, Gates said that as his family grew, so did his desire to better the world.
"A grandchild does make you think about how we make sure the future is better β politics, health, climate, etc.," he wrote.
Gates argued that society is suffering from a dearth of intelligence. But he believes AI could present a solution rather than a problem. Though some have warned of AI's cataclysmic potential, Gates thinks it could be harnessed productively.
"We don't have as many medical experts, you know, people who can stay on top of everything, or people who can do math tutoring in the inner city," Gates said. "And we have a shortage of intelligence, and so we use this market system to kind of allocate it. AI, over time β and people can argue about the time frames β will make intelligence essentially free."
Despite the challenges, Gates said he still expects the citizens of the future to be largely better off, if they address the risks.
"Absent not solving some of these big problems, things are going to be so much better off," Gates said. "Alzheimer's, obesity, you know, we'll have a cure for HIV, we will have gotten rid of polio, measles, malaria. The pace of innovation is greater today than ever."
While fear can often act as a paralytic, Gates believes it could be a galvanizing force for younger generations.
"They'll actually, to some degree, exaggerate the likelihood and maybe the impact of some of those things in order to activate people to make sure we steer clear of those things," he said.
Musk said he expects Tesla to produce "several thousand" Optimus robots by the end of 2025.
Screengrab from We, Robot livestream
Elon Musk has made a series of predictions across Tesla's most recent earnings calls.
The Tesla CEO has said 2025 could be a "pivotal" year for the company.
BI rounded up what Musk and Tesla execs said will happen, from Optimus production to a refreshed Model X and Model S.
The way Elon Musk sees it, 2025 could be a major year for Tesla.
Musk has said that this year could be "pivotal" for the EV maker, particularly in terms of autonomous artificial intelligence, an umbrella term under which Musk includes Tesla's humanoid robots and fully self-driving cars.
"In fact, I think it probably will be viewed β '25 β as maybe the most important year in Tesla's history," Musk said during Tesla's quarterly earnings call on January 29. "There is no company in the world that is as good in real-world AI as Tesla."
While Tesla saw some choppy waters in 2024 as it reported its first year-over-year sales decline, Tesla's stock price has gotten a boost in the months since Donald Trump's victory in November. Musk's bet on Trump could also help pave the way to federal approval for fully autonomous vehicles, technology that the Tesla CEO has said is key to its future growth.
Musk, a self-described optimist, has told investors he envisions Tesla eventually being worth multiple trillions in market cap.
"I see a path, I'm not saying it's an easy path, but I see a path of Tesla being the most valuable company in the world by far," Musk said during Tesla's Q4 earnings call."Not even close, like maybe several times more than β I mean, there is a path where Tesla is worth more than the next top five companies combined."
As of February 25, the top five most valuable companies in the world by market capitalization were Apple, Nvidia, Microsoft, Amazon, and Google. Tesla sits at number 10 on the list.
So, how does Musk plan on eclipsing his competitors? In short: solving "autonomy."
Thousands of Optimus bots
Tesla unveiled its Optimus humanoid robot in 2021.
VCG/Getty Images
Musk said the company plans on continuing to home in on "autonomous vehicles and autonomous humanoid robots" in 2025, after laying extensive groundwork last year.
The company has said it's moving on to "building the structure" necessary to produce the tech that Musk has identified as critical.
While Musk said that Tesla's internal goals call for the production of around 10,000 Optimus robots, he recently hedged that the goal is likely unrealistic.
"Will we succeed in building 10,000 exactly by the end of December this year? Probably not, but will we succeed in making several thousand? Yes, I think we will," Musk said on the earnings call. "Will those several thousand Optimus robots be doing useful things by the end of the year? Yes, I'm confident they will do useful things."
Musk said Tesla plans on using the bots internally at Tesla factories to tackle "boring, annoying," and sometimes "dangerous" tasks. The Optimus models in use at Tesla will inform tweaks for "production design 2," Musk added, which he expects to launch in 2026.
When Optimus is finally ready to be sold externally, the bots could eventually cost in the neighborhood of $20,000 to $30,000, Musk has previously estimated.
Currently, increasing the volume of production year-over-year is one of Tesla's foremost goals. Musk said the company is aiming to "ramp up Optimus production faster than maybe anything has ever been ramped."
"It doesn't take very many years before we're making 100 million of these things a year if you go up by, let's say, a factor by 5x per year," he said.
The rollout of Tesla's first robotaxi rides
Tesla showed off what its autonomous ride-hailing app could look like at an event last year.
Tesla
On the autonomous vehicle front, Musk said he expects 2025 to see the rollout of fully self-driving cars.
"But I think we'll have unsupervised FSD in almost every market this year, limited simply by regulatory issues, not technical capability," Musk said. "And then unsupervised FSD in the US this year, in many cities, but nationwide next year."
Regulators could prove to be an obstacle: Tesla's supervised FSD program has already drawn scrutiny. The National Highway Traffic Safety Administration launched a probe into the tech in 2024 after reports of four crashes, one resulting in a fatality, in which Tesla's FSD was believed to be engaged.
Still, Musk said that the technology itself is there, with thousands of Teslas already "operating autonomously" with unsupervised FSD at its California factory in Fremont. He added that Teslas will be driving "in the wild" with no oversight in Austin by June, with California to follow by the end of the year.
"So, what I'm saying is this is not some far-off mythical situation," he said. "It's literally five, six months away, five months away kind of thing."
Musk acknowledged that he's been promising FSD for a long time, but said he's now confident that Tesla can deliver.
"Some of these things I've said for quite a long time, and I know people have said, 'Well, Elon, the boy who cried like a wolf like several times,'" Musk said. "But I'm telling you, there's a damn wolf this time and you can drive it. In fact, it could drive you. It's a self-driving wolf."
"We are still on track to launch a more affordable model in the first half of 2025 and will continue to expand our lineup from there," said Vaibhav Taneja, Tesla's CFO, on the company's most recent earnings call.
Last year, Tesla sales dipped amid a wider EV slowdown and increased competition, particularly from Chinese rival BYD, which has a $9,500 EV available for purchase.
Regardless of Musk's bigger-picture focus on autonomy, the majority of Tesla's profits still come from EV sales, which a cheaper model could help boost.
The lower-cost vehicle could be built on Tesla's "next-generation" vehicle platform, with the same manufacturing lines pumping out both Tesla's $25,000 vehicle and upcoming Cybercab robotaxi.
It's less clear what the lower-cost vehicle will look like, as Musk has previously said Tesla doesn't plan to simply launch a version of the Cybercab with a steering wheel and pedals.
A refresh for Model S and Model X
Lars Moravy, Tesla's Vice President of Vehicle Engineering, teased a possible update to Tesla's Model X and Model S, which could come later this year.
"Just give it a minute. We'll get there," Moravy said on an episode of "Ride the Lightning."
Musk has previously said that Tesla continues to manufacture the S and X largely for "sentimental reasons," calling them niche products, but Moravy said the company isn't likely to stop production of the cars "anytime soon."
"The upgrade a couple years ago was bigger than most people thought, you know, in terms of architecture and structure, and the car got a lot better, too," Moravy said. "We'll give it some love, you know, later this year β make sure it gets a little bit, you know, of the stuff we've been putting in 3 and Y."
Moravy didn't provide further detail on what exactly the update would include. The Model S and Model X β which were introduced in 2012 and 2015, respectively β last received a refresh in 2021.
Moravy said the company plans on continuing to focus on the 3 and Y to stay "competitive" and that everyone at Tesla has "a little place in their heart for S and X."
"Definitely there's some nostalgia to them and keeping them around, and we don't have any immediate plans to change that," he added.
Kevin Weil, a former Facebook and Instagram executive, joined OpenAI in June as chief product officer.
Photo by Horacio Villalobos/Corbis via Getty Images
Sam Altman recently told students that the age of "outrunning AI" is over.
They should develop new skills to compete in a changing world, the OpenAI CEO added.
OpenAI's CPO said to ask yourself when doing something new: "Is there a way that AI could help me do this faster?"
Sam Altman says it isn't a matter of if AI is going to outpace humans, but when.
"You will not outrun the AI on raw horsepower," Altman, the CEO of OpenAI, said in a Q&A session alongside OpenAI CPO Kevin Weil earlier this month with students at the University of Tokyo. "That's over. That's probably over this year."
So what's a human to do?
Weil, who has previously worked at Facebook and Instagram, said the sooner students start integrating AI into their daily lives, the better prepared they'll be when it crops up in future professions.
"To me, the lesson in there, the thing to take away now, is just start using these tools," the chief product officer said. "Start incorporating them into the way that you work, into the way that you study. When you're doing something new, ask yourself, 'Is there a way that AI could help me do this faster?'"
Altman said that if students are worried, they should try thinking about it differently.
"I think that the wrong way to think about it is just like β this thing is going to happen, and like it's going to beat us at everything," Altman said. "What will happen is, it'll be like step-by-step evolving together, and what we do will be unimaginable to people that used to have to work without this technology."
Altman told the students that trying to best AI in terms of pure skill is like trying to "outrun the calculator" at arithmetic.
"Are you going to be better at the AI at math, or a better programmer than the AI on its own, or better at physics?" Altman said. "The answer is no, you will not be better at any of those things. And so specific skills β you'll be able to do things with AI that no one could do before, and there'll be new ways to work with it."
In Altman's vision of the future, AI is so far advanced that everyone has access to the equivalent of "the best, most competent company on Earth." To compete in that cutting-edge world, Altman recommended the students in attendance develop new skills to help them leverage AI to their advantage.
"The skills that you need in that world are figuring out what people want, sort of creative vision, quick adaptability, resilience as everything is changing around you, and the sort of learning how to work with these tools to do way more than people could without it," Altman said.
When students do enter the workforce, Weil said they could benefit from keeping AI in mind. The best-positioned companies are those that see the technology as a potential boost, rather than a competitor, he added.
"If you're building something, and you're nervous about our next model release because it might be able to do the thing that you're doing, that's not a good place to be," Weil said. "But if you're building something and you can't wait for our next model release, because you're just at the edge of capabilities and our next model release that'll be that much smarter is going to make your product amazing, that's a good place to be."
OpenAI, which generates revenue from selling access to its AI models to companies as well as consumers, was valued at $157 billion in October β making it one of the world's most valuable startups. The company launched its first AI agent, Operator, to subscribers paying $200 a month for a ChatGPT Pro subscription in January.
Also in January, Altman wrote a blog post predicting the entry of the first "AI agents" into the workforce by 2025, programs that could "materially change the output of companies."
On Sunday, Altman published a new blog post titled "Three Observations," in which he invited readers to think of an AI agent as a "real-but-relatively-junior virtual coworker."
"Now imagine 1,000 of them. Or 1 million of them," Altman wrote. "Now imagine such agents in every field of knowledge work."
Some shoppers who placed orders through DHL Express said they were hit with high fees, sometimes more than the item's retail price, since Trump's China tariffs went into effect.
Sven Hoppe/picture alliance via Getty Images
US shoppers said they've been charged higher shipping-related fees after Trump's new tariffs on Chinese goods went into effect.
Trump ordered the end of the de minimis exemption, long used by Shein and Temu to avoid import fees.
However, a follow-up executive order signed by Trump on Friday paused the ending of the de minimis exemption.
President Donald Trump has pressed pause on an earlier executive order to end an exemption used by Chinese e-commerce giants Shein and Temu to avoid import fees.
Meanwhile, some Americans have received messages from companies and shipping firms this week asking them to pay additional fees in order to receive the packages they had previously ordered. Some of the shoppers shared screenshots of the messages and their purchase receipts with Business Insider.
Melissa Covarrubias, a San Diego-based influencer, paid additional fees of over $100 on an order shipped through DHL. Covarrubias told BI the added costs felt like "another way to squeeze more money from everyday consumers."
Trump initially signed an executive order on Saturday implementing new tariffs on imported goods from China, Canada, and Mexico and ending the de minimis exemption. However, he later announced the tariffs on Canada and Mexico would be paused for a month, and on Friday he issued a temporary pause on the removal of the de minimis exemption through another executive order.
Section 321, also known as de minimis, let importers avoid paying duty and tax on shipments valued under $800 that were headed directly to US customers.
Some US consumers have taken to social media in the last week to share examples of being charged outstanding duty fees on imported orders from China.
Covarrubias said she placed a $305 order with Australia-based athleisure brand Crop Shop Boutique, shipped through DHL's delivery service. She paid the original price for her order and was later asked to pay a "duty payment" of $115.91.
"CSB assured customers that they would either cover 50% of the tariffs or provide a gift card for the same amount," she said. "Since my package had already arrived in San Diego, I paid the full tariff fee of $115.91, and CSB later refunded me half. While I appreciate the partial reimbursement, the high duty fees make me hesitant to shop with them again."
CSB's website says that its suppliers and manufacturers are located in China and Turkey, and that its shipments come from its warehouse in Queensland, Australia. A spokesperson for CSB did not immediately respond to a request for comment.
"DHL Express has a standardized set of fees and handling charges that apply for the customs clearance process," a DHL spokesperson said. "These fees are in addition to government taxes and duties. Charges vary by shipment and are listed on our website. Shipments under the de minimis threshold are exempt."
Cristina Adams, a beauty and lifestyle influencer, said she had a similar experience, waiting on a pair of frames from Lensmart en route from China.
Adams said that after she paid for her order β $14.95 for the frames and an extra $21.95 for expedited shipping β she received an email from DHL requesting a duty payment of $18.82.
"I did end up paying the fee to receive my package, as it was something very necessary," Adams said. "But I feel like this is unfair, as I already paid an additional fee to receive the package already."
Adams said the "inconvenience" would make her more conscious of where she's buying from, going forwards. Covarrubias said she also plans to make changes to her shopping habits.
"Moving forward, I'm going to be more mindful of where I shop," Covarrubias said. "Malls and local stores often carry the same items at higher prices, and I'd rather not pay a premium for something I can get elsewhere for less."
Covarrubias said that, in an effort to cut down on spending, she also canceled her Amazon account.
"At the end of the day, we never truly know what's ahead β whether it's more economic changes, inflation, or new policies that make it even harder for the average person to afford quality products," she said.
Travis Kalanick said pressure from Chinese rivals drove innovation on both sides.
Mike Windle/Getty Images for Vanity Fair
Intense pressure from Chinese rivals drove innovation at Uber, cofounder Travis Kalanick said in a recent podcast episode.
He said competitors copied what Uber rolled out at dizzying speeds.
Eventually, persistent copying led to genuine creativity, he said, allowing China to out-innovate some US companies.
In the mid-2010s, Uber was waging "all-out war" in China.
Travis Kalanick, Uber's cofounder who resigned in 2017, said that the sheer speed at which Uber's Chinese competitors copied the company's innovations was maddening.
And the way he sees it, that relentless approach eventually gave way to true innovation: the kind of ingenuity recently displayed by Chinese AI startup DeepSeek, which sent shockwaves through Silicon Valley.
"There's no way I could express the frenetic intensity of copying that they would do on everything that we would roll out in China," Kalanick said on a recent episode of the "All-In Podcast." "And it was so epically intense that I basically had a massive amount of respect for their ability to copy what we did."
The company was locked in a fierce battle for market share with Chinese rideshare rival Didi Chuxing, to which Uber sold its China operations in 2016. But before Didi absorbed Uber's China business, the pressure to outpace each other created a veritable arms race.
"I just couldn't believe it. We would do real hard work, make it, we'd dial it, and it would be epic and it would be awesome," Kalanick said. "We'd roll it out, and then like two weeks later β boom! They've got it. A week later β boom! They've got it. And, of course, I used that to drive our team."
The competition pushed Kalanick to recruit "over 400 Chinese nationals," which he said created a sort of subculture within Uber's Silicon Valley office.
"We had a whole floor for the China growth team and it was primarily Chinese Nationals," Kalanick said. "We had billboards on the 101 in Silicon Valley in Chinese β Uber billboards to join our team in Chinese β to serve the homeland, right? It was like an all-out war. It was really epic."
Eventually, Kalanick said, that furious pace of development, and the habit of working at that speed, gave way to innovation.
"What happens is, when you get really, really good at copying, and that time gets tighter, and tighter, and tighter, and tighter, and tighter, you eventually run out of things to copy," Kalanick said. "And then it flips to creativity and innovation."
In some cases, Kalanick said, Chinese companies are now getting ahead of their American counterparts.
"But as they exercise that muscle, it gets better, and better, and better," Kalanick said. "So, if you want to know about the future of food, like online food delivery, you don't go to New York City β you go to Shanghai."
In part, Chinese companies are well-positioned to take advantage of circumstances that are unique to their country, Kalanick said.
"You know, they're taking advantage of their economics on labor and things like this," Kalanick said. "It wouldn't exactly work that way here, but a lot of the innovation you will see coming out on Uber Eats or DoorDash β like the stuff that's coming out now β is stuff that existed three years ago, four years ago in China. Maybe longer."
CloudKitchens and Didi Chuxing did not immediately respond to requests for comment.
At some point along the way, Chinese businesses went from lagging behind to setting the bar on innovative features, Kalanick said.
"Eventually you cross that threshold of copying, and you're innovating, and then you're leading," Kalanick said. "And I think we see that in a whole bunch of different places."
Morris Chang, the founder of TSMC, said he essentially "bet" the company's future on a deal with Apple to manufacture iPhone chips. Today, Apple is one of the chipmaker's biggest customers.
Thomson Reuters
TSMC's founder spoke about how the company secured Apple's business in a rare podcast interview.
Morris Chang said it was TSMC's reputation with customers that helped it beat Intel for an iPhone contract.
The founder said he "bet" the company on the Apple deal, but felt it was a winning wager.
The chip business is fiercely competitive. So what's it like to negotiate a contract with Apple, one of the biggest customers in town?
Morris Chang, the 93-year-old founder of chipmaking giant TSMC, recently gave a rare interview where he gave a behind-the-scenes look at how he navigated a landmark deal to manufacture iPhone chips.
It began with an unexpected dinner, lots of listening, and a compromise of sorts.
In the early days of the smartphone revolution, Chang said TSMC was positioned in exactly the right place at the right time.
"I quoted Shakespeare in my autobiography," Chang said on a recent episode of "Acquired." "That there's a tide in the affairs of men which, taken at its flood, leads on to fortune. I decided that this was β 28 nanometers β was going to be our tide."
The release of TSMC's 28-nanometer chips in 2011 came at the perfect moment, Chang said, with the company poised to take advantage of smartphone developers' growing need for semiconductor technology.
However, while Chang said he had thought for years about how to land Apple as a customer, the iPhone maker had its own preferred way of doing business.
"Apple is a very close-mouthed company," Chang said. "If you try to talk to them, if you offer, you know, your service, they will just tell you to go away. They will come to see you when they are ready. That's what I knew about Apple, even then. And I know the same thing now."
Around that time, an opportunity presented itself in the form of an unexpected dinner with Jeff Williams, Apple's operations boss, at Chang's own home. Chang said Williams got right to laying out the framework for a potential deal for TSMC to make iPhone chips.
"Foundry our wafers, something like that. Pretty straightforward," Chang recalled Williams telling him. "I listened. That night, I think Jeff talked maybe 80% and I talked 20%."
However, there was a twist: Apple wanted TSMC to produce a 20-nanometer chip, which Chang saw as a step back from the 16-nanometer chip he viewed as the natural next step in TSMC's foundry development.
"Now, that was a surprise to me," Chang said. "And, frankly, it was also a disappointment, because the progression after 28 was going to be 16."
"A half step is a detour," the founder added.
Williams offered TSMC a 40% gross margin, Chang said. The TSMC founder said he believed Apple was trying to be generous with the offer, but TSMC had already achieved 45% gross margins.
Chang said he kept quiet about that element of the proposal, deciding the dinner was not the right time to open up a negotiation over pricing.
Ultimately, Chang said TSMC decided to produce only half of the supply of chips Apple was asking for. After Chang relayed that offer, he said, Williams called to put negotiations on pause while Apple entered into talks with Intel, which at the time made chips for Apple's Mac computers.
Still, Chang said he wasn't "all that worried" because he felt TSMC better fit the criteria that Apple was searching for in a supplier.
"Technology β at that time we thought we were almost at par with Intel," Chang said. "Almost. In fact, I thought we were at par with Intel. Manufacturing, I thought we were better than Intel. And customer trust, we thought that our customers trusted us more than Intel's customers trusted Intel. "
Eventually, Chang said, he heard some good news from Tim Cook himself while the two were getting lunch at Apple's cafeteria and carrying their food trays up to Cook's office.
"He told me there's nothing to worry about because Intel just does not know how to be a foundry," Chang said. "That's a very short but very satisfactory answer to me."
Chang said that he believes it was TSMC's reputation for addressing customers' every concern that gave it a leg up.
"I think Tim meant that the customer asks a lot of things," Chang said. "We have learned to respond to every request. Some of them were crazy, some of them were irrational β but we responded to each request courteously. Which we do."
"I knew a lot of Intel's customer customers in Taiwan, you know, all the PC makers are Intel's customers β none of them liked Intel," he added. "Intel always acted like they were the only guy."
Intel and Apple did not immediately respond to Business Insider's request for comment prior to publication. A spokesperson for TSMC declined to comment.
Chang's gamble, which saw TSMC borrow billions to invest in 20-nanometer foundries, ultimately paid off, he said.
"I bet the company, but I didn't think I would lose," Chang said.
After a final meeting with Williams sealed the deal, Chang said he told the Apple executive that they should celebrate at a three-star restaurant.
"Jeff jokingly said, 'If you didn't like the pricing we probably would be going to a McDonald's,'" Chang said.
Fast forward to today, and Apple is one of TSMC's biggest customers, with the chipmaker manufacturing many of the tech giant's custom-designed chips for the iPhone, Mac, and iPad.
Last year, TSMC started producing custom A16 chips for Apple at one of its Phoenix factories, two years after Apple CEO Tim Cook said the company would begin using US-made chips for the first time in almost a decade.
Deutsche Bank's Christian Sewing is the latest CEO to defend DEI initiatives at his company.
Ralph Orlowski/REUTERS
Several firms have rolled back DEI efforts amid pressure from conservative groups and the White House.
Some CEOs have voiced their support or defended the diversity programs at their companies.
Deutsche Bank's CEO is the latest bank executive to defend DEI initiatives.
The list of CEOs who are publicly backing their companies' DEI policies is growing.
Deutsche Bank CEO Christian Sewing is the latest, joining JPMorgan's Jamie Dimon and Goldman Sachs' David Solomon in publicly defending DEI programs amid wider external criticism of diversity initiatives from conservative activists and President Donald Trump's new administration.
One of Trump's first executive orders placed federal DEI staffers on administrative leave as work began to dismantle their departments.
The pullback on diversity, equity, and inclusion in the private sector began before Trump took office. A slew of companies, including Meta, Walmart, and McDonald's, either reduced or ended their own DEI initiatives. Some had been targeted by conservative activist groups.
However, amid the tensions around DEI, some executives are taking a public stance in support of their firms' policies.
Deutsche Bank
During a press conference Thursday, Deutsche Bank CEO Christian Sewing expressed support for his bank's DEI programs.
Thomas Lohnes/Getty Images
In a Frankfurt press conference on Thursday, Sewing said his company was "firmly behind" its DEI programs, calling them "integral" to its strategy.
"Quite honestly, I know what diversity has brought us on the management board at the top reporting level," Sewing said. "That's why we are strong supporters of these programs."
If the legality of DEI programs should ever change, the bank might reevaluate its stance, he added.
"But in terms of our basic attitude, in terms of our mindset, both issues β whether it's diversity policy, inclusion, or sustainability β are an integral part of Deutsche Bank's strategy," he said.
JPMorgan
JPMorgan CEO Jamie Dimon said to "bring them on" in response to apparent targeting by activist shareholders.
"Bring them on," he told CNBC on January 22. "We are going to continue to reach out to the Black community, the Hispanic community, the LGBT community, the veterans community."
Goldman Sachs
David Solomon, the CEO of Goldman Sachs, said clients think about talent diversity.
Patrick T. Fallon/AFP via Getty Images
Solomon said that while he'd heard of shareholder proposals, he hadn't yet reviewed them.
"We're advising our clients. They think about these things," Solomon said in a separate interview with CNBC on January 22. "They think about decarbonization, they think about climate transition. They think about their businesses, how they find talent, the diversity of the talent they find all over the world."
Goldman Sachs' stated inclusion goals are geared toward funneling more women into leadership positions, making "progress towards racial equity," and ensuring diversity both among its vendors and in its boardroom.
Cisco
Cisco CEO Chuck Robbins said a diverse workforce is "better."
Richard Drew/AP
In an interview with Axios on January 22, Chuck Robbins, the CEO of Cisco, said it's inarguable that a diverse workforce is better.
"I think the pendulum swings a little wide in both directions. And for us, it's about finding the equilibrium," Robbins said, adding: "You cannot argue with the fact that a diverse workforce is better."
Robbins added that DEI was being discussed as a "single issue" but that he believes it's far more complex.
"And in reality, it's made up of 150 different things, and maybe seven of them got a little out of hand," he said. "I think those six or seven things are going to get solved, and then you're going to be left with common sense."
Costco
Costco CEO Ron Vachris received a letter from Republican attorneys general urging him to end the company's DEI practices.
Costco
Costco has been clear about its support for DEI, even as it faces mounting pressure from conservative groups to walk back its policies.
Nearly all of Costco's shareholders rejected a proposal last week by the National Center for Public Policy Research that was similar to the one received by JPMorgan. It would have required Costco to issue a report on the legal and financial risks of DEI policies.
"The overwhelming support of our shareholders' vote really puts an answer to that question," Costco CEO Ron Vachris said.
Costco's board has also previously issued statements reaffirming the company's dedication to DEI.
"Our commitment to an enterprise rooted in respect and inclusion is appropriate and necessary," the board wrote in December.
The company continues to face scrutiny for its policies, as 19 Republican attorneys general sent a letter to Vachris urging him to end what they called "divisive and discriminatory DEI practices."