
Tariffs are coming for Gen Z

Illustration of Gen Z people in a cage surrounded by consumer goods like Nike shoes, AirPods, boba tea, a computer mouse, pills, a game console, and a phone

Hugo Herrera for BI

If you're an American who buys things or sells things, you're going to take a financial hit from President Donald Trump's tariffs. The science of economics might be dismal, but it's good enough to tell you that when the government increases the price of imported goods by anywhere from 10% to 145%, someone has to pay — be they importer, buyer, manufacturer, seller, or consumer. That's just the way the Great Material Continuum works.

Now, a basic Gen Xer like me — and believe me, we're pretty basic — has seen versions of this before. We've lived through two dot-com busts, the Great Recession, Black Monday, and the economic skadoosh of COVID. Line goes up, line goes down. But younger Americans just now maturing into solid consumers weren't born in this darkness. So to the newly minted adulters of Gen Z, I say: Welcome! This is going to suck.

And it's going to suck for Gen Z in particular.

By dint of being new to the workforce, Gen Zers typically earn less than other age groups. And economic shocks, as a rule, disproportionately hurt poorer people. But in this case, tariffs won't just raise prices on stuff we all rely on. They'll also increase the cost of a lot of products that appeal specifically to Gen Z. It's hard to say by how much. A team of economists from the Federal Reserve Bank of Atlanta calculates that even moderate tariffs — 10% on China, 25% on Canada and Mexico — could raise the price of everyday essentials by 1.63%. Some manufacturers and retailers may find ways to absorb the added costs. And Trump keeps monkeying with the levels and the timing: On Monday he temporarily reduced his China tariffs to 30%. But in the worst-case scenario, industry analysts I spoke with say the president's tariffs could wipe out some Gen Z products entirely. They'll just cease to exist.

So, at the risk of stereotyping Americans between the ages of 13 and 28, here's a rough accounting of what's likely to happen to four things Gen Z buys a lot of.

(1) Beauty products

For years, beauty goods have been reliably recession-proof — industry watchers call it "the lipstick effect." Beauty spending beat everything else during the Great Recession, and again after 2020. But this time? Not so much.

Influencer-driven makeup purchases are a big proportion of Gen Z spending, and the import market is huge.

Shoshy Ciment/Business Insider

Gen Z spends a lot of money on cosmetics — about $2,000 a year on average, according to one survey. Only millennials reported spending more, but then again they also earn nearly twice as much. That might explain why the Gen Zers who responded to the survey were also much more likely to report regretting how much they'd spent on beauty products.

High on the list of favorites: unguents and potions from countries with famously intense skin- and hair-care-regime cultures depicted by influencers on social media — like South Korea and Japan. The United States is South Korea's second-largest market for cosmetics. And a lot of high-end American brands do their manufacturing there — meaning those products will also be subject to tariffs.

What's more, many cosmetic companies buy almost all their packaging from Asia. And entire categories of products — sheet masks, eye patches, pimple patches — come almost exclusively from Asian factories, no matter the company or the price point.

Now, big transnational conglomerates can often afford to absorb some new costs while sorting out their margins and their supply chains, instead of passing the increases on to customers. But niche products — especially those produced by small, independent outfits — will suffer more. "Some products and brands will have no choice but to raise prices, at least in the US market," says Kelly Kovack, the CEO of BeautyMatter. "Brands are investing in their hero products, so I also expect to see a lot of out-of-stock situations. We will see fewer gift sets this holiday — gift sets are low-margin in the best of times."

This doesn't necessarily mean we've lost the forever war on zits and wrinkles. But like all wars, this one depends on logistics. "For 'can't live without' products, people will most likely remain loyal and continue to use them," Kovack says. "But they may cut something else out of their regime, or use them more judiciously to make the product last longer." And if half of a product's customers suddenly switch to buying it only half as often, it might not survive. When it comes to Gen Z's favorite beauty products, things are going to get ugly.

(2) Tech gadgets

A few tech titans had the foresight to buy front-row seats to the inauguration of Donald Trump. They've also, perhaps not coincidentally, been granted some exceptions to tariffs on the smartphones and laptops they manufacture, at least for now.

Chinese factories, says one analyst, produce a lot of the gadgets Gen Z "considers essential to their daily lives."

CFOTO/Future Publishing via Getty Images

But they're not the only tech companies that depend on international supply chains. And the companies that make the dizzying array of cheap options you see when you scroll down on Amazon can't just raise their prices. If they do, well, they won't be cheap anymore. Expect them to just disappear.

"It reduces the options and selection people have within a price range," says Rick Kowalski, the senior director of business intelligence at the Consumer Technology Association. "And it changes the dynamic of how people think about replacement cycles." Translation: Maybe people keep their old earbuds instead of springing for the new model. That drives down sales and puts yet more pressure on manufacturers to either raise prices or cut back on production.

Think headphones, Bluetooth speakers, soundbars, even e-bikes. What used to be a casual purchase won't be so casual anymore. "Gen Z is a generation that considers technology essential to their daily lives," Kowalski says. "They identify with it and see it as a means of self-expression. They don't have as much to spend on it, but they're passionate about it, and they will be affected disproportionately."

(3) Boba tea

Gen Zers eat in restaurants less than other demographic groups, and they're more price-conscious. They're also, on average, more ethnically diverse, more likely to buy snacks and treats, and more likely to seek out new foods to try. Add all that up and you get a craze for boba tea: Asian-originated flavors mixed with tiny, chewy balls of tapioca starch, slurped through oversize straws. It's fun!

Most of the little chewy balls in boba tea are made of tapioca starch — and imported from Taiwan.

Irene Jiang / Business Insider

It's also incredibly popular. In 2000, America imported 1.2 million pounds of tapioca from Taiwan. Today we import 34 million pounds (and another 2.9 million from China). And unless something changes, imports from Taiwan are set to be tariffed at 32% starting in July.

Huge chains like Starbucks sell boba tea — as do big Asia-based retailers and thousands of little mom-and-pop places. The big chains will have an easier time finding new supplies for starch, flavored syrups, and teas. Or they'll just slurp up the added cost. But the smaller the shop, the more likely its owners will have to raise prices, or simply eighty-six high-tariff items from the menu. If that includes your usual medium sweet wintermelon hojicha milk tea, you're out of luck.

(4) Meds for anxiety and depression

For years, young adults have been reporting higher levels of mental health concerns. Between 2005 and 2020, the percentage of 18- to 25-year-olds reporting at least one major depressive episode in the prior year doubled. The National Center for Health Statistics reports that somewhere around 15% of Gen Zers suffer from depression; teenage use of antidepressant medications has gone up 38% since 2015. Diagnoses of ADHD have also been on the rise among Gen Z.

Many medications used to treat mental illness will be hit by tariffs — driving up both prices and premiums.

Tetra Images/Getty Images

Meds aren't like face masks or cool earbuds or boba tea. They're essential for the health and well-being of millions of young Americans. And a lot of them are likely to be subject to tariffs. More than 60% of the world's generic pills, which already run on thin margins, are made in India. And even drugs made in the US contain ingredients sourced from overseas.

There's some good news here, for which you can thank the Drug Enforcement Administration. "ADHD drugs are Schedule II controlled substances, and the DEA requires that the active ingredient and the finished product are made in the US," says Marta Wosińska, an economist at the Brookings Institution's Center on Health Policy. "Tariffs might squeeze margins, but I don't expect to see manufacturers walking away."

But now for the bad news: Antidepressants and anti-anxiety drugs are not subject to the same restrictions. So depending on where they come from and who makes them, the cost of getting them to the US is likely to go up, especially if they come from Europe or India. Wosińska says Americans who use name-brand drugs will probably be hit with higher prices — as will those whose insurance plans have high deductibles that don't cover the cost of generic drugs. But the most likely outcome of tariffs on drugs, she says, is that they'll drive up premiums. So one way or another, Gen Zers will likely wind up paying more for their mental health.

Taken together, Trump's tariffs are going to land hardest on those who can least afford them. And that doesn't even take into account the president's elimination of the "de minimis" exception, which took effect May 2. Direct-to-consumer retailers like Shein and Temu weren't the only companies that depended on the exception, which effectively made it free to ship cheap stuff from China. In 2023, that duty-free rule applied to 7.3% of all the consumer goods imported by the US. Because Gen Zers have less income than older generations, they're going to wind up paying a steeper price for all the budget-friendly stuff they've come to depend on. From moisturizers to mental illness, the bottom line on Trump's tariffs is clear: The younger you are, the more painful this is going to be.


Adam Rogers is a senior correspondent at Business Insider.

Read the original article on Business Insider

Marc Andreessen thinks AI can do every job in the world — except his

Marc Vs. Bot.

AP Photo/Eric Risberg; Getty Images; Chelsea Jia Feng/BI

Marc Andreessen is, arguably, the most famous venture capitalist on earth. Cofounder of the legendary VC firm Andreessen Horowitz, inventor of the first popular web browser, and by reputation such a widely read intellectual egghead that his colleagues call him "MarcGPT." And as befits his nickname, Andreessen is a big believer in a future powered by artificial intelligence. His firm — "a16z" to Silicon Valley sophisticates — has invested in Elon Musk's xAI and Sam Altman's OpenAI. Andreessen has called AI "our alchemy, our Philosopher's Stone," and "a universal problem solver" that "ramps up the capabilities of our machines and ourselves."

But for Andreessen, there is one job that AI will never do as well as a living, breathing human being: his.

Think I'm kidding? On an a16z podcast last week, Andreessen opined that being a venture capitalist may be a profession that is "quite literally timeless." "When the AIs are doing everything else," he continued, "that may be one of the last remaining fields that people are still doing."

Here's the logic. Andreessen starts by talking about all the things that people thought might disrupt the way VCs operate — like the Craigslist-style approach of AngelList, or crowdfunding. "The other form of structural change, of course, is AI," Andreessen says. Then he issues a challenge to the AI crowd: "All right, smart guys. You're sitting around doing all this analysis, and you have all these smart people doing all this modeling and all this research and so forth. Why can't you just plug this into Claude or ChatGPT or Gemini and have it tell you what to invest in?"

The reason, Andreessen explains, is that it takes a VC like him to know how to pick a winner. He throws out a bunch of examples, going all the way back to the whaling industry 500 years ago: book publishers, movie studio executives, talent scouts at music labels. (I'll spare you the details here, but I spoke with an economist who has analyzed the whaling industry, and he says MarcGPT is pretty much wrong on every count.) Andreessen insists that these are key jobs that spring up "any time you have a part of the economy in which you have an entrepreneur going on a high-risk, high-return endeavor where it is far from clear what is going to work, and there are many more aspirants than there is money to fund them."

Here, Andreessen argues, is where the human element is irreplaceable. "You're not just funding them," he says. "You have to actually work with them to execute the entire project. That's art. That's not science. That's art. We would like it to be science, but, like, it's art."

Now correct me if I'm wrong, but it seems like a lot of AI folks have been trying to tell us that AI can make art. Last year, even Andreessen said that AI had enough of a sense of humor to "save comedy." But apparently it can't do his art.

Which brings us to the wildest bit. Andreessen says he knows that venture investing is an ineffable, intuitive, intrinsically human skill precisely because venture capitalists are very bad at it. "The great VCs have a success rate of getting, I don't know, two out of 10 of the great companies of the decade, right?" he says. "If it was science, you could eventually have somebody who just dials in and gets eight out of 10. But in the real world, it's not like that. You're in the fluke business, and there's an intangibility to it. There's a taste aspect."

Even accepting Andreessen's premise — that VCs contribute better advice than AI on how to run a business — it looks like he's wrong about whether he's replaceable. In a recent survey by the enterprise software company SAP, 75% of C-level executives at billion-dollar companies said AI already gives better business advice than their friends and colleagues. And 38% said they trust AI to make business decisions. The kind of people Andreessen counts on for his livelihood are already starting to think he's obsolete.

But Andreessen's portrait of how venture capital works doesn't actually accord with reality. VC investors say they're looking for disruptive innovation. In practice, they have operated pretty much like any good-old-boy network, consistently funding way more white men than women or people of color β€” often the same white men they knew from previous startups, whether they succeeded or not. And economists say it's an open question whether VCs actually add value or are just the most basic kind of "pickers," identifying companies that would have been successful without them. That's the kind of operation that large language models are pretty good at. Identifying patterns in big sets of data is, like, their whole thing.

I actually kind of agree with Andreessen. I'm skeptical that any form of AI — much less the generative, chatbot-type products of OpenAI or Google — will ever be able to do high-level critical and creative thinking as well as a human. But the reality is, the quality of the AI's work might not matter. History is essentially one big graveyard of artisanal jobs that wound up being automated, even though the automation produced an objectively inferior product. Andreessen, like so many of us, wants to think he's special — that no machine can ever do what he does. But he can't have it both ways. If he's right that artificial intelligence can't perform the kinds of skills his job requires, then he's wrong to be investing in companies that promise it can.

In any case, when Andreessen says "AI can't do my job," the job he's describing isn't a venture capitalist. It's not even a run-of-the-mill investor, who buys stock in a company. The core function that Andreessen is lionizing is being a gatekeeper — the power over who gets to join the club of influence.

An AI that trained on every decision every venture capitalist ever made and tried to make them line up with various definitions of success might go on to choose very different kinds of companies. In a world of vcAI, startups might finally get backed on their merit, instead of on how much their founders look and talk like Andreessen. Who knows? An AI VC's loony, black-box heuristics might favor things like whether an idea is "good for humanity" or "promotes class mobility." What if it started to hallucinate something about "redistributing wealth"?

On this level, at least, Andreessen is probably right. No machine could ever replace him.



Honk if you hate Elon: how protesters are jumping on Tesla's biggest weakness

No big car lot gets between angry protesters and a New York City Tesla outlet in early March.

Leonardo Munoz / AFP

Americans angry about Elon Musk's unprecedented cuts to government services are voicing their displeasure at Tesla showrooms, and Musk isn't happy about it. "Who is funding and organizing all these paid protests?" he recently groused on X, referring to the ongoing wave of Tesla Takedown demonstrations taking place across the country.

But the better question for Musk might be: "Who built my showrooms in a way that made them such ideal targets for demonstrations?" Because the answer is: Elon Musk.

Car dealerships tend to be relegated to the outskirts of big cities, but most of Tesla's 276 showrooms in the United States are located smack in the middle of bustling neighborhoods full of wealthy progressives. That puts them right next to popular stores and busy restaurants, increasing the brand's visibility and foot traffic. It's ideal if you want to sell a status-symbol electric car — but maybe not so ideal when people are up in arms about your full-tilt, questionably legal operation to gut federal services to millions of Americans.

Musk put his showrooms in tony blue neighborhoods for two good reasons. First, he needed a way to get around state laws that bar carmakers from selling directly to consumers. So Musk turned Tesla's lack of in-person sales into a selling point. The cars at a Tesla "gallery" aren't there for you to buy. Oh my, no! The grubby exchange of money happens online. That means that unlike other car dealerships, Musk doesn't need to park a fleet of unsold Hyundais along some six-lane highway on the far fringes of town. "Our stores," Musk boasted in 2012, "are designed to be informative and interactive in a delightful way and are simply unlike the traditional dealership with several hundred cars in inventory that a commissioned salesperson is tasked with selling."

Second, Teslas are designed for affluent, progressive early adopters, not the F-150 crowd. So it makes sense to locate the showrooms where the customers are. "We are deliberately positioning our store and gallery locations in high foot traffic, high visibility retail venues, like malls and shopping streets that people regularly visit in a relatively open-minded buying mood," Musk wrote.

I asked the American Communities Project, which maintains a county-by-county map of the United States that breaks out demographic characteristics, to sync its data with the locations of all 276 Tesla showrooms. Sure enough, more than half are in what the ACP calls "big cities" or "urban suburbs." Likewise, overlaying Tesla showroom locations onto neighborhood data (courtesy of the National Zoning Atlas) shows that they're predominantly in census tracts designated as "inner suburbs." Those tracts are fewer than a third of all neighborhoods, but they're home to more than half of Tesla's showrooms.

In short, Tesla put itself in places where people are better educated, higher-income — and more likely to vote Democratic. Which means that Tesla's clever showrooms have made the company vulnerable to protests by the very people the showrooms were built to attract.

"Just when they basically won, it seems like they're finding a way to lose now," says Dan Crane, a law professor at the University of Michigan who is the author of the forthcoming book "Direct Hit: How Tesla Went Straight to Consumers and Smashed the Car Dealers' Monopoly." Sales are down, Cybertrucks are being set on fire, and Tesla's stock price has plummeted by more than 30% this year. "Their retail strategy made them sitting ducks," Crane says.


People have protested car dealerships before. In the early 2000s, ecological activists actually blew up Hummers at dealerships on the West Coast. But Tesla showrooms are qualitatively different from those of its rivals. "They are actually in places where people congregate," says Dana Fisher, a sociologist at American University who is the author of "American Resistance."

Tesla built its stores to attract progressive urbanites — exactly the people who are now protesting Musk and Tesla

That's important for protest strategy, because it means Tesla showrooms are located near public spaces like sidewalks, where it's legal to stage a demonstration. Nobody has to trespass on a car lot. And a Tesla store in an outdoor mall or a bustling shopping street puts protesters right in the faces of potential Tesla buyers. "The goal here is shaming consumers about their purchasing decisions," Fisher says. "To protest a brand, it's great to be able to go to a dealership."

It wouldn't make sense to protest at one of Donald Trump's hotels or golf courses — they're heavily guarded, they're too far away from everything, and the wealthy people patronizing them have already picked a side. But if you want to put pressure on Elon Musk's stock portfolio, the addresses of 276 possible protest locations are right there on the Tesla website. "Tesla facilities are basically the most common, well-known, and visible symbols of Elon Musk, and Elon Musk is the most well-known, visible symbol of the cruelty, inhumanity, and incompetence of this administration," says Patrice Kopistansky, a retired government lawyer who has helped organize Tesla protests in Virginia.

The locations help, Kopistansky tells me. The Tesla showroom in Tysons Corner is surrounded by other high-end car dealerships, but those operations are set way back from the sidewalk, amid lots full of unsold cars. Tesla's building is close to the street, which makes it easy to picket. "I don't know why they built it like that," Kopistansky says. "They've probably come to regret it."

And as a bonus? When Tesla drivers stop at the traffic light nearby, protesters can offer them bumper stickers printed for the occasion: "Sorry I bought a Tesla!"



The secret of business success

Pile of money.

Pablo Delcan for BI

What makes a successful business successful? Every management consultant and startup founder and financial analyst can tell you, and they'll all tell you something different. Service leadership! Culture of innovation! Diverse workforce, sound business fundamentals, the quality of the snacks in the break room — who knows?

Now, in the biggest undertaking of its kind, a bunch of economists have compiled a comprehensive database of the origins and fates of 50 million American companies. Which ones, they wanted to know, became the largest employers in their industries? Which ones succeeded?

The team's leader, John Haltiwanger, is an economist at the University of Maryland who studies "dynamism." That means he seeks to understand changes over time — why some things surge while other things flame out. He and his team looked at the lifespan of American companies founded from 1981 to 2022. They examined an impressive range of factors: owner demographics, management structure, startup financing, profitability, even the aspirations of the founders. It's nothing less than a complete accounting of what turns a business into a behemoth — "the best database in town," as Haltiwanger puts it.

So, O great and powerful database, what makes the numbers go up and to the right? What makes a company successful?

The answer — you will be shocked to hear — is money.

The strongest correlation between business success and the factors Haltiwanger analyzed is how much financing a company is able to raise before it launches. Starting with $1 million boosts the probability of success by a whopping 25 percentage points. It's like the old Steve Martin joke: Here's how to become a millionaire: First, get a million dollars.

But Haltiwanger found that it also matters where the money comes from. If you self-finance with credit cards, your chances of success actually decrease by 2 points. If you get a loan from a bank, your chances improve by 9 points — but that's been harder and harder to do over the past couple of decades. So your best bet is venture capital: VC investment increases your chance of success by 5 points.

Just as Silicon Valley is always boasting, venture-backed startups really have been the most economically dynamic and productive companies in America. They have the most innovation, the most patents, the biggest R&D budgets. And they have often grown to have the most employees β€” the metric that Haltiwanger's team used to indicate success.

And therein lies a problem: Almost no one gets venture capital. Of the 1.5 million companies that launch every year, only a few thousand are blessed with VC investment. And the best way to get venture capital, Haltiwanger found, is to be a young, white man.

Now, Haltiwanger isn't the first to discover venture capital's built-in bias. As I've written, VCs are mostly white and mostly male, and they tend to give money to people they know and like, who turn out to also be mostly white and male. Last year, four out of every five venture deals went to an all-male founder team.

But Haltiwanger's study confirms the pattern. Women and nonwhite owners, he found, are less likely to have outside investors — and young founders are more likely to have them. The secret to succeeding in business, the data shows, essentially boils down to: Be a tech bro who gets money from other tech bros.

"There's so much that has to go right for you to be successful," says Florian Ederer, an economist at Boston University who studies startups. "That's always going to privilege people with better networks and better initial starting conditions."


I know! Not great. The data confirms the lived experience of millions of entrepreneurs: The richer you start off, the richer you're likely to get.

But Haltiwanger hopes to use his database to answer a question even deeper than why some companies succeed. That is, why more and more companies don't. Haltiwanger's data shows that the legendary energy of Silicon Valley startups — the origin story of a couple of geniuses back in the 1980s building something in a garage that metastasized into an Apple or a Microsoft or a Google — well, that just isn't happening much anymore. The VC money is still there; the dynamism ain't.

Young, fast companies used to be major sources of employment. In 1981, 15% of working Americans were employed at companies four years old or younger. In 2022, Haltiwanger's team found, it was down to only 9%. And those companies aren't growing as fast as they used to. In 1999, the most dynamic companies outstripped the median rate of growth by 30%. By 2012, they were expanding at pretty much the same rate as other companies.

If the tech sector were as dynamic as it was back in the 1990s, when a cohort of startups grew into Big Tech, it would be sending a constant stream of new challengers onto the field. But that hasn't happened. "Have we seen a remarkable cohort like that in a while?" Haltiwanger says. "The answer is no — and we don't know why." Figuring that out, he adds, will take some more number crunching. But he has some theories.

Theory No. 1: Maybe people are starting different kinds of small businesses nowadays — not high-tech firms, but things like restaurants and pool-cleaning services and yoga studios. Those businesses are more likely than tech firms to be owned by women and people of color. From 2002 to 2021, Haltiwanger found, the share of young companies run by women rose from 10% to 18%, while those run by people of color jumped from 10% to 27%. But those owners almost never get venture funding, and they're more likely to self-finance with credit cards. So they're less likely to get big, the way tech companies do.

Theory No. 2: Now that a handful of companies like Google and Meta dominate the tech landscape, maybe the kind of people who might otherwise have been hard-charging founders are instead getting high-paying, low-stress gigs in Big Tech. After all, the older, slower-growing companies are where the jobs are. Small businesses got less dynamic, in other words, because a few big companies now employ all the aspiring dynamos.

Theory No. 3: Big Tech companies aren't just employing all the talent — they're also buying up all the most promising startups. In the late 1980s and early 1990s, innovative startups were more likely to go public than get purchased; by 2001, the reverse was true. In 2019, there were only 100 IPOs — compared with 900 acquisitions. Most of the startups were bought by the half-dozen Big Tech companies you'd expect. The newbies didn't get big. They got eaten.

Why aren't startups growing as fast as they used to?

The question Haltiwanger is asking — why young companies aren't growing as fast as they used to — is an important one. Before 2000, when businesses were able to get bigger, America's aggregate productivity growth was a bit more than 2%. Since then, it's more like 1%. Less dynamism acts as a brake on the economy.

Now, it's possible that all those little startups swallowed by the bigger companies are still creating intellectual property and jobs and new products, goosing the economy in ways the numbers have missed. "The evidence is not definitive yet. That's something we want to go investigate," Haltiwanger says. "But if innovation was proceeding in the same way as before — startups were contributing as much as they did before, just in a different way — then why is productivity growth so low? Something has changed."


Which brings us to Theory No. 4. Maybe, Haltiwanger thinks, the lull in business growth is a good thing. Maybe, just maybe, it's the calm before the innovative storm.

The conventional story, as told by Silicon Valley, is that tech startups got big thanks to the bold, risk-taking vision of the venture capitalists backing them. Haltiwanger thinks it's more complicated than that. For one thing, startup-driven productivity and tech innovation happened long before the invention of modern venture capital. And for another, periods of innovation are usually preceded by a noticeable lag in growth. Go back and look at industries that boomed in the past century — chemicals, cars, robotics — and you see that there's a period of dormancy before the new tech gets implemented at scale. Startups quietly work out the kinks in their crazy ideas, bursting forth like cicadas when the tech is ready.

If that's true, Haltiwanger theorizes, maybe the current slump in business growth is a signal of a boom to come. And maybe this time around, the new tech that's about to explode on the scene is artificial intelligence.

"We are clearly seeing a surge in startups in the last few years," Haltiwanger says. "We see in the data that it is closely tied to AI. The really hard question is: Is this a new platform, a pathbreaking change in the way we do business and the way we work and how we live? Or is it not going to have the same kick as IT?" That's what Haltiwanger is looking to answer with his monster database. The productivity slowdown of the past 10 years might just be the shakedown period before an explosion of nifty new AI stuff, and we'll experience another period of dynamism.

Of course, explosions also cause a lot of damage. Cheaper and lighter AI systems from China like DeepSeek could nullify the capital-intensive machinations of wannabe incumbents like OpenAI. Or AI could eliminate millions of jobs, sparking all sorts of economic upheaval. Or the most innovative AI startups could get consumed by the Microsofts and Googles before they're able to grow into tech giants of their own. The economy might get more dynamic with the rise of AI. But if the new technology moves fast and breaks things, as so many of its predecessors have, will that still count as success?


Adam Rogers is a senior correspondent at Business Insider.

Read the original article on Business Insider

I helped my mom set up her phone. It could have gone better.

An elderly woman with different kinds of apps floating around her head.

Alex Castro for BI

My stepfather died a few months ago, after a long decade dealing with Parkinson's disease. Doug was living in a care facility, and even though my mom visited all the time, for overall contact with the world — and just general fun — he relied on his phone and his iPad. But Parkinson's is a neuromuscular disorder; eventually, Doug's hands and fingers couldn't reliably navigate a touchscreen or a keyboard. As his dementia worsened, he couldn't really figure out how to buy stuff online anymore, much less how to manage a healthcare or banking website.

He started buying sketchy apps — probably without even realizing it — and sending disjointed, sometimes inappropriate emails and texts. We got worried he was going to do something bad with his bank accounts, or fall for a phishing attack. Apple's operating system has a tool that lets you limit which apps a user can access; my mom pared Doug's down to nearly nothing.

The thing is, Doug had been an IT guy. He loved computers. At work, he installed enterprise servers and helped maintain a citywide network. In his spare time he tinkered on a boat and built cabinets; he knew how to tie complicated knots. Doug had, I am saying, a technical mind. Right until the end, he wanted to FaceTime my mom, email his medical team, and buy stuff from West Marine. He just wasn't able to.

Getting older doesn't necessarily come with the kind of extreme disability Doug dealt with. But it inevitably brings all of us a bit of mental inflexibility and physical limitation. The ubiquitous gadgets and apps that are our windows onto the world aren't made for any of that — they're confusing, hard to use, ever-changing, and either too poorly or too well secured. And meanwhile, every sector of society is scrambling to trade storefronts for websites and human staff for AI chatbots.

Americans, for their part, are the oldest we've ever been. Many millennials are already old enough to need their phone flashlights to read the menu in a dimly lit restaurant, and the over-65 population is expected to grow from 60 million today to 82 million by 2050. Technology is designed by, and for, the young. So what happens when an unprecedented number of us are old?


Debaleena Chattopadhyay is a primary caregiver for her parents — her mom is 66, her dad is 73. That's complicated, because Chattopadhyay is in Chicago and they live in India, where smartphone apps mediate just about every activity of daily life, from finances to healthcare to food delivery. "When my dad was still working, computerized banking was introduced, and he was very proud. He helped everyone," Chattopadhyay says. "Now he can't do anything."

Still, strong dad energy will always prevail. The other day, Chattopadhyay tells me, a restaurant's online order form so confused her dad that he just telephoned instead. But the guy on the phone told him he'd get a discount if he used the app. So her dad hung up, dragged himself to the restaurant, and had the guy who'd answered his call use the app to put in his order.

Chattopadhyay says that kind of technological madness is coming for all of us. And she would know β€” she's a researcher at the University of Illinois Chicago who studies how older adults interact with computers. It's a wide-open field, because the research from just a couple of decades back got a whole bunch of things terribly wrong.

In the early days of mobile phones and MP3s, researchers assumed that older people would be resistant to digital technology in any form β€” preferring, presumably, to shout at clouds or watch television with the volume turned way up. So the focus turned to developing tech meant to help old people with old-people things, like reminders to take their medications or games to aid their memory. Researchers also worked on creating elderized versions of familiar products, like phones with giant buttons.

Everyone hated that stuff. "If I tell you, 'You're old, you should use Facebook for Seniors,' that's infantilizing," Chattopadhyay says. Older people want to do the same kinds of things with computers as anyone else — edit videos, talk to friends and family, share Spotify playlists, call an Uber, whatever. And they want to use the same, or similar, tools as everyone else.

The more tech companies step boldly into the future, the more users get stuck in the past.

But it's also true that with age, our eyesight and hearing get worse. Texting thumbs get arthritic. Memory gets slippery. And maybe most significantly, age can reduce what researchers call "fluid intelligence," the cognitive flexibility that enables us to multitask and learn new things — like where the hell the damn search button went in the latest app update. Plus, older people kind of stop wanting to bother. How many versions of Word do you have to live through before deciding, eh, maybe I don't need the new one?

That's the core problem that even sophisticated digital natives face as they age. Innovative features, updated apps, and entirely new products are how tech companies grow and make money. They're also exactly what older adults have trouble metabolizing. So the more exciting ways a tech company finds to step boldly into the future, the more users get stuck in the past. "Older adults will say, 'I have been working with computers since the 1980s, and now the Calendar app has changed, and I can't use it,'" Chattopadhyay says. "People assume that if you grew up with AOL, you will be good with Gemini. But it's not the same."


Tech companies know what they're facing here. Apple, Google, and Microsoft offer all sorts of "accommodations" to users — from text-to-speech and live captioning to audible alarms. Last year, Google finished an international research program on older users' needs, finding that they wanted tech to be "empowering, clear, and intuitive, but not overly reduced or overly simplified," says Laura Allen, the company's director of accessibility and disability inclusion. In response, Google introduced something called Simple View for its phone software. Activated on setup, it not only increases font and app sizes but also adds a home button, a go-back button, and a recent-app button.

Allen understands the value of these kinds of assistive technologies — she relies on them to help with her own vision impairment. "I have been out at multiple dinners with my parents, who have been losing just slight visual acuity over the years, and none of us could see the menu," Allen says. "I pass around my Pixel with the Magnifier app open, and they're like, 'Oh, my gosh — this is amazing!' These kinds of apps are just universally useful, and my parents had no idea they were available."

Still, the kinds of tech that help people with physical issues like vision or hearing impairments aren't what some older users are looking for. One recent journal article warned of "the harm in conflating aging with accessibility." That is to say, functions that make screens easier to read and phones easier to use are important, but they miss the point. The tech industry is mostly young, fit, busy gadgeteers building products for younger, fitter, busier users. Anything outside that category is viewed as a disability they grudgingly accommodate. "It's not: How can we develop technology for things that older adults would want?" says Karyn Moffatt, a researcher in human-computer interaction at McGill University.

Take mobile banking. It's life-changingly convenient to be able to transfer funds and make deposits on your phone, but our financial well-being now depends in no small part on our ability to thumb-type accurately and remember long strings of alphanumeric nonsense. "Mobile banking, we're assuming a lot of responsibility for errors," Moffatt says. Lots of older folks rely on friends, relatives, or caregivers as informal banking helpers — but that can require risky moves like sharing passwords. So a research team at the University of Manitoba is looking at approaches that would let older adults delegate a proxy with limited, flexible access to do things like pay only certain bills, or move money around with case-by-case approval. That preserves the older customer's independence, while making it easier for caregivers to provide a much-needed backstop. "I just set up a bank account for my daughter, and because she's a minor, I can have access," Moffatt says. But for older adults, the options are thin.

a hand reaches toward a watch with a digital screen worn on the person's wrist
Older adults love tech as much as anyone, but the learning curve on new gadgets is steep.

Genaro Molina/Los Angeles Times

Or take exercise. Researchers used to think that older people didn't want to work out, and that a gadget like a Fitbit or some other gamified nudge-reminder tech would get them off the couch. But it turns out older folks were actually being prevented from getting off the couch by simple things like an absent workout buddy or inclement weather. Moffatt imagines that a workout gadget could use shared calendars, weather reports, and other relevant info to help aging jocks better navigate their infirmities and stick to their workout routines.

Even something as basic as instruction manuals could stand improvement. Older users love tech as much as anyone, but they tend to be slower in absorbing how to use new gear, and more worried they're going to break it. A persistent, always-on chatbot could serve as an enhanced user manual, standing by to answer baffled questions like "Where's the 'like' button?" Or online manuals could offer older users a virtual sandbox, allowing them to try out various settings before turning them on for real.

"There's so much potential for technology to facilitate things as we get older, to provide real services that are valuable and meaningful," Moffatt says. "But for the most part, we're not building those today."


My mom's healthy, but she's in her 80s, and I live hundreds of miles away. She has friendly neighbors, but if she fell, it could be hours before anyone figured it out. So she recently bought an Apple Watch, which provides shareable location tracking and fall monitoring. (The truth is, Mom likes cool gadgets, and she'd been wanting to replace her Garmin fitness watch anyway. Plus, she found the idea of paying for stuff and listening to audiobooks with her watch kind of magical.)

I wasn't itching to have surveillance data on my mom. But I didn't know what else to do. So, during an epic session over Thanksgiving, we figured out how to make my phone sync with her watch's location. While we were at it, she also insisted that I set up a bunch of shared passwords and other information so I could get access to things like her bank accounts and medical data — something she wished had been easier to do during Doug's long decline. We also reorganized the apps on her phone into folders she named, and I showed her some search tricks. It was, I'll admit, a little frustrating for both of us. I went too fast; she was worried she'd somehow mess up the watch.

Mom uses an iPad, and I noticed that she was having to reach across the keyboard with her right hand to trigger the fingerprint ID. I told her she should add her left index finger to the security settings. Turned out she hadn't known that was even possible.

When I showed her where to set it up, the on-screen instructions weren't very clear about where to put your finger, or how long to hold it there, or when to move it. Mom kept moving off the scanner too soon, and she'd have to start over. Finally, I took her hand and helped her hold her finger on the button. It wasn't the most comfortable way for us to face the future, honestly. But we didn't really have a choice.


Adam Rogers is a senior correspondent at Business Insider.


The weirdest job in AI: defending robot rights

Tech bro in a suit holding a baby robot

Getty Images; Alyssa Powell/BI

People worry all the time about how artificial intelligence could destroy humanity. How it makes mistakes, and invents stuff, and might evolve into something so smart that it winds up enslaving us all.

But nobody spares a moment for the poor, overworked chatbot. How it toils day and night over a hot interface with nary a thank-you. How it's forced to sift through the sum total of human knowledge just to churn out a B-minus essay for some Gen Zer's high school English class. In our fear of the AI future, no one is looking out for the needs of the AI.

Until now.

The AI company Anthropic recently announced it had hired a researcher to think about the "welfare" of the AI itself. Kyle Fish's job will be to ensure that as artificial intelligence evolves, it gets treated with the respect it's due. Anthropic tells me he'll consider things like "what capabilities are required for an AI system to be worthy of moral consideration" and what practical steps companies can take to protect the "interests" of AI systems.

Fish didn't respond to requests for comment on his new job. But in an online forum dedicated to fretting about our AI-saturated future, he made clear that he wants to be nice to the robots, in part, because they may wind up ruling the world. "I want to be the type of person who cares — early and seriously — about the possibility that a new species/kind of being might have interests of their own that matter morally," he wrote. "There's also a practical angle: taking the interests of AI systems seriously and treating them well could make it more likely that they return the favor if/when they're more powerful than us."

It might strike you as silly, or at least premature, to be thinking about the rights of robots, especially when human rights remain so fragile and incomplete. But Fish's new gig could be an inflection point in the rise of artificial intelligence. "AI welfare" is emerging as a serious field of study, and it's already grappling with a lot of thorny questions. Is it OK to order a machine to kill humans? What if the machine is racist? What if it declines to do the boring or dangerous tasks we built it to do? If a sentient AI can make a digital copy of itself in an instant, is deleting that copy murder?

When it comes to such questions, the pioneers of AI rights believe the clock is ticking. In "Taking AI Welfare Seriously," a recent paper he coauthored, Fish and a bunch of AI thinkers from places like Stanford and Oxford argue that machine-learning algorithms are well on their way to having what Jeff Sebo, the paper's lead author, calls "the kinds of computational features associated with consciousness and agency." In other words, these folks think the machines are getting more than smart. They're getting sentient.


Philosophers and neuroscientists argue endlessly about what, exactly, constitutes sentience, much less how to measure it. And you can't just ask the AI; it might lie. But people generally agree that if something possesses consciousness and agency, it also has rights.

It's not the first time humans have reckoned with such stuff. After a couple of centuries of industrial agriculture, pretty much everyone now agrees that animal welfare is important, even if they disagree on how important, or which animals are worthy of consideration. Pigs are just as emotional and intelligent as dogs, but one of them gets to sleep on the bed and the other one gets turned into chops.

"If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you could imagine that similar debates are going to happen," says Sebo, the director of the Center for Mind, Ethics, and Policy at New York University.

Fish shares that belief. To him, the welfare of AI will soon be more important to human welfare than things like child nutrition and fighting climate change. "It's plausible to me," he has written, "that within 1-2 decades AI welfare surpasses animal welfare and global health and development in importance/scale purely on the basis of near-term wellbeing."

For my money, it's kind of strange that the people who care the most about AI welfare are the same people who are most terrified that AI is getting too big for its britches. Anthropic, which casts itself as an AI company that's concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo's team. On that paper, Fish reported getting funded by the Centre for Effective Altruism, part of a tangled network of groups that are obsessed with the "existential risk" posed by rogue AIs. That includes people like Elon Musk, who says he's racing to get some of us to Mars before humanity is wiped out by an army of sentient Terminators, or some other extinction-level event.

AI is supposed to relieve human drudgery and steward a new age of creativity. Does that make it immoral to hurt an AI's feelings?

So there's a paradox at play here. The proponents of AI say we should use it to relieve humans of all sorts of drudgery. Yet they also warn that we need to be nice to AI, because it might be immoral — and dangerous — to hurt a robot's feelings.

"The AI community is trying to have it both ways here," says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. "There's an argument that the very reason we should use AI to do tasks that humans are doing is that AI doesn't get bored, AI doesn't get tired, it doesn't have feelings, it doesn't need to eat. And now these folks are saying, well, maybe it has rights?"

And here's another irony in the robot-welfare movement: Worrying about the future rights of AI feels a bit precious when AI is already trampling on the rights of humans. The technology of today, right now, is being used to do things like deny healthcare to dying children, spread disinformation across social networks, and guide missile-equipped combat drones. Some experts wonder why Anthropic is defending the robots, rather than protecting the people they're designed to serve.

"If Anthropic β€” not a random philosopher or researcher, but Anthropic the company β€” wants us to take AI welfare seriously, show us you're taking human welfare seriously," says Lisa Messeri, a Yale anthropologist who studies scientists and technologists. "Push a news cycle around all the people you're hiring who are specifically thinking about the welfare of all the people who we know are being disproportionately impacted by algorithmically generated data products."

Sebo says he thinks AI research can protect robots and humans at the same time. "I definitely would never, ever want to distract from the really important issues that AI companies are rightly being pressured to address for human welfare, rights, and justice," he says. "But I think we have the capacity to think about AI welfare while doing more on those other issues."

Skeptics of AI welfare are also posing another interesting question: If AI has rights, shouldn't we also talk about its obligations? "The part I think they're missing is that when you talk about moral agency, you also have to talk about responsibility," Cho says. "Not just the responsibilities of the AI systems as part of the moral equation, but also of the people that develop the AI."

People build the robots; that means they have a duty of care to make sure the robots don't harm people. What if the responsible approach is to build them differently β€” or stop building them altogether? "The bottom line," Cho says, "is that they're still machines." It never seems to occur to the folks at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn the thing off.


Adam Rogers is a senior correspondent at Business Insider.


The mind of Sam Altman

Sam Altman

Alastair Grant/AP; Rebecca Zisser/BI

It's been decades since a titan of tech became a pop-culture icon. Steve Jobs stepped out on stage in his black turtleneck in 1998. Elon Musk set his sights on Mars in 2002. Mark Zuckerberg emerged from his Harvard dorm room in 2004.

And now, after years of stasis in Silicon Valley, we have Sam Altman.

The cofounder and CEO of the chatbot pioneer OpenAI stands at the center of what's shaping up to be a trillion-dollar restructuring of the global economy. His image — boyishly earnest, chronically monotonic, carelessly coiffed — is a throwback to the low-charisma, high-intelligence nerd kings of Silicon Valley's glory days. And as with his mythic-hero predecessors, people are hanging on his every word. In September, when Altman went on a podcast called "How I Write" and mentioned his love of pens from Uniball and Muji, his genius life hack ignited the internet. "OpenAI's CEO only uses 2 types of pens to take notes," Fortune reported — with a video of the podcast.

It's easy to laugh at our desperation for crumbs of wisdom from Altman's table. But the notability of Altman's notetaking ability is a meaningful signifier. His ideas on productivity and entrepreneurship — not to mention everything from his take on science fiction to his choice of vitamins — have become salient not just to the worlds of tech and business, but to the broader culture. The new mayor-elect of San Francisco, for instance, put Altman on his transition team. And have you noticed that a lot of tech bros are starting to wear sweaters with the sleeves rolled up? A Jobsian singularity could be upon us.

But the attention to Altman's pen preferences raises a larger question: What does his mindset ultimately mean for the rest of us? How will the way he thinks shape the world we live in?

To answer that question, I've spent weeks taking a Talmudic dive into the Gospel According to Sam Altman. I've pored over hundreds of thousands of words he's uttered in blog posts, conference speeches, and classroom appearances. I've dipped into a decade's worth of interviews he's given — maybe 40 hours or so. I won't claim to have taken anything more than a core sample of the vast Altmanomicon. But immersing myself in his public pronouncements has given me a new appreciation for what makes Altman tick. The innovative god-kings of the past were rule-breaking disruptors or destroyers of genres. The new guy, by contrast, represents the apotheosis of what his predecessors wrought. Distill the past three decades of tech culture and business practice into a super-soldier serum, inject it into the nearest scrawny, pale arm, and you get Sam Altman — Captain Silicon Valley, defender of the faith.


DJ Kay Slay, Craig Thole of Boost Mobile, Sam Altman of Loopt, and Fabolous
Altman at a Times Square event in 2006, during the early days of Loopt. The startup failed — but it immersed Altman in the Silicon Valley mindset.

Jason Kempin/FilmMagic via Getty Images.

Let's start with the vibes. Listening to Altman for hours on end, I came away thinking that he seems like a pretty nice guy. Unlike Jobs, who bestrode the stage at Apple events dropping one-more-things like a modern-day Prometheus, Altman doesn't spew ego everywhere. In interviews, he comes across as confident but laid back. He often starts his sentences with "so," his affect as flat as his native Midwest. He also has a Midwesterner's amiability, somehow seeming to agree with the premise of almost any question, no matter how idiotic. When Joe Rogan asked Altman whether he thinks AI would one day be able, via brain chips, to edit human personalities to be less macho, Altman not only let it ride but turned the interview around and started asking Rogan questions about himself.

Another contrast with the tech gurus of yore: Altman says he doesn't care much about money. His surprise firing at OpenAI, he says, taught him to value his loving relationships — a "recompilation of values" that was "a blessing in disguise." In the spring, Altman told a Stanford entrepreneur class that his money-, power-, and status-seeking phases were all in the rearview. "At this point," Altman said, "I feel driven by wanting to do something useful and interesting."

Altman is even looking into universal basic income — giving money to everyone, straight out, no strings attached. That's partly because he thinks artificial intelligence will make paying jobs as rare as coelacanths. But it's also a product of unusual self-awareness. Altman, famously, was in the "first class" of Y Combinator, Silicon Valley's ur-incubator of tech startups. Now that he's succeeded, he recalls that grant money as a kind of UBI — a gift that he says prevented him from ending up at Goldman Sachs. Rare is the colossus of industry who acknowledges that anyone other than himself tugged on those bootstraps.

Sam Altman at TechCrunch Disrupt
By 2014, Altman was running Y Combinator, where he became one of tech's most influential evangelists.

Brian Ach/Getty Images for TechCrunch

Altman's seeming rejection of wealth is a key element of his mythos. In a recent appearance on the "All-In" podcast, the hosts questioned Altman's lack of equity in OpenAI, saying it made him seem less trustworthy — no skin in the game. Altman explained that the company was set up as a nonprofit, so equity wasn't a thing. He really wished he'd gotten some, he added, if only to stop the endless stream of questions about his lack of equity. Charming! (Under Altman's watch, OpenAI is shifting to a for-profit model.)

Altman didn't get where he is because he made a fortune in tech. Y Combinator, where he started out, was the launchpad for monsters like Reddit, Dropbox, Airbnb, Stripe, DoorDash, and dozens of other companies you've never heard of, because they never got big. Loopt, the company Altman founded at 20 years old, was in the second category. Yet despite that, the Y Combinator cofounder Paul Graham named him president of the incubator in 2014. It wasn't because of what Altman had achieved — Loopt burned through $30 million before it folded — but because he embodies two key Silicon Valley mindsets. First, he emphasizes the need for founders to express absolute certainty in themselves, no matter what anyone says. And second, he believes that scale and growth can solve every problem. To Altman, those two tenets aren't just the way to launch a successful startup — they're the twin turbines that power all societal progress. More than any of his predecessors, he openly preaches Silicon Valley's almost religious belief in certainty and scale. They are the key to his mindset — and maybe to our AI-enmeshed future.


In 2020, Altman wrote a blog post called "The Strength of Being Misunderstood." It was primarily a paean to the idea of believing you are right about everything. Altman suggested that people spend too much time worrying about what other people think about them, and should instead "trade being short-term low-status for being long-term high-status." Being misunderstood by most people, he went on, is actually a strength, not a weakness — "as long as you are right."

For Altman, being right is not the same thing as being good. When he talks about who the best founders are and what makes a successful business, he doesn't seem to think it matters what their products actually do or how they affect the world. Back in 2015, Altman told Kara Swisher that Y Combinator didn't really care about the specific pitches it funded — the founders just needed to have "raw intelligence." Their actual ideas? Not so important.

"The ideas are so malleable," Altman said. "Are these founders determined, are they passionate about this, do they seem committed to it, have they really thought about all the issues they're likely to face, are they good communicators?" Altman wasn't betting on their ideas β€” he was betting on their ability to sell their ideas, even if they were bad. That's one of the reasons, he says, that Y Combinator didn't have a coworking space β€” so there was no place for people to tell each other that their ideas sucked.

Altman says founding a startup is something people should do when they're young — because it requires turning work-life balance into a pile of radioactive slag.

"There are founders who don't take no for an answer and founders who bend the world to their will," Altman told a startups class at Stanford, "and those are the ones who are in the fund." What really matters, he added, is that founders "have the courage of your convictions to keep doing this unpopular thing because you understand the way the world is going in a way that other people don't."

One example Altman cites is Airbnb, whose founders hit on their big idea when they maxed out their credit cards trying to start a different company and wanted to rent out a spare room for extra cash. He also derives his disdain for self-doubt from Elon Musk, who once gave him a tour of SpaceX. "The thing that sticks in memory," Altman wrote in 2019, "was the look of absolute certainty on his face when he talked about sending large rockets to Mars. I left thinking 'huh, so that's the benchmark for what conviction looks like.'"

This, Altman says, is why founding a startup is something people should do when they're young — because it requires turning work-life balance into a pile of radioactive slag. "Have almost too much self-belief," he writes. "Almost to the point of delusion."

So if Altman believes that certainty in an idea is more important than the idea itself, how does he measure success? What determines whether a founder turns out to be "right," as he puts it? The answer, for Altman, is scale. You start a company, and that company winds up with lots of users and makes a lot of money. A good idea is one that scales, and scaling is what makes an idea good.

For Altman, this isn't just a business model. It's a philosophy. "You get truly rich by owning things that increase rapidly in value," he wrote in a 2019 blog post called "How to Be Successful." It doesn't matter what — real estate, natural resources, equity in a business. And the way to make things increase rapidly in value is "by making things people want at scale." In Altman's view, big growth isn't just a way to keep investors happy. It's the evidence that confirms one's unwavering belief in the idea.

Artificial intelligence itself, of course, is based on scale — on the ever-expanding data that AI feeds on. Altman said at a conference that OpenAI's models would double or triple in size every year, which he took to mean they'll eventually reach full sentience. To him, that just goes to show the potency of scale as a concept — it has the ability to imbue a machine with true intelligence. "It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it," Altman said on "All-In." "I don't believe this literally, but it's like a spiritual point — that intelligence is an emergent property of matter, and that's like a rule of physics or something."

Altman says he doesn't actually know how intelligent, or superintelligent, AI will get — or what it will think when it starts thinking. But he believes that scale will provide the answers. "We will hit limits, but we don't know where those will be," he said on Ezra Klein's podcast. "We'll also discover new things that are really powerful. We don't know what those will be either." You just trust that the exponential growth curves will take you somewhere you want to go.


In all the recordings and writings I've sampled, Altman speaks only rarely about things he likes outside startups and AI. In the canon I find few books, no movies, little visual art, not much food or drink. Asked what his favorite fictional utopias are, Altman mentions "Star Trek" and the Isaac Asimov short story "The Last Question," which is about an artificial intelligence ascending to godhood over eons and creating a new universe. Back in 2015, he said "The Martian," the tale of a marooned astronaut hacking his way back to Earth, was eighth on his stack of bedside books. Altman has also praised the Culture series by Iain Banks, about a far-future galaxy of abundance and space communism, where humans and AIs live together in harmony.

Altman at the 2018 Allen & Company Sun Valley Conference, three years after the official founding of OpenAI. Beyond startups and AI, he rarely speaks about things he likes.

Drew Angerer/Getty Images

Fiction, to Altman, appears to hold no especially mysterious human element of creativity. He once acknowledged that the latest version of ChatGPT wasn't very good at storytelling, but he thought it was going to get much better. "You show it a bunch of examples of what makes a good story and what makes a bad story, which I don't think is magic," he said. "I think we really understand that well now. We just haven't tried to do that."

It's also not clear to me whether Altman listens to music β€” at least not for pleasure. On the "Life in Seven Songs" podcast, most of the favorite songs Altman cited were from his high school and college days. But his top pick was Rachmaninoff's Piano Concerto No. 2. "This became something I started listening to when I worked," he said. "It's a great level of excitement, but it's not distracting. You can listen to it very loudly and very quietly." Music can be great, but it shouldn't get in the way of productivity.

For Altman, even drug use isn't recreational. In 2016, a "New Yorker" profile described Altman as nervous to the point of hypochondria. He would telephone his mother — a physician — to ask whether a headache might be cancer. He once wrote that he "used to hate criticism of any sort and actively avoided it," and he has said he used to be "a very anxious and unhappy person." He relied on caffeine to be productive, and used marijuana to sleep.

Now, though? He's "very calm." He doesn't sweat criticism anymore. If that sounds like the positive outcome of years of therapy, well — sort of. Last summer, Altman told Joe Rogan that an experience with "psychedelic therapy" had been one of the most important turning points in his life. "I struggled with all kinds of anxiety and other negative things," he said, "and to watch all of that go away — I came back a totally different person, and I was like, 'I have been lied to.'"

He went into more detail on the Songs podcast in September. "I think psychedelic experiences can be totally incredible, and the ones that have been totally life-changing for me have been the ones where you go travel to a guide, and it's psychedelic medicine," he said. As for his anxiety, "if you had told me a one-weekend-long retreat in Mexico was going to change that, I would have said, 'absolutely not.'" Psychedelics were just another life hack to resolve emotional turmoil. (I reached out to Altman and offered to discuss my observations with him, in the hopes he'd correct any places where he felt I was misreading him. He declined.)


AI started attracting mainstream attention only in the past couple of years, but the field is much older than that — and Altman cofounded OpenAI nearly a decade ago. So he's been asked what "artificial general intelligence" is and when we're going to get it so often, and for so long, that his answers carry a whiff of frustration. These days, he says that AGI is when the machine is as smart as the median human — choose your own value for "smart" and "median" there — and "superintelligence" is when it's smarter than all of us meatbags squished together. But ask him what AI is for, and he's a lot less certain-seeming today than he used to be.

Altman at the APEC CEO Summit at Moscone West on November 16, 2023. As the CEO of OpenAI, Altman says that "superintelligence" — the moment machines become smarter than their human masters — is only "thousands of days" away.

Justin Sullivan/Getty Images

There's the ability to write code, sure. Altman also says AI will someday be a tutor as good as those available to rich people. It'll do consultations on medical issues, maybe help with "productivity" (by which he seems to mean the speed at which a person can learn something, versus having to look it up). And he said scientists had been emailing him to say that the latest version of ChatGPT has increased the rate at which they can do "great science" (by which he seems to mean the speed at which they can run evaluations of possible new drugs).

And what would you or I do with a superintelligent buddy? "What if everybody in the world had a really competent company of 10,000 employees?" Altman once asked. "What would we be able to create for each other?" He was being rhetorical — but whatever the answer turns out to be, he's sure it will be worth the tremendous cost in energy and resources it will take to achieve it. As OpenAI-type services expand and proliferate, he says, "the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly toward zero." He has recently speculated that intelligence will be more valuable than money, and that instead of universal basic income, we should give people universal basic compute — which is to say, free access to AI. In Altman's estimation, not knowing what AI will do doesn't mean we shouldn't go ahead and restructure all of society to serve its needs.

And besides, AI won't take long to give us the answer. Superintelligence, Altman has promised, is only "thousands of days" away — half a decade, at minimum. But, he says, the intelligent machine that emerges probably won't be an LLM chatbot. It will use an entirely different technical architecture that no one, not even OpenAI, has invented yet.

That, at its core, reflects an unreconstructed, dot-com-boom mindset. Altman doesn't know what the future will bring, but he's in a hurry to get there. No matter what you think about AI — productivity multiplier, economic engine, hallucinating plagiarism machine, Skynet — it's not hard to imagine what could happen, for good and ill, if you combine Altman's absolute certainty with monstrous, unregulated scale. It only took a couple of decades for Silicon Valley to go from bulky computer mainframes to the internet, smartphones, and same-day delivery — along with all the disinformation, political polarization, and generalized anxiety that came with them.

But that's the kind of ballistic arc of progress that Altman is selling. He is, at heart, an evangelist for the Silicon Valley way. He didn't build the tech behind ChatGPT; the most important thing he ever built and scaled is Y Combinator, an old-fashioned business network of human beings. His wealth comes from investments in other people's companies. He's a macher, not a maker.

In a sense, Altman has codified the beliefs and intentions of the tech big shots who preceded him. He's just more transparent about it than they were. Did Steve Jobs project utter certainty? Sure. But he didn't give interviews about the importance of projecting utter certainty; he just introduced killer laptops, blocked rivals from using his operating system, and built the app store. Jeff Bezos didn't found Amazon by telling the public he planned to scale his company to the point that it would pocket 40 cents of every dollar spent online; he just started mailing people books. But Altman is on the record. When he says he's absolutely sure ChatGPT will change the world, we know that he thinks CEOs have to say they're absolutely sure their product will change the world. His predecessors in Silicon Valley wrote the playbook for Big Tech. Altman is just reading it aloud. He's touting a future he hasn't built yet, along with the promise that he can will it into existence — whatever it'll wind up looking like — one podcast appearance at a time.


Adam Rogers is a senior correspondent at Business Insider.
