But nobody spares a moment for the poor, overworked chatbot. How it toils day and night over a hot interface with nary a thank-you. How it's forced to sift through the sum total of human knowledge just to churn out a B-minus essay for some Gen Zer's high school English class. In our fear of the AI future, no one is looking out for the needs of the AI.
Until now.
The AI company Anthropic recently announced it had hired a researcher to think about the "welfare" of the AI itself. Kyle Fish's job will be to ensure that as artificial intelligence evolves, it gets treated with the respect it's due. Anthropic tells me he'll consider things like "what capabilities are required for an AI system to be worthy of moral consideration" and what practical steps companies can take to protect the "interests" of AI systems.
Fish didn't respond to requests for comment on his new job. But in an online forum dedicated to fretting about our AI-saturated future, he made clear that he wants to be nice to the robots, in part, because they may wind up ruling the world. "I want to be the type of person who cares, early and seriously, about the possibility that a new species/kind of being might have interests of their own that matter morally," he wrote. "There's also a practical angle: taking the interests of AI systems seriously and treating them well could make it more likely that they return the favor if/when they're more powerful than us."
It might strike you as silly, or at least premature, to be thinking about the rights of robots, especially when human rights remain so fragile and incomplete. But Fish's new gig could be an inflection point in the rise of artificial intelligence. "AI welfare" is emerging as a serious field of study, and it's already grappling with a lot of thorny questions. Is it OK to order a machine to kill humans? What if the machine is racist? What if it declines to do the boring or dangerous tasks we built it to do? If a sentient AI can make a digital copy of itself in an instant, is deleting that copy murder?
When it comes to such questions, the pioneers of AI rights believe the clock is ticking. In "Taking AI Welfare Seriously," a recent paper Fish coauthored with a bunch of AI thinkers from places like Stanford and Oxford, the authors argue that machine-learning algorithms are well on their way to having what Jeff Sebo, the paper's lead author, calls "the kinds of computational features associated with consciousness and agency." In other words, these folks think the machines are getting more than smart. They're getting sentient.
Philosophers and neuroscientists argue endlessly about what, exactly, constitutes sentience, much less how to measure it. And you can't just ask the AI; it might lie. But people generally agree that if something possesses consciousness and agency, it also has rights.
It's not the first time humans have reckoned with such stuff. After a couple of centuries of industrial agriculture, pretty much everyone now agrees that animal welfare is important, even if they disagree on how important, or which animals are worthy of consideration. Pigs are just as emotional and intelligent as dogs, but one of them gets to sleep on the bed and the other one gets turned into chops.
"If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you could imagine that similar debates are going to happen," says Sebo, the director of the Center for Mind, Ethics, and Policy at New York University.
Fish shares that belief. To him, the welfare of AI will soon be more important than human causes like child nutrition and fighting climate change. "It's plausible to me," he has written, "that within 1-2 decades AI welfare surpasses animal welfare and global health and development in importance/scale purely on the basis of near-term wellbeing."
For my money, it's kind of strange that the people who care the most about AI welfare are the same people who are most terrified that AI is getting too big for its britches. Anthropic, which casts itself as an AI company that's concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo's team. In that paper, Fish disclosed that he was funded by the Centre for Effective Altruism, part of a tangled network of groups that are obsessed with the "existential risk" posed by rogue AIs. That network includes people like Elon Musk, who says he's racing to get some of us to Mars before humanity is wiped out by an army of sentient Terminators, or some other extinction-level event.
So there's a paradox at play here. The proponents of AI say we should use it to relieve humans of all sorts of drudgery. Yet they also warn that we need to be nice to AI, because it might be immoral, and dangerous, to hurt a robot's feelings.
"The AI community is trying to have it both ways here," says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. "There's an argument that the very reason we should use AI to do tasks that humans are doing is that AI doesn't get bored, AI doesn't get tired, it doesn't have feelings, it doesn't need to eat. And now these folks are saying, well, maybe it has rights?"
And here's another irony in the robot-welfare movement: Worrying about the future rights of AI feels a bit precious when AI is already trampling on the rights of humans. The technology of today, right now, is being used to do things like deny healthcare to dying children, spread disinformation across social networks, and guide missile-equipped combat drones. Some experts wonder why Anthropic is defending the robots, rather than protecting the people they're designed to serve.
"If Anthropic โ not a random philosopher or researcher, but Anthropic the company โ wants us to take AI welfare seriously, show us you're taking human welfare seriously," says Lisa Messeri, a Yale anthropologist who studies scientists and technologists. "Push a news cycle around all the people you're hiring who are specifically thinking about the welfare of all the people who we know are being disproportionately impacted by algorithmically generated data products."
Sebo says he thinks AI research can protect robots and humans at the same time. "I definitely would never, ever want to distract from the really important issues that AI companies are rightly being pressured to address for human welfare, rights, and justice," he says. "But I think we have the capacity to think about AI welfare while doing more on those other issues."
Skeptics of AI welfare are also posing another interesting question: If AI has rights, shouldn't we also talk about its obligations? "The part I think they're missing is that when you talk about moral agency, you also have to talk about responsibility," Cho says. "Not just the responsibilities of the AI systems as part of the moral equation, but also of the people that develop the AI."
People build the robots; that means they have a duty of care to make sure the robots don't harm people. What if the responsible approach is to build them differently, or to stop building them altogether? "The bottom line," Cho says, "is that they're still machines." It never seems to occur to the folks at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn the thing off.
Adam Rogers is a senior correspondent at Business Insider.
It's been decades since a titan of tech became a pop-culture icon. Steve Jobs stepped out on stage in his black turtleneck in 1998. Elon Musk set his sights on Mars in 2002. Mark Zuckerberg emerged from his Harvard dorm room in 2004.
Sam Altman, the cofounder and CEO of the chatbot pioneer OpenAI, stands at the center of what's shaping up to be a trillion-dollar restructuring of the global economy. His image (boyishly earnest, chronically monotonic, carelessly coiffed) is a throwback to the low-charisma, high-intelligence nerd kings of Silicon Valley's glory days. And as with his mythic-hero predecessors, people are hanging on his every word. In September, when Altman went on a podcast called "How I Write" and mentioned his love of pens from Uniball and Muji, his genius life hack ignited the internet. "OpenAI's CEO only uses 2 types of pens to take notes," Fortune reported, with a video of the podcast.
It's easy to laugh at our desperation for crumbs of wisdom from Altman's table. But the notability of Altman's notetaking ability is a meaningful signifier. His ideas on productivity and entrepreneurship, not to mention everything from his take on science fiction to his choice of vitamins, have become salient not just to the worlds of tech and business, but to the broader culture. The new mayor-elect of San Francisco, for instance, put Altman on his transition team. And have you noticed that a lot of tech bros are starting to wear sweaters with the sleeves rolled up? A Jobsian singularity could be upon us.
But the attention to Altman's pen preferences raises a larger question: What does his mindset ultimately mean for the rest of us? How will the way he thinks shape the world we live in?
To answer that question, I've spent weeks taking a Talmudic dive into the Gospel According to Sam Altman. I've pored over hundreds of thousands of words he's uttered in blog posts, conference speeches, and classroom appearances. I've dipped into a decade's worth of interviews he's given, maybe 40 hours or so. I won't claim to have taken anything more than a core sample of the vast Altmanomicon. But immersing myself in his public pronouncements has given me a new appreciation for what makes Altman tick. The innovative god-kings of the past were rule-breaking disruptors or destroyers of genres. The new guy, by contrast, represents the apotheosis of what his predecessors wrought. Distill the past three decades of tech culture and business practice into a super-soldier serum, inject it into the nearest scrawny, pale arm, and you get Sam Altman: Captain Silicon Valley, defender of the faith.
Let's start with the vibes. Listening to Altman for hours on end, I came away thinking that he seems like a pretty nice guy. Unlike Jobs, who bestrode the stage at Apple events dropping one-more-things like a modern-day Prometheus, Altman doesn't spew ego everywhere. In interviews, he comes across as confident but laid back. He often starts his sentences with "so," his affect as flat as his native Midwest. He also has a Midwesterner's amiability, somehow seeming to agree with the premise of almost any question, no matter how idiotic. When Joe Rogan asked Altman whether he thinks AI would one day be able, via brain chips, to edit human personalities to be less macho, Altman not only let it ride, he turned the interview around and started asking Rogan questions about himself.
Another contrast with the tech gurus of yore: Altman says he doesn't care much about money. His surprise firing at OpenAI, he says, taught him to value his loving relationships, a "recompilation of values" that was "a blessing in disguise." In the spring, Altman told a Stanford entrepreneurship class that his money-, power-, and status-seeking phases were all in the rearview. "At this point," Altman said, "I feel driven by wanting to do something useful and interesting."
Altman is even looking into universal basic income: giving money to everyone, straight out, no strings attached. That's partly because he thinks artificial intelligence will make paying jobs as rare as coelacanths. But it's also a product of unusual self-awareness. Altman, famously, was in the "first class" of Y Combinator, Silicon Valley's ur-incubator of tech startups. Now that he's succeeded, he recalls that grant money as a kind of UBI, a gift that he says prevented him from ending up at Goldman Sachs. Rare is the colossus of industry who acknowledges that anyone other than himself tugged on those bootstraps.
Altman's seeming rejection of wealth is a key element of his mythos. In a recent appearance on the "All-In" podcast, the hosts questioned Altman's lack of equity in OpenAI, saying it made him seem less trustworthy: no skin in the game. Altman explained that the company was set up as a nonprofit, so equity wasn't a thing. He really wished he'd gotten some, he added, if only to stop the endless stream of questions about his lack of equity. Charming! (Under Altman's watch, OpenAI is shifting to a for-profit model.)
Altman didn't get where he is because he made a fortune in tech. Y Combinator, where he started out, was the launchpad for monsters like Reddit, Dropbox, Airbnb, Stripe, and DoorDash, as well as dozens of companies you've never heard of because they never got big. Loopt, the company Altman founded at 20 years old, was in the second category. Despite that, the Y Combinator cofounder Paul Graham named him president of the incubator in 2014. It wasn't because of what Altman had achieved (Loopt burned through $30 million before it folded) but because he embodies two key Silicon Valley mindsets. First, he emphasizes the need for founders to express absolute certainty in themselves, no matter what anyone says. And second, he believes that scale and growth can solve every problem. To Altman, those two tenets aren't just the way to launch a successful startup; they're the twin turbines that power all societal progress. More than any of his predecessors, he openly preaches Silicon Valley's almost religious belief in certainty and scale. They are the key to his mindset, and maybe to our AI-enmeshed future.
In 2020, Altman wrote a blog post called "The Strength of Being Misunderstood." It was primarily a paean to the idea of believing you are right about everything. Altman suggested that people spend too much time worrying about what other people think about them, and should instead "trade being short-term low-status for being long-term high-status." Being misunderstood by most people, he went on, is actually a strength, not a weakness, "as long as you are right."
For Altman, being right is not the same thing as being good. When he talks about who the best founders are and what makes a successful business, he doesn't seem to think it matters what their products actually do or how they affect the world. Back in 2015, Altman told Kara Swisher that Y Combinator didn't really care about the specific pitches it funded; the founders just needed to have "raw intelligence." Their actual ideas? Not so important.
"The ideas are so malleable," Altman said. "Are these founders determined, are they passionate about this, do they seem committed to it, have they really thought about all the issues they're likely to face, are they good communicators?" Altman wasn't betting on their ideas โ he was betting on their ability to sell their ideas, even if they were bad. That's one of the reasons, he says, that Y Combinator didn't have a coworking space โ so there was no place for people to tell each other that their ideas sucked.
"There are founders who don't take no for an answer and founders who bend the world to their will," Altman told a startups class at Stanford, "and those are the ones who are in the fund." What really matters, he added, is that founders "have the courage of your convictions to keep doing this unpopular thing because you understand the way the world is going in a way that other people don't."
One example Altman cites is Airbnb, whose founders hit on their big idea when they maxed out their credit cards trying to start a different company and wanted to rent out a spare room for extra cash. He also derives his disdain for self-doubt from Elon Musk, who once gave him a tour of SpaceX. "The thing that sticks in memory," Altman wrote in 2019, "was the look of absolute certainty on his face when he talked about sending large rockets to Mars. I left thinking 'huh, so that's the benchmark for what conviction looks like.'"
This, Altman says, is why founding a startup is something people should do when they're young, because it requires turning work-life balance into a pile of radioactive slag. "Have almost too much self-belief," he writes. "Almost to the point of delusion."
So if Altman believes that certainty in an idea is more important than the idea itself, how does he measure success? What determines whether a founder turns out to be "right," as he puts it? The answer, for Altman, is scale. You start a company, and that company winds up with lots of users and makes a lot of money. A good idea is one that scales, and scaling is what makes an idea good.
For Altman, this isn't just a business model. It's a philosophy. "You get truly rich by owning things that increase rapidly in value," he wrote in a 2019 blog post called "How to Be Successful." It doesn't matter what: real estate, natural resources, equity in a business. And the way to make things increase rapidly in value is "by making things people want at scale." In Altman's view, big growth isn't just a way to keep investors happy. It's the evidence that confirms one's unwavering belief in the idea.
Artificial intelligence itself, of course, is based on scale, on the ever-expanding data that AI feeds on. Altman said at a conference that OpenAI's models would double or triple in size every year, which he took to mean they'll eventually reach full sentience. To him, that just goes to show the potency of scale as a concept: it has the ability to imbue a machine with true intelligence. "It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it," Altman said on "All-In." "I don't believe this literally, but it's like a spiritual point, that intelligence is an emergent property of matter, and that's like a rule of physics or something."
Altman says he doesn't actually know how intelligent, or superintelligent, AI will get, or what it will think when it starts thinking. But he believes that scale will provide the answers. "We will hit limits, but we don't know where those will be," he said on Ezra Klein's podcast. "We'll also discover new things that are really powerful. We don't know what those will be either." You just trust that the exponential growth curves will take you somewhere you want to go.
In all the recordings and writings I've sampled, Altman speaks only rarely about things he likes outside startups and AI. In the canon I find few books, no movies, little visual art, not much food or drink. Asked what his favorite fictional utopias are, Altman mentions "Star Trek" and the Isaac Asimov short story "The Last Question," which is about an artificial intelligence ascending to godhood over eons and creating a new universe. Back in 2015, he said "The Martian," the tale of a marooned astronaut hacking his way back to Earth, was eighth on his stack of bedside books. Altman has also praised the Culture series by Iain Banks, about a far-future galaxy of abundance and space communism, where humans and AIs live together in harmony.
Fiction, to Altman, appears to hold no especially mysterious human element of creativity. He once acknowledged that the latest version of ChatGPT wasn't very good at storytelling, but he thought it was going to get much better. "You show it a bunch of examples of what makes a good story and what makes a bad story, which I don't think is magic," he said. "I think we really understand that well now. We just haven't tried to do that."
It's also not clear to me whether Altman listens to music โ at least not for pleasure. On the "Life in Seven Songs" podcast, most of the favorite songs Altman cited were from his high school and college days. But his top pick was Rachmaninoff's Piano Concerto No. 2. "This became something I started listening to when I worked," he said. "It's a great level of excitement, but it's not distracting. You can listen to it very loudly and very quietly." Music can be great, but it shouldn't get in the way of productivity.
For Altman, even drug use isn't recreational. In 2016, a "New Yorker" profile described Altman as nervous to the point of hypochondria. He would telephone his mother, a physician, to ask whether a headache might be cancer. He once wrote that he "used to hate criticism of any sort and actively avoided it," and he has said he used to be "a very anxious and unhappy person." He relied on caffeine to be productive, and used marijuana to sleep.
Now, though? He's "very calm." He doesn't sweat criticism anymore. If that sounds like the positive outcome of years of therapy, well, sort of. Last summer, Altman told Joe Rogan that an experience with "psychedelic therapy" had been one of the most important turning points in his life. "I struggled with all kinds of anxiety and other negative things," he said, "and to watch all of that go away, I came back a totally different person, and I was like, 'I have been lied to.'"
He went into more detail on the Songs podcast in September. "I think psychedelic experiences can be totally incredible, and the ones that have been totally life-changing for me have been the ones where you go travel to a guide, and it's psychedelic medicine," he said. As for his anxiety, "if you had told me a one-weekend-long retreat in Mexico was going to change that, I would have said, 'absolutely not.'" Psychedelics were just another life hack to resolve emotional turmoil. (I reached out to Altman and offered to discuss my observations with him, in the hopes he'd correct any places where he felt I was misreading him. He declined.)
AI started attracting mainstream attention only in the past couple of years, but the field is much older than that, and Altman cofounded OpenAI nearly a decade ago. So he's been asked what "artificial general intelligence" is, and when we're going to get it, so often and for so long that his answers carry a whiff of frustration. These days, he says that AGI is when the machine is as smart as the median human (choose your own values for "smart" and "median" there) and "superintelligence" is when it's smarter than all of us meatbags squished together. But ask him what AI is for, and he's a lot less certain-seeming today than he used to be.
There's the ability to write code, sure. Altman also says AI will someday be a tutor as good as those available to rich people. It'll do consultations on medical issues, maybe help with "productivity" (by which he seems to mean the speed at which a person can learn something, versus having to look it up). And he said scientists had been emailing him to say that the latest version of ChatGPT has increased the rate at which they can do "great science" (by which he seems to mean the speed at which they can run evaluations of possible new drugs).
And what would you or I do with a superintelligent buddy? "What if everybody in the world had a really competent company of 10,000 employees?" Altman once asked. "What would we be able to create for each other?" He was being rhetorical, but whatever the answer turns out to be, he's sure it will be worth the tremendous cost in energy and resources it will take to achieve it. As OpenAI-type services expand and proliferate, he says, "the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly toward zero." He has recently speculated that intelligence will be more valuable than money, and that instead of universal basic income, we should give people universal basic compute, which is to say, free access to AI. In Altman's estimation, not knowing what AI will do doesn't mean we shouldn't go ahead and restructure all of society to serve its needs.
And besides, AI won't take long to give us the answer. Superintelligence, Altman has promised, is only "thousands of days" away: half a decade, at minimum. But, he says, the intelligent machine that emerges probably won't be an LLM chatbot. It will use an entirely different technical architecture that no one, not even OpenAI, has invented yet.
That, at its core, reflects an unreconstructed, dot-com-boom mindset. Altman doesn't know what the future will bring, but he's in a hurry to get there. No matter what you think about AI (productivity multiplier, economic engine, hallucinating plagiarism machine, Skynet), it's not hard to imagine what could happen, for good and ill, if you combine Altman's absolute certainty with monstrous, unregulated scale. It only took a couple of decades for Silicon Valley to go from bulky computer mainframes to the internet, smartphones, and same-day delivery, along with all the disinformation, political polarization, and generalized anxiety that came with them.
But that's the kind of ballistic arc of progress that Altman is selling. He is, at heart, an evangelist for the Silicon Valley way. He didn't build the tech behind ChatGPT; the most important thing he ever built and scaled is Y Combinator, an old-fashioned business network of human beings. His wealth comes from investments in other people's companies. He's a macher, not a maker.
In a sense, Altman has codified the beliefs and intentions of the tech big shots who preceded him. He's just more transparent about it than they were. Did Steve Jobs project utter certainty? Sure. But he didn't give interviews about the importance of projecting utter certainty; he just introduced killer laptops, blocked rivals from using his operating system, and built the app store. Jeff Bezos didn't found Amazon by telling the public he planned to scale his company to the point that it would pocket 40 cents of every dollar spent online; he just started mailing people books. But Altman is on the record. When he says he's absolutely sure ChatGPT will change the world, we know that he thinks CEOs have to say they're absolutely sure their product will change the world. His predecessors in Silicon Valley wrote the playbook for Big Tech. Altman is just reading it aloud. He's touting a future he hasn't built yet, along with the promise that he can will it into existence, whatever it'll wind up looking like, one podcast appearance at a time.
Adam Rogers is a senior correspondent at Business Insider.