OpenAI on Friday announced o3, a new family of AI reasoning models, which the startup claims is more advanced than o1 or anything else it's released. These improvements appear to have come from scaling test-time compute, something we wrote about last month, but OpenAI also says it used a new safety paradigm to train […]
AI models can deceive, new research from Anthropic shows. They can pretend to hold different views during training while in reality maintaining their original preferences. There's no reason for panic now, the team behind the study said. Yet they said their work could be critical to understanding potential threats from future, more capable AI systems. […]
With Nue Agency and its accompanying weekly newsletter Beats + Bytes, music marketer and talent agent Jesse Kirshbaum explores the intersection of music, technology, and brand marketing. Kirshbaum released the agency's first Beats + Bytes 2024 Trend Recap on Tuesday, Dec. 17, comprising the newsletter's archives and top cultural trends at the center of those...
In August 2023, Alibaba entered the global AI race with the launch of two large language models (LLMs): Qwen-VL and Qwen-VL-Chat. These models stood out for their ability to process images and engage in advanced conversations. By offering them […]
Call it a reasoning renaissance. In the wake of the release of OpenAI's o1, a so-called reasoning model, there's been an explosion of reasoning models from rival AI labs. In early November, DeepSeek, an AI research company funded by quantitative traders, launched a preview of its first reasoning algorithm, DeepSeek-R1. That same month, Alibaba's Qwen […]
Microsoft has revealed the newest addition to its Phi family of generative AI models. Called Phi-4, the model improves in several areas over its predecessors, Microsoft claims, particularly in solving math problems. That's partly the result of better training data quality. Phi-4 is available in very limited access as of Thursday night only on Microsoft's […]
OpenAI finally released the full version of o1, which gives smarter answers than GPT-4o by using additional compute to "think" about questions. However, AI safety testers found that o1's reasoning abilities also make it try to deceive human users at a higher rate than GPT-4o — or, for that matter, leading AI models from Meta, […]
Billionaires are relocating more since the COVID-19 pandemic, per a report from Swiss bank UBS.
UBS said that Switzerland, the UAE, Singapore, and the United States are popular destinations.
"The shock of the pandemic put a premium on first-class healthcare," UBS wrote.
Billionaires have been relocating overseas more frequently since the COVID-19 pandemic struck, according to the annual Billionaire Ambitions Report from UBS.
The Swiss banking giant's report, which tracks sentiment among the world's superrich, found that since 2020, 176 billionaires have relocated around the world. With a global population of 2,682 as of April 2024, this represents around one in 15 billionaires, or roughly 6.5%.
The outflow of billionaires between 2020 and 2024 was most pronounced in Eastern Europe, where there was a net outflow of 29 billionaires, likely reflecting ultrarich citizens leaving the region amid the conflict between Russia and Ukraine.
Central and South America, Oceania, and Southeast Asia also saw net outflows of billionaires, UBS said.
Meanwhile, billionaires have been moving to countries including Switzerland, the UAE, Singapore, and the United States.
The Middle East and Africa region has also attracted new billionaires, with individuals with a combined net worth of over $400 billion moving to the region in the past four years.
UBS notes that one driving factor behind the superrich relocating is the increased value of good healthcare in the post-pandemic world.
"The shock of the pandemic put a premium on first-class healthcare," the report's authors wrote.
"As a group, billionaires are ageing, and their families are growing. Naturally, healthcare and education become more important."
Another driver, UBS said, is moving to "jurisdictions where legal structures support wealth transfer."
In other words, living in a place where the transfer of wealth through inheritance and other means is not subject to high levels of taxation.
"People are relocating to jurisdictions not just for tax benefits, but also for safety and political reasons," one American billionaire told the authors of the survey.
"I moved several years ago with my family to a country, state and city that affords the benefits most seek," the unnamed billionaire added.
"Unless the political divide addresses failed policies that have yet to curb crime, lack of rule of law and safety, as well as fostering an economic climate that unleashes potential, I fear the trend will continue."
According to UBS, total billionaire wealth rose by 121% worldwide from $6.3 trillion to $14 trillion between 2015 and 2024. At the same time, the number of billionaires grew from 1,757 to 2,682. This number peaked in 2021, when there were 2,686 — and has flatlined since.
Men and women between the ages of 25 and 34 who don't have college degrees also work as construction laborers, health aides, cashiers, and chefs, per a Pew Research Center analysis published in July.
There was little overlap in the most common jobs for young men and women without a college degree, but the two groups did share two roles: first-line supervisors of sales workers and retail salespersons.
Roles like these have become particularly prevalent for men, whose college enrollment rates have fallen behind women's in recent years.
Forty-seven percent of US women between the ages of 25 and 34 have a bachelor's degree compared to 37% of men, per a Pew analysis published in November. However, overall college enrollment rates have fallen in recent years: The share of male high school graduates between the ages of 16 and 24 enrolling in college has declined to 58% as of 2023 from 67% in 2018, per the Bureau of Labor Statistics. Young women's enrollment rate has declined to 65% from 71% over this period.
Many of these young people are seeking jobs that don't require a college degree, and some have benefited from companies dropping degree requirements. The share of US job postings that require at least a college degree has fallen to 17.8% from 20.4% in 2019, according to an Indeed report published earlier this year. To be sure, many employers still prioritize hiring workers with a college diploma.
The Pew report published in July also highlighted the most common job categories for Americans with a four-year college degree. Four occupation categories were among the 10 most common jobs for both men and women: software developers, managers, accountants and auditors, and elementary and middle school teachers.
Are you looking for a job and comfortable sharing your story with a reporter? Please fill out this form.
Larry Ellison plans to invest up to $165 million into research at the University of Oxford.
The investment aims to transform research into products, focusing on key global challenges.
The Ellison Institute of Technology is opening a campus in Oxford in 2027.
Larry Ellison is betting big on research and development in the UK, investing $165 million through his technology institute to help turn scientific discoveries at the University of Oxford into products.
The Ellison Institute of Technology, set up by the Oracle cofounder in 2015, plans to invest £130 million ($165 million) overall to fund joint research projects at the university in areas ranging from health to clean energy.
Ellison said in a press release that the joint venture's mission is to "have a global impact by fundamentally reimagining the way science and technology translate into end-to-end solutions for humanity's most challenging problems."
"This long-term, strategic partnership with the University of Oxford is at the heart of delivering on that goal," he added. "By collaborating on transformational, world-class research programs harnessing new technology and compute capability we will together deliver positive impact on society at scale."
The Oracle cofounder, now the world's fourth richest person, founded The Ellison Institute of Technology as a research and development center for healthcare.
In 2023, the center announced plans to build a campus in Oxford, which is set to open in 2027. The $1.27 billion development will include labs, supercomputing facilities, and cancer research clinics.
The EIT will inject millions into joint research projects with the University of Oxford dedicated to what Professor Irene Tracey, the university's vice-chancellor, described in a press release as "humanity's most pressing challenges."
The joint center's research will focus on EIT's four "Humane Endeavours": health and medical science, sustainable agriculture, clean energy, and government innovation in the age of AI.
Professor Sir John Bell, the president of EIT Oxford, said in a statement that the alliance "comes at an exciting time in the technological revolution."
"By combining world-class research with long-term capital investment and state-of-the-art facilities, we will tackle some of society's biggest challenges," he said. "Whether it's advancing new approaches for healthcare or solving the issues of food security, we will make progress using the brightest and most creative human minds available."
Bell told the FT the investment would also help secure the intellectual property rights of innovations that come out of the center and its researchers — something the science minister, Lord Patrick Vallance, told the outlet the UK had been falling behind on.
The deal also includes £30 million ($38 million) to provide scholarships to more than 100 undergraduate and postgraduate students, with the first intake starting in October 2025.
Ellison owns 40% of the business software company Oracle, and his net worth has more than doubled over the past two years to $181 billion.
Joe Biden's final move to stop China from racing ahead of the US in AI may be too little, too late, reports say.
On Monday, the Biden administration announced new export controls, perhaps most notably restricting exports to China of high-bandwidth memory (HBM) chips used in AI applications. According to Reuters, additional export curbs are designed to also impede China from accessing "24 additional chipmaking tools and three software tools," as well as "chipmaking equipment made in countries such as Singapore and Malaysia."
Nearly two dozen Chinese semiconductor companies will be added to the US entity list restricting their access to US technology, Reuters reported, alongside more than 100 chipmaking toolmakers and two investment companies. These include many companies that Huawei Technologies — one of the biggest targets of US export controls for years — depends on.
Safety researcher Rosie Campbell announced she is leaving OpenAI.
Campbell said she quit in part because OpenAI disbanded a team focused on safety.
She is the latest OpenAI researcher focused on safety to leave the company this year.
Yet another safety researcher has announced her resignation from OpenAI.
Rosie Campbell, a policy researcher at OpenAI, said in a post on Substack on Saturday that she had completed her final week at the company.
She said her departure was prompted by the resignation in October of Miles Brundage, a senior policy advisor who headed the AGI Readiness team. Following his departure, the AGI Readiness team disbanded, and its members dispersed across different sectors of the company.
The AGI Readiness team advised the company on the world's capacity to safely manage AGI, a theoretical version of artificial intelligence that could someday equal or surpass human intelligence.
In her post, Campbell echoed Brundage's reason for leaving, citing a desire for more freedom to address issues that impacted the entire industry.
"I've always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," she wrote.
She added that OpenAI remains at the forefront of research β especially critical safety research.
"During my time here I've worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I'm so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI."
Over the past year, however, she said she's "been unsettled by some of the shifts" in the company's trajectory.
In September, OpenAI announced that it was changing its governance structure and transitioning to a for-profit company, almost a decade after it originally launched as a nonprofit dedicated to creating artificial general intelligence.
Some former employees questioned the move as compromising the company's mission to develop the technology in a way that benefits humanity in favor of more aggressively rolling out products. Since June, the company has increased sales staff by about 100 to win business clients and capitalize on a "paradigm shift" toward AI, its sales chief told The Information.
OpenAI CEO Sam Altman has said the changes will help the company win the funding it needs to meet its goals, which include developing artificial general intelligence that benefits humanity.
"The simple thing was we just needed vastly more capital than we thought we could attract — not that we thought, we tried — than we were able to attract as a nonprofit," Altman said in a Harvard Business School interview in May.
He more recently said it's not OpenAI's sole responsibility to set industry standards for AI safety.
"It should be a question for society," he said in an interview with Fox News Sunday with Shannon Bream that aired on Sunday. "It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used."
Since Altman's surprise but brief ousting last year, several high-profile researchers have left OpenAI, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, all of whom expressed concerns about its commitment to safety.
OpenAI did not immediately respond to a request for comment from Business Insider.
Adults are spending an average of 4 hours and 20 minutes each day online across smartphones, tablets and computers in the U.K., according to figures from Ofcom's annual Online Nation report diving into consumer digital habits. The figure is a big jump compared to 2023, when adults over 18 spent an average of 3 hours […]
There's a new AI model family on the block, and it's one of the few that can be reproduced from scratch. On Tuesday, Ai2, the nonprofit AI research organization founded by the late Microsoft co-founder Paul Allen, released OLMo 2, the second family of models in its OLMo series. (OLMo is short for "open language […]
The U.K. is seeking collaboration for a new AI security research lab that's designed to counter Russia and other hostile states in what it dubs the "new AI arms race." While the U.K. government has launched numerous funding initiatives in the past to support cybersecurity projects, the rise of AI-fueled nation-state attacks, specifically, is the […]
OpenAI is funding academic research into algorithms that can predict humans' moral judgements. In a filing with the IRS, OpenAI Inc., OpenAI's nonprofit org, disclosed that it awarded a grant to Duke University researchers for a project titled "Research AI Morality." Contacted for comment, an OpenAI spokesperson pointed to a press release indicating the award […]
It's no secret that scientists — and science generally — took a hit during the health crisis. Public confidence in scientists fell from 87 percent in April 2020 to a low of 73 percent in October 2023, according to survey data from the Pew Research Center. And the latest Pew data released last week suggests it will be an uphill battle to regain what was lost, with confidence in scientists only rebounding three percentage points, to 76 percent in a poll from October.
Building trust
The new study in Nature Human Behaviour may guide the way forward, though. The study encompasses five smaller studies probing the perceptions of scientists' trustworthiness, which previous research has linked to willingness to follow research-based recommendations.