
OpenAI's latest AI models have a new safeguard to prevent biorisks

16 April 2025 at 14:12
OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI's safety report. O3 and o4-mini represent […]

OpenAI just gave itself wiggle room on safety if rivals release 'high-risk' models

16 April 2025 at 04:54
Sam Altman has defended the company's shifting approach to AI safety.

Didem Mente/Getty Images

  • OpenAI says it could adjust safety standards if a rival launches a risky model without safeguards.
  • The company launched GPT-4.1 this week without a safety report.
  • Some former employees say OpenAI is scaling back safety promises to stay competitive.

OpenAI doesn't want its AI safeguards to hold it back if its rivals don't play by the same rules.

In a Tuesday blog post, OpenAI said it might change its safety requirements if "another frontier AI developer releases a high-risk system without comparable safeguards."

The company said it would only do so after confirming the risk landscape had changed, publicly acknowledging the decision, and ensuring it wouldn't meaningfully increase the chance of severe harm.

OpenAI shared the change in an update to its "Preparedness Framework," the company's process for preparing for AI that could introduce "new risks of severe harm." Its safety focus includes areas such as cybersecurity, chemical threats, and AI's ability to self-improve.

The shift comes as OpenAI has come under fire for taking different approaches to safety in recent months.

On Monday, it launched the new GPT-4.1 family of models without a model or system card — the safety document that typically accompanies new releases from the company. An OpenAI spokesperson told TechCrunch the model wasn't "frontier," so a report wasn't required.

In February, OpenAI launched its Deep Research tool weeks before publishing its system card detailing safety evaluations.

These instances have added to ongoing scrutiny of OpenAI's commitment to safety and transparency in its AI model releases.

"OpenAI is quietly reducing its safety commitments," Steven Adler, a former OpenAI safety researcher, posted on X Wednesday in response to the updated framework.

OpenAI is quietly reducing its safety commitments.

Omitted from OpenAI's list of Preparedness Framework changes:

No longer requiring safety tests of finetuned models https://t.co/oTmEiAtSjS

— Steven Adler (@sjgadler) April 15, 2025

Adler said that OpenAI's previous framework, published in December 2023, included a clear requirement to safety test fine-tuned models. He said the latest update only requires testing if the model is being released with open weights, which is when a model's parameters are made public.

"I'd like OpenAI to be clearer about having backed off this previous commitment," he added.

OpenAI did not immediately respond to a Business Insider request for comment.

Ex-OpenAI staff back Musk's lawsuit

Adler isn't the only former employee speaking out about safety concerns at OpenAI.

Last week, 12 former OpenAI employees filed a motion asking a judge to let them weigh in on Elon Musk's lawsuit against the company.

In a proposed amicus brief filed on Friday, they said that OpenAI's planned conversion to a for-profit entity could incentivize the company to cut corners on safety and concentrate power among shareholders.

The group includes former OpenAI staff who worked on safety, research, and policy.

Altman defends OpenAI's approach

Sam Altman, OpenAI's CEO, defended the company's evolving safety approach in a Friday interview at TED2025. He said OpenAI's framework outlines how it evaluates "danger moments" before releasing a model.

Altman also addressed the idea that OpenAI is moving too fast. He said that AI companies regularly pause or delay model releases over safety concerns, but acknowledged that OpenAI recently relaxed some restrictions on model behavior. "We've given users much more freedom on what we would traditionally think about as speech harms," he said.

He explained that the change reflects a "more permissive stance" shaped by user feedback. "People really don't want models to censor them in ways that they don't think make sense," he said.


OpenAI may 'adjust' its safeguards if rivals release 'high-risk' AI

15 April 2025 at 12:50
OpenAI has updated its Preparedness Framework — the internal system it uses to assess the safety of AI models and determine necessary safeguards during development and deployment. In the update, OpenAI stated that it may "adjust" its safety requirements if a competing AI lab releases a "high-risk" system without similar protections in place. The change […]

A Boston-based construction firm is leveraging AI to keep roughly 30,000 workers safe

15 April 2025 at 09:25
A road engineer in safety gear on an abstract AI-themed background.

Getty Images; Karan Singh for BI

  • Worker and job-site safety are key factors that construction management firms must consider daily.
  • The Boston-based firm Shawmut does this with AI-powered software connected to workers' phones.
  • This article is part of "AI in Action," a series exploring how companies are implementing AI innovations.

As a $2 billion construction management firm overseeing more than 150 active worksites on any given day, Shawmut Design and Construction is responsible for keeping roughly 30,000 employees, contractors, and subcontractors safe.

The Boston-based firm has employed AI for about eight years, using it for data collection, risk evaluation, worker safety compliance, and more.

Shaun Carvalho, the company's chief safety officer, said this strategy had become invaluable for creating a safer operation overall while still prioritizing growth and efficiency.

"You're talking about a business that, quite literally, 20 or 30 years ago was driven by paper and clipboards," Carvalho told Business Insider. "Anything we can do to leverage technology, we'll do it."

Using data and AI to promote safety at construction sites

Shawmut uses artificial intelligence to predict safety-related incidents on its construction sites.

With the help of an AI tool created by a private vendor, Shawmut can pull various data points to help score the likelihood of something going awry.

Some of this data comes from the National Weather Service — information about forecast temperatures, prospective weather events, and the associated fallout, such as freezing pipes. Other data is about personnel: if there's an influx of new people at a job site but no matching influx of new leaders, the AI system flags it as a potential hazard.

"Not having enough leaders means we probably aren't getting enough eyeballs on the workers who are out there now," Carvalho said. "When your job is to keep everyone safe, eyeballs are a very good thing."

This data dump isn't in real time, at least not yet. The drops come at least once a day, empowering leaders to speak with teams and make appropriate tweaks, Carvalho said.
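Business Insider doesn't detail the vendor's model, but the inputs Carvalho describes map naturally onto a daily risk-scoring pass over each site. Below is a minimal Python sketch of that idea; the field names, weights, and thresholds are hypothetical illustrations, not Shawmut's actual system.

```python
from dataclasses import dataclass

@dataclass
class SiteSnapshot:
    """One daily data drop for a job site (all fields hypothetical)."""
    forecast_low_f: float  # NWS forecast low temperature, Fahrenheit
    severe_weather: bool   # severe-weather event expected at the site
    new_workers: int       # workers added since the last drop
    new_leaders: int       # site leaders added since the last drop
    total_workers: int
    total_leaders: int

def risk_score(s: SiteSnapshot) -> float:
    """Combine weather and staffing signals into a 0-1 risk score."""
    score = 0.0
    if s.forecast_low_f <= 32:  # freezing temperatures: pipes, icy surfaces
        score += 0.3
    if s.severe_weather:
        score += 0.3
    # An influx of new workers without a matching influx of leaders
    if s.new_workers > 0 and s.new_leaders == 0:
        score += 0.2
    # Thin supervision overall: too few eyeballs per worker
    if s.total_leaders == 0 or s.total_workers / s.total_leaders > 15:
        score += 0.2
    return min(score, 1.0)

snapshot = SiteSnapshot(forecast_low_f=28, severe_weather=False,
                        new_workers=12, new_leaders=0,
                        total_workers=120, total_leaders=6)
if risk_score(snapshot) >= 0.5:
    print("Flag site for a safety review before work starts")
```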

Pairing GPS data with AI capabilities

Shawmut has been using AI since 2017, Carvalho said. The program really took off during the COVID-19 pandemic, when the company found new ways to use the technology to keep workers safe.

The firm's chief people and administration officer, Marianne Monte, told BI that the company used GPS tracking software on workers' phones to monitor their location on job sites. An AI engine automatically sent alerts when workers got closer than 6 feet apart.

Gradually, the company has expanded this use case to track whether workers are tied off on buildings (so they don't fall) and whether workers are using scaffolding properly.

"These tools feel very benign to me, and they are making our job sites safer," Monte told SAP. "It's much less about the height of that ladder than it is about a person's mental state when they get on it."

Privacy rights are a key consideration in the age of AI

While AI can help construction businesses like Shawmut keep sites safe, some experts have raised ethical questions about using AI in this fashion.

Benjamin Lange, who studies the intersection of technology and ethics, said leveraging AI to essentially monitor people raises concerns about privacy, informed consent, and data security, which must be carefully managed to prevent misuse or overreach.

"Companies must be transparent about data collection practices, ensure that tracking is strictly limited to safety purposes, and provide workers with opt-in mechanisms to maintain trust and protect individual autonomy and workers' privacy rights," said Lange, a research assistant professor in the ethics of AI at the Ludwig Maximilian University of Munich and a researcher leader at the Munich Center for Machine Learning.

After hearing this feedback, Shawmut representatives decided to anonymize data from the jump.

What's next for AI in construction

AI in construction faces several challenges, including data reliability and lack of human oversight.

"There are also concerns about overreliance on automation, which may reduce human oversight and accountability in critical decision-making," Lange said.

AI systems rely on high-quality, accurate data, and any errors or biases in datasets can lead to costly mistakes or unsafe conditions, Carvalho said. Additionally, construction sites may vary widely in complexity, making it difficult for AI models trained on past projects to generalize effectively, he added.

Recent industry research indicates that bad data in the construction industry is a big hurdle for companies attempting to embrace AI. A recent report by Autodesk and FMI Consulting indicated that poor data — information that is "incomplete, inaccurate, inconsistent, or outdated," according to the report — costs the industry $1.8 trillion annually. That same report said that 95% of all construction data goes unused.

Still, Shawmut plans to expand its AI programs over the next three to five months, Carvalho said.

One push: real-time response. Carvalho imagines a future where every Shawmut employee wears a badge linked to a digital map of the job site. In this world, the AI system would alert managers in real time the moment an employee stepped too close to something dangerous.

"What we want isn't actually in the marketplace yet," Carvalho said.

He added that he'd like to see AI technology that accounts for the rules and regulations that differ by state. If, for instance, a contractor works a Shawmut job in California and then heads to one in Utah, this dream system would automatically update the job-site policies to reflect the rules and regulations of the new jurisdiction.

A system like this would also contribute to job safety by keeping site policies current as workers move between states.
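At its core, that scenario is a jurisdiction lookup keyed on a job site's location. Here's a minimal sketch of how such a policy resolver might work; the rule values, state codes, and function names are illustrative assumptions, not a real regulatory database.

```python
# Hypothetical, illustrative rules; real federal and state requirements
# are far richer and change over time.
FALL_PROTECTION_TRIGGER_FT = {
    "federal": 6.0,  # illustrative OSHA construction trigger height
    "CA": 7.5,       # illustrative Cal/OSHA trigger for some work
}

def policies_for(state: str) -> dict:
    """Resolve the safety policies in force for a job site's state."""
    trigger = FALL_PROTECTION_TRIGGER_FT.get(
        state, FALL_PROTECTION_TRIGGER_FT["federal"])
    return {"state": state, "fall_protection_trigger_ft": trigger}

def reassign(badge_id: str, old_state: str, new_state: str) -> None:
    """Push the new jurisdiction's policies when a worker changes sites."""
    old, new = policies_for(old_state), policies_for(new_state)
    changes = {k: v for k, v in new.items() if old.get(k) != v}
    if changes:
        print(f"{badge_id}: policies updated for {new_state}: {changes}")

reassign("BADGE-0042", old_state="CA", new_state="UT")
```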


OpenAI ships GPT-4.1 without a safety report

15 April 2025 at 09:12
On Monday, OpenAI launched a new family of AI models, GPT-4.1, which the company said outperformed some of its existing models on certain tests, particularly benchmarks for programming. However, GPT-4.1 didn't ship with the safety report that typically accompanies OpenAI's model releases, known as a model or system card. As of Tuesday morning, OpenAI had […]

Researchers concerned to find AI models misrepresenting their "reasoning" processes

Remember when teachers demanded that you "show your work" in school? Some new types of AI models promise to do exactly that, but new research suggests that the "work" they show can sometimes be misleading or disconnected from the actual process used to reach the answer.

New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek's R1 and its own Claude series. In a research paper posted last week, Anthropic's Alignment Science team demonstrated that these SR models frequently fail to disclose when they've used external help or taken shortcuts, despite features designed to show their "reasoning" process.

(It's worth noting that OpenAI's o1 and o3 series SR models were excluded from this study.)


I called off my plans to visit the United States. As a European, traveling there just feels too risky.

10 April 2025 at 04:36
My concerns about visiting the United States have pushed me to change my travel plans.

Grant Faint/Getty Images

  • I'd been planning a trip to New York this year, but I'm worried about visiting the US.
  • As a European, recent stories of travelers facing scrutiny at the US border have made me nervous.
  • Many countries have travel warnings about visiting the US. For now, I'm going to Italy instead.

I'm based in the UK, but I've visited the United States three times in my life.

When my kids were small, I brought them to Florida for a once-in-a-lifetime trip exploring the state. On my family's second visit to the US, we headed to Texas after a cruise, stopping by the Houston Zoo and having the best Tex-Mex food I've ever eaten.

On my third, my partner and I spent a few days in Boston. Although it rained the whole time, we had a blast visiting coffee shops and enjoying the city's waterfront views.

As a European, I always felt welcome in the country and never worried about my visits beyond the usual travel inconveniences like long flights and tiredness.

However, this year, my concerns about visiting the US have increased — so much so that I called off my travel plans to see Niagara Falls in New York this summer.

Recent headlines have me worried about crossing the border

My partner and I have gone to Boston together.

Samantha Priestley

Following the Trump administration's orders to increase national security, I've seen more stories about travelers facing scrutiny at the US border and even having their devices searched by US Customs and Border Protection officers.

One headline-making incident involved a French researcher who was denied entry to the US in March after his phone was searched.

French officials said officers found the researcher's opinions about the Trump administration on his phone. The US later denied his removal was based on political beliefs — but this incident gave me pause.

Although these device searches have been legal for a long time and are still considered relatively rare, recent incidents like these set off my internal alarm bells.

When I shared my concerns with American friends, they advised me not to say that I'm a writer at the border and to bring a "burner phone" instead of my real one if I visit the US. (Folks in online travel forums and even Canadian immigration lawyers have suggested using "burner phones" while traveling to the US, too.)

I've also been reading about a number of tourists visiting the US who have been detained in the past few months.

One British tourist was detained for 19 days at a US Immigration and Customs Enforcement (ICE) processing center after trying to go from Canada to the US in February.

She said she was traveling on a tourist visa and staying with a host family in return for doing chores. She told the BBC that even though she was not paid, she was told she violated her visa.

Regardless of whether that tourist had legitimate visa violations or paperwork issues, I don't want to worry that one small mix-up could get me sent to a detention center in the US.

Attitudes and guidelines seem to constantly be shifting

I've had great trips to the US — for now, though, I'm going to hold off on making plans to go back.

Samantha Priestley

For now, my plans to travel to the US with or without my partner are on pause.

I'm not the only person reconsidering travel to the US, especially considering that flight bookings from Canada and Europe to the US have dipped in the past few months.

The UK, along with numerous other countries like Canada, Finland, France, Germany, and Portugal, has also issued travel warnings to its citizens who are considering visiting the US.

Right now, the future just seems so uncertain. If I were to visit the US a few weeks from now (as I'd originally planned), I don't feel confident I would be allowed into the country or that I wouldn't be detained over a small error or misstep.

If I did enter without issue, I'm not sure what the political situation or atmosphere regarding Europeans might be even just a few weeks from now. After all, in the UK, we've heard frequent reports about the ever-changing political landscape in the US.

I'm also worried by President Donald Trump's anti-Europe sentiments, his repeated statements about wanting to make Canada the 51st state, and his comments about taking Greenland from Denmark.

As an anxious person, all these changes and feelings of instability make me worry I won't feel welcome in the US. I know so many foreigners visit on a daily basis without any issues, and I could be completely fine on my trip, but I'd feel too stressed to enjoy it.

I look forward to a time when I'll feel comfortable going back to the US, maybe once relations between Europe and America feel less strained.

For now, this year, I'm planning to visit Italy instead.


Google is shipping Gemini models faster than its AI safety reports

3 April 2025 at 09:41
More than two years after Google was caught flat-footed by the release of OpenAI's ChatGPT, the company has dramatically picked up the pace. In late March, Google launched an AI reasoning model, Gemini 2.5 Pro, that leads the industry on several benchmarks measuring coding and math capabilities. That launch came just three months after the […]

Roblox enables parents to block experiences and friends

2 April 2025 at 06:51
Roblox, the popular gaming platform geared toward preteens, has made substantial updates to its safety policy in the last year following accusations of insufficient safety measures for children, which could expose them to risks like grooming, explicit content, and violent material. The company announced Wednesday three new parental control features, including the option for them […]

Chinese brands are racking up record safety warnings in the US

25 March 2025 at 15:03
Two-thirds of the products that received consumer safety warnings in 2024 came to the US from Chinese companies, according to the Public Interest Research Group.

Associated Press

  • US consumer safety watchdogs are issuing a rapidly growing number of product warnings.
  • Warnings are issued when companies don't voluntarily comply with recalls for products deemed hazardous.
  • A US official told the Public Interest Research Group the surge boils down to one word: China.

The explosion of international e-commerce brands is leading to a sharp rise in product safety warnings.

Warnings issued by the US Consumer Product Safety Commission reached a record high of 63 last year, according to a new analysis from the Public Interest Research Group Education Fund.

The US Consumer Product Safety Commission issues warnings when companies don't voluntarily comply with recalls for products it deems hazardous.

Last year's figure is up dramatically from 38 in 2023 and 12 in 2022, according to the PIRG analysis. There were fewer than 10 warnings combined from 2015 to 2021, it added.

So, what's contributing to the rise?

"In a word, China," acting CPSC's chairman, Peter Feldman, told PIRG.

Two-thirds of the products that received warnings in 2024 came to the US from Chinese companies, the PIRG report found.

"The United States is facing a flood of Chinese consumer products that violate US safety laws," Feldman said. "When CPSC identifies illegal Chinese goods, the manufacturer is, more often than not, unreachable, unfindable, or uncooperative."

In other words, a warning is an indication that safety officials are not able to make progress with a company concerning a recall.

Some of the hazardous products included a foldable step stool that could collapse or tip over, a line of e-bike batteries prone to overheating or catching fire, infant loungers that posed a risk of suffocation or entrapment, and bike helmets that didn't meet US safety standards.

Many of the products that received warnings were sold by third parties on Amazon's and Walmart's marketplaces. In at least one instance each, both retailers agreed to contact customers about affected products.

An Amazon spokesperson told BI that all products offered on its site are required to comply with applicable laws and regulations. The company also notifies customers of product updates through a dedicated page in their Amazon accounts called "Your Recalls and Product Safety Alerts."

The PIRG analysis also found two items with warnings that were sold on Shein's website.

Among the recalled products, the report found that those sold exclusively online were twice as likely to violate a US safety standard as those sold in physical stores.

Interestingly, while eight products sold exclusively on Temu were recalled, the PIRG report noted that no Temu items received warnings last year because all the companies involved cooperated with the recall process.


Instagram partners with schools to prioritize reports of online bullying and student safety

25 March 2025 at 11:39
Instagram on Tuesday announced a new school partnership program aimed at expediting the handling of moderation reports submitted by verified school accounts. The program is currently available to all middle and high schools in the U.S. It allows schools to report posts or student accounts that may violate the app's guidelines directly to Instagram. These […]

A Southwest plane almost took off from a taxiway instead of the runway

21 March 2025 at 06:36
A Southwest plane almost took off from a taxiway instead of a runway in Orlando on Thursday.

Paul Hennessy/SOPA Images/LightRocket via Getty Images

  • A Southwest plane inadvertently started to take off from a taxiway instead of a runway on Thursday.
  • Flight tracking data shows the jet was on a parallel taxiway and reached about 70 knots.
  • The incident follows a series of recent aviation close calls and crashes.

An air traffic controller instructed a Southwest Airlines jet to abort its takeoff on Thursday after it began to accelerate on a taxiway instead of a runway at Orlando International Airport.

The Federal Aviation Administration confirmed to Business Insider that the event took place at around 9:30 a.m. local time as the plane prepared to head to Albany, New York.

Southwest said "the crew mistook the surface for the nearby runway." A different plane later operated the flight.

The Orlando airport has two parallel taxiways between the terminal and the runway where the plane was meant to take off. Taxiways are used to navigate around airports, including to and from runways, and carry distinct markings: taxiway markings are yellow, while runway markings are white.

The aircraft appears to have mistakenly accelerated on Taxiway H, which runs parallel to Runway 17R. It reached 70 knots before slowing down.

A screenshot of the Orlando airport map showing Taxiway H adjacent to Runway 17R.

Flightradar24

According to weather reports, conditions were largely clear, with no rain or fog during the event, which occurred during daylight hours.

The FAA said no other aircraft was involved in the incident and that it is investigating.

The event is reminiscent of an Air Canada flight that nearly landed on a taxiway in San Francisco in 2017.

In that incident, ATC told the pilots to abort their landing, and the plane pulled up just in time β€” coming within feet of the fully loaded passenger airliners lined up on the taxiway.

Another close call in a string of recent incidents and crashes

While the Southwest event doesn't appear as dire as the one in 2017, it comes after a string of recent near-misses and air crashes that have stoked travelers' fears and heightened awareness of air safety.

In late February, a Southwest plane nearly collided with a private jet at Chicago Midway Airport. The Southwest pilots avoided the crash by conducting a go-around before landing.

In mid-February, a Delta Air Lines plane crash-landed in Toronto and flipped upside down. That followed the American Airlines midair collision over Washington, DC, in January.

The Delta plane stopped upside down after crash-landing on Runway 23 at Toronto Pearson Airport in February.

Katherine KY Cheng/Getty Images

A preliminary report from the Transportation Safety Board of Canada published on Thursday says the Delta plane was descending quickly and that its landing flare was lower than required by the aircraft's manual.

The NTSB said factors in the DC crash included the design of helicopter flight paths around Ronald Reagan Washington National Airport, air traffic control communication, and the helicopter flying too high at the time of impact.

Both incidents are still under investigation.



Telegram hits 1 billion active users as CEO Pavel Durov takes swipe at Meta-owned rival WhatsApp

19 March 2025 at 18:18
Founder and CEO of Telegram Pavel Durov.

Albert Gea/Reuters

  • Telegram reached 1 billion monthly active users, its CEO said.
  • CEO Pavel Durov took a shot at WhatsApp, calling it "a cheap, watered-down imitation of Telegram."
  • Durov has been allowed to leave France after being held on charges of failing to curb extremist content.

Telegram just hit the massive milestone of 1 billion monthly active users.

As CEO Pavel Durov shared the news on his personal Telegram channel, he also took a direct shot at his biggest competitor, Meta-owned WhatsApp, which has over 2 billion monthly active users.

"Ahead of us stands WhatsApp — a cheap, watered-down imitation of Telegram," he wrote. "For years, they've desperately tried to copy our innovations while burning billions on lobbying and PR campaigns to slow us down. They failed. Telegram grew, became profitable, and — unlike our competitor — retained its independence."

He added in the post that the company raked in $547 million in profit in 2024.

Durov's swipe at his rival came just after he was allowed to return to Dubai after being held in France for months.

The CEO was arrested in August 2024 after landing in Paris on his private jet. French authorities charged him with multiple offenses, including "complicity" in the distribution of child sexual abuse material and drug trafficking on Telegram while refusing to cooperate with the authorities' investigation into the platform. He was released on a $5.6 million bail.

Durov has denied the accusations and wrote in a statement on his Telegram channel on September 5, 2024, that holding him responsible for crimes committed by third parties on the platform was both a "surprising" and "misguided approach."

Initially, Durov was barred from leaving France. But the Paris prosecutor's office confirmed that his judicial restrictions were temporarily lifted from March 15 to April 7, allowing him to travel again. No details were given about the conditions of his release or whether he will be required to return.

"The process is ongoing but it feels great to be home," said Durov in a post on his Telegram channel on Monday.

Since its launch in 2013 by brothers Pavel and Nikolai Durov, Telegram has positioned itself as a privacy-first alternative to mainstream messaging apps, offering end-to-end encryption for video calls and secret chats. It has gained a following in countries where speech is more limited.

Since Durov's arrest, Telegram has made some major policy changes, including joining the Internet Watch Foundation — a group that helps identify and remove child sex abuse content — and publishing transparency reports that detail how much content it removed, which is a common industry practice.

Telegram also announced in September 2024 that it would start sharing the IP addresses and phone numbers of offenders with law enforcement when legally required to do so.

Telegram did not immediately respond to a request for comment.

Read the original article on Business Insider

Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks

19 March 2025 at 11:27

In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies. The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on AI Frontier Models, […]


Y Combinator's police surveillance darling Flock Safety raises $275M at $7.5B valuation

13 March 2025 at 14:12

Flock Safety and one of its long-time VCs, Bedrock Capital, announced Thursday that the startup raised a fresh $275 million at a $7.5 billion valuation. Flock makes computer vision-enabled video surveillance technology used by law enforcement as well as businesses, property management companies, and so on. It's best known for its automatic license plate recognition […]

