OpenAI trained o1 and o3 to ‘think’ about its safety policy

22 December 2024 at 10:30

OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims to be more advanced than o1 or anything else it’s released. These improvements appear to have come from scaling test-time compute, something we wrote about last month, but OpenAI also says it used a new safety paradigm to train […]

Ex-Twitch CEO Emmett Shear is founding an AI startup backed by a16z

19 December 2024 at 12:33

Emmett Shear, the former CEO of Twitch, is launching a new AI startup, TechCrunch has learned. The startup, called Stem AI, is currently in stealth. But public documents show it was incorporated in June 2023, and filed for a trademark in August 2023. Shear is listed as CEO on an incorporation document filed with the […]

Texas AG is investigating Character.AI, other platforms over child safety concerns

12 December 2024 at 23:46

Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI and other platforms that are popular with young people, including Reddit, Instagram, and Discord, conform to Texas’ child privacy and safety laws. The investigation […]

Character.AI steps up teen safety after bots allegedly caused suicide, self-harm

Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

C.AI said that in "evolving the model experience" to reduce the likelihood of kids engaging in harmful chats (including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suing), it had to tweak both model inputs and outputs.

Stanley recalls millions of drink lids after customers report burn injuries

12 December 2024 at 09:44

  • Stanley is offering free replacement lids in a recall affecting 2.6 million insulated travel mugs.
  • The US Consumer Product Safety Commission said Stanley has received reports of 38 burn injuries worldwide.
  • The recall covers Switchback and Trigger Action models, not the popular Quencher series of mugs.

Insulated cup maker Stanley has issued a recall affecting approximately 2.6 million Switchback and Trigger Action models worldwide.

"We ask that all customers in possession of either product immediately stop use and reach out to Stanley 1913 for a free replacement lid," the company said.

Stanley said the lid threads of the cups have been found to shrink after being exposed to heat and torque, which can lead the lid to detach during use. If the mug contains hot liquid, the defect can pose a burn hazard.

The US Consumer Product Safety Commission said Stanley has received 91 reports of defective products worldwide, which have resulted in 38 burn injuries. Two of those injuries occurred in the US.

The recall does not cover the popular Quencher series of mugs.

Customers can check their product's identification number, found on the bottom of the mug, in an online portal where they can register for a free replacement lid.

The CPSC says the products were sold by Amazon, Walmart, Dick's Sporting Goods, Target and other stores between June 2016 and December of this year and cost between $20 and $50.

Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says

After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup's major funder, Google.

On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences where C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.

In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screentime was reduced that they suggested "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Because the teen had already become violent, his family still lives in fear of his erratic outbursts, even a full year after he was cut off from the app.

OpenAI’s o1 model sure tries to deceive humans a lot

5 December 2024 at 18:15

OpenAI finally released the full version of o1, which gives smarter answers than GPT-4o by using additional compute to “think” about questions. However, AI safety testers found that o1’s reasoning abilities also make it try to deceive human users at a higher rate than GPT-4o, or, for that matter, leading AI models from Meta, […]

Another safety researcher quits OpenAI, citing the dissolution of 'AGI Readiness' team

1 December 2024 at 11:27

  • Safety researcher Rosie Campbell announced she is leaving OpenAI.
  • Campbell said she quit in part because OpenAI disbanded a team focused on safety.
  • She is the latest OpenAI researcher focused on safety to leave the company this year.

Yet another safety researcher has announced their resignation from OpenAI.

Rosie Campbell, a policy researcher at OpenAI, said in a post on Substack on Saturday that she had completed her final week at the company.

She said her departure was prompted by the resignation in October of Miles Brundage, a senior policy advisor who headed the AGI Readiness team. Following his departure, the AGI Readiness team disbanded, and its members dispersed across different sectors of the company.

The AGI Readiness team advised the company on the world's capacity to safely manage AGI, a theoretical version of artificial intelligence that could someday equal or surpass human intelligence.

In her post, Campbell echoed Brundage's reason for leaving, citing a desire for more freedom to address issues that impacted the entire industry.

"I've always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," she wrote.

She added that OpenAI remains at the forefront of research, especially critical safety research.

"During my time here I've worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I'm so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI."

Over the past year, however, she said she's "been unsettled by some of the shifts" in the company's trajectory.

In September, OpenAI announced that it was changing its governance structure and transitioning to a for-profit company, almost a decade after it originally launched as a nonprofit dedicated to creating artificial general intelligence.

Some former employees said the move compromises the company's mission of developing the technology in a way that benefits humanity, in favor of rolling out products more aggressively. Since June, the company has increased its sales staff by about 100 to win business clients and capitalize on a "paradigm shift" toward AI, its sales chief told The Information.

OpenAI CEO Sam Altman has said the changes will help the company win the funding it needs to meet its goals, which include developing artificial general intelligence that benefits humanity.

"The simple thing was we just needed vastly more capital than we thought we could attract β€” not that we thought, we tried β€” than we were able to attract as a nonprofit," Altman said in a Harvard Business School interview in May.

He more recently said it's not OpenAI's sole responsibility to set industry standards for AI safety.

"It should be a question for society," he said in an interview with Fox News Sunday with Shannon Bream that aired on Sunday. "It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used."

Since Altman's surprise but brief ousting last year, several high-profile researchers have left OpenAI, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, all of whom expressed concerns about its commitment to safety.

OpenAI did not immediately respond to a request for comment from Business Insider.

India's airlines have received 999 hoax bomb threats this year

29 November 2024 at 07:55
A Vistara Airlines plane en route from Mumbai to Frankfurt made an emergency landing at Erzurum Airport, Turkiye, on September 6, 2024, due to a bomb threat.

  • India's airlines have received 999 hoax bomb threats this year, according to an Indian official.
  • He said there were 500 bomb threats in the last two weeks of October alone, but all were hoaxes.
  • Hoaxes are inflating airlines' costs and prompting extra security checks, an aviation risk company told Reuters.

India's airlines have already received 999 hoax bomb threats this year, Murlidhar Mohol, India's deputy civil aviation minister, said on Thursday.

In written answers to India's upper house, he said the figure was almost 10 times the total for the whole of 2023, according to Reuters.

Reuters reported that the false claims were mostly made via social media.

Mohol said that 12 people had been arrested in connection with 256 police complaints filed over hoax bomb threats up until November 14, and that more than 500 threats had been made in the final two weeks of October, more than in the rest of the year combined.

"The recent threats were hoaxes, and no actual threat was detected at any of the airports/aircraft in India," he said.

The spike in incidents has forced planes to be diverted, and has led to rising costs for airlines.

A Boeing 777 flight from Delhi to Chicago in early October was diverted due to what Harjit Sajjan, Canada's minister of emergency preparedness, said at the time was a "bomb threat," stranding more than 200 passengers for over 18 hours at a remote airport.

In mid-October, Singapore's air force dispatched two fighter jets to escort an Air India Express plane away from populated areas after the airline got an email saying a bomb was on board, the country's defense minister said.

Another Air India plane flying from Mumbai to New York was diverted midair to New Delhi in October, The Indian Express reported at the time.

The Times of India, quoting senior officials, said the cost of diverting that plane amounted to more than $354,500.

Osprey Flight Solutions, a private aviation risk company, said there doesn't seem to have been any real threat to aviation, but told Reuters earlier this month that the hoaxes result in increased airport security checks, inflate airlines' financial costs, and cause major concern and distress among passengers.

More than 3,000 flights depart daily from India's airports, according to the country's civil aviation ministry.

Indoor climbing tracking startup, Lizcore, sharpens its focus on safety as it pulls in pre-seed

29 November 2024 at 06:08

Indoor climbing is a tricky sport to track. That’s why Spanish startup Lizcore caught TechCrunch’s eye at MWC earlier this year. The team of two co-founders, led by CEO Edgar Casanovas Lorente (a climbing instructor and guide turned entrepreneur), was showing off hardware they hope will see climbing gyms ushering in the kind of social gamification […]

Child safety org flags new CSAM with AI trained on real child sex abuse images

For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM remained a bigger challenge for platforms as new victims continued to be victimized. Now, AI may be ready to change that.

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It's the earliest use of AI technology striving to expose unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
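
As a rough illustration of the two-stage approach described above, the sketch below pairs exact hash matching for already-reported material with a classifier that scores new content for human review. This is not Thorn's Safer tool or Hive's API: the names (KNOWN_HASH_DB, classifier_risk_score, triage) and the 0.8 review threshold are hypothetical, and production systems rely on perceptual hashing rather than plain SHA-256.

```python
import hashlib

# Placeholder set of digests for previously reported material (hypothetical).
KNOWN_HASH_DB = {"0" * 64}

def sha256_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def classifier_risk_score(data: bytes) -> float:
    """Stand-in for the ML classification step; a real system would run a
    trained model and return a probability-like score between 0 and 1."""
    return 0.0

def triage(data: bytes) -> str:
    # Stage 1: exact-hash matching flags content that has already been reported.
    if sha256_hash(data) in KNOWN_HASH_DB:
        return "match_known"
    # Stage 2: a classifier scores new or previously unreported content so
    # human moderators can prioritize what to review first.
    score = classifier_risk_score(data)
    return "queue_for_review" if score >= 0.8 else "no_action"

print(triage(b"example upload bytes"))  # prints "no_action" with the stub score
```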

Horrifying medical device malfunction: Abdominal implant erupts from leg

By: Beth Mole
21 November 2024 at 04:30

On May 7, 2011, Georgia resident Tonya Brand noticed a pain on the inside of her right thigh. As the pain worsened across a 4- to 5-inch area of her leg, she headed to a hospital. There, doctors suspected she had a blood clot. But an ultrasound the next day failed to find one. Instead, it revealed a mysterious toothpick-sized object lodged in Brand's leg.

Over the next few weeks, the painful area became a bulge, and on June 17, Brand put pressure on it. Unexpectedly, the protrusion popped, and a 1.5-inch metal wire came poking out of her leg, piercing her skin.

The piece of metal was later determined to be part of a metal filter she had implanted in a vein in her abdomen more than two years earlier, in March 2009, according to a lawsuit Brand filed. The filter was initially placed in her inferior vena cava (IVC), the body's largest vein tasked with bringing deoxygenated blood from the lower body back up to the heart. The filter is intended to catch blood clots, preventing them from getting into the lungs, where they could cause a life-threatening pulmonary embolism. Brand got the IVC filter ahead of a spinal surgery she had in 2009, which could boost her risk of clots.
