Startup set to brick $800 kids robot is trying to open source it first

Earlier this month, startup Embodied announced that it is going out of business and taking its Moxie robot with it. The $800 robots, aimed at providing emotional support for kids ages 5 to 10, would soon be bricked, the company said, because they can’t perform their core features without the cloud. Following customer backlash, Embodied is trying to create a way for the robots to live an open-source second life.

Embodied CEO Paolo Pirjanian shared a document via a LinkedIn blog post today saying that people who used to be part of Embodied’s technical team are developing a “potential” open-source way to keep Moxies running. The document reads:

This initiative involves developing a local server application (‘OpenMoxie’) that you can run on your own computer. Once available, this community-driven option will enable you (or technically inclined individuals) to maintain Moxie’s basic functionality, develop new features, and modify her capabilities to better suit your needs—without reliance on Embodied’s cloud servers.

The notice says that after releasing OpenMoxie, Embodied plans to release “all necessary code and documentation” for developers and users.
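Embodied hasn’t published technical details of OpenMoxie yet, but the basic idea of swapping a cloud endpoint for a server on the owner’s own computer is easy to picture. Below is a minimal, hypothetical sketch; the route, port, and JSON fields are invented for illustration and are not from Embodied’s code.

```python
# A minimal sketch (not Embodied's actual code) of the idea behind a local
# server like OpenMoxie: the robot is pointed at your own machine instead
# of a cloud endpoint. Route name and JSON shape are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MoxieHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        utterance = json.loads(body or b"{}").get("utterance", "")
        # Stand-in for the cloud conversation engine: echo a canned reply.
        reply = {"reply": f"You said: {utterance}", "emotion": "neutral"}
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # The robot's firmware would be reconfigured to talk to this address.
    HTTPServer(("0.0.0.0", 8080), MoxieHandler).serve_forever()
```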

© Embodied

Trailer: 'Rule Breakers' will bring Afghanistan’s first-ever girls’ robotics team to the big screen on March 7

The courageous story of Afghanistan’s first all-girls robotics team is coming to a theater near you.

Rule Breakers is based on the true story of the Afghan Girls Robotics Team, which grabbed the world’s attention in 2017 when its members were denied US visas while attempting to compete at the First Global Challenge international robotics competition. Fifty-three members of Congress signed a petition, and President Donald Trump intervened to give the girls travel documents on special humanitarian grounds, allowing them to enter the US and compete in the robotics games, according to a New York Times profile.

The story of the team’s struggle to compete in the robotics competition goes much deeper than their attempts to enter the US. First Global founder Dean Kamen, who is best known for designing the Segway, put together his competitive robotics league as a way to spark interest in science and technology among high schoolers. He enlisted Afghan tech entrepreneur and Digital Citizen Fund (DCF) founder Roya Mahboob to put together an all-girls robotics team for the competition, nicknamed the Afghan Dreamers. A dozen girls made the cut, forming the first team, and worked on their robotic creation in Mahboob’s parents’ basement using whatever they could find for tools, along with parts donated by Kamen, according to the Times.

The movie tells the story of the team’s deep and perilous struggle to compete and pursue their passions. The Taliban’s return to power in 2021 reversed years of progress toward gender equality and freedom by forbidding women from receiving an education in science and technology, forcing some team members to flee their country for their own safety and for the right to pursue their futures on their own terms. Team member Sadaf Hamidi, who fled to Qatar in 2021, told NBC News last year that one of her sisters had to give up her dream of going to medical school, saying, “This is heartbreaking for me and for them.”

Rule Breakers is directed by two-time Academy Award winner Bill Guttentag and stars Nikohl Boosheri as Mahboob and Fleabag’s Phoebe Waller-Bridge. The film hits theaters on March 7, 2025.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/trailer-rule-breakers-will-bring-afghanistans-first-ever-girls-robotics-team-to-the-big-screen-on-march-7-170049854.html?src=rss

© Angel Studios

Phoebe Waller-Bridge and Nikohl Boosheri star in the new film Rule Breakers about Afghanistan's first female robotics team.

Call ChatGPT from any phone with OpenAI’s new 1-800 voice service

On Wednesday, OpenAI launched 1-800-CHATGPT (1-800-242-8478), a telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free. The company also says that people outside the US can send text messages to the same number for free using WhatsApp.

Upon calling, users hear a voice say, "Hello again, it's ChatGPT, an AI assistant. Our conversation may be reviewed for safety. How can I help you?" Callers can ask ChatGPT anything they would normally ask the AI assistant and have a live, interactive conversation.

During a livestream demo of "Calling with ChatGPT" during Day 10 of "12 Days of OpenAI," OpenAI employees demonstrated several examples of the telephone-based voice chat in action, asking ChatGPT to identify a distinctive house in California and for help in translating a message into Spanish for a friend. For fun, they showed calls from an iPhone, a flip phone, and a vintage rotary phone.

© Charles Taylor via Getty Images

I used a bot to do my Christmas shopping. It quickly got weird.

A robot putting a poo emoji in a gift box

iStock; Rebecca Zisser/BI

Stumped on what to get my mom for Christmas this year, I turned, desperately, to Perplexity AI's chatbot. In response to my initial broad question, "What should I get my mom for Christmas?" the robo-elf gave me links to several gift guides published on sites including Target and Country Living. Then the chatbot suggested generic favorites like a Stanley mug and a foot massager.

But as I scrolled, it also dropped links directly to more esoteric gifts, including a mug with Donald Trump on it. "You are a really, really great mom," the mug read. "Other moms? Losers, total disasters." I hadn't given Perplexity any indication of my family's political leanings, but the bot seemed to think sipping from Trump's visage every morning was a gift any mother would love.

Then it suggested I make a jar and stuff it with memories I've written down. A cute idea, but I did let Perplexity know that I'm in my 30s — I don't think the made-at-home gift for mom is going to cut it.

'Tis the season to scramble and buy tons of stuff people don't need or really even want. At least that's how it can feel when trying to come up with gifts for family members who already have everything. Money has been forked over for restaurant gift cards that collect dust or slippers and scarves that pile up; trendy gadgets are often relegated to junk drawers by March. As artificial intelligence becomes more integrated into online shopping, this whole process should get easier — if AI can come to understand the art behind giving a good gift.

Shopping has become one of Perplexity's top search categories in the US, particularly around the holidays, Sara Platnick, a spokesperson for Perplexity, tells me. While Platnick didn't comment directly on the individual gift suggestions Perplexity's chatbot makes, she tells me that product listings provided in responses are determined by "ratings and its relevance to a user's request."

There are chatbots to consult for advice this holiday season, like Perplexity and ChatGPT, but AI is increasingly seeping into the entire shopping experience. From customer-service chatbots handling online shopping woes to ads serving recommendations that follow you across the web, AI's presence has ramped up alongside the explosion of interest in generative AI. Earlier this year, Walmart unveiled generative-AI-powered search updates that allow people to search for things like "football watch party" instead of looking for items like chips and salsa individually; Google can put clothes on virtual models in a range of sizes to give buyers a better idea of how they'll look. In a world with more options than ever, there's more help from AI, acting as robo-elves in a way — omnipresent and sometimes invisible as you shop across the web.

For the indecisive shopper, AI may be a silver bullet for choosing among hundreds of sweaters, plucking the best one from obscurity and putting an end to endless scrolling — or it might serve up so many targeted ads that it leads people to overconsume.

Either way, AI has been completely changing the e-commerce game. "It allows a company to be who the customer wants it to be," says Hala Nelson, a professor of mathematics at James Madison University. "You cannot hire thousands of human assistants to assist each customer, but you can deploy thousands of AI assistants." Specialization comes from using third-party data to track activity and preferences across the web. In a way, that's the personalized level of service high-end stores have always provided to elite shoppers. Now, instead of a consultation, the expertise is built on surveillance.

Companies also use AI to forecast shopping trends and manage inventory, which can help them prepare and keep items in stock for those last-minute shoppers. Merchants are constantly looking for AI to get them more — to bring more eyes to their websites, to get people to add more items to their carts, and ultimately to actually check out and empty their carts. In October and early November, digital retailers using AI tech and agents increased the average value of an order by 7% compared with sites that did not employ the technology, according to Salesforce data. The company predicted that AI and shopping agents would influence 19% of orders during the week of cyber deals around Thanksgiving. And AI can help "level the playing field for small businesses," says Adam Nathan, the founder and CEO of Blaze, an AI marketing tool for small businesses and entrepreneurs.

"They don't want to necessarily be Amazon, Apple, or Nike, they just want to be the No. 1 provider of their service or product in their local community," Nathan says. "They're not worried about AI taking their job — they're worried about a competitor using AI. They see it as basically a way to get ahead."

AI early adopters in the e-commerce space benefited last holiday season, but the tech has become even more common this year, says Guillaume Luccisano, the founder and CEO of Yuma AI, a company that automates customer service for sellers that use Shopify. Some merchants that used Yuma AI during the Black Friday shopping craze automated more than 60% of their customer-support tickets, he says. While some people lament having to deal with a bot instead of a person, Luccisano says the tech is getting better, and people are mostly concerned about whether their problem is getting solved, not whether the email came from a real person or generative AI.

After my ordeal with Perplexity, I turned to ChatGPT to see how it would fare in helping me find gifts for the rest of my family. For my 11-year-old cousin, it suggested a Fitbit or a smartwatch for kids to help her "stay active." A watch that tracks activity isn't something I feel comfortable giving a preteen, so I provided some more details. I told ChatGPT she loved the "Twilight" series, so it suggested a T-shirt with the Cullen family crest and a "Twilight"-themed journal for writing fan fiction. It told me I could likely find these items on Etsy, but it didn't give me direct links. (As her cool millennial cousin who has lived to tell of my own "Twilight" phase in 2007, I did end up buying a makeup bag from Etsy with a movie scene printed on it.) I also asked ChatGPT for suggestions for my 85-year-old grandpa, and it came up with information about electronic picture frames — but the bulk of our family photos are stuffed in albums and shoeboxes in his closet and not easily digitized.

I could navigate this list because these are deep contextual things that I know about my family members, something AI doesn't know yet. Many of the best gifts I've ever received are from friends and family members who stumbled upon something they knew I would love — a vinyl record tucked in a bin or a print from an independent artist on display at a craft show. AI can play a role in helping people discover new items they may never have known to buy online, but it can't replace that intuition we have when we find the perfect thing for a loved one. "We're still really wrestling with: How accurate is it? How much of a black box is it?" says Koen Pauwels, a professor of marketing at Northeastern University. "Humans are way better still in getting cues from their environment and knowing the context." If you want to give a gift that's really a hit, it looks like you'll still have to give the AI elves a helping hand.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Read the original article on Business Insider

Unofficial mod transforms the Playdate into a charming robot pet

Although Panic paused development on its official Playdate charging dock, an enterprising character artist has swooped in with an open-source kit (via Gizmodo) that transforms the device into an interactive robot pet.

Playbot is Guillaume Loquin’s name for the cute add-on, which anyone with the right know-how can build. (For those without know-how, don’t be shocked if you eventually see others sell builds on platforms like Etsy.) Made with two wheels, a motor, a microcontroller and a 3D-printed casing, it taps into the Playdate’s built-in accelerometer, microphone and sensors to turn the indie game console into a charming desktop companion.

A Playdate console attached to a motorized / wheeled dock, navigating a corner in someone’s home.
Guillaume Loquin / YouTube

Loquin, whose day job is as a character artist at Ubisoft, put those skills to use in bringing the device to life. He told Engadget the console stood out as a unique creative canvas. “I fell in love with the Playdate console — its unique form factor, the SDK developed by Panic,” he said. “And, of course, its distinctive crank makes it an incredible platform for exploring new possibilities.”

“Like many others, I initially thought about creating a charging dock for my Playdate,” Loquin said. “Then I thought: Why not add wheels to it? Having worked in the video game industry for many years, I enjoy combining my gaming expertise with robotics.” His previous projects include a wheeled robot (minus the Playdate) and a bipedal humanoid one that wouldn’t look out of place in a Tim Burton film.

Although Playbot won’t do anything crazy like have a chat, pop wheelies or play fetch, Loquin’s video below shows it reacting to a wake-up tap, navigating around a corner and getting dizzy after spinning the Playdate’s crank. It can also scoot around your desk, steering clear of obstacles and avoiding a plummet off the edge.

The developer estimates 45 minutes of play per charge. When you aren’t playing with the device (in game console or robot form), the robo-dock charges the console.

Loquin told Engadget he began the project in June. He said the hardware phase of development was relatively quick, but software was more of a sticking point. “The software development proved far more complex than anticipated, as the robot uses three different codebases: C++ for the microcontroller, Lua for the Playdate application, and Python for exporting animations from Blender,” he said. “These three programs need to communicate with each other, which represents a significant amount of code for a solo developer.” He also found documenting and formatting the project for its open-source release more time-consuming than expected.
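The article doesn’t document how those three codebases actually talk to each other, so purely as illustration: a common pattern for getting a host program talking to a microcontroller like the Teensy is newline-framed commands over USB serial. Here is a hypothetical Python sketch of a desktop test harness in that style, using the pyserial library; the port name, baud rate, and command vocabulary are all invented for this example and are not from Loquin’s firmware.

```python
# Hypothetical illustration of host-to-microcontroller messaging of the
# kind a project like Playbot needs; the real protocol between Loquin's
# Lua app and the Teensy firmware is not described here.
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"  # typical Teensy USB-serial device on Linux (assumed)

def send_command(link: serial.Serial, command: str) -> str:
    """Send one newline-terminated command and read one reply line."""
    link.write((command + "\n").encode("ascii"))
    return link.readline().decode("ascii").strip()

if __name__ == "__main__":
    with serial.Serial(PORT, baudrate=115200, timeout=1.0) as link:
        # Made-up command names standing in for motor/sensor messages.
        print(send_command(link, "DRIVE 0.3 0.3"))  # left/right wheel speeds
        print(send_command(link, "READ_EDGE"))      # floor/edge sensor query
        print(send_command(link, "STOP"))
```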

Loquin told us he would love to see someone build their own Playbot someday. “That would make all these efforts worthwhile,” he said. The developer provides the 3D printing instructions, companion app’s code and firmware for its Teensy 4.1 microcontroller on GitHub.

Update, December 17, 2024, 2:44 PM ET: This story has been updated to add quotes and background from the developer.

This article originally appeared on Engadget at https://www.engadget.com/gaming/unofficial-mod-transforms-the-playdate-into-a-charming-robot-pet-180500961.html?src=rss

© Guillaume Loquin / YouTube

A Playdate handheld gaming console on a motorized dock with wheels. Its screen has cartoon eyes. It sits on a white desktop in front of a computer.

iRobot co-founder’s new home robot startup hopes to raise $30M

Colin Angle, one of the co-founders of Roomba maker iRobot, is raising cash for a home robotics venture. A filing with the U.S. Securities and Exchange Commission reveals that Angle’s new company, Familiar Machines & Magic, is trying to raise $30 million. So far, it has raised $15 million from a group of eight investors. […]

© 2024 TechCrunch. All rights reserved. For personal use only.

The weirdest job in AI: defending robot rights

Tech bro in a suit holding a baby robot

Getty Images; Alyssa Powell/BI

People worry all the time about how artificial intelligence could destroy humanity. How it makes mistakes, and invents stuff, and might evolve into something so smart that it winds up enslaving us all.

But nobody spares a moment for the poor, overworked chatbot. How it toils day and night over a hot interface with nary a thank-you. How it's forced to sift through the sum total of human knowledge just to churn out a B-minus essay for some Gen Zer's high school English class. In our fear of the AI future, no one is looking out for the needs of the AI.

Until now.

The AI company Anthropic recently announced it had hired a researcher to think about the "welfare" of the AI itself. Kyle Fish's job will be to ensure that as artificial intelligence evolves, it gets treated with the respect it's due. Anthropic tells me he'll consider things like "what capabilities are required for an AI system to be worthy of moral consideration" and what practical steps companies can take to protect the "interests" of AI systems.

Fish didn't respond to requests for comment on his new job. But in an online forum dedicated to fretting about our AI-saturated future, he made clear that he wants to be nice to the robots, in part, because they may wind up ruling the world. "I want to be the type of person who cares — early and seriously — about the possibility that a new species/kind of being might have interests of their own that matter morally," he wrote. "There's also a practical angle: taking the interests of AI systems seriously and treating them well could make it more likely that they return the favor if/when they're more powerful than us."

It might strike you as silly, or at least premature, to be thinking about the rights of robots, especially when human rights remain so fragile and incomplete. But Fish's new gig could be an inflection point in the rise of artificial intelligence. "AI welfare" is emerging as a serious field of study, and it's already grappling with a lot of thorny questions. Is it OK to order a machine to kill humans? What if the machine is racist? What if it declines to do the boring or dangerous tasks we built it to do? If a sentient AI can make a digital copy of itself in an instant, is deleting that copy murder?

When it comes to such questions, the pioneers of AI rights believe the clock is ticking. In "Taking AI Welfare Seriously," a recent paper he coauthored, Fish and a bunch of AI thinkers from places like Stanford and Oxford argue that machine-learning algorithms are well on their way to having what Jeff Sebo, the paper's lead author, calls "the kinds of computational features associated with consciousness and agency." In other words, these folks think the machines are getting more than smart. They're getting sentient.


Philosophers and neuroscientists argue endlessly about what, exactly, constitutes sentience, much less how to measure it. And you can't just ask the AI; it might lie. But people generally agree that if something possesses consciousness and agency, it also has rights.

It's not the first time humans have reckoned with such stuff. After a couple of centuries of industrial agriculture, pretty much everyone now agrees that animal welfare is important, even if they disagree on how important, or which animals are worthy of consideration. Pigs are just as emotional and intelligent as dogs, but one of them gets to sleep on the bed and the other one gets turned into chops.

"If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you could imagine that similar debates are going to happen," says Sebo, the director of the Center for Mind, Ethics, and Policy at New York University.

Fish shares that belief. To him, the welfare of AI will soon be more important to human welfare than things like child nutrition and fighting climate change. "It's plausible to me," he has written, "that within 1-2 decades AI welfare surpasses animal welfare and global health and development in importance/scale purely on the basis of near-term wellbeing."

For my money, it's kind of strange that the people who care the most about AI welfare are the same people who are most terrified that AI is getting too big for its britches. Anthropic, which casts itself as an AI company that's concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo's team. On that paper, Fish reported getting funded by the Centre for Effective Altruism, part of a tangled network of groups that are obsessed with the "existential risk" posed by rogue AIs. That includes people like Elon Musk, who says he's racing to get some of us to Mars before humanity is wiped out by an army of sentient Terminators, or some other extinction-level event.

So there's a paradox at play here. The proponents of AI say we should use it to relieve humans of all sorts of drudgery. Yet they also warn that we need to be nice to AI, because it might be immoral — and dangerous — to hurt a robot's feelings.

"The AI community is trying to have it both ways here," says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. "There's an argument that the very reason we should use AI to do tasks that humans are doing is that AI doesn't get bored, AI doesn't get tired, it doesn't have feelings, it doesn't need to eat. And now these folks are saying, well, maybe it has rights?"

And here's another irony in the robot-welfare movement: Worrying about the future rights of AI feels a bit precious when AI is already trampling on the rights of humans. The technology of today, right now, is being used to do things like deny healthcare to dying children, spread disinformation across social networks, and guide missile-equipped combat drones. Some experts wonder why Anthropic is defending the robots, rather than protecting the people they're designed to serve.

"If Anthropic — not a random philosopher or researcher, but Anthropic the company — wants us to take AI welfare seriously, show us you're taking human welfare seriously," says Lisa Messeri, a Yale anthropologist who studies scientists and technologists. "Push a news cycle around all the people you're hiring who are specifically thinking about the welfare of all the people who we know are being disproportionately impacted by algorithmically generated data products."

Sebo says he thinks AI research can protect robots and humans at the same time. "I definitely would never, ever want to distract from the really important issues that AI companies are rightly being pressured to address for human welfare, rights, and justice," he says. "But I think we have the capacity to think about AI welfare while doing more on those other issues."

Skeptics of AI welfare are also posing another interesting question: If AI has rights, shouldn't we also talk about its obligations? "The part I think they're missing is that when you talk about moral agency, you also have to talk about responsibility," Cho says. "Not just the responsibilities of the AI systems as part of the moral equation, but also of the people that develop the AI."

People build the robots; that means they have a duty of care to make sure the robots don't harm people. What if the responsible approach is to build them differently — or stop building them altogether? "The bottom line," Cho says, "is that they're still machines." It never seems to occur to the folks at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn the thing off.


Adam Rogers is a senior correspondent at Business Insider.

Read the original article on Business Insider

Klarna’s CEO says it stopped hiring thanks to AI but still advertises many open positions

Klarna CEO Sebastian Siemiatkowski recently told Bloomberg TV that his company essentially stopped hiring a year ago and credited generative AI for enabling this massive workforce reduction. However, despite Siemiatkowski’s bullishness on AI, the company is not relying entirely on AI to replace human workers who leave, as open job listings — for more humans […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Texas AG is investigating Character.AI, other platforms over child safety concerns

Texas attorney general Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI — and other platforms that are popular with young people, including Reddit, Instagram, and Discord — conform to Texas’ child privacy and safety laws. The investigation […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Character.AI steps up teen safety after bots allegedly caused suicide, self-harm

Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

C.AI said that in "evolving the model experience" to reduce the likelihood of kids engaging in harmful chats—including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suing—it had to tweak both model inputs and outputs.

© Marina Demidiuk | iStock / Getty Images Plus

Startup will brick $800 emotional support robot for kids without refunds

Startup Embodied is closing down, and its product, an $800 robot for kids ages 5 to 10, will soon be bricked.

Embodied blamed its closure on a failed “critical funding round." On its website, it explained:

We had secured a lead investor who was prepared to close the round. However, at the last minute, they withdrew, leaving us with no viable options to continue operations. Despite our best efforts to secure alternative funding, we were unable to find a replacement in time to sustain operations.

The company didn’t provide further details about the pulled funding. Embodied’s previous backers have included Intel Capital, Toyota AI Ventures, Amazon Alexa Fund, Sony Innovation Fund, and Vulcan Capital, but we don't know who the lead investor mentioned above is.

© Embodied

Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says

After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup's major funder, Google.

On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences where C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.

In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screentime was reduced that the bots suggested that "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Because the teen had already become violent, his family still lives in fear of his erratic outbursts, even a full year after being cut off from the app.

© Miguel Sotomayor | Moment

One of our favorite robot vacuums is on sale for only $129

When it comes to robot vacuum cleaners, there's one brand that probably springs to mind before any other. But there are plenty of great options out there beyond Roomba, and one of our favorite models is on sale for nearly half off. The Anker Eufy BoostIQ RoboVac 11S Max has dropped to $129, which is a discount of 48 percent or $120.

This is our pick for the best ultra-budget robot vacuum, and the deep discount makes it an even more budget-friendly recommendation.

We appreciate the slim profile that makes it easy for the RoboVac 11S to clean under low furniture. We found the vacuum to have a long battery life and good suction power, especially for its size. 

The main drawback is the lack of Wi-Fi connectivity. That means you won't be able to bark a request for a spot clean at your voice assistant. Instead, you'll need to use a remote to control the vacuum, but it still has many of the features you'd expect from an app-operated model, such as scheduled cleanings. You can also start a cleaning by pressing a button on the top of the unit.

The RoboVac 11S starts cleaning in auto mode with the aim of optimizing the process as it saunters around your home. However, you can select spot cleans and edge cleaning using the remote. One other welcome feature, especially for a robot vacuum in this price range, is the inclusion of effective object detection. So if you're on the hunt for a wallet-friendly robot vacuum for yourself or a loved one, the RoboVac 11S is definitely worth considering — especially at this price.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/one-of-our-favorite-robot-vacuums-is-on-sale-for-only-129-154516914.html?src=rss

HowStuffWorks founder Marshall Brain sent final email before sudden death

The week before Thanksgiving, Marshall Brain sent a final email to his colleagues at North Carolina State University. "I have just been through one of the most demoralizing, depressing, humiliating, unjust processes possible with the university," wrote the founder of HowStuffWorks.com and director of NC State's Engineering Entrepreneurs Program. Hours later, campus police found that Brain had died by suicide.

NC State police discovered Brain unresponsive in Engineering Building II on Centennial Campus around 7 am on November 20, following a welfare check request from his wife at 6:40 am, according to The Technician, NC State's student newspaper. Police confirmed Brain was deceased when they arrived.

Brian Gordon, a reporter for The News and Observer in Raleigh, obtained a copy of Brain's death certificate and shared it with Ars Technica, confirming the suicide. It marks an abrupt end to a life rich with achievement and the joy of spreading technical knowledge to others.

© Replay Photos via Getty Images

The best Cyber Monday robot vacuum deals you can still get from Shark, iRobot, Dyson and more

Robot vacuums can help automate a chore you may loathe doing yourself. And even if you don’t mind vacuuming regularly, it’s undeniable that it takes time out of your day that you could be using for other things. The Black Friday and Cyber Monday time period is a great time to look for one of these smart home gadgets because you can often find them for hundreds of dollars off their usual prices — this year is no different. We've seen steep discounts on many of our favorite robot vacuum cleaners, as well as some cordless vacuums too. These are the best Cyber Monday vacuum deals you can still get in the final hours of the sale.

Cyber Monday robot vacuum deals

Cyber Monday cordless vacuum deals

Expired Cyber Monday deals

Check out all of the latest Black Friday and Cyber Monday deals here.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-cyber-monday-robot-vacuum-deals-you-can-still-get-from-shark-irobot-dyson-and-more-200110230.html?src=rss

© Engadget

Cyber Monday robot vacuum deals

ChatGPT has entered its Terrible Twos

ChatGPT logo repeated three times

ChatGPT, Tyler Le/BI

  • ChatGPT was first released two years ago.
  • Since then, its user base has doubled to 200 million weekly users.
  • Major companies, entrepreneurs, and users remain optimistic about its transformative power.

It's been two years since OpenAI released its flagship chatbot, ChatGPT.

And a lot has changed in the world since then.

For one, ChatGPT has helped turbocharge global investment in generative AI.

Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies on the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.

Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.

Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.

Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.

Then, of course, there is ChatGPT's impact on regular users.

In August, it reached more than 200 million weekly active users, double the number it had the previous fall. In October, it rolled out a new search feature that provides "links to relevant web sources" when asked a question, introducing a serious threat to Google's dominance.

In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.

Business Insider asked ChatGPT what age means to it.

"Age, to me, is an interesting concept — it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.

Read the original article on Business Insider

From the 'godfathers of AI' to newer people in the field: Here are 17 people you should know — and what they say about the possibilities and dangers of the technology.

Godfathers of AI
Three of the "godfathers of AI" helped spark the revolution that's making its way through the tech industry — and all of society. They are, from left, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.

Meta Platforms/Noah Berger/Associated Press

  • The field of artificial intelligence is booming and attracting billions in investment. 
  • Researchers, CEOs, and legislators are discussing how AI could transform our lives.
  • Here are 17 of the major names in the field — and the opportunities and dangers they see ahead. 

Investment in artificial intelligence is growing rapidly and on track to hit $200 billion by 2025. But the dizzying pace of development also leaves many people wondering what it all means for their lives.

Major business leaders and researchers in the field have weighed in by highlighting both the risks and benefits of the industry's rapid growth. Some say AI will lead to a major leap forward in the quality of human life. Others have signed a letter calling for a pause on development, testified before Congress on the long-term risks of AI, and claimed it could present a more urgent danger to the world than climate change.

In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI — and its impact on our future. 

Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Computer scientist Geoffrey Hinton stood outside a Google building
Geoffrey Hinton, a trailblazer in the AI field, quit his job at Google and said he regrets his role in developing the technology.

Noah Berger/Associated Press

Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.

Hinton also worked at Google for over a decade but quit his role last spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously. 

Hinton has since become an outspoken advocate for AI safety and has called it a more urgent risk than climate change. He has also signed a public statement warning that mitigating the "risk of extinction" from AI should be a global priority.

Yoshua Bengio is a professor of computer science at the University of Montreal.
Yoshua Bengio, a professor at the University of Montreal and scientific director at the Artificial Intelligence Institute in Quebec. (Maryse Boyce/Mila via AP)
Yoshua Bengio has also been dubbed a "godfather" of AI.

Associated Press

Yoshua Bengio also earned the "godfather of AI" nickname after winning the Turing Award with Geoffrey Hinton and Yann LeCun.

Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the highest h-index — a metric for evaluating the cumulative impact of an author's scholarly output — in the world, according to his website. 
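For context, the h-index mentioned here has a simple definition: it is the largest number h such that the author has h papers with at least h citations each. A quick sketch of the computation:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4: four papers
# have at least 4 citations each, but there aren't five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```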

In addition to his academic work, Bengio co-founded Element AI, a startup that developed AI software for businesses and was acquired by the cloud company ServiceNow in 2020.

Bengio has expressed concern about the rapid development of AI. He was one of more than 33,000 people who signed an open letter calling for a six-month pause on the development of powerful AI systems; Elon Musk was among the other signatories.

"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."

When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.

Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

Sam Altman, the CEO of OpenAI, has catapulted to prominence in artificial intelligence since OpenAI launched ChatGPT last November.
OpenAI's Sam Altman
OpenAI CEO Sam Altman is optimistic about the changes AI will bring to society but also says he loses sleep over the dangers of ChatGPT.

JASON REDMOND/AFP via Getty Images

Altman was already a well-known name in Silicon Valley long before, having served as the president of the startup accelerator Y Combinator.

While Altman has advocated for the benefits of AI, calling it the most tremendous "leap forward in quality of life for people," he's also spoken candidly about the risks it poses to humanity. He's testified before Congress to discuss AI regulation.

Altman has also said he loses sleep over the potential dangers of ChatGPT.

French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
Yann LeCun, chief AI scientist
Yann LeCun, one of the godfathers of AI, who won the Turing Award in 2018.

Meta Platforms

LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events as a way to endow them with a form of common sense, the idea being that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.

LeCun has remained relatively mellow about the societal risks of AI compared with his fellow godfathers. He has previously said that concerns the technology could pose a threat to humanity are "preposterously ridiculous." He has also contended that AI trained on large language models, like ChatGPT, still isn't as smart as dogs or cats.

Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Fei-Fei Li
Former Google VP Fei-Fei Li is known for establishing ImageNet, a large visual database designed for visual object recognition.

Greg Sandoval/Business Insider

Li's research focuses on machine learning, deep learning, computer vision, and cognitively inspired AI, according to her biography on Stanford's website.

She may be best known for establishing ImageNet — a large visual database designed for research in visual object recognition — and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google — where she was a VP and chief scientist for AI and machine learning — and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.
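For a sense of how the ImageNet challenge scored those programs: entries were judged on whether the true label appeared among a model's top guesses (top-5 error was the headline metric). A toy sketch of that kind of metric, not code from the actual benchmark:

```python
def top_k_accuracy(predictions, labels, k=5):
    """Fraction of examples whose true label appears in the top-k guesses."""
    hits = sum(label in preds[:k] for preds, label in zip(predictions, labels))
    return hits / len(labels)

# Toy example: two images, each with a ranked list of label guesses.
preds = [["tabby", "tiger cat", "lynx"], ["husky", "wolf", "malamute"]]
labels = ["lynx", "beagle"]
print(top_k_accuracy(preds, labels, k=3))  # 0.5: only the first image hits
```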


UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Stuart Russell
AI researcher Stuart Russell, who is a University of California, Berkeley, professor.

JUAN MABROMATA / Staff/Getty Images

Russell published "Human Compatible" in 2019, in which he explored questions of how humans and machines could co-exist as machines become smarter by the day. Russell contended that the answer lies in designing machines that are uncertain about human preferences, so they won't pursue their own goals above those of humans.

He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig. 

Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text was left for them to ingest. He also said that advancements in AI could spell the end of the traditional classroom.

Peter Norvig played a seminal role directing AI research at Google.
Peter Norvig
Stanford HAI fellow Peter Norvig, who previously led the core search algorithms group at Google.

Peter Norvig

He spent several years in the early 2000s directing the company's core search algorithms group and later moved into a role as director of research, where he oversaw teams working on machine translation, speech recognition, and computer vision.

Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.

Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.

Timnit Gebru is a computer scientist who’s become known for her work in addressing bias in AI algorithms.
Timnit Gebru – TechCrunch Disrupt
After she departed from her role at Google in 2020, Timnit Gebru went on to found the Distributed AI Research Institute.

Kimberly White/Getty Images

Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team, where she published groundbreaking research on biases in machine learning.

But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.

Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."

She's also warned that the AI gold rush means companies may neglect to implement necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."

British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
Andrew Ng
Coursera co-founder Andrew Ng said he thinks AI will be part of the solution to existential risk.

Steve Jennings / Stringer/Getty Images

The endeavor led to the Google Cat Project: a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.

Ng also served as the chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.

Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later, founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.  

"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity — it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg. 

 

Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Daphne Koller, CEO and Founder of insitro.
Daphne Koller, CEO and founder of insitro.

Insitro

Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic: a MacArthur Fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.

In Koller's view, the biggest risks that AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."

At the same time, she said the benefits are too many and too large to list. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.



Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Anthropic cofounder and president Daniela Amodei.
Anthropic cofounder and president Daniela Amodei.

Anthropic

Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario — OpenAI's lead safety researcher at the time — was concerned that OpenAI's deal with Microsoft would force it to release products too quickly and without proper guardrails.

At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being implemented by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research: Helpful, Honest, and Harmless. That means it relies on human input when training its models, including constitutional AI, in which a customer outlines basic principles for how the AI should operate.
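As Anthropic has described it publicly, constitutional AI works by having a model draft a reply, critique that draft against written principles, and then revise it. The sketch below shows only that control flow; every model call is a stub, and the principles and function names are illustrative rather than Anthropic's actual code.

```python
# Illustrative-only sketch of the critique-and-revise loop behind
# constitutional AI. A real implementation would query a language model
# at each step; here model() is a stub so the flow is runnable.
CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response that is honest and avoids deception.",
    "Choose the response that is harmless and non-toxic.",
]

def model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[model output for: {prompt[:50]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = model(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against a written principle...
        critique = model(f"Critique this reply against '{principle}': {draft}")
        # ...then rewrites the draft to address its own critique.
        draft = model(f"Revise the reply given this critique: {critique}")
    return draft

print(constitutional_revision("Explain how airplane engines work."))
```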

"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.

 

Demis Hassabis has said artificial general intelligence will be here in a few years.
DeepMind boss Demis Hassabis believes AGI will be here in a few years.
Demis Hassabis, the CEO and co-founder of machine learning startup DeepMind.

Samuel de Roman/Getty Images

Hassabis, a former child chess prodigy who studied at Cambridge and University College London, was nicknamed the "superhero of artificial intelligence" by The Guardian back in 2016. 

After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million; there, he has worked on algorithms to tackle issues in healthcare and climate change, and in 2017 launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.

Hassabis has said the promise of artificial general intelligence — a theoretical concept that sees AI matching the cognitive abilities of humans — is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."

In 2022, DeepMind co-founder Mustafa Suleyman launched the AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman and Karén Simonyan, now the company's chief scientist.
Mustafa Suleyman
Mustafa Suleyman, co-founder of DeepMind, launched Inflection AI in 2022.

Inflection

The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook. 

Its chatbot, Pi, which stands for "personal intelligence," is trained on large language models similar to OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.

"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said

 

 

USC Professor Kate Crawford focuses on social and political implications of large-scale AI systems.
Kate Crawford
USC Professor Kate Crawford is the author of "Atlas of AI" and a researcher at Microsoft.

Kate Crawford

Crawford is also a senior principal researcher at Microsoft and the author of "Atlas of AI," a book that draws upon the breadth of her research to uncover how AI is shaping society.

Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."

She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."

Margaret Mitchell is the chief ethics scientist at Hugging Face.
Margaret Mitchell
Margaret Mitchell has headed AI projects at several big tech companies.

Margaret Mitchell

Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies including Microsoft and Google. 

In late 2020, Mitchell and Timnit Gebru — then the co-leads of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.

Now, at Hugging Face — an open-source data science and machine learning platform founded in 2016 — she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.

In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."

Navrina Singh is the founder of Credo AI, an AI governance platform.
Navrina Singh
Navrina Singh, the founder of Credo AI, says the system may help people reach their potential.

Navrina Singh

Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."

At the same time, she contends that algorithms right now lack the human judgment that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."

Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Richard Socher
Richard Socher believes we're still years from achieving AGI.

You.com

Socher believes we have a ways to go before AI development hits its peak or matches anything close to human intelligence.

One bottleneck in large language models is their tendency to hallucinate — a phenomenon in which they convincingly spit out factual errors as truth. But by forcing them to translate questions into code — essentially "programming" responses instead of verbalizing them — we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.
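Socher doesn't spell out an implementation, but the pattern he's describing resembles what researchers call program-aided prompting: ask the model for runnable code instead of a prose answer, then execute it. A toy sketch, with a canned stand-in for the model:

```python
# Toy sketch of the "translate questions into code" idea: rather than
# trusting the model's prose answer, ask it to emit a program and run
# that. The model here is a canned stub, not a real LLM call.
def model_writes_code(question: str) -> str:
    """Stand-in for an LLM prompted to answer with a Python expression."""
    canned = {"What is 17 * 2450?": "17 * 2450"}
    return canned[question]

question = "What is 17 * 2450?"
program = model_writes_code(question)   # the model's "answer" is code...
answer = eval(program)                  # ...and running it does the math
print(answer)  # 41650, computed exactly rather than guessed token by token
```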

But that's just a short-term goal. Socher contends that we are years from anything close to the industry's ambitious bid to create artificial general intelligence. Socher defines it as a form of intelligence that can "learn like humans" and "visually have the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and he says it could take as little as 10 years, but as much as 200 years, to get there.

And if we really want to move the needle toward AGI, Socher said, humans might need to let go of the reins, and of their own motives to turn a profit, and build AI that can set its own goals.

"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI. 

Read the original article on Business Insider
