
Google's Gemini Deep Research tool is now available globally

20 December 2024 at 13:01

A little more than a week after announcing Gemini Deep Research, Google is making the tool available to more people. As of today, the feature, part of the company’s paid Gemini Advanced suite, is available in every country and language where Google offers Gemini. In practice, that means Gemini Advanced users in more than 100 countries globally can start using Deep Research right now. Previously, it was only available in English.

As a refresher, Deep Research takes advantage of Gemini 1.5 Pro’s ability to reason through “long context windows” to create comprehensive but easy-to-read reports on complex topics. Once you provide the tool a prompt, it will generate a research plan for you to approve and tweak as you see fit. After it has your go-ahead, Gemini 1.5 Pro will search the open web for information related to your query. That process can sometimes take several minutes, but once Gemini is done, you’ll have a multi-page report you can export to Google Docs for later viewing.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-now-available-globally-210151873.html?src=rss


© Google

Gemini Deep Research

OpenAI's next-generation o3 model will arrive early next year

20 December 2024 at 11:17

After nearly two weeks of announcements, OpenAI capped off its 12 Days of OpenAI livestream series with a preview of its next-generation frontier model. “Out of respect for friends at Telefónica (owner of the O2 cellular network in Europe), and in the grand tradition of OpenAI being really, truly bad at names, it’s called o3,” OpenAI CEO Sam Altman told those watching the announcement on YouTube.

The new model isn’t ready for public use just yet. Instead, OpenAI is first making o3 available to researchers who want help with safety testing. OpenAI also announced the existence of o3-mini. Altman said the company plans to launch that model “around the end of January,” with o3 following “shortly after that.”

As you might expect, o3 offers improved performance over its predecessor, but the headline feature here is just how much better it is than o1. For example, when put through this year's American Invitational Mathematics Examination, o3 achieved an accuracy score of 96.7 percent. By contrast, o1 earned a more modest 83.3 percent. “What this signifies is that o3 often misses just one question,” said Mark Chen, senior vice president of research at OpenAI. In fact, o3 did so well on the usual suite of benchmarks OpenAI puts its models through that the company had to find more challenging tests to evaluate it against.

An ARC-AGI test.
ARC-AGI

One of those is ARC-AGI, a benchmark that tests an AI algorithm's ability to intuit and learn on the spot. According to the test's creator, the nonprofit ARC Prize, an AI system that could successfully beat ARC-AGI would represent "an important milestone toward artificial general intelligence." Since its debut in 2019, no AI model has beaten ARC-AGI. The test consists of input-output questions that most people can figure out intuitively. For instance, in the example above, the correct answer would be to create squares out of the four polyominoes using dark blue blocks.

On its low-compute setting, o3 scored 75.7 percent on the test. With additional processing power, the model achieved a rating of 87.5 percent. "Human performance is comparable at 85 percent threshold, so being above this is a major milestone," according to Greg Kamradt, president of ARC Prize Foundation.

A graph comparing o3-mini's performance against o1, and the cost of that performance.
OpenAI

OpenAI also showed off o3-mini. The new model uses OpenAI's recently announced Adaptive Thinking Time API to offer three different reasoning modes: Low, Medium and High. In practice, this allows users to adjust how long the software "thinks" about a problem before delivering an answer. As you can see from the above graph, o3-mini can achieve results comparable to OpenAI's current o1 reasoning model, but at a fraction of the compute cost. As mentioned, o3-mini will arrive for public use ahead of o3. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-next-generation-o3-model-will-arrive-early-next-year-191707632.html?src=rss


© Reuters

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI brings ChatGPT to WhatsApp

18 December 2024 at 10:46

ChatGPT is now available on WhatsApp. Starting today, if you add 1 (800) CHAT-GPT to your contacts — that's 1 (800) 242-8478 — you can start using the chatbot over Meta's messaging app. In this iteration, ChatGPT is limited to text-only input, so there's no Advanced Voice Mode or visual input on offer, but you still get all the smarts of the o1-mini model.

What's more, ChatGPT over WhatsApp is available everywhere OpenAI offers its chatbot, with no account necessary. OpenAI is working on a way to authenticate existing users over WhatsApp, though the company did not share a timeline for when that feature might launch. It's worth noting Meta offers its own chatbot in WhatsApp.

Separately, OpenAI is launching a ChatGPT hotline in the US. Once again, the number for that is 1 (800) 242-8478. As you can probably imagine, the toll-free number works with any phone, be it a smartphone or an old flip phone. OpenAI will offer 15 minutes of free ChatGPT usage through the hotline, though you can log into your account to get more time.

"We’re only just getting started on making ChatGPT more accessible to everyone," said Kevin Weil, chief product officer at OpenAI, during the company's most recent 12 Days of OpenAI livestream. According to Weil, the two features were born from a recent hack week the company held. Other recent livestreams have seen OpenAI make ChatGPT Search available to all free users and bring its Sora video generation out of private preview.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-brings-chatgpt-to-whatsapp-184653703.html?src=rss


© OpenAI

Three OpenAI employees demo ChatGPT's WhatsApp integration.

The best cheap phones for 2025

It may be fashionable to spend $1,000 on the latest flagship smartphone when it first gets released, but it's not necessary. You don't even have to spend $500 today to get a decent handset, whether it’s a refurbished iPhone or an affordable Android phone, as there are plenty of options as low as $160 that could fit your needs.

But navigating the budget phone market can be tricky; options that look good on paper may not be in practice, and some handsets will end up costing you more when you consider many come with restrictive storage. While we at Engadget spend most of our time testing and reviewing mid- to high-end handsets, we've tested a number of the latest budget-friendly phones on the market to see which are actually worth your money.

What to look for in a cheap phone

For this guide, our top picks cost between $100 and $300. Anything less and you might as well go buy a dumb phone or high-end calculator instead. Since they’re meant to be more affordable than flagship phones and even midrange handsets, budget smartphones involve compromises; the cheaper a device, the lower your expectations around specs, performance and experience should be. For that reason, the best advice I can give is to spend as much as you can afford. In this price range, even $50 or $100 more can get you a dramatically better product.

Second, you should know what you want most from a phone. When buying a budget smartphone, you may need to sacrifice a decent main camera for long battery life, or trade a high-resolution display for a faster CPU. That’s just what comes with the territory, but knowing your priorities will make it easier to find the right phone.

It’s also worth noting some features can be hard to find on cheap handsets. For instance, you won’t need to search far for a device with all-day battery life, but if you want excellent camera quality, you’re better off shelling out for one of the recommendations in our midrange smartphone guide, which all come in at $600 or less. Wireless charging and waterproofing also aren’t easy to find in this price range, and forget about the fastest chipset. On the bright side, all our recommendations come with headphone jacks, so you won’t need to get wireless headphones.

iOS is also off the table, since the $400 Apple iPhone SE is the most affordable iPhone in the lineup. That leaves Android as the only option. Thankfully, there’s little to complain about with Google’s OS today, and you may even prefer it to iOS. Lastly, keep in mind most Android manufacturers offer far less robust software features and support for their budget devices. In some cases, your new phone may only receive one major software update and a year or two of security patches beyond that. That applies to the OnePlus and Motorola recommendations on our list.

If you’d like to keep your phone for as long as possible, Samsung has the best software policy of any Android manufacturer in the budget space, offering four years of security updates on all of its devices. That said, if software support (or device longevity overall) is your main focus, consider spending a bit more on the $500 Google Pixel 7a, which is our favorite midrange smartphone and has planned software updates through mid-2026.

Best cheap phones

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/best-cheap-phones-130017793.html?src=rss


© Engadget

The best cheap phones

ChatGPT Search is now available to everyone

16 December 2024 at 10:44

If you've been waiting patiently to try ChatGPT Search, you won't have to wait much longer. After rolling out to paid subscribers this fall, OpenAI announced Monday it's making the tool available to everyone, no Plus or Pro membership necessary.

Now, all you need before you can start using ChatGPT Search is an OpenAI account. Once you're logged in, and if your query calls for it, ChatGPT will automatically search the web for the latest information to answer your question. You can also force it to search the web, thanks to a handy new icon located right in the prompt bar. OpenAI has also added the option to make ChatGPT Search your browser's default search engine.

At the same time, OpenAI is integrating ChatGPT Search and Advanced Voice Mode. As you might have guessed, the integration allows ChatGPT's audio persona to search the web for answers to your questions and deliver them in a natural, conversational way. For example, say you're traveling to a different city for vacation. You could ask ChatGPT what the weather will be like once you arrive; with the Search functionality built in, the chatbot can answer that question with the most up-to-date information.

To facilitate this functionality, OpenAI says it has partnered with leading news and data providers. As a result, you'll also see widgets for stocks, sports scores, the weather and more. Basically, ChatGPT Search is becoming a full-fledged Google competitor before our eyes.     

OpenAI announced the expanded availability during its most recent "12 Days of OpenAI" livestream. In previous livestreams, the company announced the general availability of Sora and ChatGPT Pro, a new $200 subscription for its chatbot. With four more days to go, it's hard to see the company topping that announcement, but at this point, OpenAI likely has a surprise or two up its sleeve.

Correction 12/17/2024: A previous version of this article incorrectly stated OpenAI would roll out ChatGPT Search "over the coming months." The tool is now available to all logged-in users. We regret the error.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-is-getting-ready-to-roll-its-search-tool-out-to-everyone-184442971.html?src=rss


© OpenAI

A mouse pointer hovers over the "Search" icon in ChatGPT's prompt bar.

Google's new AI video model sucks less at physics

16 December 2024 at 09:00

Google may have only recently begun rolling out its Veo generative AI to enterprise customers, but the company is not wasting any time getting a new version of the video tool out to early testers. On Monday, Google announced a preview of Veo 2. According to the company, Veo 2 “understands the language of cinematography.” In practice, that means you can reference a specific genre of film, cinematic effect or lens when prompting the model.

Additionally, Google says the new model has a better understanding of real-world physics and human movement. Correctly modeling humans in motion is something all generative models struggle to do. So the company’s claim that Veo 2 is better when it comes to both of those trouble points is notable. Of course, the samples the company provided aren’t enough to know for sure; the true test of Veo 2’s capabilities will come when someone prompts it to generate a video of a gymnast's routine. Oh, and speaking of things video models struggle with, Google says Veo will produce artifacts like extra fingers “less frequently.”

A sample image of a squirrel Google's Imagen 3 generated.
Google

Separately, Google is rolling out improvements to Imagen 3. Of its text-to-image model, the company says the latest version generates brighter and better-composed images. Additionally, it can render more diverse art styles with greater accuracy. At the same time, it’s also better at following prompts more faithfully. Prompt adherence was an issue I highlighted when the company made Imagen 3 available to Google Cloud customers earlier this month, so if nothing else, Google is aware of the areas where its AI models need work.

Veo 2 will gradually roll out to Google Labs users in the US. For now, Google will limit testers to generating up to eight seconds of footage at 720p. For context, Sora can generate up to 20 seconds of 1080p footage, though doing so requires a $200 per month ChatGPT Pro subscription. As for the latest enhancements to Imagen 3, those are available to Google Labs users in more than 100 countries through ImageFX.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-new-ai-video-model-sucks-less-at-physics-170041204.html?src=rss


© Google

Google's Veo 2 generated a Pixar-like animation of a young girl in a kitchen.

Ragebound is a new Ninja Gaiden game from the team behind Blasphemous

12 December 2024 at 17:56

Resurrecting a beloved gaming series like Ninja Gaiden is always a tricky proposition. Anyone who might have worked on the franchise in its heyday has likely moved on to other projects or left the industry entirely. But judging by the talent working on Ninja Gaiden Ragebound, the new series entry revealed at the Game Awards, I think it's safe to say the franchise is in good hands. That's because Ragebound unites two companies that know a thing or two about making quality games.

The Game Kitchen — the Spanish studio behind Blasphemous and its excellent sequel, Blasphemous 2 — is developing the game, with Dotemu (Teenage Mutant Ninja Turtles: Shredder’s Revenge and Streets of Rage 4) on publishing duties. 

Right away, you can see the influence of The Game Kitchen. The studio's signature pixel art style looks gorgeous in the back half of the reveal trailer, and it looks like the game will reward tight, coordinated play. As for the story, it's set during the events of the NES version of Ninja Gaiden and stars a new protagonist, Kenji Mozu. It's up to him to save Hayabusa Village while Ryu is away in America.

Ninja Gaiden Ragebound will arrive in the summer of 2025 on PC, Nintendo Switch, PlayStation 5, PlayStation 4, Xbox Series X/S and Xbox One.  

This article originally appeared on Engadget at https://www.engadget.com/gaming/ragebound-is-a-new-ninja-gaiden-game-from-the-team-behind-blasphemous-015621718.html?src=rss


© The Game Kitchen

Ninja Gaiden Ragebound protagonist Kenji Mozu faces off against a winged demon in a burning dojo.

The first Witcher 4 trailer sees Ciri kicking butt

12 December 2024 at 17:41

Well, let's be honest: I don't think any of us expected to see CD Projekt Red preview The Witcher 4 anytime soon, and yet the studio did just that, sharing a lengthy cinematic trailer for the upcoming sequel at the Game Awards. Even if there's no gameplay footage to be found, fans of the series will love what they see. 

That's because the clip reveals the protagonist of the game, and it's none other than Ciri, the adopted daughter of everyone's favorite witcher, Geralt of Rivia. Thematically, the clip is similar to The Witcher 3's excellent "Killing Monsters" trailer. Ciri arrives to save a young woman from a horrible monster, only for human ignorance and superstition to undo her good deed.

CD Projekt did not share a release date for The Witcher 4, nor did the studio say what platforms the game would arrive on. However, it has previously said it was making the game in Unreal Engine 5, and if you look hard enough, a footnote at the bottom says the trailer was pre-rendered in UE5 on an unannounced "NVIDIA GeForce RTX GPU." 

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-first-witcher-4-trailer-sees-ciri-kicking-butt-014137326.html?src=rss


© CD Projekt Red

Ciri walks toward a campfire.

MasterClass On Call gives you on-demand access to AI facsimiles of its experts

11 December 2024 at 13:50

MasterClass is expanding beyond pre-recorded video lessons to offer on-demand mentorship from some of its most popular celebrity instructors. And if you’re wondering how the company has gotten some of the busiest people on the planet to field your questions, guess what? The answer is generative AI.

On Wednesday, MasterClass debuted On Call, a new web and iOS app that allows people to talk with AI versions of its instructors. As of today, On Call is limited to two personas representing the expertise of former FBI hostage negotiator Chris Voss and UC Berkeley neuroscientist Dr. Matt Walker. In the future, MasterClass says it will offer many more personas, with Gordon Ramsay, Mark Cuban, Bill Nye and LeVar Burton among some of the more notable experts sharing their voices and knowledge in this way.

“This isn’t just another generic AI chatbot pulling data from the internet,” David Rogier, the CEO of MasterClass, said on X. “We’ve built this with our experts — training the AI on proprietary data sets (e.g. unpublished notes, private research, their lessons, emails, [and] expertise they’ve never shared before).”

Per Inc., MasterClass signed deals with each On Call instructor to license their voice and expertise. Judging from the sample voice clips MasterClass has up on its website, the interactions aren’t as polished as the one shown in the ad the company shared on social media. In particular, the “voice” of Chris Voss sounds robotic and not natural at all. On Call is also a standalone product with a separate subscription from the company’s regular offering. To use On Call, users will need to pay $10 per month or $84 annually.

This article originally appeared on Engadget at https://www.engadget.com/ai/masterclass-on-call-gives-you-on-demand-access-to-ai-facsimiles-of-its-experts-215022938.html?src=rss


© MasterClass

A screenshot of MasterClass' new On Call app.

Jarvis, Google's web-browsing AI, is now officially known as Project Mariner

11 December 2024 at 11:16

Earlier today, Google debuted Gemini 2.0. The company says its new machine learning model won’t just enhance its existing products and services. It will also power entirely new experiences. To that point, Google previewed Project Mariner, an AI agent that can navigate within a web browser. Mariner is an experimental Chrome extension that is currently available to select “trusted testers.”

As you can see from the video Google shared, the pitch for Mariner is a tool that can automate certain rote tasks. In the demo, Mariner assists Google’s Jaclyn Konzelmann with finding the contact information of four outdoor companies.

Clearly, there’s more work Google needs to do before the software is ready for public use. Notice that Konzelmann is very specific when prompting Mariner, instructing the agent to “memorize” and “remember” parts of her instructions. It also takes Mariner close to 12 minutes to complete the task given to it.

“As a research prototype, it’s able to understand and reason across information in your browser screen, including pixels and web elements like text, code, images and forms,” Google says of Mariner.

If Project Mariner sounds familiar, it’s because The Information reported in October that Google was working on something called Project Jarvis. The publication described it as a “computer-using agent” that Google designed to assist with tasks like booking flights. In November, an early version of Jarvis was briefly available on the Chrome Web Store. A person familiar with the matter confirmed that Jarvis and Mariner are the same project.

The confirmation of Mariner’s existence comes after Anthropic introduced a similar but more expansive feature for its Claude AI, which the company says can “use a wide range of standard tools and software programs designed for people.” That tool is currently available in public beta.

This article originally appeared on Engadget at https://www.engadget.com/ai/jarvis-googles-web-browsing-ai-is-now-officially-known-as-project-mariner-191603929.html?src=rss


© Google

Project Mariner is an experimental Chrome extension that can help you navigate websites.

Gemini 2.0 is Google's most capable AI model yet and available to preview today

11 December 2024 at 09:03

The battle for AI supremacy is heating up. Almost exactly a week after OpenAI made its o1 model available to the public, Google today is offering a preview of its next-generation Gemini 2.0 model. In a blog post attributed to Google CEO Sundar Pichai, the company says 2.0 is its most capable model yet, with the algorithm offering native support for image and audio output. “It will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” says Pichai.

Google is doing something different with Gemini 2.0. Rather than starting today’s preview by first offering its most advanced version of the model, Gemini 2.0 Pro, the search giant is instead kicking things off with 2.0 Flash. As of today, the more efficient (and affordable) model is available to all Gemini users. If you want to try it yourself, you can enable Gemini 2.0 from the dropdown menu in the Gemini web client, with availability within the mobile app coming soon.

Moving forward, Google says its main focus is adding 2.0’s smarts to Search (no surprise there), beginning with AI Overviews. According to the company, the new model will allow the feature to tackle more complex and involved questions, including ones involving multi-step math and coding problems. At the same time, following a broad expansion in October, Google plans to make AI Overviews available in more languages and countries.

Elsewhere, Gemini 2.0 is already powering enhancements to some of Google’s more moonshot AI applications, including Project Astra, the multi-modal AI agent the company previewed at I/O 2024. Thanks to the new model, Google says the latest version of Astra can converse in multiple languages and even switch between them on the fly. It can also “remember” things for longer, offers improved latency, and can access tools like Google Lens and Maps.

As you might expect, Gemini 2.0 Flash offers significantly better performance than its predecessor. For instance, it earned a 63 percent score on HiddenMath, a benchmark that tests the ability of AI models to complete competition-level math problems. By contrast, Gemini 1.5 Flash earned a score of 47.2 percent on that same test. But the more interesting thing here is that the experimental version of Gemini 2.0 even beats Gemini 1.5 Pro in many areas; in fact, according to data Google shared, the only domains where it lags behind are in long-context understanding and automatic speech translation.

It’s for that reason that Google is keeping the older model around, at least for a little while longer. Alongside today's announcement of Gemini 2.0, the company also debuted Deep Research, a new tool that uses Gemini 1.5 Pro's long-context capabilities to write comprehensive reports on complicated subjects.  

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-20-is-googles-most-capable-ai-model-yet-and-available-to-preview-today-170329180.html?src=rss


© Google

A graphic of Google's Gemini 2.0 model.

Google's Gemini Deep Research tool is here to answer your most complicated questions

11 December 2024 at 07:43

When Google debuted Gemini 1.5 Pro in February, the company touted the model’s ability to reason through what it called “long context windows.” It said, for example, the algorithm could provide details about a 402-page Apollo 11 mission transcript. Now, Google is giving people a practical way to take advantage of those capabilities with a tool called Deep Research. Starting today, Gemini Advanced users can use Deep Research to create comprehensive but easy-to-read reports on complex topics.

Aarush Selvan, a senior product manager on the Gemini team, gave Engadget a preview of the tool. At first glance, it looks to work like any other AI chatbot. All interactions start with a prompt. In the demo I saw, Selvan asked Gemini to help him find scholarship programs for students who want to enter public service after school. But things diverge from there. Before answering a query, Gemini first produces a multi-step research plan for the user to approve.

For example, say you want Gemini to provide you with a report on heat pumps. In the planning stage, you could tell the AI agent to prioritize information on government rebates and subsidies or omit those details altogether. Once you give Gemini the go-ahead, it will then scour the open web for information related to your query. This process can take a few minutes. In user testing, Selvan said Google found most people were happy to wait for Gemini to do its thing since the reports the agent produces through Deep Research are so detailed.

In the example of the scholarship question, the tool produced a multi-page report complete with charts. Throughout, there were citations with links to all of the sources Gemini used. I didn’t get a chance to read over the reports in detail, but they appeared to be more accurate than some of Google’s less helpful and less flattering AI Overviews.

According to Selvan, Deep Research uses some of the same signals Google Search does to determine authority. That said, sourcing is definitely “a product of the query.” The more complicated a question you ask of the agent, the more likely it is to produce a useful answer since its research is bound to lead it to more authoritative sources. You can export a report to Google Docs once you're happy with Gemini's work.

If you want to try Deep Research for yourself, you’ll need to sign up for Google’s One AI Premium Plan, which includes access to Gemini Advanced. The plan costs $20 per month following a one-month free trial. It's also only available in English at the moment. 

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-here-to-answer-your-most-complicated-questions-154354424.html?src=rss


© Google

A bearded man looks at a tablet, examining a series of charts.

iOS 18.2 is here with Apple Intelligence image generation features in tow

11 December 2024 at 05:00

Apple has begun rolling out iOS 18.2 and iPadOS 18.2 to iPhones and iPads. The updates bring with them major enhancements to the company’s suite of AI features, and are likely the final software releases Apple has planned for 2024. More Apple Intelligence features are available through macOS 15.2. However, note access to all of the AI features mentioned below is limited to users in the US, Australia, Canada, New Zealand, South Africa and the UK for now, with support additionally limited to devices with their language set to English.

Provided you own an iPhone 15 Pro, 16 or 16 Pro, one of the highlights of iOS 18.2 is Image Playground, which is available both as a standalone app and Messages extension. If you go through the latter, the software will generate image suggestions based on the contents of your conversations. Naturally, you can also write your own prompts. It’s also possible to use a photo from your iPhone’s camera roll as a starting point. However, one limitation of Image Playground is that it can’t produce photorealistic images of people. That’s by design so that the resulting images don’t cause confusion. You can also import any pictures you generate with Image Playground to Freeform, Pages and Keynote.  

Another new feature, Genmoji, allows you to create custom emoji. From your iPhone’s emoji keyboard, tap the new Genmoji button and then enter a description of the character you want to make. Apple Intelligence will generate a few different options, which you can swipe through to select the one you want to send. It’s also possible to use pictures of your friends as the starting point for a Genmoji.

The new update also brings enhancements to Siri and Writing Tools, both of which can now call on ChatGPT for assistance. For example, if you ask the digital assistant to create an itinerary or workout plan for you, it will ask for your permission to use ChatGPT to complete the task. You don’t need a ChatGPT account to use the chatbot in this way, though based on information from the iOS 18.2 beta, there will be a daily limit on how many queries iPhone users can send through to OpenAI’s servers.

Those are just some of the more notable Apple Intelligence features arriving with iOS 18.2 and iPadOS 18.2. If you don’t own a recent iPhone or iPad, the good news is that both releases offer more than just new AI tools. One nifty addition is the inclusion of new AirTag features that allow you to share the location of your lost item trackers with friends and airlines. If you’re a News+ subscriber, you also get access to daily Sudoku puzzles. Also new to iOS 18.2 is a feature Apple removed with iOS 16: a new menu item in the operating system’s Settings app allows you to add volume controls to the lock screen.

If you don’t see a notification to download iOS 18.2 on your iPhone and iPadOS 18.2 on your iPad, you can manually check for the updates by opening the Settings app on your device and navigating to “General,” then “Software Update.” The same goes for macOS — just open the System Settings app, navigate to "Software Update" and start the download. 

If you live outside of one of the countries mentioned at the top, support for additional countries and languages, including Chinese, French, German, Italian, Japanese, Korean, Spanish and Portuguese, will roll out throughout next year, with an initial update slated for April. 

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/ios-182-is-here-with-apple-intelligence-image-generation-features-in-tow-130029173.html?src=rss


© Apple

Renders of two iPhones are shown against a white background. One of the phones displays a yellow smiley face emoji with cucumber slices on its eyes, while the other shows an AI-generated image of a cat wearing a chef hat.

Sony's WH-1000XM5 headphones are back on sale for $100 off

10 December 2024 at 09:45

The Thanksgiving holiday might have come and gone, but one of the best pairs of wireless headphones you can buy right now is back at its Black Friday price. Amazon has discounted Sony’s excellent WH-1000XM5 headphones. All four colorways — black, midnight blue, silver and smoky pink — are currently $298, or 25 percent off their usual $400 price.

At this point, the WH-1000XM5 likely need no introduction, but for the uninitiated, they strike a nearly perfect balance between features, performance and price; in fact, they’re the Bluetooth headphones Billy Steele, Engadget’s resident audio guru, recommends for most people.

With the WH-1000XM5, Sony redesigned its already excellent 1000X line to make the new model more comfortable. The company also improved on the XM4’s superb active noise cancelation, adding four additional ANC mics. That enhancement makes the XM5 even better at blocking out background noise, including human voices.

Other notable features include 30-hour battery life, clear and crisp sound and a combination of handy physical and touch controls. The one downside of the XM5 is that they cost more than Sony’s previous flagship Bluetooth headphones. Thankfully, this sale addresses that fault nicely.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/sonys-wh-1000xm5-headphones-are-back-on-sale-for-100-off-174551119.html?src=rss


© Billy Steele/Engadget

WH-1000XM5 review

Apple's updated MagSafe wireless charger is on sale for $35

13 December 2024 at 05:17

New iPhone 16 owners can pick up an Apple charger for cheap right now. The latest, more powerful MagSafe charger has dropped to as low as $30. You'll get that price on the 1m (3.3 ft) model, but the better discount is on the 2m (6.6 ft) model, which is 29 percent off and on sale for $35. That’s a return to the same low price we saw for the accessory during Black Friday.

Apple refreshed its MagSafe charger alongside the iPhone 16 lineup this fall. Provided you own an iPhone 16 and an adapter that offers at least 30W of power, the charger reaches charging speeds of up to 25W. According to Apple, that’s enough to allow iPhone 16 Pro users to charge their device to 50 percent in 30 minutes. With older iPhones and Qi-compatible accessories, power delivery speeds are limited to 15W. 

Apple’s official MagSafe charger is one of our favorite iPhone accessories. Even at full price, it’s a great purchase for getting a little more out of your new smartphone. With Amazon’s current discount, there’s no reason not to buy one if you prefer wireless charging.


This article originally appeared on Engadget at https://www.engadget.com/deals/apples-updated-magsafe-wireless-charger-is-on-sale-for-35-154041854.html?src=rss


© Apple

An Apple MagSafe charger connected to the back of an iPhone.

The best SSDs for PS5 in 2024

When Sony first released the PlayStation 5, the console came with a paltry 825GB of storage. At the time, it wasn’t possible to add more space through the system’s SSD expansion slot. Thankfully, that quickly changed with a software update that arrived less than a year later. Since July 2023, Sony has allowed PS5 users to add up to 8TB of additional storage to their system. It’s a good thing, too, given how big some game installs are this console generation. Even when considering the PS5 Pro and its 2TB of built-in storage, it won’t take long for a dedicated gamer to fill up the included SSD. It will take even less time with the standard model’s 667GB of available storage.

The good news is a standard PCIe Gen4 x4 M.2 NVMe SSD can solve all of your PS5 storage woes. If that mess of acronyms has you recoiling, don’t worry: you’ll see that it’s not all that complicated. And if all you want to know is what the best PS5 SSDs are, they’re right at the top.

Read more: These are the best SSDs in 2024

Best PS5 SSDs in 2024

How we test PS5 SSDs

I’ve tested most of the SSDs recommended on this list, either for PS5 or computer use. I also bought and used our top pick in my own PS5.

How much PS5 storage do I need?

The PlayStation 5 and PlayStation 5 Pro can accept internal drives with between 250GB and 8TB of storage capacity. If you already own a PS5, chances are you have a reasonable idea of how much storage you need for your game library. If you’re buying an SSD with a new PS5 or PS5 Pro, or buying for someone else, it’s more difficult to tell what you might need for a high-performance experience.

PS5 games are smaller on average than their PS4 equivalents, typically taking up between 30GB and 100GB, with some notable (and very popular) exceptions. If you’re a fan of the Call of Duty series, installing Black Ops 6 and Warzone 2.0 can eat up to 240GB. In other words, a full Call of Duty install will take up more than one-third of the PS5’s internal storage. If you’re not a CoD fan, though, chances are you’ll be good to store six to 10 games on a regular PS5 internally before running into problems.
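As a back-of-the-envelope sketch of that math (the figures come from this guide; the helper function below is ours, not anything official):

```python
# Rough PS5 storage math: 667GB usable on a standard PS5 (per this guide),
# with typical PS5 installs running 30-100GB.
USABLE_GB = 667

def games_that_fit(avg_install_gb: float, usable_gb: float = USABLE_GB) -> int:
    """Return how many installs of a given average size fit in the usable space."""
    return int(usable_gb // avg_install_gb)

print(games_that_fit(100))  # large installs: 6
print(games_that_fit(65))   # mid-size installs: 10
```

That back-of-the-envelope range is where the "six to 10 games" estimate comes from.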

You also need to consider your internet speed. If you live in an area with slow broadband, the “you can just download it again” rationale doesn’t really work. At my old home, a 100GB download took me around eight hours, during which time it was difficult to simultaneously watch Twitch or, say, publish articles about upgrading PS5 SSDs. Keeping games around on the off-chance you’ll want to play them at some point makes sense.

Sony PlayStation 5 gaming console.
Aaron Souppouris / Engadget

Off the bat, there's basically no point in going for a 250GB PS5 SSD. Economically, 250GB drives aren’t that much cheaper than 500GB ones — and practically, that really isn’t a lot of space for modern games to live on. 500GB drives can be a decent option, but after years of declining prices, I think the sweet spot for most people is to opt for a high-capacity 1TB or 2TB drive, which should run you at most $200. The latter will more than double the PS5 Pro’s storage without breaking the bank.

Unless you’re rolling in cash and want to flex, 4TB and 8TB models should mostly be avoided, as you’ll end up paying more per gigabyte than you would with a 1TB or 2TB drive.

While the 825GB PS5 only provides 667GB of storage, that’s largely due to storage being reserved for the operating system and caching. If you install a 1TB PS5 SSD, you'll have, within a margin of error, 1TB of storage available for games. Out of the box, the PS5 Pro offers 1.86TB of storage for games, though you can eke out more if you delete the pre-installed Astro’s Playroom (gasp).

Since neither the PS5 Slim nor PS5 Pro feature updated CPU architecture, all of our recommendations will work with whatever PS5 model you own.

Can you play PS5 games on an external SSD?

External hard drives tend to cost less than internal SSD counterparts (and there’s a good chance you might own one already). Unfortunately, there are restrictions on what you can do with them. An external SSD connects to your PS5 via USB, and is only suitable for playing PlayStation 4 games, or storing PS5 titles. That’s useful if you have anything but the best high-speed internet — it’s faster to move a PS5 game out of “cold storage” on an external drive than it is to re-download it — or want to keep your PS4 library on hand.

Due to the limitations here, you don’t need the highest-performing model, although you should opt for SSDs over HDDs for improved transfer speeds and load times. Any basic portable drive from a reputable brand will do, with the Crucial X9 Pro and Samsung T7 being options we’ve tried and can recommend.

Which SSD cards are compatible with the PS5?

The official answer to this question is an “M.2 Socket 3 (Key M) Gen4 x4 NVME SSD.” But even within that seemingly specific description, there are additional factors to consider. The main requirements Sony has laid out for compatibility come down to speed, cooling and physical dimensions.

For speed, Sony says drives should be able to handle sequential reads at 5,500MB/s. Early testing showed that the PS5 would accept drives as slow as 4,800MB/s, and that even games that tap into the SSD regularly — such as Ratchet & Clank: Rift Apart — ran from them without issue. Pretty much the only drive the PS5 will outright reject is one that doesn't match the Gen4 x4 spec.

In our opinion, though, using a drive slower than the specification is a risk that, if you don’t already have that drive lying around, is not worth taking. Just because we haven’t found issues yet doesn’t mean there won’t be games that could be problematic in the future. The price difference between these marginally slower Gen4 drives and the ones that meet Sony’s spec isn’t huge, and you might as well cover all your bases.

Slightly more complicated than speed is cooling and size. Most new SSDs are going to be just fine; the PS5 can fit 22mm-wide SSDs of virtually any length (30mm, 40mm, 60mm, 80mm or 110mm, to be precise). The vast majority of drives you find will be 22mm wide and 80mm long, so no problem there.

It should be noted that the system can fit a 25mm-wide drive, but that width must include the cooling solution. Speaking of, Sony says SSDs require “effective heat dissipation with a cooling structure, such as a heatsink.” The maximum height supported by Sony’s slot is 11.25mm, of which only 2.45mm can be “below” the drive.

This previously meant some of the most popular heatsinked Gen4 SSDs, including Corsair’s MP600 Pro LP, would not fit within the PS5’s storage expansion slot. Since Engadget first published this guide in 2021, most NVMe makers, including Samsung, have come out with PlayStation-specific models that meet those requirements. That said, if you want to save some money, bare drives are often cheaper and it’s trivial to find a cooling solution that will work for the PS5.

The only component in an NVMe SSD that really requires cooling is the controller, which without a heatsink will happily sear a (very small) steak. Most SSDs have chips on only one side, but even on double-sided SSDs, the controller is likely to be on top, as manufacturers know it needs to be positioned there to better dissipate heat.

So, head to your PC component seller of choice and pick up basically anything that meets the recommended dimensions. A good search term is “laptop NVME heatsink,” as these will be designed to fit in the confines of gaming laptops, which are even more restrictive than a PS5. They’re also typically cheaper than the ones labeled as “PS5 heatsinks.”

One recommendation is this $6 copper heatsink, which attaches to the PS5 SSD with sticky thermal interface material. It works just fine, and in performing stress tests on a PC, we couldn’t find anything metal that didn’t keep temperatures under control. When you’re searching, just make sure the solution you go for measures no more than 25mm wide or 8mm tall (including the thermal interface material) and has a simple method of installation that’s not going to cause any headaches.

One last thing: When shopping for a PS5 NVMe, there’s no reason to buy a Gen5 model over a more affordable Gen4 model. As things stand, Sony’s console can’t take advantage of the new standard, and though Gen5 drives are backward compatible, they’re more expensive than their Gen4 counterparts. Just buy the fastest and highest-capacity Gen4 model you can afford.

How to install an SSD into your PS5

If you need guidance on how to install your new NVMe into your PS5 or PS5 Pro, we have a separate guide detailing all the steps here. Installation is pretty straightforward, but our how-to can help you if you're stuck. Just make note: Before attempting to add more storage via a PS5 SSD, ensure that you have Sony’s latest software installed.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/best-ps5-ssd-expansion-upgrade-150052315.html?src=rss


© Engadget

The best SSDs for PS5

Google's Willow quantum chip breakthrough is hidden behind a questionable benchmark

9 December 2024 at 14:47

Google debuted Willow, its latest quantum chip, on Monday, and if you’ve spent any time online since, you’ve undoubtedly run into some breathless reporting about it. Willow “crushes classical computers on a cosmic timescale,” proclaims one headline; Google “unveils ‘mind-boggling’ quantum computer chip,” reads another. It’s all anchored by a claim that Willow can complete a computation that would theoretically take a classical computer significantly more time than the 14 billion years the universe has existed. But, as you can probably guess, what the chip represents is not so simple.

First, with Willow, Google makes no claim of quantum supremacy, something the company did when it publicly debuted its previous generation quantum computer, Sycamore, back in 2019. You may recall that, at the time, Google publicized how it took Sycamore just 200 seconds to perform a calculation that would have theoretically taken the world’s then-fastest supercomputer 10,000 years to complete. That feat, the company said, demonstrated that it had created a quantum computer that could solve problems the best classical computers could not even attempt. In other words, Google had achieved quantum supremacy.

However, that claim quickly ended in controversy, with one researcher calling the company’s announcement “indefensible” and “just plain wrong,” and Google has since avoided talking about quantum supremacy. Instead, it just says it has achieved “beyond classical computation.” Part of the issue was that Sycamore was not a general-purpose quantum computer; instead, it was designed to surpass classical computers in a single task known as random circuit sampling or RCS. The thing about RCS is that, in Google’s own words, it has “no known real-world applications.” Yet, here again, the company is touting RCS performance.

Google says Willow can complete its latest RCS benchmark in under five minutes. By contrast, the company estimates it would take Frontier, currently the world’s second most powerful supercomputer, 10 septillion years to complete the same task. That number, Google says, “lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse.”

A chart showing that no quantum computer has outperformed a classical computer on a commercially-relevant application.
Google

More practically, Google tries to make the case that RCS performance should be the metric by which all quantum computers are judged. According to Hartmut Neven, the founder of Google Quantum AI, “it’s an entry point. If you can’t win on random circuit sampling, you can’t win on any other algorithm either.” He adds RCS is “now widely used as a standard in the field.”

However, other companies, including IBM and Honeywell, instead use a metric called quantum volume to tout their breakthroughs. They claim it points to a more holistic understanding of a machine’s capabilities by factoring in how its qubits interact with one another. Unfortunately, you won’t find any mention of quantum volume in the spec sheet Google shared for Willow, making comparisons difficult.

To that point, the far more impressive claim Google is making today is that Willow is “below the threshold.” To date, the problem that has plagued every attempt to build a useful quantum computer is that the quantum bits they’re based on are difficult to control. They only hold their quantum state for fractions of a second, and the more qubits are added to a system, the more likely it is to produce errors. However, with Willow, Google says it has found a way to reduce errors as it adds more qubits to the system. According to the company, Willow is the first time this has been done.

“As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built,” says Neven. “Willow brings us closer to running practical, commercially-relevant algorithms that can’t be replicated on conventional computers.”

That’s the real breakthrough here, and one that points to a future where quantum computers could solve problems that have tangible effects on people's lives. That future, however, isn't here just yet, and even Google admits it has more work to do before it gets there.   

This article originally appeared on Engadget at https://www.engadget.com/computing/googles-willow-quantum-chip-breakthrough-is-hidden-behind-a-questionable-benchmark-224707174.html?src=rss


© Google

An illustration of Google's Willow chip.

OpenAI's Sora video generation AI model arrives globally later today

9 December 2024 at 10:26

Following an early preview at the start of the year, Sora, OpenAI's long-awaited video generation model, is ready for public use. If you're a ChatGPT Plus or Pro subscriber in the US or "most other countries" where the chatbot is available, you can begin experimenting with the tool starting later today, OpenAI announced on Monday. The product is powered by a more capable model than the one OpenAI showed off in February: Sora Turbo is significantly faster, according to the company, though OpenAI cautions the new model still has limitations. "It often generates unrealistic physics and struggles with complex actions over long durations," says the company.

When users first visit the dedicated landing page OpenAI has set up for Sora, they'll be greeted with a feed of videos the model has created for other people. By clicking on a video, you'll be able to see the exact prompt someone gave Sora to generate the footage you see. From here, you can also decide to re-cut a video, blend it into a clip you're working on, or remix it. In this initial release, OpenAI is limiting Sora to generating videos that are up to 1080p and 20 seconds long. 

ChatGPT Plus subscribers can use Sora to create up to 50 videos at 480p per month. Alternatively, Plus users can generate fewer (and shorter) videos at 720p. OpenAI says the Pro plan affords 10 times as much usage, at higher resolutions and longer durations. "We’re working on tailored pricing for different types of users, which we plan to make available early next year," the company adds.

For safety purposes, each video features a visible watermark by default and contains C2PA metadata to assist with identification. OpenAI says it will block users from using Sora to create child sexual abuse materials (CSAM) and sexual deepfakes. More broadly, the company plans to limit uploads of people until it has time to refine its safeguards against deepfakes. 

Even if you don't have a ChatGPT subscription, you can still visit the Sora website to see what other people are using the tool to create. During today's livestream, OpenAI CEO Sam Altman said it may take some time before Sora arrives in Europe and the UK. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-sora-video-generation-ai-model-arrives-globally-later-today-182613208.html?src=rss


© Sora

OpenAI's Sora AI video generator

How to install a PS5 SSD

So, you just bought yourself an NVMe drive to add more storage to your PlayStation 5. Don’t worry. If you’re unsure of how to install the SSD, you have come to the right place. Not only is the process relatively simple, but this guide will take you through every step, including all the tools you need. If you came here looking for a recommendation on what NVMe to buy for your PS5 or PS5 Pro, check out our dedicated guide to the best SSDs for the PS5.

How to install a PS5 SSD

1. Power everything down to remove the stand

Before attempting to add more storage via an NVMe, ensure that you have Sony’s latest software installed. Once you're up-to-date, installation of a PS5 SSD is fairly straightforward. Sony recommends a #1 Phillips or crosshead screwdriver, but this isn't rocket science: any cross-head screwdriver of a similar size will do fine. If you don’t own a screwdriver, the DIY heroes from iFixit sell a great set for $20.

Begin by powering down your PS5 or PS5 Pro, unplugging everything, removing the stand and flipping it over to its underside. If you have a launch PS5, that’s the side with the disc drive; if you have a launch Digital Edition, it’s the side without the PlayStation logo cutout. As for the PS5 Slim and PS5 Pro, the expansion slot is in the same place: behind the smaller of the two panels.

Sony has a video guide to popping off the outside cover here, but the gist is you gently lift up the opposing corners and slide the panel toward the flat end of the console. There’s a knack to this, and it requires very little effort or strength. If you’re not getting it, rather than force things, readjust your grip and try again.

How to install a PS5 SSD
Engadget

2. Access the drive bay

Once you’ve got everything open, you’ll see a rectangular piece of metal with a screw holding it in place. Remove that screw, and you’ll be able to access the drive bay.

You’ll see five holes inside, each numbered to correspond to a standard SSD drive length. The one numbered 110 will have a metal insert and screw inside. Remove the screw with a screwdriver, then unscrew the insert with your fingers and move it to the hole matching your drive. For most drives, that’s going to be 80.

How to install a PS5 SSD
Engadget

3. Slot in the SSD

Then take your SSD and slot it in. The slot is at the edge closest to the number “30,” and SSDs are keyed to only fit in one way, so no force is required. If it’s not sliding in, don’t force it. You’ll notice the SSD doesn’t sit flat. That’s fine and is as intended.

How to install a PS5 SSD
Engadget

4. Screw the drive bay back in

Once the SSD is seated, take the screw you removed from the insert, line it up with the little notch at the end of your SSD, and push down so it meets the insert. Give the screw a few turns — it doesn’t need to be very tight — and you’re done.

Replace the metal cover and screw it down, and then slide the plastic outer shell back on.

When you first turn on the PS5, it’ll prompt you to format the drive. Do that! You have now successfully expanded your console’s storage, and can go about downloading and moving games to it.

How to install a PS5 SSD
Engadget

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/how-to-install-a-ps5-ssd-130010846.html?src=rss


© Engadget

PlayStation 5

Crypto evangelist David Sacks will serve as Trump's AI and cryptocurrency advisor

6 December 2024 at 09:05

Donald Trump has picked a crypto bull to advise him on AI and cryptocurrency policy. On Thursday evening, the president-elect took to Truth Social to share that he was appointing former PayPal COO David Sacks to serve as his “White House A.I. & Crypto Czar.” Trump said Sacks would also lead the Presidential Council of Advisors for Science and Technology.

“David will guide policy for the Administration in Artificial Intelligence and Cryptocurrency, two areas critical to the future of American competitiveness. David will focus on making America the clear global leader in both areas,” Trump wrote, adding Sacks would “safeguard Free Speech online, and steer us away from Big Tech bias and censorship.”

As an appointee to one of the president’s advisory councils, Sacks does not need to go through the usual Senate confirmation process required for cabinet picks and federal agency leads. Sacks does not have previous government experience. Trump instead highlighted his business credentials, pointing to his tenure at PayPal and later Yammer, which Sacks founded in 2008 and Microsoft acquired in 2012 for $1.2 billion. Sacks is also a close confidant of Elon Musk and provided part of the funding Musk used to buy Twitter for $44 billion in 2022. Sacks has broadly advocated for smaller government and less regulation, though he also pushed hard for the Biden administration to intervene when Silicon Valley Bank became insolvent in 2023.

congrats to czar @DavidSacks!

— Sam Altman (@sama) December 6, 2024

“Where is Powell? Where is Yellen?” Sacks tweeted before regulators moved to fully protect deposits at SVB. “Stop this crisis NOW. Announce that all depositors will be safe. Place SVB with a Top 4 bank. Do this before Monday open or there will be contagion and the crisis will spread.”

Alongside Paul Atkins, who Trump picked to lead the US Securities and Exchange Commission, Sacks is likely to reshape US policy on cryptocurrency and AI. Under the Biden administration, the federal government sought to regulate the crypto industry. Sacks, however, is a vocal proponent of the industry. He is also a major investor in Solana and other crypto-related ventures such as Multicoin Capital.

As for Trump, appointing Sacks to his advisory council shows just how much his stance on crypto has changed. As recently as 2021, he said he thought Bitcoin seemed “like a scam,” and advocated for “very, very high” government regulation of the currency. That was before the crypto industry funneled $131 million during the 2024 election to get 274 pro-crypto candidates elected to the House of Representatives and 20 candidates to the Senate. During his campaign, Trump promised to make the United States “the crypto capital of the planet.”

This article originally appeared on Engadget at https://www.engadget.com/ai/crypto-evangelist-david-sacks-will-serve-as-trumps-ai-and-cryptocurrency-advisor-170522273.html?src=rss


© Reuters / Reuters

FILE PHOTO: David Sacks, former CEO of Yammer, speaks during the first day of the Republican National Convention (RNC) at the Fiserv Forum in Milwaukee, Wisconsin, United States. July 15, 2024. REUTERS/Mike Segar/File Photo