
Google's Gemini Deep Research tool is now available globally

A little more than a week after announcing Gemini Deep Research, Google is making the tool available to more people. As of today, the feature, part of the company’s paid Gemini Advanced suite, is available in every country and language where Google offers Gemini. In practice, that means Gemini Advanced users in more than 100 countries globally can start using Deep Research right now. Previously, it was only available in English.

As a refresher, Deep Research takes advantage of Gemini 1.5 Pro’s ability to reason through “long context windows” to create comprehensive but easy-to-read reports on complex topics. Once you provide the tool with a prompt, it will generate a research plan for you to approve and tweak as you see fit. After it has your go-ahead, Gemini 1.5 Pro will search the open web for information related to your query. That process can sometimes take several minutes, but once Gemini is done, you’ll have a multi-page report you can export to Google Docs for later viewing.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-now-available-globally-210151873.html?src=rss


© Google

Gemini Deep Research

OpenAI's next-generation o3 model will arrive early next year

After nearly two weeks of announcements, OpenAI capped off its 12 Days of OpenAI livestream series with a preview of its next-generation frontier model. “Out of respect for friends at Telefónica (owner of the O2 cellular network in Europe), and in the grand tradition of OpenAI being really, truly bad at names, it’s called o3,” OpenAI CEO Sam Altman told those watching the announcement on YouTube.

The new model isn’t ready for public use just yet. Instead, OpenAI is first making o3 available to researchers who want help with safety testing. OpenAI also announced the existence of o3-mini. Altman said the company plans to launch that model “around the end of January,” with o3 following “shortly after that.”

As you might expect, o3 offers improved performance over its predecessor, but just how much better it is than o1 is the headline here. For example, when put through this year's American Invitational Mathematics Examination, o3 achieved an accuracy score of 96.7 percent. By contrast, o1 earned a more modest 83.3 percent. “What this signifies is that o3 often misses just one question,” said Mark Chen, senior vice president of research at OpenAI. In fact, o3 did so well on the usual suite of benchmarks OpenAI puts its models through that the company had to find more challenging tests to pit it against.

An ARC AGI test.
ARC AGI

One of those is ARC-AGI, a benchmark that tests an AI algorithm's ability to intuit and learn on the spot. According to the test's creator, the non-profit ARC Prize, an AI system that could successfully beat ARC-AGI would represent "an important milestone toward artificial general intelligence." Since its debut in 2019, no AI model has beaten ARC-AGI. The test consists of input-output questions that most people can figure out intuitively. For instance, in the example above, the correct answer would be to create squares out of the four polyominoes using dark blue blocks.
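To make the format concrete, ARC-AGI tasks are small colored grids paired as input and output, and the solver must infer the transformation from a handful of examples. The toy Python sketch below is purely illustrative, not an actual test item: the rule it implements ("fill any cell fully enclosed by colored cells") is an invented stand-in for the kind of transformation a real task might ask a solver to discover.

```python
# Toy illustration of the ARC-AGI grid format (not an official task).
# Cells are integers: 0 = empty, other values = colors.
# The invented rule here: any empty cell whose four neighbors are all
# colored gets filled with color 2 (standing in for "dark blue").

def fill_holes(grid):
    """Return a copy of the grid with enclosed empty cells filled with 2."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            # Only interior cells can be fully surrounded.
            if grid[r][c] == 0 and 0 < r < h - 1 and 0 < c < w - 1:
                neighbors = ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if all(grid[nr][nc] != 0 for nr, nc in neighbors):
                    out[r][c] = 2
    return out

example_input = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]
print(fill_holes(example_input))  # -> [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
```

A solver facing an ARC task sees only a few such input-output pairs and must produce the correct output for a new input, which is why the benchmark rewards on-the-spot learning rather than memorization.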

On its low-compute setting, o3 scored 75.7 percent on the test. With additional processing power, the model achieved a rating of 87.5 percent. "Human performance is comparable at 85 percent threshold, so being above this is a major milestone," according to Greg Kamradt, president of ARC Prize Foundation.

A graph comparing o3-mini's performance against o1, and the cost of that performance.
OpenAI

OpenAI also showed off o3-mini. The new model uses OpenAI's recently announced Adaptive Thinking Time API to offer three different reasoning modes: Low, Medium and High. In practice, this allows users to adjust how long the software "thinks" about a problem before delivering an answer. As you can see from the above graph, o3-mini can achieve results comparable to OpenAI's current o1 reasoning model, but at a fraction of the compute cost. As mentioned, o3-mini will arrive for public use ahead of o3. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-next-generation-o3-model-will-arrive-early-next-year-191707632.html?src=rss


© Reuters

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI brings ChatGPT to WhatsApp

ChatGPT is now available on WhatsApp. Starting today, if you add 1 (800) CHAT-GPT to your contacts — that's 1 (800) 242-8478 — you can start using the chatbot over Meta's messaging app. In this iteration, ChatGPT is limited to text-only input, so there's no Advanced Voice Mode or visual input on offer, but you still get all the smarts of the o1-mini model.

What's more, ChatGPT over WhatsApp is available everywhere OpenAI offers its chatbot, with no account necessary. OpenAI is working on a way to authenticate existing users over WhatsApp, though the company did not share a timeline for when that feature might launch. It's worth noting that Meta offers its own chatbot in WhatsApp.

Separately, OpenAI is launching a ChatGPT hotline in the US. Once again, the number for that is 1 (800) 242-8478. As you can probably imagine, the toll-free number works with any phone, be it a smartphone or an old flip phone. OpenAI will offer 15 minutes of free ChatGPT usage through the hotline, though you can log into your account to get more time.

"We’re only just getting started on making ChatGPT more accessible to everyone," said Kevin Weil, chief product officer at OpenAI, during the company's most recent 12 Days of OpenAI livestream. According to Weil, the two features were born from a recent hack week the company held. Other recent livestreams have seen OpenAI make ChatGPT Search available to all free users and bring its Sora video generation out of private preview.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-brings-chatgpt-to-whatsapp-184653703.html?src=rss


© OpenAI

Three OpenAI employees demo ChatGPT's WhatsApp integration.

The best cheap phones for 2025

It may be fashionable to spend $1,000 on the latest flagship smartphone when it first gets released, but it's not necessary. You don't even have to spend $500 today to get a decent handset, whether it's a refurbished iPhone or an affordable Android phone; there are plenty of options as low as $160 that could fit your needs.

But navigating the budget phone market can be tricky; options that look good on paper may not be in practice, and some handsets will end up costing you more when you consider many come with restrictive storage. While we at Engadget spend most of our time testing and reviewing mid- to high-end handsets, we've tested a number of the latest budget-friendly phones on the market to see which are actually worth your money.

What to look for in a cheap phone

For this guide, our top picks cost between $100 and $300. Anything less and you might as well go buy a dumb phone or high-end calculator instead. Since they’re meant to be more affordable than flagship phones and even midrange handsets, budget smartphones involve compromises; the cheaper a device, the lower your expectations around specs, performance and experience should be. For that reason, the best advice I can give is to spend as much as you can afford. In this price range, even $50 or $100 more can get you a dramatically better product.

Second, you should know what you want most from a phone. When buying a budget smartphone, you may need to sacrifice a decent main camera for long battery life, or trade a high-resolution display for a faster CPU. That’s just what comes with the territory, but knowing your priorities will make it easier to find the right phone.

It’s also worth noting some features can be hard to find on cheap handsets. For instance, you won’t need to search far for a device with all-day battery life — but if you want excellent camera quality, you’re better off shelling out for one of the recommendations in our midrange smartphone guide, which all come in at $600 or less. Wireless charging and waterproofing also aren’t easy to find in this price range, and you can forget about the fastest chipsets. On the bright side, all our recommendations come with headphone jacks, so you won’t need to get wireless headphones.

iOS is also off the table, since the $400 Apple iPhone SE is the most affordable iPhone in the lineup. That leaves Android as the only option. Thankfully, there’s little to complain about in Google’s OS today – you may even prefer it to iOS. Lastly, keep in mind that most Android manufacturers offer far less robust software features and support for their budget devices. In some cases, your new phone may only receive one major software update and a year or two of security patches beyond that. That applies to the OnePlus and Motorola recommendations on our list.

If you’d like to keep your phone for as long as possible, Samsung has the best software policy of any Android manufacturer in the budget space, offering four years of security updates on all of its devices. That said, if software support (or device longevity overall) is your main focus, consider spending a bit more on the $500 Google Pixel 7a, which is our favorite midrange smartphone and has planned software updates through mid-2026.

Best cheap phones

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/best-cheap-phones-130017793.html?src=rss


© Engadget

The best cheap phones

ChatGPT Search is now available to everyone

If you've been waiting patiently to try ChatGPT Search, you won't have to wait much longer. After rolling out to paid subscribers this fall, OpenAI announced Monday it's making the tool available to everyone, no Plus or Pro membership necessary.

Now, all you need to start using ChatGPT Search is an OpenAI account. Once you're logged in, and if your query calls for it, ChatGPT will automatically search the web for the latest information to answer your question. You can also force it to search the web thanks to a handy new icon located right in the prompt bar. OpenAI has also added the option to make ChatGPT Search your browser's default search engine.

At the same time, OpenAI is integrating ChatGPT Search with Advanced Voice Mode. As you might have guessed, the integration allows ChatGPT's audio persona to search the web for answers to your questions and deliver them in a natural, conversational way. For example, say you're traveling to a different city for vacation. You could ask ChatGPT what the weather will be like once you arrive, and with the Search functionality built in, the chatbot can answer with the most up-to-date information.

To facilitate this functionality, OpenAI says it has partnered with leading news and data providers. As a result, you'll also see widgets for stocks, sports scores, the weather and more. Basically, ChatGPT Search is becoming a full-fledged Google competitor before our eyes.     

OpenAI announced the expanded availability during its most recent "12 Days of OpenAI" livestream. In previous live streams, the company announced the general availability of Sora and ChatGPT Pro, a new $200 subscription for its chatbot. With four more days to go, it's hard to see the company topping that announcement, but at this point, OpenAI likely has a surprise or two up its sleeve.

Correction 12/17/2024: A previous version of this article incorrectly stated OpenAI would roll out ChatGPT Search "over the coming months." The tool is now available to all logged-in users. We regret the error.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-is-getting-ready-to-roll-its-search-tool-out-to-everyone-184442971.html?src=rss


© OpenAI

A mouse pointer hovers over the "Search" icon in ChatGPT's prompt bar.

Google's new AI video model sucks less at physics

Google may have only recently begun rolling out its Veo generative AI to enterprise customers, but the company is not wasting any time getting a new version of the video tool out to early testers. On Monday, Google announced a preview of Veo 2. According to the company, Veo 2 “understands the language of cinematography.” In practice, that means you can reference a specific genre of film, cinematic effect or lens when prompting the model.

Additionally, Google says the new model has a better understanding of real-world physics and human movement. Correctly modeling humans in motion is something all generative models struggle to do. So the company’s claim that Veo 2 is better when it comes to both of those trouble points is notable. Of course, the samples the company provided aren’t enough to know for sure; the true test of Veo 2’s capabilities will come when someone prompts it to generate a video of a gymnast's routine. Oh, and speaking of things video models struggle with, Google says Veo will produce artifacts like extra fingers “less frequently.”

A sample image of a squirrel Google's Imagen 3 generated.
Google

Separately, Google is rolling out improvements to Imagen 3. Of its text-to-image model, the company says the latest version generates brighter and better-composed images. Additionally, it can render more diverse art styles with greater accuracy. At the same time, it’s also better at following prompts more faithfully. Prompt adherence was an issue I highlighted when the company made Imagen 3 available to Google Cloud customers earlier this month, so if nothing else, Google is aware of the areas where its AI models need work.

Veo 2 will gradually roll out to Google Labs users in the US. For now, Google will limit testers to generating up to eight seconds of footage at 720p. For context, Sora can generate up to 20 seconds of 1080p footage, though doing so requires a $200 per month ChatGPT Pro subscription. As for the latest enhancements to Imagen 3, those are available to Google Labs users in more than 100 countries through ImageFX.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-new-ai-video-model-sucks-less-at-physics-170041204.html?src=rss


© Google

Google's Veo 2 generated a Pixar-like animation of a young girl in a kitchen.

Ragebound is a new Ninja Gaiden game from the team behind Blasphemous

Resurrecting a beloved gaming series like Ninja Gaiden is always a tricky proposition. Anyone who might have worked on the franchise in its heyday has likely moved on to other projects or left the industry entirely. But judging by the talent working on Ninja Gaiden Ragebound, the new series entry revealed at the Game Awards, I think it's safe to say the franchise is in good hands. That's because Ragebound unites two companies that know a thing or two about making quality games.

The Game Kitchen — the Spanish studio behind Blasphemous and its excellent sequel, Blasphemous 2 — is developing the game, with Dotemu (Teenage Mutant Ninja Turtles: Shredder’s Revenge and Streets of Rage 4) on publishing duties. 

Right away, you can see the influence of The Game Kitchen. The studio's signature pixel art style looks gorgeous in the back half of the reveal trailer, and it looks like the game will reward tight, coordinated play. As for the story, it's set during the events of the NES version of Ninja Gaiden and stars a new protagonist, Kenji Mozu. It's up to him to save Hayabusa Village while Ryu is away in America.

Ninja Gaiden Ragebound will arrive in the summer of 2025 on PC, Nintendo Switch, PlayStation 5, PlayStation 4, Xbox Series X/S and Xbox One.  

This article originally appeared on Engadget at https://www.engadget.com/gaming/ragebound-is-a-new-ninja-gaiden-game-from-the-team-behind-blasphemous-015621718.html?src=rss


© The Game Kitchen

Ninja Gaiden Ragebound protagonist Kenji Mozu faces off against a winged demon in a burning dojo.

The first Witcher 4 trailer sees Ciri kicking butt

Well, let's be honest: I don't think any of us expected to see CD Projekt Red preview The Witcher 4 anytime soon, and yet the studio did just that, sharing a lengthy cinematic trailer for the upcoming sequel at the Game Awards. Even if there's no gameplay footage to be found, fans of the series will love what they see. 

That's because the clip reveals the protagonist of the game, and it's none other than Ciri, the adopted daughter of everyone's favorite witcher, Geralt of Rivia. Thematically, the clip is similar to The Witcher 3's excellent "Killing Monsters" trailer. Ciri arrives to save a young woman from a horrible monster, only for human ignorance and superstition to undo her good deed.

CD Projekt did not share a release date for The Witcher 4, nor did the studio say what platforms the game would arrive on. However, it has previously said it was making the game in Unreal Engine 5, and if you look hard enough, a footnote at the bottom says the trailer was pre-rendered in UE5 on an unannounced "NVIDIA GeForce RTX GPU." 

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-first-witcher-4-trailer-sees-ciri-kicking-butt-014137326.html?src=rss


© CD Projekt Red

Ciri walks toward a campfire.

MasterClass On Call gives you on-demand access to AI facsimiles of its experts

MasterClass is expanding beyond pre-recorded video lessons to offer on-demand mentorship from some of its most popular celebrity instructors. And if you’re wondering how the company has gotten some of the busiest people on the planet to field your questions, guess what? The answer is generative AI.

On Wednesday, MasterClass debuted On Call, a new web and iOS app that allows people to talk with AI versions of its instructors. As of today, On Call is limited to two personas representing the expertise of former FBI hostage negotiator Chris Voss and University of California, Berkeley neuroscientist Dr. Matt Walker. In the future, MasterClass says it will offer many more personas, with Gordon Ramsay, Mark Cuban, Bill Nye and LeVar Burton among some of the more notable experts sharing their voices and knowledge in this way.

“This isn’t just another generic AI chatbot pulling data from the internet,” David Rogier, the CEO of MasterClass, said on X. “We’ve built this with our experts — training the AI on proprietary data sets (e.g. unpublished notes, private research, their lessons, emails, [and] expertise they’ve never shared before).”

Per Inc., MasterClass signed deals with each On Call instructor to license their voice and expertise. Judging from the sample voice clips MasterClass has up on its website, the interactions aren’t as polished as the one shown in the ad the company shared on social media. In particular, the “voice” of Chris Voss sounds robotic and not natural at all. On Call is also a standalone product with a separate subscription from the company’s regular offering. To use On Call, users will need to pay $10 per month or $84 annually.

This article originally appeared on Engadget at https://www.engadget.com/ai/masterclass-on-call-gives-you-on-demand-access-to-ai-facsimiles-of-its-experts-215022938.html?src=rss


© MasterClass

A screenshot of MasterClass' new On Call app.

Jarvis, Google's web-browsing AI, is now officially known as Project Mariner

Earlier today, Google debuted Gemini 2.0. The company says its new machine learning model won’t just enhance its existing products and services. It will also power entirely new experiences. To that point, Google previewed Project Mariner, an AI agent that can navigate within a web browser. Mariner is an experimental Chrome extension that is currently available to select “trusted testers.”

As you can see from the video Google shared, the pitch for Mariner is a tool that can automate certain rote tasks. In the demo, Mariner assists Google’s Jaclyn Konzelmann with finding the contact information of four outdoor companies.

Clearly, there’s more work Google needs to do before the software is ready for public use. Notice that Konzelmann is very specific when prompting Mariner, instructing the agent to “memorize” and “remember” parts of her instructions. It also takes Mariner close to 12 minutes to complete the task given to it.

“As a research prototype, it’s able to understand and reason across information in your browser screen, including pixels and web elements like text, code, images and forms,” Google says of Mariner.

If Project Mariner sounds familiar, it’s because The Information reported in October that Google was working on something called Project Jarvis. The publication described it as a “computer-using agent” that Google designed to assist with tasks like booking flights. In November, an early version of Jarvis was briefly available on the Chrome Web Store. A person familiar with the matter confirmed that Jarvis and Mariner are the same project.

The confirmation of Mariner’s existence comes after Anthropic introduced a similar but more expansive feature for its Claude AI, which the company says can “use a wide range of standard tools and software programs designed for people.” That tool is currently available in public beta.

This article originally appeared on Engadget at https://www.engadget.com/ai/jarvis-googles-web-browsing-ai-is-now-officially-known-as-project-mariner-191603929.html?src=rss


© Google

Project Mariner is an experimental Chrome extension that can help you navigate websites.

Gemini 2.0 is Google's most capable AI model yet and available to preview today

The battle for AI supremacy is heating up. Almost exactly a week after OpenAI made its o1 model available to the public, Google today is offering a preview of its next-generation Gemini 2.0 model. In a blog post attributed to Google CEO Sundar Pichai, the company says 2.0 is its most capable model yet, with the algorithm offering native support for image and audio output. “It will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” says Pichai.

Google is doing something different with Gemini 2.0. Rather than starting today’s preview by first offering its most advanced version of the model, Gemini 2.0 Pro, the search giant is instead kicking things off with 2.0 Flash. As of today, the more efficient (and affordable) model is available to all Gemini users. If you want to try it yourself, you can enable Gemini 2.0 from the dropdown menu in the Gemini web client, with availability within the mobile app coming soon.

Moving forward, Google says its main focus is adding 2.0’s smarts to Search (no surprise there), beginning with AI Overviews. According to the company, the new model will allow the feature to tackle more complex and involved questions, including ones involving multi-step math and coding problems. At the same time, following a broad expansion in October, Google plans to make AI Overviews available in more languages and countries.

Looking forward, Gemini 2.0 is already powering enhancements to some of Google’s more moonshot AI applications, including Project Astra, the multi-modal AI agent the company previewed at I/O 2024. Thanks to the new model, Google says the latest version of Astra can converse in multiple languages and even switch between them on the fly. It can also “remember” things for longer, offers improved latency, and can access tools like Google Lens and Maps.

As you might expect, Gemini 2.0 Flash offers significantly better performance than its predecessor. For instance, it earned a 63 percent score on HiddenMath, a benchmark that tests the ability of AI models to complete competition-level math problems. By contrast, Gemini 1.5 Flash earned a score of 47.2 percent on that same test. But the more interesting thing here is that the experimental version of Gemini 2.0 even beats Gemini 1.5 Pro in many areas; in fact, according to data Google shared, the only domains where it lags behind are in long-context understanding and automatic speech translation.

It’s for that reason that Google is keeping the older model around, at least for a little while longer. Alongside today's announcement of Gemini 2.0, the company also debuted Deep Research, a new tool that uses Gemini 1.5 Pro's long-context capabilities to write comprehensive reports on complicated subjects.  

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-20-is-googles-most-capable-ai-model-yet-and-available-to-preview-today-170329180.html?src=rss


© Google

A graphic of Google's Gemini 2.0 model.

Google's Gemini Deep Research tool is here to answer your most complicated questions

When Google debuted Gemini 1.5 Pro in February, the company touted the model’s ability to reason through what it called “long context windows.” It said, for example, the algorithm could provide details about a 402-page Apollo 11 mission transcript. Now, Google is giving people a practical way to take advantage of those capabilities with a tool called Deep Research. Starting today, Gemini Advanced users can use Deep Research to create comprehensive but easy-to-read reports on complex topics.

Aarush Selvan, a senior product manager on the Gemini team, gave Engadget a preview of the tool. At first glance, it looks to work like any other AI chatbot. All interactions start with a prompt. In the demo I saw, Selvan asked Gemini to help him find scholarship programs for students who want to enter public service after school. But things diverge from there. Before answering a query, Gemini first produces a multi-step research plan for the user to approve.

For example, say you want Gemini to provide you with a report on heat pumps. In the planning stage, you could tell the AI agent to prioritize information on government rebates and subsidies or omit those details altogether. Once you give Gemini the go-ahead, it will then scour the open web for information related to your query. This process can take a few minutes. In user testing, Selvan said Google found most people were happy to wait for Gemini to do its thing since the reports the agent produces through Deep Research are so detailed.

In the example of the scholarship question, the tool produced a multi-page report complete with charts. Throughout, there were citations with links to all of the sources Gemini used. I didn’t get a chance to read over the reports in detail, but they appeared to be more accurate than some of Google’s less helpful AI Overviews.

According to Selvan, Deep Research uses some of the same signals Google Search does to determine authority. That said, sourcing is definitely “a product of the query.” The more complicated a question you ask of the agent, the more likely it is to produce a useful answer since its research is bound to lead it to more authoritative sources. You can export a report to Google Docs once you're happy with Gemini's work.

If you want to try Deep Research for yourself, you’ll need to sign up for Google’s One AI Premium Plan, which includes access to Gemini Advanced. The plan costs $20 per month following a one-month free trial. It's also only available in English at the moment. 

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-here-to-answer-your-most-complicated-questions-154354424.html?src=rss


© Google

A bearded man looks at a tablet, examining a series of charts.

iOS 18.2 is here with Apple Intelligence image generation features in tow

Apple has begun rolling out iOS 18.2 and iPadOS 18.2 to iPhones and iPads. The updates bring with them major enhancements to the company’s suite of AI features, and are likely the final software releases Apple has planned for 2024. More Apple Intelligence features are available through macOS 15.2. However, note that access to all of the AI features mentioned below is limited to users in the US, Australia, Canada, New Zealand, South Africa and the UK for now, with support additionally limited to devices with their language set to English.

Provided you own an iPhone 15 Pro, 16 or 16 Pro, one of the highlights of iOS 18.2 is Image Playground, which is available both as a standalone app and Messages extension. If you go through the latter, the software will generate image suggestions based on the contents of your conversations. Naturally, you can also write your own prompts. It’s also possible to use a photo from your iPhone’s camera roll as a starting point. However, one limitation of Image Playground is that it can’t produce photorealistic images of people. That’s by design so that the resulting images don’t cause confusion. You can also import any pictures you generate with Image Playground to Freeform, Pages and Keynote.  

Another new feature, Genmoji, allows you to create custom emoji. From your iPhone’s emoji keyboard, tap the new Genmoji button and then enter a description of the character you want to make. Apple Intelligence will generate a few different options, which you can swipe through to select the one you want to send. It’s also possible to use pictures of your friends as the starting point for a Genmoji.

The new update also brings enhancements to Siri and Writing Tools, both of which can now call on ChatGPT for assistance. For example, if you ask the digital assistant to create an itinerary or workout plan for you, it will ask for your permission to use ChatGPT to complete the task. You don’t need a ChatGPT account to use the chatbot in this way, though based on information from the iOS 18.2 beta, there will be a daily limit on how many queries iPhone users can send through to OpenAI’s servers.

Those are just some of the more notable Apple Intelligence features arriving with iOS 18.2 and iPadOS 18.2. If you don’t own a recent iPhone or iPad, the good news is that both releases offer more than just new AI tools. One nifty addition is the inclusion of new AirTag features that allow you to share the location of your lost item trackers with friends and airlines. If you’re a News+ subscriber, you also get access to daily Sudoku puzzles. Also new to iOS 18.2 is a feature Apple removed with iOS 16: a new menu item in the operating system’s Settings app allows you to add volume controls to the lock screen.

If you don’t see a notification to download iOS 18.2 on your iPhone and iPadOS 18.2 on your iPad, you can manually check for the updates by opening the Settings app on your device and navigating to “General,” then “Software Update.” The same goes for macOS — just open the System Settings app, navigate to "Software Update" and start the download. 

If you live outside of one of the countries mentioned at the top, support for additional countries and languages, including Chinese, French, German, Italian, Japanese, Korean, Spanish and Portuguese, will roll out throughout next year, with an initial update slated for April. 

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/ios-182-is-here-with-apple-intelligence-image-generation-features-in-tow-130029173.html?src=rss


© Apple

Renders of two iPhones are shown against a white background. One of the phones displays a yellow smiley face emoji with cucumber slices on its eyes, while the other shows an AI-generated image of a cat wearing a chef's hat.

Sony's WH-1000XM5 headphones are back on sale for $100 off

The Thanksgiving holiday might have come and gone, but one of the best pairs of wireless headphones you can buy right now is back to its Black Friday price. Amazon has discounted Sony’s excellent WH-1000XM5 headphones. All four colorways — black, midnight blue, silver and smoky pink — are currently $298, or 25 percent off their usual $400 price.

At this point, the WH-1000XM5 likely need no introduction, but for the uninitiated, they strike a nearly perfect balance between features, performance and price; in fact, they’re the Bluetooth headphones Billy Steele, Engadget’s resident audio guru, recommends for most people.

With the WH-1000XM5, Sony redesigned its already excellent 1000X line to make the new model more comfortable. The company also improved on the XM4’s already superb active noise cancellation, adding four additional ANC mics. That enhancement makes the XM5 even better at blocking out background noise, including human voices.

Other notable features include 30-hour battery life, clear and crisp sound, and a combination of handy physical and touch controls. The one downside of the XM5 is that they cost more than Sony’s previous flagship Bluetooth headphones. Thankfully, this sale addresses that fault nicely.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/sonys-wh-1000xm5-headphones-are-back-on-sale-for-100-off-174551119.html?src=rss

©

© Billy Steele/Engadget

WH-1000XM5 review

Apple's updated MagSafe wireless charger is on sale for $35

New iPhone 16 owners can pick up an Apple charger for cheap right now. The latest, more powerful MagSafe charger has dropped to as low as $30. You'll get that price on the 1m (3.3 ft) model, but the better discount is on the 2m (6.6 ft) model, which is 29 percent off and on sale for $35. That’s a return to the same low price we saw for the accessory during Black Friday.

Apple refreshed its MagSafe charger alongside the iPhone 16 lineup this fall. Provided you own an iPhone 16 and an adapter that offers at least 30W of power, the charger reaches charging speeds of up to 25W. According to Apple, that’s enough to allow iPhone 16 Pro users to charge their device to 50 percent in 30 minutes. With older iPhones and Qi-compatible accessories, power delivery speeds are limited to 15W. 

Apple’s official MagSafe charger is one of our favorite iPhone accessories. Even at full price, it’s a great purchase for getting a little more out of your new smartphone. With Amazon’s current discount, there’s no reason not to buy one if you prefer wireless charging.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/apples-updated-magsafe-wireless-charger-is-on-sale-for-35-154041854.html?src=rss

©

© Apple

An Apple MagSafe charger connected to the back of an iPhone.

Google's Willow quantum chip breakthrough is hidden behind a questionable benchmark

Google debuted Willow, its latest quantum chip, on Monday, and if you’ve spent any time online since, you’ve undoubtedly run into some breathless reporting about it. Willow “crushes classical computers on a cosmic timescale,” proclaims one headline; Google “unveils ‘mind-boggling’ quantum computer chip,” reads another. It’s all anchored by a claim that Willow can complete a computation that would theoretically take a classical computer significantly more time than the 14 billion years the universe has existed. But, as you can probably guess, what the chip represents is not so simple.

First, with Willow, Google makes no claim of quantum supremacy, something the company did when it publicly debuted its previous generation quantum computer, Sycamore, back in 2019. You may recall that, at the time, Google publicized how it took Sycamore just 200 seconds to perform a calculation that would have theoretically taken the world’s then-fastest supercomputer 10,000 years to complete. That feat, the company said, demonstrated that it had created a quantum computer that could solve problems the best classical computers could not even attempt. In other words, Google had achieved quantum supremacy.

However, that claim quickly ended in controversy, with one researcher calling the company’s announcement “indefensible” and “just plain wrong,” and Google has since avoided talking about quantum supremacy. Instead, it just says it has achieved “beyond classical computation.” Part of the issue was that Sycamore was not a general-purpose quantum computer; instead, it was designed to surpass classical computers in a single task known as random circuit sampling or RCS. The thing about RCS is that, in Google’s own words, it has “no known real-world applications.” Yet, here again, the company is touting RCS performance.

Google says Willow can complete its latest RCS benchmark in under five minutes. By contrast, the company estimates it would take Frontier, currently the world’s second most powerful supercomputer, 10 septillion years to complete the same task. That number, Google says, “lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse.”

A chart showing that no quantum computer has outperformed a classical computer on a commercially-relevant application.
Google

More practically, Google tries to make the case that RCS performance should be the metric by which all quantum computers are judged. According to Hartmut Neven, the founder of Google Quantum AI, “it’s an entry point. If you can’t win on random circuit sampling, you can’t win on any other algorithm either.” He adds RCS is “now widely used as a standard in the field.”

However, other companies, including IBM and Honeywell, instead use a metric called quantum volume to tout their breakthroughs. They claim it points to a more holistic understanding of a machine’s capabilities by factoring in how its qubits interact with one another. Unfortunately, you won’t find any mention of quantum volume in the spec sheet Google shared for Willow, making comparisons difficult.

To that point, the far more impressive claim Google is making today is that Willow is “below the threshold.” To date, the problem that has plagued every attempt to build a useful quantum computer is that the quantum bits they’re based on are difficult to control. They only hold their quantum state for fractions of a second, and the more qubits are added to a system, the more likely it is to produce errors. However, with Willow, Google says it has found a way to reduce errors as it adds more qubits to the system. According to the company, Willow is the first time this has been done.

“As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built,” says Neven. “Willow brings us closer to running practical, commercially-relevant algorithms that can’t be replicated on conventional computers.”

That’s the real breakthrough here, and one that points to a future where quantum computers could solve problems that have tangible effects on people's lives. That future, however, isn't here just yet, and even Google admits it has more work to do before it gets there.   

This article originally appeared on Engadget at https://www.engadget.com/computing/googles-willow-quantum-chip-breakthrough-is-hidden-behind-a-questionable-benchmark-224707174.html?src=rss

©

© Google

An illustration of Google's Willow chip.

OpenAI's Sora video generation AI model arrives globally later today

Following an early preview at the start of the year, Sora, OpenAI's long-awaited video generation model, is ready for public use. If you're a ChatGPT Plus or Pro subscriber in the US or "most other countries" where the chatbot is available, you can begin experimenting with the tool starting later today, OpenAI announced on Monday. The product is powered by a more capable model than the one OpenAI showed off in February. Sora Turbo is significantly faster, according to the company, though OpenAI cautions the new model still has limitations. "It often generates unrealistic physics and struggles with complex actions over long durations," says the company. 

When users first visit the dedicated landing page OpenAI has set up for Sora, they'll be greeted with a feed of videos the model has created for other people. By clicking on a video, you'll be able to see the exact prompt someone gave Sora to generate the footage you see. From here, you can also decide to re-cut a video, blend it into a clip you're working on, or remix it. In this initial release, OpenAI is limiting Sora to generating videos that are up to 1080p and 20 seconds long. 

ChatGPT Plus subscribers can use Sora to create up to 50 videos at 480p per month. Alternatively, Plus users can generate fewer (and shorter) videos at 720p. OpenAI says the Pro plan affords 10 times as much usage, at higher resolutions and longer durations. "We’re working on tailored pricing for different types of users, which we plan to make available early next year," the company adds.

For safety purposes, each video features a visible watermark by default and contains C2PA metadata to assist with identification. OpenAI says it will block users from using Sora to create child sexual abuse materials (CSAM) and sexual deepfakes. More broadly, the company plans to limit uploads of people until it has time to refine its safeguards against deepfakes. 

Even if you don't have a ChatGPT subscription, you can still visit the Sora website to see what other people are using the tool to create. During today's livestream, OpenAI CEO Sam Altman said it may take some time before Sora arrives in Europe and the UK. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-sora-video-generation-ai-model-arrives-globally-later-today-182613208.html?src=rss

©

© Sora

OpenAI's Sora AI video generator

How to install a PS5 SSD

So, you just bought yourself an NVMe drive to add more storage to your PlayStation 5. If you’re unsure of how to install the SSD, don’t worry: you have come to the right place. Not only is the process relatively simple, but this guide will take you through every step and cover all the tools you need. If you came here looking for a recommendation on which NVMe drive to buy for your PS5 or PS5 Pro, check out our dedicated guide to the best SSDs for the PS5.

How to install a PS5 SSD

1. Power everything down to remove the stand

Before attempting to add more storage via an NVMe drive, ensure that you have Sony’s latest software installed. Once you're up-to-date, installation of a PS5 SSD is fairly straightforward. Sony recommends a #1 Phillips or crosshead screwdriver, but this isn't rocket science. Any cross-head screwdriver of a similar size will do fine. If you don’t own a screwdriver, the DIY heroes from iFixit sell a great set for $20.

Begin by powering down your PS5 or PS5 Pro, unplugging everything, removing the stand and flipping it over to its underside. If you have a launch PS5, that’s the side with the disc drive; if you have a launch Digital Edition, it’s the side without the PlayStation logo cutout. As for the PS5 Slim and PS5 Pro, the expansion slot is in the same place: behind the smaller of the two panels.

Sony has a video guide to popping off the outside cover here, but the gist is you gently lift up the opposing corners and slide the panel toward the flat end of the console. There’s a knack to this, and it requires very little effort or strength. If you’re not getting it, rather than force things, readjust your grip and try again.

How to install a PS5 SSD
Engadget

2. Access the drive bay

Once you’ve got everything open, you’ll see a rectangular piece of metal with a screw holding it in place. Remove that screw, and you’ll be able to access the drive bay.

You’ll see five holes inside, each numbered to correspond to a standard SSD length. The hole numbered 110 will have a metal insert and screw inside. Remove the screw with a screwdriver, then unscrew the insert with your fingers and move it to the hole that matches your drive’s length. For most drives, that’s going to be 80.

How to install a PS5 SSD
Engadget

3. Slot in the SSD

Then take your SSD and slot it in. The slot is at the edge closest to the number “30,” and SSDs are keyed to only fit in one way, so no force is required. If it’s not sliding in, don’t force it. You’ll notice the SSD doesn’t sit flat. That’s fine and is as intended.

How to install a PS5 SSD
Engadget

4. Screw the drive bay back in

Once the SSD is seated, take the screw you removed from the insert, line it up with the little notch at the end of your SSD, and push down so it meets the insert. Give the screw a few turns — it doesn’t need to be very tight — and you’re done.

Replace the metal cover and screw it down, and then slide the plastic outer shell back on.

When you first turn on the PS5, it’ll prompt you to format the drive. Do that! You have now successfully expanded your console’s storage, and can go about downloading and moving games to it.

How to install a PS5 SSD
Engadget

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/how-to-install-a-ps5-ssd-130010846.html?src=rss

©

© Engadget

PlayStation 5

Crypto evangelist David Sacks will serve as Trump's AI and cryptocurrency advisor

Donald Trump has picked a crypto bull to advise him on AI and cryptocurrency policy. On Thursday evening, the president-elect took to Truth Social to share that he was appointing former PayPal COO David Sacks to serve as his “White House A.I. & Crypto Czar.” Trump said Sacks would also lead the Presidential Council of Advisors for Science and Technology.

“David will guide policy for the Administration in Artificial Intelligence and Cryptocurrency, two areas critical to the future of American competitiveness. David will focus on making America the clear global leader in both areas,” Trump wrote, adding Sacks would “safeguard Free Speech online, and steer us away from Big Tech bias and censorship.”

As an appointee to one of the president’s advisory councils, Sacks does not need to go through the usual Senate confirmation process required for cabinet picks and federal agency leads. Sacks does not have previous government experience. Trump instead highlighted his business credentials, pointing to his tenure at PayPal and later Yammer, which Sacks founded in 2008 and Microsoft acquired in 2012 for $1.2 billion. Sacks is also a close confidant of Elon Musk and provided part of the funding Musk used to buy Twitter for $44 billion in 2022. Sacks has broadly advocated for smaller government and less regulation, though he also pushed hard for the Biden administration to intervene when Silicon Valley Bank became insolvent in 2023.

congrats to czar @DavidSacks!

— Sam Altman (@sama) December 6, 2024

“Where is Powell? Where is Yellen?” Sacks tweeted before regulators moved to fully protect deposits at SVB. “Stop this crisis NOW. Announce that all depositors will be safe. Place SVB with a Top 4 bank. Do this before Monday open or there will be contagion and the crisis will spread.”

Alongside Paul Atkins, who Trump picked to lead the US Securities and Exchange Commission, Sacks is likely to reshape US policy on cryptocurrency and AI. Under the Biden administration, the federal government sought to regulate the crypto industry. Sacks, however, is a vocal proponent of the industry. He is also a major investor in Solana and other crypto-related ventures such as Multicoin Capital.

As for Trump, appointing Sacks to his advisory council shows just how much his stance on crypto has changed. As recently as 2021, he said he thought Bitcoin seemed “like a scam,” and advocated for “very, very high” government regulation of the currency. That was before the crypto industry funneled $131 million during the 2024 election to get 274 pro-crypto candidates elected to the House of Representatives and 20 candidates to the Senate. During his campaign, Trump promised to make the United States “the crypto capital of the planet.”

This article originally appeared on Engadget at https://www.engadget.com/ai/crypto-evangelist-david-sacks-will-serve-as-trumps-ai-and-cryptocurrency-advisor-170522273.html?src=rss

©

© Reuters / Reuters

FILE PHOTO: David Sacks, former CEO of Yammer, speaks during the first day of the Republican National Convention (RNC) at the Fiserv Forum in Milwaukee, Wisconsin, United States. July 15, 2024. REUTERS/Mike Segar/File Photo

The best gaming monitors in 2024

Let’s be honest: shopping for a gaming monitor can feel like wading through mud. As soon as you decide to buy a display for gaming instead of regular productivity use, a whole host of new considerations come into the equation. Should you go for an LCD or OLED monitor? What about the differences between NVIDIA G-Sync and AMD FreeSync? How about refresh rates?

Those are just some of the questions this guide aims to answer. In the process, my hope is to help you find the perfect gaming monitor for your budget.

Best gaming monitors for 2024

How we test gaming monitors

While I’ve not used every product recommended in our list, I have extensively tested dozens of gaming monitors in the past, including models with WOLED and QD-OLED panels. In the case of the Alienware monitor I highlight above, I bought one for myself with my own money. Separately, I spent dozens of hours over a two-year period researching gaming monitors to write the current version of this guide.

Factors to consider before buying a gaming monitor

LCD vs OLED

When shopping for a gaming monitor, you first need to decide if you want to go with a screen that has an LCD or OLED panel. For most people, that choice will come down to price; OLED gaming monitors are more expensive than their LCD counterparts. Even if money isn’t a concern, the choice might not be as straightforward as you think; both LCD and OLED panels come in a few different flavors, and knowing the differences between each type is important to making an informed decision.

LCD monitors come in three different varieties: twisted nematic (TN), vertical alignment (VA) or in-plane switching (IPS). For the most part, you want to avoid TN monitors unless you’re strapped for cash or want a monitor with the fastest possible refresh rate. TN screens feature the worst viewing angles, contrast ratios and colors of the group.

The differences between VA and IPS panels are more subtle. Historically, VA gaming monitors featured slower pixel response times than their TN and IPS counterparts, leading to unsightly image smearing. However, that’s improved in recent years. VA panels also frequently sport better contrast ratios than both TN and IPS screens. They’re not dramatically better than their IPS siblings on that front, but when contrast ratios aren’t an inherent strength of LCDs, every bit helps.

On the other hand, IPS panels excel at color accuracy and many offer refresh rates and response times that are as fast as the fastest TN panels. The majority of LCD gaming monitors on the market today feature IPS panels, though you will frequently find VA screens on ultrawide monitors.

What about OLED?

If you can afford one, OLED screens make for the best gaming monitors. The ability of organic light-emitting diodes to produce true blacks is transformational. Simply put, every game looks better when there isn’t a backlight to wash out shadow detail. Plus, you can experience true HDR with an OLED screen, something that LCDs aren’t known for.

Today, OLED screens come in two different flavors: WOLED and QD-OLED, with LG producing the former and Samsung the latter. I won’t bore you with the technical details of how the two panel types differ from one another other than to note both technologies broadly offer the same set of shortcomings.

Most notably, OLED monitors don’t get very bright. At best, the most capable models peak at around 250 nits when measuring brightness across the entire screen. I didn’t find this to be an issue in my testing, but your experience may vary depending on the ambient light in your gaming room.

If brightness is important to you, note that due to manufacturer tunings, different models can perform better than others, even if they feature the same panel from LG or Samsung. It’s worth comparing monitors in the same class to find the model that’s right for you.

Separately, almost all OLEDs feature sub-pixel layouts that produce text fringing in Windows. The latest generation of OLED panels from both LG and Samsung are much better in this regard, to the point where modern OLEDs are good enough for reading and image editing. However, it’s still worth going to your local Micro Center or Best Buy to see the model you want in person, as the text fringing issue is hard to capture in photos and videos.

Another (potentially more serious) issue is burn-in. Organic light-emitting diodes can get “stuck” if they display the same image for long periods of time. Every OLED gaming monitor you can buy today comes with features designed to prevent burn-in and other image retention issues. Provided you don’t use your new OLED monitor for eight hours of daily productivity work, I don’t think you need to worry about burn-in too much.

Screen size, resolution and aspect ratio

After deciding where you fall on the LCD vs OLED debate, you can start thinking about the size of your future gaming monitor. Personal preference and the limitations of your gaming space will play a big part here, but there are also a few technical considerations. You should think about size in conjunction with resolution and aspect ratio.

A 1440p monitor has 78 percent more pixels than a 1080p screen, and a 4K display has more than twice as many pixels as a QHD panel. As the size of a monitor increases, pixel density decreases unless you also increase resolution. For that reason, there are sweet spots between size and resolution. For instance, I wouldn’t recommend buying an FHD monitor that is larger than 24 inches or a QHD one bigger than 27 inches. Conversely, text and interface elements on a 4K monitor can look tiny without scaling on panels smaller than 32 inches.
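The pixel-count comparisons above are simple arithmetic, and you can verify them (along with the pixel density behind those size recommendations) with a short Python sketch. The resolution and diagonal values are the common FHD/QHD/4K standards, not figures from any particular monitor:

```python
import math

# Common gaming resolutions: (width, height) in pixels.
RESOLUTIONS = {
    "1080p (FHD)": (1920, 1080),
    "1440p (QHD)": (2560, 1440),
    "4K (UHD)": (3840, 2160),
}

def pixel_count(width, height):
    return width * height

def ppi(width, height, diagonal_inches):
    """Pixels per inch for a panel of the given resolution and diagonal size."""
    return math.hypot(width, height) / diagonal_inches

fhd = pixel_count(*RESOLUTIONS["1080p (FHD)"])
qhd = pixel_count(*RESOLUTIONS["1440p (QHD)"])
uhd = pixel_count(*RESOLUTIONS["4K (UHD)"])

# QHD has roughly 78 percent more pixels than FHD...
print(f"QHD vs FHD: {100 * (qhd / fhd - 1):.0f}% more pixels")
# ...and 4K has 2.25x as many pixels as QHD.
print(f"4K vs QHD: {uhd / qhd:.2f}x the pixels")

# Pixel density drops as panels grow: a 27-inch QHD panel is noticeably
# sharper than a 27-inch FHD one.
print(f"27-inch FHD: {ppi(1920, 1080, 27):.0f} PPI")
print(f"27-inch QHD: {ppi(2560, 1440, 27):.0f} PPI")
```

Running the density numbers for other sizes is a quick way to sanity-check a prospective purchase against the sweet spots described above.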

You also need to consider the performance costs of running games at higher resolutions. The latest entry-level GPUs can comfortably run most modern games at 1080p and 60 frames per second. They can even render some competitive titles at 120 frames per second and higher — but push them to run those same games at 1440p and beyond, and you’re bound to run into problems. And as you’ll see in a moment, a consistently high frame rate is vital to getting the most out of the latest gaming monitors.

If your budget allows for it, 1440p offers the best balance between visual clarity and gaming performance. As for 1080p and 4K, I would only consider the former if you’re on a tight budget or you exclusively play competitive shooters like Valorant and Overwatch 2. For most people, the user experience and productivity benefits of QHD far outweigh the performance gains you get from going with a lower resolution screen.

Just a few years ago, 4K was not a viable resolution for PC gaming, but then NVIDIA came out with its 40 series GPUs. With those video cards offering the company’s DLSS 3 frame generation technology, there’s a case to be made that the technology is finally there to play 4K games at a reasonable frame rate, particularly if you exclusively play big, AAA single-player games like Alan Wake 2 and Cyberpunk 2077 or enjoy strategy games like the Total War series. However, even with frame generation, you will need a GPU like the $999 RTX 4080 Super or $1,599 RTX 4090 to drive a 4K display. Plus, 4K gaming monitors tend to cost more than their 1440p counterparts.

If you want an ultrawide, note that not every game supports the 21:9 aspect ratio, and fewer still support 32:9. When shopping for a curved monitor, a lower radius, or ‘R’ number, indicates a more aggressive curve. So, a 1000R monitor is more curved than an 1800R one.

The best gaming monitor
Photo by Igor Bonifacic / Engadget

Refresh rates and response times

And now, finally, for the fun stuff. The entire reason to buy a gaming monitor is for its ability to draw more images than a traditional PC display. As you shop for a new screen, you will see models advertising refresh rates like 120Hz, 240Hz and 360Hz. The higher the refresh rate of a monitor, the more times it can update the image it displays on screen every second, thereby producing a smoother moving image. When it comes to games like Overwatch, Valorant and League of Legends, a faster refresh rate can give you a competitive edge, but even immersive single-player games can benefit.

A monitor with a 360Hz refresh rate will look better in motion than one with a 240Hz or 120Hz refresh rate, but there are diminishing returns. At 60Hz, the image you see on your monitor is updated every 16.67ms. At 120Hz, 240Hz and 360Hz, the gap between new frames shortens to 8.33ms, 4.17ms and 2.78ms, respectively. Put another way, although a 360Hz monitor can display 50 percent more frames than a 240Hz screen in a given time period, the interval between frames only shrinks by about 1.39ms. And all that depends on your GPU’s ability to render a consistent 360 frames per second.
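The diminishing returns fall straight out of the frame-interval math, which a few lines of Python make concrete:

```python
def frame_interval_ms(refresh_hz):
    """Time between new frames, in milliseconds, at a given refresh rate."""
    return 1000 / refresh_hz

for hz in (60, 120, 240, 360):
    print(f"{hz}Hz -> {frame_interval_ms(hz):.2f}ms per frame")

# Doubling 60Hz to 120Hz saves 8.33ms per frame, but going from
# 240Hz to 360Hz shaves only about 1.39ms off each interval.
print(f"60 -> 120Hz saves {frame_interval_ms(60) - frame_interval_ms(120):.2f}ms")
print(f"240 -> 360Hz saves {frame_interval_ms(240) - frame_interval_ms(360):.2f}ms")
```

Because the interval is the reciprocal of the refresh rate, each successive jump in hertz buys a smaller absolute improvement in frame time.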

Ultimately, a fast monitor will do you no good if you don't have a graphics card that can keep up. For example, with a 1440p 360Hz monitor, you realistically need a GPU like the RTX 4070 Super or RTX 4080 Super to saturate that display while playing competitive games like Overwatch 2 and Valorant.

There’s also more to motion clarity than refresh rates alone. Just as important are response times, or the amount of time it takes for pixels to transition from one color to another and then back again. Monitors with slow response times tend to produce smearing that is distracting no matter what kind of game you’re playing. Unfortunately, response times are also one of the more opaque aspects of picking the best gaming monitor for your needs.

Many LCD monitor manufacturers claim their products feature 1ms gray-to-gray (GtG) response times, yet they don’t handle motion blur to the same standard. One of the reasons for that is that many companies tend to cherry-pick GtG results that make their monitors look better on paper. The Video Electronics Standards Association (VESA) recently created a new certification program to address that problem, but the grading system is unwieldy and, as far as I can tell, hasn’t had a lot of pickup from manufacturers.

For now, your best bet is to turn to resources like Rtings and Monitors Unboxed when shopping for a new gaming monitor. Both outlets conduct extensive testing of every screen they review and present their findings and recommendations in a way that’s easy to understand.

FreeSync vs G-Sync

No matter how powerful your system, it will sometimes fail to maintain a consistent framerate. In fact, you should expect frame rate fluctuations when playing graphically-intensive games like Alan Wake 2 and Cyberpunk 2077. For those moments, you want a gaming display with adaptive sync. Otherwise, you can run into screen tearing.

Adaptive sync technologies come in a few flavors. The two you’re most likely to encounter are AMD FreeSync and NVIDIA G-Sync, and each has its own set of performance tiers. With G-Sync, for instance, they are – from lowest to highest – G-Sync Compatible, G-Sync and G-Sync Ultimate.

The good news is that you don’t need to think too much about which adaptive sync technology a display supports. In the early days of the tech, it was rare to see a gaming monitor that offered both FreeSync and G-Sync since including the latter meant a manufacturer had to equip their display with a dedicated processor from NVIDIA. That changed in 2019 when the company introduced its G-Sync Compatible certification. In 2024, if a monitor supports FreeSync, it is almost certainly G-Sync Compatible, too, meaning you can enjoy tear-free gaming whether you’re using an AMD or NVIDIA GPU.

In fact, I would go so far as to say you shouldn’t make your purchasing decision based on the level of adaptive sync performance a monitor offers. As of 2024, the list of G-Sync Ultimate-certified displays is about two dozen models long, and some are a few years old now.

The best gaming monitor
Photo by Igor Bonifacic / Engadget

Inputs

Almost every gaming display on the market right now comes with at least one DisplayPort 1.4 connection, and that’s the port you will want to use to connect your new monitor to your graphics card. If you own a PS5 or Xbox Series X/S, it’s also worth looking out for monitors that come with HDMI 2.1 ports, as those will allow you to get the most out of your current generation console.

A word about HDR

As fast and responsive gaming monitors have become in recent years, there’s one area where progress has been frustratingly slow: HDR performance. The majority of gaming monitors currently on sale, including most high-end models, only meet VESA’s DisplayHDR 400 certification. As someone who owned one such monitor, let me tell you it’s not even worth turning on HDR on those screens. You will only be disappointed.

The good news is that things are getting better, albeit slowly. The release of Windows 11 did a lot to improve the state of HDR on PC, and more games are shipping with competent HDR modes, not just ones that increase the brightness of highlights. Thankfully, with more affordable mini-LED monitors, like our top pick, making their way to the market, HDR gaming is finally within reach of most PC gamers.

Gaming monitor FAQs

Are curved monitors better for gaming?

It depends on personal preference. Many manufacturers claim curved monitors offer a more immersive gaming experience due to the way the display wraps around your field of vision. However, I find the edge distortion distracting, particularly when you increase the field of view in a game.

What aspect ratio should I look for in a gaming monitor?

The vast majority of 24-, 27- and 32-inch gaming monitors feature 16:9 aspect ratio panels, and that’s been the case for many years. In fact, nearly every game made in the last two decades supports 16:9 resolutions, such as 1,920 x 1,080 and 2,560 x 1,440, and if you buy a standard-sized monitor, you won’t need to worry about letterboxing.

In the case of ultrawides, 21:9 is the most common aspect ratio, with some very wide models sporting 32:9 panels. Among games, support for 21:9 and 32:9 resolutions is far from universal, so don’t be surprised if a game doesn’t fill the entirety of your screen.

Is OLED good for gaming?

OLED monitors are great for gaming. Not only do they offer excellent motion clarity and input latency, but they’re also easily the best displays for HDR gaming. If money is no object, and you primarily use your PC for gaming, you can’t go wrong with an OLED monitor.

How much does a good gaming monitor cost?

While you could easily spend more than $1,000 to obtain the best gaming monitor on the market now, the reality is that the budget and midrange categories have never been more competitive. In 2015, I spent $500 CAD to buy a 1080p monitor with a 144Hz refresh rate and TN panel. The budget AOC model I highlight above is not only cheaper than my first gaming monitor, but it also features a faster 180Hz refresh rate and a higher contrast VA panel.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/best-gaming-monitor-140008940.html?src=rss

©

© Engadget

The best gaming monitors