
NVIDIA announces new RTX 5090 graphics card that costs $2,000 at CES

7 January 2025 at 09:24

In tandem with (briefly) becoming the most valuable company in the world, NVIDIA announced its new, long-awaited Blackwell family of graphics cards. CEO Jensen Huang took to the CES 2025 stage to detail that the first batch of RTX 50 series GPUs will arrive in January, with pricing starting at $549 for the RTX 5070 and topping out at a whopping $1,999 for the flagship RTX 5090. The mid-tier models are the $749 RTX 5070 Ti and $999 RTX 5080. Laptop versions of the desktop GPUs will be available in March, starting at $1,299 for RTX 5070-equipped laptops.

As for specs, the RTX 5090 Founders Edition will feature 32GB of GDDR7 memory and 21,760 CUDA cores. Depending on the game, NVIDIA says the 5090 will deliver as much as twice the performance of the RTX 4090, with RT-intensive titles like Alan Wake 2 and Cyberpunk 2077 seeing the largest gains. In the latter, for instance, NVIDIA shared a video showing the game running at 242 frames per second on the 5090 compared to a relatively paltry 109 fps on the RTX 4090.

Of course, the performance uplift consumers can expect will depend, in large part, on whether a game supports NVIDIA's new DLSS 4. The tech can generate up to three additional frames for every one frame the GPU renders with traditional rendering techniques. Looking at the performance charts NVIDIA shared, games that are limited to DLSS 3 will see a smaller performance boost. However, the good news is that older RTX GPUs will support DLSS 4, though the tech's killer feature, multi-frame generation, will be exclusive to the company's new 50 series cards. 
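To make the frame generation math concrete, here's a rough back-of-the-envelope sketch. The function name and the example inputs are our own, and it ignores the overhead of actually running frame generation, so treat it as an upper bound rather than a performance claim.

```python
def estimated_output_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Rough upper bound on displayed fps when frame generation inserts
    `generated_per_rendered` AI frames after every traditionally rendered frame."""
    return rendered_fps * (1 + generated_per_rendered)

# Hypothetical example: a GPU rendering 60 fps natively.
# DLSS 3-style single frame generation (one extra frame per rendered frame):
print(estimated_output_fps(60, 1))  # 120.0
# DLSS 4-style multi-frame generation (up to three extra frames per rendered frame):
print(estimated_output_fps(60, 3))  # 240.0
```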

Okay, but what about the RTX 5070, you ask? It will boast 6,144 CUDA cores and 12GB of GDDR7 memory (I can already hear groaning in the comments section). With DLSS 4, NVIDIA claims the 5070 will be as fast as the 4090. Again, it's important to stress those gains will come courtesy of DLSS 4, and rasterization gains, if there are any, will be far more modest. As for the 5070 Ti, the company says it's up to two times faster than the 4070 Ti.    

We knew going into tonight that the 50 series family would almost certainly be power hogs, and that proved to be true. On the top end, NVIDIA recommends a 1,000-watt PSU for the 5090 due to its 575 watts of total graphics power. If there's a silver lining, it's that all the new Founders Edition cards feature two-slot designs.  

| Spec | RTX 5090 | RTX 5080 | RTX 5070 Ti | RTX 5070 | RTX 4090 |
| --- | --- | --- | --- | --- | --- |
| Architecture | Blackwell | Blackwell | Blackwell | Blackwell | Lovelace |
| CUDA cores | 21,760 | 10,752 | 8,960 | 6,144 | 16,384 |
| AI TOPS | 3,352 | 1,801 | 1,406 | 988 | 1,321 |
| Tensor cores | 5th Gen | 5th Gen | 5th Gen | 5th Gen | 4th Gen |
| RT cores | 4th Gen | 4th Gen | 4th Gen | 4th Gen | 3rd Gen |
| VRAM | 32 GB GDDR7 | 16 GB GDDR7 | 16 GB GDDR7 | 12 GB GDDR7 | 24 GB GDDR6X |
| Memory bandwidth | 1,792 GB/sec | 960 GB/sec | 896 GB/sec | 672 GB/sec | 1,008 GB/sec |
| TGP | 575W | 360W | 300W | 250W | 450W |
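As an aside on the memory bandwidth row, the figure is simply the memory bus width multiplied by the per-pin data rate. The sketch below shows the arithmetic; the bus widths and effective memory speeds it uses (512-bit at 28Gbps for the RTX 5090, 384-bit at 21Gbps for the RTX 4090) are our assumptions and do not appear in the table above.

```python
def memory_bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps), divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Assumed configurations, not specs listed in the table:
print(memory_bandwidth_gb_per_s(512, 28))  # 1792.0 GB/s -- lines up with the RTX 5090 row
print(memory_bandwidth_gb_per_s(384, 21))  # 1008.0 GB/s -- lines up with the RTX 4090 row
```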

NVIDIA kicked off the Blackwell portion of its CES presentation with a demo of a next-generation Assassin's Creed game featuring the most realistic ray-traced graphics the series has ever featured. "All of this, with AI, is the house that GeForce built," said Huang, wearing a new snakeskin-like jacket instead of his signature leather jacket. "Now, AI is coming home to GeForce."

Be sure to visit Engadget in the coming days as we'll have more on NVIDIA's new GPUs then. 

This article originally appeared on Engadget at https://www.engadget.com/ai/nvidia-announces-new-rtx-5090-graphics-card-that-costs-2000-at-ces-031133468.html?src=rss

Image: NVIDIA Blackwell family (Credit: NVIDIA)

Anthropic agrees to work with music publishers to prevent copyright infringement

3 January 2025 at 07:47

Anthropic has partly resolved a legal disagreement that saw the AI startup draw the ire of the music industry. In October 2023, a group of music publishers, including Universal Music and ABKCO, filed a copyright infringement complaint against Anthropic. The group alleged that the company had trained its Claude AI model on at least 500 songs to which they held rights and that, when prompted, Claude could reproduce the lyrics of those tracks either partially or in full. The songs the publishers said Anthropic had infringed on include Beyoncé’s “Halo” and Maroon 5’s “Moves Like Jagger.”

In a court-approved stipulation the two sides came to on Thursday, Anthropic agreed to maintain its existing guardrails against outputs that reproduce, distribute or display copyrighted material owned by the publishers, and to implement those same measures when training its future AI models.

At the same time, the company said it would respond “expeditiously” to any copyright concerns from the group and promised to provide written responses detailing how and when it plans to address them. In cases where the company does not intend to address an issue, it must clearly say so.

“Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement," an Anthropic spokesperson told Engadget. "Our decision to enter into this stipulation is consistent with those priorities. We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use."

As mentioned, Thursday’s pact doesn’t fully resolve the original disagreement between Anthropic and the group of music publishers that sued the company. The latter party is still seeking an injunction against Anthropic to prevent it from using unauthorized copies of song lyrics to train future AI models. A ruling on that matter could arrive sometime in the next few months.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-agrees-to-work-with-music-publishers-to-prevent-copyright-infringement-154742806.html?src=rss

Image: Anthropic's Claude AI chatbot (Credit: Anthropic)

The first 27-inch 4K gaming OLED monitor is here courtesy of Samsung

2 January 2025 at 07:51

Ahead of the official start of CES, Samsung has announced a trio of new Odyssey gaming monitors. Of the bunch, the G81SF is the most interesting. Samsung says it’s the first 4K, 27-inch OLED gaming monitor. The panel features a 240Hz refresh rate and 0.03ms gray-to-gray pixel response time. 

At 4K and 27 inches, pixel density clocks in at 165 pixels per inch, meaning the G81SF should produce an incredibly sharp image. As Samsung is the main supplier of QD-OLEDs, the G81SF’s panel will almost certainly make its way to other gaming monitors released this year. With CES 2025 about to kick off, some of those could be announced as early as sometime in the next few days.
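For reference, pixel density is just the diagonal pixel count divided by the diagonal size in inches. The quick sketch below is our own, using a nominal 3,840 x 2,160 resolution and an even 27.0-inch diagonal; it lands at roughly 163 ppi, in the same ballpark as the 165 ppi figure above, which likely assumes a diagonal slightly under 27 inches.

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal resolution in pixels divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# 4K UHD panel at a nominal 27-inch diagonal:
print(round(pixels_per_inch(3840, 2160, 27.0), 1))  # ~163.2 ppi
```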

If you don’t want to sacrifice motion clarity for sharpness, Samsung has you covered there too. The second new Odyssey gaming monitor the company announced, the G60SF, features a 500Hz refresh rate. Resolution is limited to 2,560 x 1,440 on this model, but both the G6 and the G8 detailed above will offer VESA True Black 400-certified HDR performance. That means the G60SF should still look great in single-player games, while its 500Hz refresh rate makes it exceptional for competitive titles like Overwatch 2 and Valorant.

Image: The Odyssey G90XF can produce a 3D image without the need for 3D glasses (Credit: Samsung)

Rounding out the new Odyssey monitors Samsung announced today is something of a curio and a CES throwback. The 27-inch G90XF has a lenticular lens attached to the front of its panel and a stereo camera, meaning you can use it to watch 3D content without wearing 3D glasses. The G90XF includes AI software Samsung says can convert 2D video to 3D, but if we had to guess, the resulting footage won’t look great.

If you primarily use your computer for productivity, Samsung hasn’t forgotten you and the company’s new offerings here aren’t any less interesting. First, there’s the Smart Monitor M9 (M90SF). It features a 32-inch 4K OLED panel that offers True Black 400 HDR performance. It also comes with Samsung’s space-saving Easy Setup Stand, but what separates the M90SF from all the other monitors Samsung announced today are the couple of AI features that come included with it. The first, dubbed AI Picture Optimizer, analyzes the input signal from your PC to automatically adjust the M9’s display settings to produce the best image possible for the content you’re consuming, be that a game, movie or productivity app. The other feature can upscale lower-resolution content to 4K.

Lastly, there’s the ViewFinity S8. It’s not an OLED, but at 37 inches, it’s the largest 16:9 4K monitor Samsung has ever offered. It offers 99 percent sRGB color gamut coverage, a built-in KVM switch and 90W USB-C power delivery. It’s not the most exciting monitor in Samsung’s new lineup, but it should appeal to design professionals who want the biggest possible screen but would rather not deal with the line distortion produced by an ultrawide.

Samsung did not share pricing and availability information for any of the monitors it announced today. Expect those details to come sometime after CES.

This article originally appeared on Engadget at https://www.engadget.com/computing/the-first-27-inch-4k-gaming-oled-monitor-is-here-courtesy-of-samsung-155118244.html?src=rss

Image: Samsung's Odyssey G8 OLED features a 4K, 27-inch OLED panel (Credit: Samsung)

The best SSDs in 2025

31 December 2024 at 12:00

When it comes to boosting your system’s performance, upgrading to one of the fastest SSDs is a no-brainer. Whether you’re building a gaming PC, speeding up an older laptop or simply craving lightning-fast load times, an SSD is the way to go. Unlike traditional hard drives, SSDs rely on NAND flash storage, meaning you get faster data transfer speeds, improved reliability and a more responsive experience overall.

Modern SSDs are quite versatile, catering to everyone from gamers chasing the best performance to those hunting for the best budget options that still deliver solid speed. Pair a quality SSD with a powerful GPU, and you’re set for seamless gaming and multitasking. Plus, with advanced firmware improving efficiency and durability, SSDs are now better than ever at handling high-intensity workloads. In this buying guide, we’ll explore the best SSDs on the market, whether you’re after raw speed, affordability or a combination of the two.

Best SSDs in 2025

How we test SSDs

Every SSD recommended on this list is one I’ve either tested or personally use daily. Out of our top picks, I bought four with my own money after doing about a dozen hours of research. Separately, Engadget Senior Reporter Jeff Dunn has also tested a handful of our recommendations, including the Crucial X9 Pro listed above.

What to look for in a PC SSD

The most affordable way to add fast storage space to a computer is with a 2.5-inch SATA drive. It’s also one of the easiest if you don’t want to worry about compatibility, since almost every computer made in the last two decades will include a motherboard with Serial ATA connections. For that reason, the best SATA SSDs are an excellent choice if you want to extend the life of an older PC build. Installation is straightforward, too. Once you’ve secured the internal SSD in a drive cage, all you need to do is connect it to your motherboard and power supply.

The one downside of SATA drives is that, in terms of responsiveness, they’re slower than their high-performance NVMe counterparts, with SATA III limiting data transfers to 600MB/s. But even the slowest SSD will be significantly faster than the best mechanical drives. And with high-capacity, 1TB SATA SSDs costing about $100, they’re a good bulk-storage option.
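To put those interface ceilings in perspective, the quick calculation below estimates the best-case time to move a large file at SATA III's roughly 600MB/s cap versus the roughly 3,000MB/s a Gen3 NVMe drive can sustain (discussed further down). The 100GB example size is our own.

```python
def copy_time_seconds(size_gb: float, throughput_mb_per_s: float) -> float:
    """Best-case time to move size_gb gigabytes at a sustained sequential rate."""
    return size_gb * 1000 / throughput_mb_per_s

# Hypothetical 100GB game install:
print(round(copy_time_seconds(100, 600)))   # ~167 seconds over SATA III
print(round(copy_time_seconds(100, 3000)))  # ~33 seconds over a Gen3 NVMe drive
```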

If your PC is newer, there’s a good chance it includes space for one or more M.2 SSDs. The form factor represents your ticket to the fastest SSDs on the market, but the tricky part is navigating all the different standards and specs involved.

M.2 drives can feature either a SATA or PCIe connection. SSDs with the latter are known as Non-Volatile Memory Express, or NVMe, drives and are significantly faster than their SATA counterparts, with Gen3 models offering sequential write speeds of up to 3,000MB/s. That speed advantage comes from the PCIe interface and NVMe protocol, which aren't held back by the SATA bottleneck. You can get twice the performance with a Gen4 SSD, but you’ll need a motherboard and processor that support the standard.

If you’re running an AMD system, that means at least a Ryzen 3000 or 5000 CPU and an X570 or B550 motherboard. With Intel, meanwhile, you’ll need at least an 11th or 12th Gen processor and a Z490, Z590 or Z690 motherboard. Keep in mind that Gen4 SSDs typically cost more than their Gen3 counterparts as well.

More expensive still are the latest Gen5 models, which offer sequential read speeds of up to 16,000MB/s. However, even if your computer supports the standard, you’re better off buying a more affordable Gen4 or Gen3 drive. At the moment, very few games and applications can take advantage of Gen3 NVMe speeds, let alone Gen4 and Gen5 speeds. What’s more, Gen5 NVMe drives can run hot, which can lead to performance and longevity issues. Your money is better spent on other components, like upgrading your GPU, for now.

As for why you would buy an M.2 SATA drive over a similarly specced 2.5-inch drive, it comes down to ease of installation. You add M.2 storage to your computer by installing the SSD directly onto the motherboard. That may sound intimidating, but in practice the process involves a single screw that you first remove to connect the drive to your computer and then retighten to secure the SSD in place. As an added bonus, there aren’t any wires involved, making cable management easier.

Note that you can install a SATA M.2 SSD into an M.2 slot with a PCIe connection, but you can’t insert an NVMe M.2 SSD into an M.2 slot with a SATA connection. Unless you want to continue using an old M.2 drive, there’s little reason to take advantage of that feature. Speaking of backward compatibility, it’s also possible to use a Gen4 drive through a PCIe 3 connection, but you won’t get any of the speed benefits of the faster standard.

One last thing to consider is that M.2 drives come in different physical sizes. From shortest to longest, the common options are 2230, 2242, 2260, 2280 and 22110. (The first two numbers represent width in millimeters and the latter denote the length.) For the most part, you don’t have to worry about that since 2280 is the default for many motherboards and manufacturers. Some boards can accommodate more than one size of NVMe SSD thanks to multiple standoffs. That said, check your computer’s documentation before buying a drive to ensure you’re picking up a compatible size.

If you’re buying a replacement SSD for the Steam Deck or Steam Deck OLED, things are less complicated. For Valve’s handheld, you will need a 2230 size NVMe. Simple. If you don’t want to open your Steam Deck, it’s also possible to expand its storage by installing a microSD card. Engadget has a separate guide dedicated to SD card storage, so check that out for additional buying advice.

I alluded to this earlier, but the best buying advice I can offer is don’t get too caught up in being on the bleeding edge of storage tech. The sequential read and write speeds manufacturers list for their drives are theoretical maximums, and real-world performance varies less between drives than you might think.

If your budget forces you to choose between a 1TB Gen3 NVMe and a 512GB Gen4 model, go for the higher-capacity one. From a practical standpoint, the worst thing you can do is buy an SSD that’s too small for your needs. Drives can slow dramatically as they approach capacity, and you will probably end up purchasing one with a larger storage capacity in the future.
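If you'd rather sanity-check a drive's real-world sequential write speed yourself than take the number on the box at face value, a crude script like the one below will do. It's a minimal sketch: the file path, test size and block size are arbitrary, it measures writes only, and results will still vary with the file system, thermal state and how full the drive is.

```python
import os
import time

def rough_sequential_write_mbps(path: str, total_mb: int = 1024, block_mb: int = 16) -> float:
    """Write total_mb of pseudo-random data sequentially and report MB/s.
    os.fsync keeps the OS page cache from flattering the result."""
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    # Point the path at the drive you want to test; the test file is deleted afterward.
    print(f"{rough_sequential_write_mbps('ssd_test.bin'):.0f} MB/s sequential write")
```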

What to look for in portable and USB flash drives

Portable SSDs are a somewhat different beast from their internal siblings. While read and write speeds are important, they are almost secondary to how an external drive connects to your PC. You won’t get the most out of a model like the SanDisk Extreme Pro V2 without a USB 3.2 Gen 2x2 connection. Even among newer PCs, that’s something of a premium feature. For that reason, most people are best off buying a portable drive with a USB 3.2 Gen 2 or Thunderbolt connection. The former offers transfer speeds of up to 10Gbps. The best external drives also allow you to transfer data from your Windows PC to a Mac, or other device, if compatible. Be sure to consider this beforehand if you plan to use your portable drive across multiple devices.

Additionally, if you plan to take your drive on trips and commutes, it’s worthwhile to buy a model with IP-certified water and dust proofing. Some companies like Samsung offer rugged versions of their most popular drives with a high endurance rating. For additional peace of mind, 256-bit AES hardware encryption will help prevent someone from accessing your data if you ever lose or misplace your external SSD.

Some of the same features contribute to a great thumb drive. Our favorite budget external SSD picks feature USB 3.0 connections and some form of hardware encryption.

A note on console storage

Image: Seagate Storage Expansion (Credit: Seagate)

If PC gaming isn’t your thing and you own an Xbox Series X|S or PS5, outfitting your fancy new console with the fastest possible storage is far more straightforward than doing the same on PC. With a Series X or Series S, your options are limited to expansion cards from Seagate and Western Digital. The former offers 512GB, 1TB and 2TB models, with the most affordable starting at a not-so-trivial $90. Western Digital’s Expansion Cards are less expensive, with pricing starting at $80 for the 512GB model. The good news is that both options are frequently on sale. Your best bet is to set an alert for the model you want by using a price tracker like CamelCamelCamel.

With Sony’s PlayStation 5, upgrading the console’s internal storage is slightly more involved. Instead of employing a proprietary solution, the PS5 uses NVMe storage. Thankfully, there aren’t as many potential configurations as you would find on a PC. Engadget maintains a comprehensive guide to the best SSDs for PS5; in short, your best bet is a high-capacity Gen4 drive with a built-in heatsink. Check out that guide for a full list of gaming SSD recommendations, but for a quick go-to, consider the Corsair MP600 Pro LPX I recommend above. It meets all the memory specifications for Sony’s latest console and you won’t run into any clearance issues with the heatsink. Corsair offers 500GB, 1TB, 2TB, 4TB and 8TB versions of the drive. Expect to pay about $110 for the 1TB variant and about $200 for 2TB.

For those still playing on a previous generation console, you can get slightly faster game load times from a PlayStation 4 by swapping the included hard drive to a 2.5-inch SSD, but going out of your way to do so probably isn’t worth it at this point and you’re better off saving your money for one of the new consoles.

SSD FAQs

What size SSD is best?

There is no one-size-fits-all rule for SSDs, but we generally recommend getting at least a 1TB SSD if you’re looking to upgrade PC or game console storage, or looking to add an external drive to your toolkit. A 1TB drive will be plenty for most people who need extra storage space for photos, documents and programs. If you’re a hardcore gamer, you may want to invest in even more storage considering many high-profile titles today can take up a ton of space.

Is a 256GB SSD better than a 1TB hard drive?

The short answer is that it depends on what you need your drive for. In general, SSDs are faster and more efficient than HDDs, but HDDs are usually cheaper. We recommend springing for an SSD for most use cases today — upgrading a PC, saving important photos and documents, storing games long term, etc. But if you’re focused on getting the most amount of extra space possible (and sticking to a budget), an HDD could be a good option for you.

Does bigger SSD mean faster?

Getting a bigger SSD doesn’t always translate into a faster drive overall. A bigger SSD will provide a higher storage capacity, which means more space for storing digital files and programs. To understand how fast an SSD will be, you’ll want to look at its read/write speeds: read speeds measure how fast a drive can access information, while write speeds measure how fast the drive can save information. Most SSDs list their approximate read/write speeds in their specs, so be sure to check out those numbers before you make a purchase.

This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/best-ssds-140014262.html?src=rss

Image: The best SSDs (Credit: Crucial / Engadget)

OpenAI's for-profit plan includes a public benefit corporation

27 December 2024 at 08:36

Following months of speculation, OpenAI has finally shared how it plans to become a for-profit company. In a blog post penned by its board of directors, OpenAI said Thursday it plans to transform its for-profit arm into a Public Benefit Corporation sometime in 2025. PBCs or B Corps are for-profit organizations that attempt to balance the interests of their stakeholders while making a positive impact on society.

“As we enter 2025, we will have to become more than a lab and a startup — we have to become an enduring company,” OpenAI said, adding that many of its competitors are registered as PBCs, including Anthropic and even Elon Musk’s own xAI. “[The move] would enable us to raise the necessary capital with conventional terms like others in this space.”

As part of the transformation, OpenAI’s nonprofit division would retain a stake in the for-profit unit in the form of shares “at a fair valuation determined by independent financial advisors,” but would lose direct oversight of the company. “Our plan would result in one of the best resourced non-profits in history,” claims OpenAI.

Following the reorganization, the for-profit division would be responsible for overseeing OpenAI’s “operations and business,” while the nonprofit arm would operate separately with its own leadership team and a focus on charitable efforts in health care, education and science.

OpenAI did not state whether CEO Sam Altman would receive an equity stake as part of the restructuring. Last year, OpenAI’s board of directors briefly fired Altman before bringing him back, in the process sparking the institutional crisis that led to this week’s announcement. According to some estimates, OpenAI’s for-profit arm could be worth as much as $150 billion. In 2019, OpenAI estimated it would need to raise at least $10 billion to build artificial general intelligence. In October, the company secured $6 billion in new funding.

“The hundreds⁠ of billions⁠ of⁠ dollars that major companies are now investing into AI development show what it will really take for OpenAI to continue pursuing the mission,” OpenAI said. “We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”

Despite this week’s announcement, OpenAI is likely to face multiple roadblocks in implementing its plan. In addition to its ongoing legal feud with Elon Musk, Meta recently sent a letter to California’s attorney general urging him to stop OpenAI from converting to a for-profit company, saying the move would be “wrong” and “could lead to a proliferation of similar start-up ventures that are notionally charitable until they are potentially profitable.”

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-for-profit-plan-includes-a-public-benefit-corporation-163634265.html?src=rss

Image: OpenAI logo is seen in an illustration taken May 20, 2024 (Credit: Reuters/Dado Ruvic)

How to use ChatGPT on your iPhone

26 December 2024 at 11:03

Since the release of iOS 18.2 on December 11, ChatGPT integration has been an integral part of Apple Intelligence. Provided you own a recent iPhone, iPad or Mac, you can access OpenAI’s chatbot directly from your device, with no need to go through the ChatGPT app or web client.

What is ChatGPT?

ChatGPT is a generative AI chatbot created by OpenAI and powered by a large language model. In addition to interacting with people using natural language, ChatGPT can search the web, solve complex math and coding problems, and generate text, images and audio. As of the writing of this article, the current version of ChatGPT is based on OpenAI’s GPT-4o and GPT-4o mini models.

In June 2024, Apple announced it was partnering with OpenAI to integrate ChatGPT into Apple Intelligence. While some of ChatGPT’s signature features are available directly within iOS, iPadOS and macOS, many, such as Advanced Voice Mode, can only be accessed through the ChatGPT app or the OpenAI website.

Where can you use ChatGPT on your iPhone?

Image: A screenshot of iOS 18.2's Writing Tools menu (Credit: Igor Bonifacic for Engadget)

On iPhone, ChatGPT is primarily available through three surfaces. First, Siri can turn to ChatGPT to answer your questions. In instances where Apple’s digital assistant determines ChatGPT can help it assist you better, it will ask for your permission to share your request with OpenAI. You can also use ChatGPT to identify places and objects through the iPhone 16’s Camera Control menu.

Lastly, you can get ChatGPT's help when using Apple's new “Writing Tools.” Essentially, anytime you’re typing with the iPhone’s built-in keyboard, including in first-party apps like Notes, Mail and Messages, ChatGPT can help you compose text. Finding this feature can be a bit tricky, so here's how to access it: 

  1. Long press on a section of text to bring up iOS 18's text selection tool.   

  2. Tap Writing Tools. You may need to tap the arrow icon for the option to appear. 

  3. Select Compose.

  4. Tap Compose with ChatGPT, and write a prompt describing what you'd like ChatGPT to write for you.    

Do you need an OpenAI account to use ChatGPT on an iPhone?

No, an OpenAI account is not required to use ChatGPT on iPhone. However, if you have a paid subscription, you can use ChatGPT features on your device more often. Signing into your account will also save any requests to your ChatGPT history.

How to set up ChatGPT

If your iPhone hasn’t prompted you to enable ChatGPT already, you can manually turn on the extension by following these steps:

  1. Go to Settings.

  2. Tap Apple Intelligence & Siri.

  3. Tap ChatGPT, then select Set Up.

  4. Tap either Enable ChatGPT or Use ChatGPT with an Account. Select the latter if you have an OpenAI account.

What Apple devices offer ChatGPT integration?

An iPhone with Apple Intelligence is required to use ChatGPT. As of the writing of this article, Apple Intelligence is available on the following devices:

  • iPhone 16

  • iPhone 16 Plus

  • iPhone 16 Pro

  • iPhone 16 Pro Max

  • iPhone 15 Pro

  • iPhone 15 Pro Max

  • iPad mini with A17 Pro

  • iPad Air with M1 and later

  • iPad Pro with M1 and later

  • MacBook Air with M1 and later

  • MacBook Pro with M1 and later

  • iMac with M1 and later

  • Mac mini with M1 and later

  • Mac Studio with M1 Max and later

  • Mac Pro with M2 Ultra

This article originally appeared on Engadget at https://www.engadget.com/mobile/how-to-use-chatgpt-on-your-iphone-190317336.html?src=rss

Image: The new Writing Tools menu in macOS (Credit: Apple)

Google's Gemini Deep Research tool is now available globally

20 December 2024 at 13:01

A little more than a week after announcing Gemini Deep Research, Google is making the tool available to more people. As of today, the feature, part of the company’s paid Gemini Advanced suite, is available in every country and language where Google offers Gemini. In practice, that means Gemini Advanced users in more than 100 countries globally can start using Deep Research right now. Previously, it was only available in English.

As a refresher, Deep Research takes advantage of Gemini 1.5 Pro’s ability to reason through “long context windows” to create comprehensive but easy-to-read reports on complex topics. Once you provide the tool a prompt, it will generate a research plan for you to approve and tweak as you see fit. After it has your go-ahead, Gemini 1.5 Pro will search the open web for information related to your query. That process can sometimes take several minutes, but once Gemini is done, you’ll have a multi-page report you can export to Google Docs for later viewing.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-now-available-globally-210151873.html?src=rss

Image: Gemini Deep Research (Credit: Google)

OpenAI's next-generation o3 model will arrive early next year

20 December 2024 at 11:17

After nearly two weeks of announcements, OpenAI capped off its 12 Days of OpenAI livestream series with a preview of its next-generation frontier model. “Out of respect for friends at Telefónica (owner of the O2 cellular network in Europe), and in the grand tradition of OpenAI being really, truly bad at names, it’s called o3,” OpenAI CEO Sam Altman told those watching the announcement on YouTube.

The new model isn’t ready for public use just yet. Instead, OpenAI is first making o3 available to researchers who want help with safety testing. OpenAI also announced the existence of o3-mini. Altman said the company plans to launch that model “around the end of January,” with o3 following “shortly after that.”

As you might expect, o3 offers improved performance over its predecessor, and just how much better it is than o1 is the headline here. For example, when put through this year's American Invitational Mathematics Examination, o3 achieved an accuracy score of 96.7 percent. By contrast, o1 earned a more modest 83.3 percent rating. “What this signifies is that o3 often misses just one question,” said Mark Chen, senior vice president of research at OpenAI. In fact, o3 did so well on the usual suite of benchmarks OpenAI puts its models through that the company had to find more challenging tests to benchmark it against.

Image: An ARC AGI test (Credit: ARC AGI)

One of those is ARC-AGI, a benchmark that tests an AI algorithm's ability to intuit and learn on the spot. According to the test's creator, the non-profit ARC Prize, an AI system that could successfully beat ARC-AGI would represent "an important milestone toward artificial general intelligence." Since its debut in 2019, no AI model has beaten ARC-AGI. The test consists of input-output questions that most people can figure out intuitively. For instance, in the example above, the correct answer would be to create squares out of the four polyominos using dark blue blocks.

On its low-compute setting, o3 scored 75.7 percent on the test. With additional processing power, the model achieved a rating of 87.5 percent. "Human performance is comparable at 85 percent threshold, so being above this is a major milestone," according to Greg Kamradt, president of ARC Prize Foundation.

Image: A graph comparing o3-mini's performance against o1, and the cost of that performance (Credit: OpenAI)

OpenAI also showed off o3-mini. The new model uses OpenAI's recently announced Adaptive Thinking Time API to offer three different reasoning modes: Low, Medium and High. In practice, this allows users to adjust how long the software "thinks" about a problem before delivering an answer. As you can see from the above graph, o3-mini can achieve results comparable to OpenAI's current o1 reasoning model, but at a fraction of the compute cost. As mentioned, o3-mini will arrive for public use ahead of o3. 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-next-generation-o3-model-will-arrive-early-next-year-191707632.html?src=rss

Image: OpenAI logo is seen in an illustration taken May 20, 2024 (Credit: Reuters/Dado Ruvic)

OpenAI brings ChatGPT to WhatsApp

18 December 2024 at 10:46

ChatGPT is now available on WhatsApp. Starting today, if you add 1 (800) CHAT-GPT to your contacts — that's 1 (800) 242-8478 — you can start using the chatbot over Meta's messaging app. In this iteration, ChatGPT is limited to text-only input, so there's no Advanced Voice Mode or visual input on offer, but you still get all the smarts of the o1-mini model.

What's more, over WhatsApp, ChatGPT is available everywhere OpenAI offers its chatbot, with no account necessary. OpenAI is working on a way to authenticate existing users over WhatsApp, though the company did not share a timeline for when that feature might launch. It's worth noting Meta offers its own chatbot in WhatsApp.

Separately, OpenAI is launching a ChatGPT hotline in the US. Once again, the number for that is 1 (800) 242-8478. As you can probably imagine, the toll-free number works with any phone, be it a smartphone or an old flip phone. OpenAI will offer 15 minutes of free ChatGPT usage through the hotline, though you can log into your account to get more time.

"We’re only just getting started on making ChatGPT more accessible to everyone," said Kevin Weil, chief product officer at OpenAI, during the company's most recent 12 Days of OpenAI livestream. According to Weil, the two features were born from a recent hack week the company held. Other recent livestreams have seen OpenAI make ChatGPT Search available to all free users and bring its Sora video generation out of private preview.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-brings-chatgpt-to-whatsapp-184653703.html?src=rss

Image: Three OpenAI employees demo ChatGPT's WhatsApp integration (Credit: OpenAI)

The best cheap phones for 2025

It may be fashionable to spend $1,000 on the latest flagship smartphone when it first gets released, but it's not necessary. You don't even have to spend $500 today to get a decent handset, whether it’s a refurbished iPhone or an affordable Android phone, as there are plenty of options as low as $160 that could fit your needs.

But navigating the budget phone market can be tricky; options that look good on paper may not be in practice, and some handsets will end up costing you more when you consider many come with restrictive storage. While we at Engadget spend most of our time testing and reviewing mid- to high-end handsets, we've tested a number of the latest budget-friendly phones on the market to see which are actually worth your money.

What to look for in a cheap phone

For this guide, our top picks cost between $100 and $300. Anything less and you might as well go buy a dumb phone or high-end calculator instead. Since they’re meant to be more affordable than flagship phones and even midrange handsets, budget smartphones involve compromises; the cheaper a device, the lower your expectations around specs, performance and experience should be. For that reason, the best advice I can give is to spend as much as you can afford. In this price range, even $50 or $100 more can get you a dramatically better product.

Second, you should know what you want most from a phone. When buying a budget smartphone, you may need to sacrifice a decent main camera for long battery life, or trade a high-resolution display for a faster CPU. That’s just what comes with the territory, but knowing your priorities will make it easier to find the right phone.

It’s also worth noting some features can be hard to find on cheap handsets. For instance, you won’t need to search far for a device with all-day battery life. But if you want excellent camera quality, you’re better off shelling out for one of the recommendations in our midrange smartphone guide, which all come in at $600 or less. Wireless charging and waterproofing also aren’t easy to find in this price range, and forget about the fastest chipsets. On the bright side, all our recommendations come with headphone jacks, so you won’t need to get wireless headphones.

iOS is also off the table, since the $400 Apple iPhone SE is the most affordable iPhone in the lineup. That leaves Android as the only option. Thankfully, there’s little to complain about with Google’s OS these days, and you may even prefer it to iOS. Lastly, keep in mind most Android manufacturers offer far less robust software features and support for their budget devices. In some cases, your new phone may only receive one major software update and a year or two of security patches beyond that. That applies to the OnePlus and Motorola recommendations on our list.

If you’d like to keep your phone for as long as possible, Samsung has the best software policy of any Android manufacturer in the budget space, offering four years of security updates on all of its devices. That said, if software support (or device longevity overall) is your main focus, consider spending a bit more on the $500 Google Pixel 7a, which is our favorite midrange smartphone and has planned software updates through mid-2026.

Best cheap phones

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/best-cheap-phones-130017793.html?src=rss

Image: The best cheap phones (Credit: Engadget)

ChatGPT Search is now available to everyone

16 December 2024 at 10:44

If you've been waiting patiently to try ChatGPT Search, you won't have to wait much longer. After rolling out to paid subscribers this fall, OpenAI announced Monday it's making the tool available to everyone, no Plus or Pro membership necessary.

Now, all you need to start using ChatGPT Search is an OpenAI account. Once you're logged in, and if your query calls for it, ChatGPT will automatically search the web for the latest information to answer your question. You can also force it to search the web, thanks to a handy new icon located right in the prompt bar. OpenAI has also added the option to make ChatGPT Search your browser's default search engine.

At the same time, OpenAI is integrating ChatGPT Search with Advanced Voice Mode. As you might have guessed, that allows ChatGPT's audio persona to search the web for answers to your questions and deliver them in a natural, conversational way. For example, say you're traveling to a different city for vacation. You could ask ChatGPT what the weather will be like once you arrive, and with Search built in, the chatbot can answer with the most up-to-date information.

To facilitate this functionality, OpenAI says it has partnered with leading news and data providers. As a result, you'll also see widgets for stocks, sports scores, the weather and more. Basically, ChatGPT Search is becoming a full-fledged Google competitor before our eyes.     

OpenAI announced the expanded availability during its most recent "12 Days of OpenAI" livestream. In previous live streams, the company announced the general availability of Sora and ChatGPT Pro, a new $200 subscription for its chatbot. With four more days to go, it's hard to see the company topping that announcement, but at this point, OpenAI likely has a surprise or two up its sleeve.

Correction 12/17/2024: A previous version of this article incorrectly stated OpenAI would roll out ChatGPT Search "over the coming months." The tool is now available to all logged-in users. We regret the error.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-is-getting-ready-to-roll-its-search-tool-out-to-everyone-184442971.html?src=rss

Image: A mouse pointer hovers over the "Search" icon in ChatGPT's prompt bar (Credit: OpenAI)

Google's new AI video model sucks less at physics

16 December 2024 at 09:00

Google may have only recently begun rolling out its Veo generative AI to enterprise customers, but the company is not wasting any time getting a new version of the video tool out to early testers. On Monday, Google announced a preview of Veo 2. According to the company, Veo 2 “understands the language of cinematography.” In practice, that means you can reference a specific genre of film, cinematic effect or lens when prompting the model.

Additionally, Google says the new model has a better understanding of real-world physics and human movement. Correctly modeling humans in motion is something all generative models struggle to do. So the company’s claim that Veo 2 is better when it comes to both of those trouble points is notable. Of course, the samples the company provided aren’t enough to know for sure; the true test of Veo 2’s capabilities will come when someone prompts it to generate a video of a gymnast's routine. Oh, and speaking of things video models struggle with, Google says Veo will produce artifacts like extra fingers “less frequently.”

Image: A sample image of a squirrel Google's Imagen 3 generated (Credit: Google)

Separately, Google is rolling out improvements to Imagen 3. Of its text-to-image model, the company says the latest version generates brighter and better-composed images. Additionally, it can render more diverse art styles with greater accuracy. At the same time, it’s also better at following prompts more faithfully. Prompt adherence was an issue I highlighted when the company made Imagen 3 available to Google Cloud customers earlier this month, so if nothing else, Google is aware of the areas where its AI models need work.

Veo 2 will gradually roll out to Google Labs users in the US. For now, Google will limit testers to generating up to eight seconds of footage at 720p. For context, Sora can generate up to 20 seconds of 1080p footage, though doing so requires a $200 per month ChatGPT Pro subscription. As for the latest enhancements to Imagen 3, those are available to Google Labs users in more than 100 countries through ImageFX.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-new-ai-video-model-sucks-less-at-physics-170041204.html?src=rss

Image: Google's Veo 2 generated a Pixar-like animation of a young girl in a kitchen (Credit: Google)

Ragebound is a new Ninja Gaiden game from the team behind Blasphemous

12 December 2024 at 17:56

Resurrecting a beloved gaming series like Ninja Gaiden is always a tricky proposition. Anyone who might have worked on the franchise in its heyday has likely moved on to other projects or left the industry entirely. But judging by the talent working on Ninja Gaiden Ragebound, the new series entry revealed at the Game Awards, it's safe to say the franchise is in good hands. That's because Ragebound unites two companies who know a thing or two about making quality games.

The Game Kitchen — the Spanish studio behind Blasphemous and its excellent sequel, Blasphemous 2 — is developing the game, with Dotemu (Teenage Mutant Ninja Turtles: Shredder’s Revenge and Streets of Rage 4) on publishing duties. 

Right away, you can see the influence of The Game Kitchen. The studio's signature pixel art style looks gorgeous in the back half of the reveal trailer, and it looks like the game will reward tight, coordinated play from players. As for the story, it's set during the events of the NES version of Ninja Gaiden and stars a new protagonist, Kenji Mozu. It's up to him to save Hayabusa Village while Ryu is away in America.  

Ninja Gaiden Ragebound will arrive in the summer of 2025 on PC, Nintendo Switch, PlayStation 5, PlayStation 4, Xbox Series X/S and Xbox One.  

This article originally appeared on Engadget at https://www.engadget.com/gaming/ragebound-is-a-new-ninja-gaiden-game-from-the-team-behind-blasphemous-015621718.html?src=rss

Image: Ninja Gaiden Ragebound protagonist Kenji Mozu faces off against a winged demon in a burning dojo (Credit: The Game Kitchen)

The first Witcher 4 trailer sees Ciri kicking butt

12 December 2024 at 17:41

Well, let's be honest: I don't think any of us expected to see CD Projekt Red preview The Witcher 4 anytime soon, and yet the studio did just that, sharing a lengthy cinematic trailer for the upcoming sequel at the Game Awards. Even if there's no gameplay footage to be found, fans of the series will love what they see. 

That's because the clip reveals the protagonist of the game, and it's none other than Ciri, the adopted daughter of everyone's favorite witcher, Geralt of Rivia. Thematically, the clip is similar to The Witcher 3's excellent "Killing Monsters" trailer. Ciri arrives to save a young woman from a horrible monster, only for human ignorance and superstition to undo her good deed.

CD Projekt did not share a release date for The Witcher 4, nor did the studio say what platforms the game would arrive on. However, it has previously said it was making the game in Unreal Engine 5, and if you look hard enough, a footnote at the bottom says the trailer was pre-rendered in UE5 on an unannounced "NVIDIA GeForce RTX GPU." 

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-first-witcher-4-trailer-sees-ciri-kicking-butt-014137326.html?src=rss

Image: Ciri walks toward a campfire (Credit: CD Projekt Red)

MasterClass On Call gives you on-demand access to AI facsimiles of its experts

11 December 2024 at 13:50

MasterClass is expanding beyond pre-recorded video lessons to offer on-demand mentorship from some of its most popular celebrity instructors. And if you’re wondering how the company has gotten some of the busiest people on the planet to field your questions, guess what? The answer is generative AI.

On Wednesday, MasterClass debuted On Call, a new web and iOS app that allows people to talk with AI versions of its instructors. As of today, On Call is limited to two personas representing the expertise of former FBI hostage negotiator Chris Voss and UC Berkeley neuroscientist Dr. Matt Walker. In the future, MasterClass says it will offer many more personas, with Gordon Ramsay, Mark Cuban, Bill Nye and LeVar Burton among some of the more notable experts sharing their voices and knowledge in this way.

“This isn’t just another generic AI chatbot pulling data from the internet,” David Rogier, the CEO of MasterClass, said on X. “We’ve built this with our experts — training the AI on proprietary data sets (e.g. unpublished notes, private research, their lessons, emails, [and] expertise they’ve never shared before).”

Per Inc., MasterClass signed deals with each On Call instructor to license their voice and expertise. Judging from the sample voice clips MasterClass has up on its website, the interactions aren’t as polished as the one shown in the ad the company shared on social media. In particular, the “voice” of Chris Voss sounds robotic and not natural at all. On Call is also a standalone product with a separate subscription from the company’s regular offering. To use On Call, users will need to pay $10 per month or $84 annually.

This article originally appeared on Engadget at https://www.engadget.com/ai/masterclass-on-call-gives-you-on-demand-access-to-ai-facsimiles-of-its-experts-215022938.html?src=rss

Image: A screenshot of MasterClass' new On Call app (Credit: MasterClass)

Jarvis, Google's web-browsing AI, is now officially known as Project Mariner

11 December 2024 at 11:16

Earlier today, Google debuted Gemini 2.0. The company says its new machine learning model won’t just enhance its existing products and services. It will also power entirely new experiences. To that point, Google previewed Project Mariner, an AI agent that can navigate within a web browser. Mariner is an experimental Chrome extension that is currently available to select “trusted testers.”

As you can see from the video Google shared, the pitch for Mariner is a tool that can automate certain rote tasks. In the demo, Mariner assists Google’s Jaclyn Konzelmann with finding the contact information of four outdoor companies.

Clearly, there’s more work Google needs to do before the software is ready for public use. Notice that Konzelmann is very specific when prompting Mariner, instructing the agent to “memorize” and “remember” parts of her instructions. It also takes Mariner close to 12 minutes to complete the task given to it.

“As a research prototype, it’s able to understand and reason across information in your browser screen, including pixels and web elements like text, code, images and forms,” Google says of Mariner.

If Project Mariner sounds familiar, it’s because The Information reported in October that Google was working on something called Project Jarvis. The publication described it as a “computer-using agent” that Google designed to assist with tasks like booking flights. In November, an early version of Jarvis was briefly available on the Chrome Web Store. A person familiar with the matter confirmed that Jarvis and Mariner are the same project.

The confirmation of Mariner’s existence comes after Anthropic introduced a similar but more expansive feature for its Claude AI, which the company says can “use a wide range of standard tools and software programs designed for people.” That tool is currently available in public beta.

This article originally appeared on Engadget at https://www.engadget.com/ai/jarvis-googles-web-browsing-ai-is-now-officially-known-as-project-mariner-191603929.html?src=rss

Image: Project Mariner is an experimental Chrome extension that can help you navigate websites (Credit: Google)

Gemini 2.0 is Google's most capable AI model yet and available to preview today

11 December 2024 at 09:03

The battle for AI supremacy is heating up. Almost exactly a week after OpenAI made its o1 model available to the public, Google today is offering a preview of its next-generation Gemini 2.0 model. In a blog post attributed to Google CEO Sundar Pichai, the company says 2.0 is its most capable model yet, with the algorithm offering native support for image and audio output. “It will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” says Pichai.

Google is doing something different with Gemini 2.0. Rather than starting today’s preview by first offering its most advanced version of the model, Gemini 2.0 Pro, the search giant is instead kicking things off with 2.0 Flash. As of today, the more efficient (and affordable) model is available to all Gemini users. If you want to try it yourself, you can enable Gemini 2.0 from the dropdown menu in the Gemini web client, with availability within the mobile app coming soon.

Moving forward, Google says its main focus is adding 2.0’s smarts to Search (no surprise there), beginning with AI Overviews. According to the company, the new model will allow the feature to tackle more complex and involved questions, including ones involving multi-step math and coding problems. At the same time, following a broad expansion in October, Google plans to make AI Overviews available in more languages and countries.

Looking forward, Gemini 2.0 is already powering enhancements to some of Google’s more moonshot AI applications, including Project Astra, the multi-modal AI agent the company previewed at I/O 2024. Thanks to the new model, Google says the latest version of Astra can converse in multiple languages and even switch between them on the fly. It can also “remember” things for longer, offers improved latency, and can access tools like Google Lens and Maps.

As you might expect, Gemini 2.0 Flash offers significantly better performance than its predecessor. For instance, it earned a 63 percent score on HiddenMath, a benchmark that tests the ability of AI models to complete competition-level math problems. By contrast, Gemini 1.5 Flash earned a score of 47.2 percent on that same test. But the more interesting thing here is that the experimental version of Gemini 2.0 even beats Gemini 1.5 Pro in many areas; in fact, according to data Google shared, the only domains where it lags behind are in long-context understanding and automatic speech translation.

It’s for that reason that Google is keeping the older model around, at least for a little while longer. Alongside today's announcement of Gemini 2.0, the company also debuted Deep Research, a new tool that uses Gemini 1.5 Pro's long-context capabilities to write comprehensive reports on complicated subjects.  

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-20-is-googles-most-capable-ai-model-yet-and-available-to-preview-today-170329180.html?src=rss

Image: A graphic of Google's Gemini 2.0 model (Credit: Google)

Google's Gemini Deep Research tool is here to answer your most complicated questions

11 December 2024 at 07:43

When Google debuted Gemini 1.5 Pro in February, the company touted the model’s ability to reason through what it called “long context windows.” It said, for example, the algorithm could provide details about a 402-page Apollo 11 mission transcript. Now, Google is giving people a practical way to take advantage of those capabilities with a tool called Deep Research. Starting today, Gemini Advanced users can use Deep Research to create comprehensive but easy-to-read reports on complex topics.

Aarush Selvan, a senior product manager on the Gemini team, gave Engadget a preview of the tool. At first glance, it looks to work like any other AI chatbot. All interactions start with a prompt. In the demo I saw, Selvan asked Gemini to help him find scholarship programs for students who want to enter public service after school. But things diverge from there. Before answering a query, Gemini first produces a multi-step research plan for the user to approve.

For example, say you want Gemini to provide you with a report on heat pumps. In the planning stage, you could tell the AI agent to prioritize information on government rebates and subsidies or omit those details altogether. Once you give Gemini the go-ahead, it will then scour the open web for information related to your query. This process can take a few minutes. In user testing, Selvan said Google found most people were happy to wait for Gemini to do its thing since the reports the agent produces through Deep Research are so detailed.

In the example of the scholarship question, the tool produced a multi-page report complete with charts. Throughout, there were citations with links to all of the sources Gemini used. I didn’t get a chance to read over the reports in detail, but they appeared to be more accurate than some of Google’s less-than-helpful AI Overviews.

According to Selvan, Deep Research uses some of the same signals Google Search does to determine authority. That said, sourcing is definitely “a product of the query.” The more complicated a question you ask of the agent, the more likely it is to produce a useful answer since its research is bound to lead it to more authoritative sources. You can export a report to Google Docs once you're happy with Gemini's work.
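To make the prompt, plan, approve, research and report flow described above concrete, here's a minimal conceptual sketch of that kind of agent loop. It is our illustration of the workflow, not Google's implementation; the class, its methods and the placeholder strings are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchSession:
    """Conceptual sketch of the plan -> approve -> search -> report flow described above."""
    prompt: str
    plan: list[str] = field(default_factory=list)
    findings: dict[str, str] = field(default_factory=dict)

    def propose_plan(self) -> list[str]:
        # Placeholder: a real system would ask the model to draft research steps.
        self.plan = [f"Step {i}: investigate '{self.prompt}' (placeholder)" for i in (1, 2, 3)]
        return self.plan

    def run(self, approved_plan: list[str]) -> str:
        # Placeholder web research: a real agent would browse the open web for each step.
        for step in approved_plan:
            self.findings[step] = f"(summary of sources found for: {step})"
        # Placeholder report assembly, one section per approved step.
        return "\n".join(f"{step}\n  {summary}" for step, summary in self.findings.items())

session = ResearchSession("scholarships for students entering public service")
plan = session.propose_plan()  # the user reviews and can tweak this plan before approving it
print(session.run(plan))       # multi-section "report", which could then be exported elsewhere
```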

If you want to try Deep Research for yourself, you’ll need to sign up for Google’s One AI Premium Plan, which includes access to Gemini Advanced. The plan costs $20 per month following a one-month free trial. It's also only available in English at the moment. 

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-deep-research-tool-is-here-to-answer-your-most-complicated-questions-154354424.html?src=rss

Image: A bearded man looks at a tablet, examining a series of charts (Credit: Google)

iOS 18.2 is here with Apple Intelligence image generation features in tow

11 December 2024 at 05:00

Apple has begun rolling out iOS 18.2 and iPadOS 18.2 to iPhones and iPads. The updates bring with them major enhancements to the company’s suite of AI features, and are likely the final software releases Apple has planned for 2024. More Apple Intelligence features are available through macOS 15.2. However, note that access to all of the AI features mentioned below is limited to users in the US, Australia, Canada, New Zealand, South Africa and the UK for now, with support additionally limited to devices with their language set to English.

Provided you own an iPhone 15 Pro, 16 or 16 Pro, one of the highlights of iOS 18.2 is Image Playground, which is available both as a standalone app and Messages extension. If you go through the latter, the software will generate image suggestions based on the contents of your conversations. Naturally, you can also write your own prompts. It’s also possible to use a photo from your iPhone’s camera roll as a starting point. However, one limitation of Image Playground is that it can’t produce photorealistic images of people. That’s by design so that the resulting images don’t cause confusion. You can also import any pictures you generate with Image Playground to Freeform, Pages and Keynote.  

Another new feature, Genmoji, allows you to create custom emoji. From your iPhone’s emoji keyboard, tap the new Genmoji button and then enter a description of the character you want to make. Apple Intelligence will generate a few different options, which you can swipe through to select the one you want to send. It’s also possible to use pictures of your friends as the starting point for a Genmoji.

The new update also brings enhancements to Siri and Writing Tools, both of which can now call on ChatGPT for assistance. For example, if you ask the digital assistant to create an itinerary or workout plan for you, it will ask for your permission to use ChatGPT to complete the task. You don’t need a ChatGPT account to use the chatbot in this way, though based on information from the iOS 18.2 beta, there will be a daily limit on how many queries iPhone users can send through to OpenAI’s servers.

Those are just some of the more notable Apple Intelligence features arriving with iOS 18.2 and iPadOS 18.2. If you don’t own a recent iPhone or iPad, the good news is that both releases offer more than just new AI tools. One nifty addition is the inclusion of new AirTag features that allow you to share the location of your lost item trackers with friends and airlines. If you’re a News+ subscriber, you also get access to daily Sudoku puzzles. Also returning in iOS 18.2 is a feature Apple removed with iOS 16: a menu item in the Settings app that lets you add volume controls to the lock screen.

If you don’t see a notification to download iOS 18.2 on your iPhone and iPadOS 18.2 on your iPad, you can manually check for the updates by opening the Settings app on your device and navigating to “General,” then “Software Update.” The same goes for macOS — just open the System Settings app, navigate to "Software Update" and start the download. 

If you live outside of one of the countries mentioned at the top, support for additional countries and languages, including Chinese, French, German, Italian, Japanese, Korean, Spanish and Portuguese, will roll out throughout next year, with an initial update slated for April. 

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/ios-182-is-here-with-apple-intelligence-image-generation-features-in-tow-130029173.html?src=rss

Image: Renders of two iPhones against a white background; one displays a yellow smiley face emoji with cucumber slices on its eyes, while the other shows an AI-generated image of a cat wearing a chef hat (Credit: Apple)

Sony's WH-1000XM5 headphones are back on sale for $100 off

10 December 2024 at 09:45

The Thanksgiving holiday might have come and gone, but one of the best pairs of wireless headphones you can buy right now is back to its Black Friday price. Amazon has discounted Sony’s excellent WH-1000XM5 headphones. All four colorways — black, midnight blue, silver and smoky pink — are currently $298, or 25 percent off their usual $400 price.

At this point, the WH-1000XM5 likely need no introduction, but for the uninitiated, they strike a nearly perfect balance between features, performance and price; in fact, they’re the Bluetooth headphones Billy Steele, Engadget’s resident audio guru, recommends for most people.

With the WH-1000XM5, Sony redesigned its already excellent 1000X line to make the new model more comfortable. The company also improved on the XM4’s superb active noise cancelation, adding four additional ANC mics. That enhancement makes the XM5 even better at blocking out background noise, including human voices.

Other notable features include 30-hour battery life, clear and crisp sound and a combination of handy physical and touch controls. The one downside of the XM5 is that they cost more than Sony’s previous flagship Bluetooth headphones. Thankfully, this sale addresses that fault nicely.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/sonys-wh-1000xm5-headphones-are-back-on-sale-for-100-off-174551119.html?src=rss

Image: WH-1000XM5 review (Credit: Billy Steele / Engadget)