In high school, as in tech, superlatives are important. Or maybe they just feel important in the moment. With the breakneck pace of the AI computing infrastructure buildout, it's becoming increasingly difficult to keep track of who has the biggest, fastest, or most powerful supercomputer, especially when multiple companies claim the title at once.
"We delivered the world's largest and fastest AI supercomputer, scaling up to 65,000 Nvidia H200 GPUs," Oracle CEO Safra Catz and Chairman, CTO, echoed by Founder Larry Ellison on the company's Monday earnings call.
In late October, Nvidia proclaimed xAI's Colossus the "World's Largest AI Supercomputer" after Elon Musk's firm reportedly built a computing cluster with 100,000 Nvidia graphics processing units in a matter of weeks. The plan is to expand to 1 million GPUs next, according to the Greater Memphis Chamber of Commerce (the supercomputer is located in Memphis).
It used to be simpler. "Supercomputers" were most commonly found in research settings, and naturally there's an official list ranking them. Until recently, the world's most powerful supercomputer was named El Capitan. Housed at the Lawrence Livermore National Laboratory in California, its 11 million CPUs and GPUs from Nvidia rival AMD add up to 1.742 exaflops of computing capacity. (One exaflop is equal to one quintillion, or a billion billion, operations per second.)
"The biggest computers don't get put on the list," Dylan Patel, chief analyst at Semianalysis, told BI. "Your competitor shouldn't know exactly what you have," he continued. The 65,000-GPU supercluster Oracle executives were praising can reach up to 65 exaflops, according to the company.
It's safe to assume, Patel said, that Nvidia's largest customers (Meta, Microsoft, and xAI) also have the largest, most powerful clusters. On Nvidia's May earnings call, CFO Colette Kress said 200 fresh exaflops of Nvidia computing, spread across nine different supercomputers, would be online by the end of this year.
Going forward, it's going to be harder to determine whose clusters are the biggest at any given moment, and even harder to tell whose are the most powerful, no matter how much CEOs may brag.
On Monday's call, Ellison was asked whether the size of these gigantic clusters is actually generating better model performance.
He said larger clusters and faster GPUs are elements that speed up model training. Another is networking it all together. "So the GPU clusters aren't sitting there waiting for the data," Ellison said Monday.
Thus, the number of GPUs in a cluster isn't the only factor in the computing power calculation. Networking and programming matter too. "Exaflops" are a result of the whole package, so unless companies provide the figures, experts can only estimate.
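As a rough, back-of-the-envelope illustration (the per-GPU throughput and utilization figures below are assumptions for the sketch, not numbers published by Oracle or Nvidia), here is how a GPU count translates into a peak exaflops estimate:

```python
# Back-of-the-envelope peak-compute estimate for a GPU cluster.
# Per-GPU throughput varies widely with numeric precision, and real
# sustained performance depends on networking and software, so the
# inputs below are illustrative assumptions only.

def estimated_exaflops(num_gpus: int,
                       petaflops_per_gpu: float,
                       utilization: float = 1.0) -> float:
    """Estimate cluster throughput in exaflops.

    num_gpus          -- number of accelerators in the cluster
    petaflops_per_gpu -- assumed peak petaflops per accelerator
    utilization       -- assumed fraction of peak actually sustained
    """
    petaflops = num_gpus * petaflops_per_gpu * utilization
    return petaflops / 1000  # 1 exaflop = 1,000 petaflops = 10**18 ops/sec

# Example: a 65,000-GPU cluster at an assumed ~1 petaflop per GPU lands
# in the tens of exaflops -- but only if networking keeps the GPUs fed.
print(f"{estimated_exaflops(65_000, 1.0):.0f} exaflops")
```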
What's certain is that more advanced models (the kind that consider their own thinking and check their work before answering queries) require more compute than their relatives of earlier generations. So training increasingly impressive models may indeed require an arms race of sorts.
But an enormous AI arsenal doesn't automatically lead to better or more useful tools.
Sri Ambati, CEO of the open-source AI platform H2O.ai, said cloud providers may want to flex their cluster size for sales reasons, but given some (albeit slow) diversification of AI hardware and the rise of smaller, more efficient models, cluster size isn't the be-all and end-all.
Power efficiency, too, is a hugely important indicator for AI computing, since energy is an enormous operational expense. But it gets lost in the measuring contest.
Nvidia declined to comment. Oracle did not respond to a request for comment in time for publication.
Have a tip or an insight to share? Contact Emma at [email protected] or use the secure messaging app Signal: 443-333-9088.
Google unveiled a new chip that it says reduces errors and vastly outperforms standard benchmarks in quantum computing.
The company said the new chip, called Willow, can perform a standard benchmark computation in under five minutes. The same task would take the current fastest supercomputers 10 septillion years, longer than the universe has existed.
In an X post on December 9, Google CEO Sundar Pichai said the chip cracked "a 30-year challenge in the field."
"We see Willow as an important step in our journey to build a useful quantum computer with practical applications in areas like drug discovery, fusion energy, battery design + more," he said in a follow-up post.
The new chip won praise from other leading tech figures, including Elon Musk and Sam Altman. Altman, the CEO of OpenAI, reposted the announcement, congratulating the company on the development, while Musk replied to Pichai's post with "Wow."
Google's development represents a key milestone in the decadeslong race to build quantum computers that are accurate enough to have practical applications.
Quantum computers use quantum mechanics to solve problems faster than traditional computers can. Qubits, the unit of information in quantum computing, are unpredictable and have high error margins.
Historically, the more qubits a chip has, the more errors appear. This has been an outstanding challenge in the field since the 1990s.
To show progress, quantum computers must demonstrate they are "below threshold," which means they can drive errors down while scaling up the number of qubits.
Google published an experiment in the journal Nature on Monday showing the potential of the Willow chip. The study demonstrated that as the number of qubits in the Willow chip is scaled up, the error rate goes down. Google also said errors in the new chip can be corrected as they occur.
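As a rough sketch of what "below threshold" means in practice: in the standard picture of quantum error correction, logical errors shrink exponentially as the code distance (and qubit count) grows, provided the suppression factor is greater than 1. The numbers below are illustrative placeholders, not Willow's measured values:

```python
# Illustrative model of "below threshold" error correction.
# When the suppression factor is greater than 1, growing the code
# distance d (i.e., spending more physical qubits per logical qubit)
# drives the logical error rate down exponentially; at a factor below 1,
# adding qubits makes things worse. All numbers here are illustrative.

def logical_error_rate(base_error: float, suppression: float, distance: int) -> float:
    """Logical error rate at code distance d, relative to distance 3."""
    return base_error / suppression ** ((distance - 3) / 2)

for distance in (3, 5, 7):
    above = logical_error_rate(3e-3, suppression=0.8, distance=distance)  # above threshold: errors grow
    below = logical_error_rate(3e-3, suppression=2.0, distance=distance)  # below threshold: errors shrink
    print(f"d={distance}: above-threshold {above:.4f}, below-threshold {below:.5f}")
```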
The director of Google's Quantum AI lab, Michael Cuthbert, told the BBC that commercial applications for a quantum computing chip would still not be available before 2030, at the earliest.
Experts have praised the company's efforts as a major breakthrough in the field.
"This work shows a truly remarkable technological breakthrough," Chao-Yang Lu, a quantum physicist at the University of Science and Technology of China in Shanghai, told Nature.
Current quantum computers on the market are too small and make too many errors to be used for commercial gain. Google's recent development, however, demonstrates that a significant reduction in the error rate can be achieved with increased scale.
In a blog post, Google also praised Willow's performance on the random circuit sampling benchmark, a method of testing the performance of quantum computers, as "astonishing."
"It performed a computation in under five minutes that would take one of today's fastest supercomputers 1025 or 10 septillion years. If you want to write it out, it's 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe," the company said in the post.
The Steam Link was a little box ahead of its time. It streamed games from a PC to a TV, ran 1,500 of them natively, offered a strange (if somewhat lovable) little controller, and essentially required a great network, Ethernet cables, and a good deal of fiddling.
Valve quietly discontinued the Steam Link gear in November 2018, but it didn't give up. These days, a Steam Link app can be found on most platforms, and Valve's sustained effort to move Linux-based (i.e., non-Windows-controlled) gaming forward has paid real dividends. If you still want a dedicated device to stream Steam games, however, a Raspberry Pi 5 (with some help from Valve) can be a substitute Steam Link.
As detailed on the Raspberry Pi blog, there were previously ways of getting Steam Link working on Raspberry Pi devices, but the platform's move away from proprietary Broadcom libraries, and from the X to the Wayland display system, required "a different approach." Sam Lantinga from Valve worked with the Raspberry Pi team on optimizing for the Raspberry Pi 5 hardware. As of Steam Link 1.3.13 for the little board, Raspberry Pi 5 units can support up to 1080p at 144 frames per second (FPS) with the H.264 codec, as well as 4K at 60 FPS or 1080p at 240 FPS, presuming your primary gaming computer and network can support that.
"Am I the first person to discover this?" is a tricky question when it comes to classic Macs, some of the most pored-over devices on the planet. But there's a lot to suggest that user paul.gaastra, on the 68kMLA vintage Mac forum, has been right for more than a decade: One of the capacitors on the Apple mid-'90s Mac LC III was installed backward due to faulty silkscreen printing on the board.
It seems unlikely that Apple will issue a factory recall for the LC III, or for the related LC III+ and Performa models 450, 460, 466, and 467 with the same board design. The "pizza box" models, sold from 1993 to 1996, came with a standard 90-day warranty, and most of them probably ran without issue. It's when people try to fix up these boards and replace the capacitors, a generally good practice known as re-capping, that they run into trouble.
The Macintosh LC III, forerunner to a bunch of computers with a single misaligned capacitor. Credit: Akbkuku / Wikimedia Commons

Doug Brown took part in the original 2013 forum discussion and has seen it pop up elsewhere. Now, having "bought a Performa 450 complete with its original leaky capacitors," he can double-check Apple's board layout 30 years later and detail it all in a blog post (seen originally at the Adafruit blog). He confirms what a bunch of multimeter-wielding types long suspected: Apple put the plus where the minus should be.