Today, 6 June 2025

Startup puts a logical qubit in a single piece of hardware

Everyone in quantum computing agrees that error correction will be the key to doing a broad range of useful calculations. But nearly every company in the field seems to have a different vision of how best to get there. Almost all of their plans share a key feature: some variation on logical qubits built by linking together multiple hardware qubits.

A key exception is Nord Quantique, which aims to dramatically cut the amount of hardware needed to support an error-corrected quantum computer. It does this by putting enough quantum states into a single piece of hardware, allowing each of those pieces to hold an error-corrected qubit. Last week, the company shared results showing that it could make hardware that used photons at two different frequencies to successfully identify every case where a logical qubit lost its state.

That still doesn't provide error correction, and they didn't use the logical qubit to perform operations. But it's an important validation of the company's approach.


© Nord Quantique

Before yesterday

Beyond qubits: Meet the qutrit (and ququart)

The world of computers is dominated by binary. Silicon transistors are either conducting or they're not, and so we've developed a whole world of math and logical operations around those binary capabilities. And, for the most part, quantum computing has been developing along similar lines, using qubits that, when measured, will be found in one of two states.

In some cases, the use of binary values is a feature of the object being used to hold the qubit. For example, a technology called dual-rail qubits takes its value from which of two linked resonators holds a photon. But there are many other quantum objects that have access to far more than two states: think of something like all the possible energy states an electron could occupy when orbiting an atom. We can use things like this as qubits by only relying on the lowest two energy levels. But there's nothing stopping us from using more than two.
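
To make the jump from two levels to three or four concrete, here is a minimal sketch, in plain NumPy and not tied to any particular hardware, of how measurement statistics generalize from a qubit to a qutrit and a ququart: however many levels you use, the squared amplitudes still have to add up to 1.

    import numpy as np

    # A qubit state lives in a 2-level space; a qutrit uses 3 levels, a ququart 4.
    # Measurement probabilities are the squared magnitudes of the amplitudes,
    # which must sum to 1 regardless of how many levels are in play.

    def measurement_probabilities(amplitudes):
        """Return the probability of finding the system in each basis state."""
        amplitudes = np.asarray(amplitudes, dtype=complex)
        return np.abs(amplitudes / np.linalg.norm(amplitudes)) ** 2

    qubit = [1, 1]            # equal superposition of |0> and |1>
    qutrit = [1, 1, 1]        # equal superposition of |0>, |1>, |2>
    ququart = [1, 1, 1, 1]    # equal superposition of four levels

    for name, state in [("qubit", qubit), ("qutrit", qutrit), ("ququart", ququart)]:
        print(name, measurement_probabilities(state))
    # qubit   [0.5 0.5]
    # qutrit  [0.333... 0.333... 0.333...]
    # ququart [0.25 0.25 0.25 0.25]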

In Wednesday's issue of Nature, researchers describe creating qudits, the generic term for systems that hold quantum information (it's short for quantum digits). Using a system that can be in three or four possible states (qutrits and ququarts, respectively), they demonstrate the first error correction of higher-order quantum memory.


© spawns / Getty Images

Quantum hardware may be a good match for AI

Concerns about AI's energy use have a lot of people looking into ways to cut down on its power requirements. Many of these focus on hardware and software approaches that are pretty straightforward extensions of existing technologies. But a few technologies are much farther out there. One that's definitely in the latter category? Quantum computing.

In some ways, quantum hardware is a better match for some of the math that underlies AI than more traditional hardware. While the current quantum hardware is a bit too error-prone for the more elaborate AI models currently in use, researchers are starting to put the pieces in place to run AI models when the hardware is ready. This week, a couple of commercial interests are releasing a draft of a paper describing how to get classical image data into a quantum processor (actually, two different processors) and perform a basic AI image classification.
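
The draft describes the companies' own data-loading pipeline; purely as a generic illustration, here is one textbook way ("angle encoding," an assumption for this sketch rather than the method in the paper) that a classical pixel value can be written onto a qubit, by turning it into a rotation angle.

    import numpy as np

    # Hypothetical "angle encoding": each normalized pixel value x in [0, 1]
    # becomes a rotation angle theta = pi * x applied to a qubit starting in |0>.
    # This is a generic textbook scheme, not necessarily the one in the draft paper.

    def ry(theta):
        """Single-qubit rotation about the Y axis."""
        return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                         [np.sin(theta / 2),  np.cos(theta / 2)]])

    def encode_pixels(pixels):
        """Return the single-qubit state produced for each pixel value."""
        ket0 = np.array([1.0, 0.0])
        return [ry(np.pi * p) @ ket0 for p in np.clip(pixels, 0.0, 1.0)]

    image_patch = [0.0, 0.25, 0.5, 1.0]    # four grayscale pixel values
    for p, state in zip(image_patch, encode_pixels(image_patch)):
        prob_one = abs(state[1]) ** 2      # chance of measuring |1>
        print(f"pixel {p:.2f} -> P(|1>) = {prob_one:.2f}")
    # pixel 0.00 -> P(|1>) = 0.00
    # pixel 0.25 -> P(|1>) = 0.15
    # pixel 0.50 -> P(|1>) = 0.50
    # pixel 1.00 -> P(|1>) = 1.00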

All of which gives us a great opportunity to discuss why quantum AI may be more than just hype.


© Jason Marz/Getty Images

D-Wave quantum annealers solve problems classical algorithms struggle with

Right now, quantum computers are small and error-prone compared to where they'll likely be in a few years. Even within those limitations, however, there have been regular claims that the hardware can perform in ways that are impossible to match with classical computation (one of the more recent examples coming just last year). In most cases to date, however, those claims were quickly followed by some tuning and optimization of classical algorithms that boosted their performance, making them competitive once again.

Today, we have a new entry into the claims department, or rather a new claim by an old entry. D-Wave is a company that makes quantum annealers, specialized hardware that is most effective when applied to a class of optimization problems. The new work shows that the hardware can track the behavior of a quantum system called an Ising model far more efficiently than any of the current state-of-the-art classical algorithms.
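
For readers who haven't met the term: an Ising model is a collection of spins, each either +1 or -1, whose energy depends on how neighboring spins line up. The sketch below (with made-up couplings, and a brute-force search standing in for what an annealer does) only shows the static energy function; the new work is about tracking how such a system evolves over time, which is the part that classical algorithms struggle with.

    import itertools
    import numpy as np

    # Ising energy: E(s) = -0.5 * sum_ij J_ij s_i s_j - sum_i h_i s_i,
    # where each spin s_i is +1 or -1. The couplings below are made up.
    J = np.array([[0.0, 1.0, -0.5],
                  [1.0, 0.0,  0.8],
                  [-0.5, 0.8, 0.0]])   # pairwise couplings (symmetric)
    h = np.array([0.1, -0.2, 0.0])     # local fields

    def ising_energy(spins):
        s = np.asarray(spins)
        return -0.5 * s @ J @ s - h @ s   # 0.5 avoids double-counting pairs

    # Brute-force the lowest-energy configuration of three spins.
    best = min(itertools.product([-1, 1], repeat=3), key=ising_energy)
    print("lowest-energy spins:", best, "energy:", round(float(ising_energy(best)), 3))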

Knowing what will likely come next, however, the team behind the work writes, "We hope and expect that our results will inspire novel numerical techniques for quantum simulation."


© D-Wave

Amazon uses quantum "cat states" with error correction

26 February 2025 at 11:15

Following up on Microsoft's announcement of a qubit based on completely new physics, Amazon is publishing a paper describing a very different take on quantum computing hardware. The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen.

While there have been more effective demonstrations of error correction in the past, a number of companies are betting that Amazon's general approach is the best route to getting logical qubits that are capable of complex algorithms. So, in that sense, it's an important proof of principle.

Herding cats

The basic idea behind Amazon's approach is to use one type of qubit to hold data and a second to enable error correction. The data qubit is extremely resistant to one type of error, but prone to a second. Those errors are where the second type of qubit comes in; it's used to run an error-correction code that's effective at picking up the problems the data qubits are prone to. Combined, the two are hoped to allow error correction to be handled by far fewer hardware qubits.
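
A loose classical analogy for that second layer, sketched below with ordinary bits standing in for the one error type that still matters (this illustrates a repetition code in general, not Amazon's actual hardware): helper measurements check the parity of neighboring pairs, which pinpoints a single flip without ever reading the data directly.

    import random

    # Three data bits hold one logical value via repetition: 000 or 111.
    # "Helper" measurements report the parity of neighboring pairs (the syndrome),
    # which says where a single flip happened without revealing the data itself.

    def encode(bit):
        return [bit, bit, bit]

    def syndrome(data):
        return (data[0] ^ data[1], data[1] ^ data[2])

    def correct(data):
        flip_position = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
        if flip_position is not None:
            data[flip_position] ^= 1
        return data

    logical = encode(1)
    logical[random.randrange(3)] ^= 1                # one flip of the expected kind
    print("after error:", logical, "syndrome:", syndrome(logical))
    print("after correction:", correct(logical))     # back to [1, 1, 1]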


© ALFRED PASIEKA/SCIENCE PHOTO LIBRARY

Microsoft demonstrates working qubits based on exotic physics

19 February 2025 at 08:32

On Wednesday, Microsoft released an update on its efforts to build quantum computing hardware based on the physics of quasiparticles that have largely been the domain of theorists. The information coincides with the release of a paper in Nature that provides evidence that Microsoft's hardware can actually measure the behavior of a specific hypothesized quasiparticle.

Separately from that, the company announced that it has built hardware that uses these quasiparticles as the foundation for a new type of qubit, one that Microsoft is betting will allow it to overcome the advantages of companies that have been producing qubits for years.

The zero mode

Quasiparticles are collections of particles (and, in some cases, field lines) that can be described mathematically as if they were a single particle with properties that are distinct from their constituents. The best-known of these are probably the Cooper pairs that electrons form in superconducting materials.


© John Brecher for Microsoft

AI used to design a multi-step enzyme that can digest some plastics

14 February 2025 at 08:47

Enzymes are amazing catalysts. These proteins are made of nothing more than a handful of Earth-abundant elements, and they promote a vast array of reactions, convert chemical energy to physical motion, and act with remarkable specificity. In many cases, we have struggled to find non-enzymatic catalysts that can drive some of the same chemical reactions.

Unfortunately, there isn't an enzyme for many reactions we would sorely like to catalyze: things like digesting plastics or incorporating carbon dioxide into more complex molecules. We've had a few successes using directed evolution to create useful variations of existing enzymes, but efforts to broaden the scope of what enzymes can do have been limited.

With the advent of AI-driven protein design, however, we can now potentially design things that are unlike anything found in nature. A new paper today describes a success in making a brand-new enzyme with the potential to digest plastics. But it also shows how even a simple enzyme may have an extremely complex mechanism, and one that's hard to tackle, even with the latest AI tools.


© LAGUNA DESIGN

Quantum teleportation used to distribute a calculation

5 February 2025 at 12:42

Performing complex algorithms on quantum computers will eventually require access to tens of thousands of hardware qubits. For most of the technologies being developed, this creates a problem: It's difficult to create hardware that can hold that many qubits. As a result, people are looking at various ideas of how we might link processors together in order to have them function as a single computational unit (a challenge that has obviously been solved for classical computers).

In today's issue of Nature, a team at Oxford University describes using quantum teleportation to link two pieces of quantum hardware that were located about 2 meters apart, meaning they could easily have been in different rooms entirely. Once linked, the two pieces of hardware could be treated as a single quantum computer, allowing simple algorithms to be performed that involved operations on both sides of the 2-meter gap.

Quantum teleportation is... different

Our idea of teleportation has been heavily shaped by Star Trek, where people disappear from one location while simultaneously appearing elsewhere. Quantum teleportation doesn't work like that. Instead, you need to pre-position quantum objects at both the source and receiving ends of the teleport and entangle them. Once that's done, it's possible to perform a series of actions that force the recipient to adopt the quantum state of the source. The process of performing this teleportation involves a measurement of the source object, which destroys its quantum state even as it appears at the distant site, so it does share that feature with the popular conception of teleportation.
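
Here is a minimal state-vector sketch of that protocol in NumPy, purely illustrative and far simpler than the actual networked hardware: qubit 0 holds the state to be sent, qubits 1 and 2 are the pre-positioned entangled pair, and two classical bits from the sender's measurement tell the receiver which correction to apply.

    import numpy as np

    rng = np.random.default_rng(0)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    I2 = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                 # control = first qubit

    psi = np.array([0.6, 0.8])                      # state to teleport, a|0> + b|1>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # entangled pair shared in advance
    state = np.kron(psi, bell)                      # qubit order |q0 q1 q2>

    # Sender: entangle the data qubit with their half of the pair, then rotate it.
    state = np.kron(CNOT, I2) @ state
    state = np.kron(H, np.eye(4)) @ state

    # Sender measures qubits 0 and 1; the outcome destroys the original state.
    probs = np.abs(state) ** 2
    outcome = rng.choice(8, p=probs)
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

    # Receiver's qubit, then the corrections dictated by the two classical bits.
    base = (m0 << 2) | (m1 << 1)
    bob = state[[base, base + 1]]
    bob = bob / np.linalg.norm(bob)
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob

    print("classical bits sent:", m0, m1)
    print("receiver now holds:", bob, "(original was", psi, ")")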


© D. Slichter/NIST

To help AIs understand the world, researchers put them in a robot

Large language models like ChatGPT display conversational skills, but the problem is they don't really understand the words they use. They are primarily systems that interact with data obtained from the real world but not the real world itself. Humans, on the other hand, associate language with experiences. We know what the word "hot" means because we've been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited: it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

"The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language," says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.


© Thomas Vogel

Researchers optimize simulations of molecules on quantum computers

24 January 2025 at 07:51

One of the most frequently asked questions about quantum computers is a simple one: When will they be useful?

If you talk to people in the field, you'll generally get a response in the form of another question: useful for what? Quantum computing can be applied to a large range of problems, some of them considerably more complex than others. Utility will come for some of the simpler problems first, but further hardware progress is needed before we can begin tackling some of the more complex ones.

One that should be easiest to solve involves modeling the behavior of some simple catalysts. The electrons of these catalysts, which are critical for their chemical activity, obey the rules of quantum mechanics, which makes it relatively easy to explore them with a quantum computer.


© Douglas Sacha

Researchers use AI to design proteins that block snake venom toxins

15 January 2025 at 08:00

It has been a few years since AI began successfully tackling the challenge of predicting the three-dimensional structure of proteins, complex molecules that are essential for all life. Next-generation tools are now available, and the Nobel Prizes have been handed out. But people not involved in biology can be forgiven for asking whether any of it can actually make a difference.

A nice example of how the tools can be put to use is being released in Nature on Wednesday. A team that includes the University of Washington's David Baker, who picked up his Nobel in Stockholm last month, used software tools to design completely new proteins that are able to inhibit some of the toxins in snake venom. While not entirely successful, the work shows how the new software tools can let researchers tackle challenges that would otherwise be difficult or impossible.

Blocking venom

Snake venom includes a complicated mix of toxins, most of them proteins, that engage in a multi-front assault on anything unfortunate enough to get bitten. Right now, the primary treatment is to use a mix of antibodies that bind to these toxins, produced by injecting sub-lethal amounts of venom proteins into animals. But antivenom treatments tend to require refrigeration, and even then, they have a short shelf life. Ensuring a steady supply also means regularly injecting new animals and purifying more antibodies from them.


© Paul Starosta

Meta takes us a step closer to Star Trek's universal translator

In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn't speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker's Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.

What this team came up with was a next-gen translation system called Seamless. The first building block of this system is described in Wednesday's issue of Nature; it can translate speech among 36 different languages.

Language data problems

AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet. Institutions like the United Nations or European Parliament routinely translate all their proceedings into the languages of all their member states, which means there are enormous databases comprising aligned documents prepared by professional human translators. You just needed to feed those huge, aligned text corpora into neural nets (or hidden Markov models before neural nets became all the rage) and you ended up with a reasonably good machine translation system. But there were two problems with that.
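
To picture what "aligned" means here, the raw training material is just pairs of sentences that say the same thing in two languages; a toy example (the sentences below are invented for illustration):

    # Toy illustration of aligned parallel data: the same proceedings recorded in
    # two languages, paired sentence by sentence. Real corpora from bodies like
    # the UN or the European Parliament contain millions of such pairs.
    english = ["The session is open.", "The motion is adopted."]
    french = ["La séance est ouverte.", "La motion est adoptée."]

    training_pairs = list(zip(english, french))   # (source, target) examples
    for source, target in training_pairs:
        print(f"{source!r} -> {target!r}")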


© Liao Pan/China News Service via Getty Images

Getting an all-optical AI to handle non-linear math

A standard digital camera used in a car for stuff like emergency braking has a perceptual latency of a hair above 20 milliseconds. That's just the time needed for a camera to transform the photons hitting its aperture into electrical charges using either CMOS or CCD sensors. It doesn't count the further milliseconds needed to send that information to an onboard computer or process it there.

A team of MIT researchers figured that if you had a chip that could process photons directly, you could skip the entire digitization step and perform calculations with the photons themselves, which has the potential to be mind-bogglingly faster.

"We're focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That's why we are interested in systems where we're able to do all the computations optically," says Saumil Bandyopadhyay, an MIT researcher. The team implemented a complete deep neural network on a photonic chip, achieving a latency of 410 picoseconds. To put that in perspective, Bandyopadhyay's chip could process the entire neural net it had onboard around 58 times within a single tick of the 4 GHz clock on a standard CPU.
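
For a rough sense of scale, here is the back-of-the-envelope comparison between the two latency figures quoted above, the camera's 20 milliseconds and the chip's 410 picoseconds:

    # Simple arithmetic on the two numbers quoted in the article.
    camera_latency = 20e-3        # ~20 milliseconds just to digitize the photons
    photonic_latency = 410e-12    # 410 picoseconds for the full on-chip network

    ratio = camera_latency / photonic_latency
    print(f"The optical chip's latency is roughly {ratio:,.0f} times shorter")
    print("than the camera's digitization step alone.")
    # roughly 48,780,488 times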


© MIT

It's remarkably easy to inject new medical misinformation into LLMs

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.
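
To get a feel for what 0.001 percent means at modern training scales, here is the arithmetic with a hypothetical corpus size (the token count below is invented for illustration; the study's own datasets are described in the paper):

    # What a 0.001 percent poisoning rate works out to for a hypothetical corpus.
    poison_fraction = 0.001 / 100          # 0.001 percent as a fraction
    corpus_tokens = 1_000_000_000_000      # hypothetical 1-trillion-token corpus

    poisoned_tokens = corpus_tokens * poison_fraction
    print(f"{poisoned_tokens:,.0f} poisoned tokens out of {corpus_tokens:,}")
    # 10,000,000 poisoned tokens: a tiny fraction, but a lot of text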

While the paper is focused on the intentional "poisoning" of an LLM during training, it also has implications for the body of misinformation that's already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.


© Just_Super

Google gets an error-corrected quantum bit to be stable for an hour

9 December 2024 at 10:25

On Monday, Nature released a paper from Google's quantum computing team that provides a key demonstration of the potential of quantum error correction. Thanks to an improved processor, Google's team found that increasing the number of hardware qubits dedicated to an error-corrected logical qubit led to an exponential increase in performance. By the time the entire 105-qubit processor was dedicated to hosting a single error-corrected qubit, the system was stable for an average of an hour.
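
A rough sketch of why that trade is worth making (the numbers below are illustrative, not Google's measured figures): in a surface code, a distance-d logical qubit needs roughly a d-by-d grid of data qubits, but each step up in distance divides the logical error rate by a roughly constant factor, so modest hardware growth buys exponential gains in stability.

    # Illustrative-only scaling of surface-code error correction. The suppression
    # factor per distance step is made up; Google's paper reports its own value.
    SUPPRESSION_PER_STEP = 2.0

    def logical_error_rate(distance, base_rate=1e-2):
        steps = (distance - 3) // 2        # take distance 3 as the starting point
        return base_rate / SUPPRESSION_PER_STEP ** steps

    for d in (3, 5, 7, 9, 11):
        print(f"distance {d:2d}: ~{d * d:3d} data qubits, "
              f"logical error per cycle ~{logical_error_rate(d):.1e}")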

In fact, Google told Ars that errors on this single logical qubit were rare enough that it was difficult to study them. The work provides a significant validation that quantum error correction is likely to be capable of supporting the execution of complex algorithms that might require hours to execute.

A new fab

Google is making a number of announcements in association with the paper's release (an earlier version of the paper has been up on the arXiv since August). One of those is that the company is committed enough to its quantum computing efforts that it has built its own fabrication facility for its superconducting processors.


© Google

Google's DeepMind tackles weather forecasting, with great performance

4 December 2024 at 09:06

By some measures, AI systems are now competitive with traditional computing methods for generating weather forecasts. Because their training penalizes errors, however, the forecasts tend to get "blurry": as you move further ahead in time, the models make fewer specific predictions since those are more likely to be wrong. As a result, you start to see things like storm tracks broadening and the storms themselves losing clearly defined edges.
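
A toy illustration of where the blur comes from (made-up numbers, not a weather model): if a storm is equally likely to end up in either of two places, the single forecast that minimizes the average error is the midpoint, somewhere the storm will never actually be.

    import numpy as np

    # If the storm lands at position -1 or +1 with equal probability, the
    # forecast with the lowest mean squared error is their average, 0:
    # a washed-out prediction for a spot where no storm ever appears.
    possible_storm_positions = np.array([-1.0, 1.0])

    def mean_squared_error(forecast):
        return float(np.mean((possible_storm_positions - forecast) ** 2))

    for forecast in (-1.0, 0.0, 1.0):
        print(f"forecast {forecast:+.1f}: average error {mean_squared_error(forecast):.2f}")
    # forecast -1.0: average error 2.00
    # forecast +0.0: average error 1.00   <- lowest, but "blurry"
    # forecast +1.0: average error 2.00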

But using AI is still extremely tempting because the alternative is a computational atmospheric circulation model, which is extremely compute-intensive. Still, that traditional approach is highly successful, with the ensemble model from the European Centre for Medium-Range Weather Forecasts considered the best in class.

In a paper being released today, Google's DeepMind claims its new AI system manages to outperform the European model on forecasts out to at least a week and often beyond. DeepMind's system, called GenCast, merges some computational approaches used by atmospheric scientists with a diffusion model, commonly used in generative AI. The result is a system that maintains high resolution while cutting the computational cost significantly.


Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

28 November 2024 at 04:00

One year ago, I didn't know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change (temperature, timing, flour, Mercury being in Scorpio), I'd turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.


© Kevin Purdy

Qubit that makes most errors obvious now available to customers

20 November 2024 at 12:58

We're nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.
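
A toy sketch of why that detection is so cheap (illustrative logic only, not vendor code): the only valid states have the photon in exactly one of the two resonators, so finding neither occupied is itself the error flag.

    # Dual-rail idea in miniature: the logical value is which of two linked
    # resonators holds the single photon. Losing the photon, the dominant error,
    # leaves neither occupied, a pattern no valid state ever produces.
    VALID_STATES = {(1, 0): "|0>", (0, 1): "|1>"}

    def read_dual_rail(occupations):
        if occupations in VALID_STATES:
            return f"valid logical state {VALID_STATES[occupations]}"
        if sum(occupations) == 0:
            return "photon lost: flag this qubit so error handling can step in"
        return "unexpected occupation pattern"

    print(read_dual_rail((1, 0)))   # valid logical state |0>
    print(read_dual_rail((0, 0)))   # photon lost: flag this qubit ...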

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.


© Quantum Circuits

Microsoft and Atom Computing combine for quantum error correction demo

19 November 2024 at 13:00

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing's hardware. It didn't take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves as both a good summary of where things currently stand in the world of error correction and a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000, something we're unlikely to ever be able to do.
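
The arithmetic behind that pessimism is straightforward; a quick sketch with invented error rates and operation counts (not figures from the manuscript) shows why better hardware alone can't close the gap for long algorithms:

    # With a per-operation error probability p, the chance that an algorithm of
    # N operations finishes without a single error is (1 - p) ** N.
    def success_probability(p, n_ops):
        return (1 - p) ** n_ops

    for p in (1e-3, 1e-6):                     # today-ish vs. 1,000x better
        for n_ops in (10_000, 10_000_000):
            print(f"error rate {p:.0e}, {n_ops:>10,} ops: "
                  f"success ~{success_probability(p, n_ops):.3f}")
    # Even with a 1,000x better error rate, a 10-million-operation run
    # still fails essentially every time without error correction.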


© Atom Computing
