21 May 2025

Scientists figure out how the brain forms emotional connections

Whenever something bad happens to us, brain systems responsible for mediating emotions kick in to prevent it from happening again. When we get stung by a wasp, the association between pain and wasps is encoded in the region of the brain called the amygdala, which connects simple stimuli with basic emotions.

But the brain does more than simple associations; it also encodes lots of other stimuli that are less directly connected with the harmful event—things like the place where we got stung or the wasps’ nest in a nearby tree. These are combined into complex emotional models of potentially threatening circumstances.

Until now, we didn’t know exactly how these models are built. But we’re beginning to understand how it’s done.

Read full article

Comments

© fotografixx


Dutch scientists built a brainless soft robot that runs on air 

Most robots rely on complex control systems, AI-powered or otherwise, that govern their movement. These centralized electronic brains need time to react to changes in their environment and produce movements that are often awkwardly, well, robotic.

It doesn’t have to be that way. A team of Dutch scientists at the FOM Institute for Atomic and Molecular Physics (AMOLF) in Amsterdam built a new kind of robot that can run, go over obstacles, and even swim, all driven only by the flow of air. And it does all that with no brain at all.

Read full article

Comments

© AMOLF

New material may help us build Predator-style thermal vision specs

Military-grade infrared vision goggles use detectors made of mercury cadmium telluride, a semiconducting material that’s particularly sensitive to infrared radiation. Unfortunately, you need to keep detectors that use this material extremely cool—roughly at liquid nitrogen temperatures—for them to work. “Their cooling systems are very bulky and very heavy,” says Xinyuan Zhang, an MIT researcher and the lead author of a new study that looked for alternative IR-sensitive materials.

Added weight was a sacrifice the manufacturers of high-end night-vision systems were mostly willing to make because cooling-free alternatives offered much worse performance. To fix this, the MIT researchers developed a new ultra-thin material that can sense infrared radiation without any cooling and outperforms cooled detectors at the same time. And they want to use it to turn thermal vision goggles into thermal vision spectacles.

Staying cool

Cooling-free infrared detectors have been around since before World War II, and most relied on pyroelectric materials like tourmaline, which change temperature as they absorb infrared radiation. That temperature change, in turn, generates an electric current that can be measured to read out the detector. Although these materials worked, they had their issues: operating at room temperature caused a lot of random atomic motion in the pyroelectric material, which introduced electrical noise that made it difficult to detect faint infrared signals.
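The noise problem can be illustrated with a toy signal-to-noise estimate. This is a minimal sketch, assuming typical order-of-magnitude component values and a simple Johnson-noise readout model—none of these numbers come from the study:

```python
import math

# Toy SNR estimate for an uncooled pyroelectric detector. The pyroelectric
# signal current is p * A * dT/dt; the competing Johnson noise current in a
# resistive readout is sqrt(4 * k_B * T * bandwidth / R). All component
# values are assumed, order-of-magnitude numbers for illustration only.
k_B = 1.380649e-23      # Boltzmann constant, J/K
p = 4e-4                # pyroelectric coefficient, C/(m^2*K), typical order
area = 1e-6             # detector area, m^2
dT_dt = 1e-3            # heating rate from a faint IR source, K/s (assumed)
R = 1e9                 # readout load resistance, ohms
bandwidth = 100.0       # measurement bandwidth, Hz

def snr(T):
    signal = p * area * dT_dt                        # pyroelectric current, A
    noise = math.sqrt(4 * k_B * T * bandwidth / R)   # Johnson noise current, A
    return signal / noise

# Johnson noise grows as sqrt(T), so the same faint signal stands out
# less at room temperature than at liquid-nitrogen temperature.
print(snr(77))    # cooled detector
print(snr(300))   # room-temperature detector
```

The point of the sketch is only the scaling: cooling from 300 K to 77 K improves this noise floor by a factor of sqrt(300/77), roughly 2x, which is why uncooled detectors historically lagged behind.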

Read full article

Comments

© Glasshouse Images

Rover finds hints of an ancient Martian carbon cycle

Mars has not always been a seemingly lifeless red desert. We have evidence that billions of years ago it had a warm, habitable climate with liquid water in lakes and flowing rivers, which is somewhat confusing, given that Mars is much farther from the Sun than the Earth and that the Sun was much less bright back then. “In order for Mars to be warm enough to host liquid water, there must have been a lot of carbon dioxide in the atmosphere,” says Benjamin Tutolo, a researcher at the University of Calgary. “The question we’ve been asking for at least 30 years was where the record of all this carbon is.”

Tutolo led a new study of rock samples collected by the Curiosity rover that might have answered this question.

The tallest sediment stack

The mystery of Mars’ missing carbon stems from two seemingly conflicting results. On the one hand, we have already found dried riverbeds and lakes on the surface of Mars, so we know there must have been liquid water on its surface at some point. To account for the presence of this water, every Martian climate model we have run indicates that huge amounts of atmospheric carbon were needed to provide a sufficient greenhouse effect to keep the surface temperature above freezing. But the data we were getting from satellite observations of Mars found much less carbon in the Martian soil than those climate models would suggest. “So, either the models were incorrect—and there’s no good reason to believe that—or there really was lots of carbon in the Martian atmosphere,” Tutolo says.

Read full article

Comments

© NASA/JPL-Caltech/ESA/DLR/FU Berlin/MSSS

Scientists made a stretchable lithium battery you can bend, cut, or stab

The Li-ion batteries that power everything from smartphones to electric cars are usually packed in rigid, sealed enclosures that prevent stresses from damaging their components and keep air from coming into contact with their flammable and toxic electrolytes. Batteries like this are hard to use in soft robots or wearables, so a team of scientists at the University of California, Berkeley built a flexible, non-toxic, jelly-like battery that can survive bending, twisting, and even cutting with a razor.

While flexible batteries using hydrogel electrolytes have been achieved before, they came with significant drawbacks. “All such batteries could [only] operate [for] a short time, sometimes a few hours, sometimes a few days,” says Liwei Lin, a mechanical engineering professor at UC Berkeley and senior author of the study. The battery built by his team endured 500 complete charge cycles—about as many as the batteries in most smartphones are designed for.

Power in water

“Current-day batteries require a rigid package because the electrolyte they use is explosive, and one of the things we wanted to make was a battery that would be safe to operate without this rigid package,” Lin told Ars. Unfortunately, flexible packaging made of polymers or other stretchable materials can be easily penetrated by air or water, which will react with standard electrolytes, generating lots of heat, potentially resulting in fires and explosions. This is why, in 2017, scientists started to experiment with quasi-solid-state hydrogel electrolytes.

Read full article

Comments

We have the first video of a plant cell wall being built

Plant cells are surrounded by an intricately structured protective coat called the cell wall. It’s built of cellulose microfibrils intertwined with polysaccharides like hemicellulose or pectin. We know what plant cells look like without their walls, and we know what they look like when the walls are fully assembled, but we’ve never seen the wall-building process in action. “We knew the starting point and the finishing point, but had no idea what happens in between,” says Eric Lam, a plant biologist at Rutgers University and co-author of the study that caught wall-building plant cells in action for the first time. And once we saw how cell wall building worked, it looked nothing like the diagrams in biology textbooks.

Camera-shy builders

Plant cells without walls, known as protoplasts, are very fragile, and it has been difficult to keep them alive under a microscope for the several hours needed for them to build walls. Plant cells are also very light-sensitive, and most microscopy techniques require pointing a strong light source at them to get good imagery.

Then there was the issue of tracking their progress. “Cellulose is not fluorescent, so you can’t see it with traditional microscopy,” says Shishir Chundawat, a biologist at Rutgers. “That was one of the biggest issues in the past.” The only way to see it is to attach a fluorescent marker to it. Unfortunately, the markers typically used to label cellulose either bound to other compounds or were toxic to the plant cells. Given their fragility and light sensitivity, the cells simply couldn’t survive toxic markers for very long either.

Read full article

Comments

© Hyun Huh et al.

Bonobos’ calls may be the closest thing to animal language we’ve seen

Bonobos, great apes related to us and chimpanzees that live in the Democratic Republic of the Congo, communicate with vocal calls including peeps, hoots, yelps, grunts, and whistles. Now, a team of Swiss scientists led by Melissa Berthet, an evolutionary anthropologist at the University of Zurich, has discovered that bonobos can combine these basic sounds into larger semantic structures. In these combinations, the meaning is more than just the sum of the individual calls—a trait known as non-trivial compositionality, which we once thought was uniquely human.

To do this, Berthet and her colleagues built a database of 700 bonobo calls and deciphered them using methods drawn from distributional semantics, the methodology we’ve relied on in reconstructing long-lost languages like Etruscan or Rongorongo. For the first time, we have a glimpse into what bonobos mean when they call to each other in the wild.

Context is everything

The key idea behind distributional semantics is that when words appear in similar contexts, they tend to have similar meanings. To decipher an unknown language, you need to collect a large corpus of words and turn those words into vectors—mathematical representations that let you place them in a multidimensional semantic space. The second thing you need is context data, which tells you the circumstances in which these words were used (that gets vectorized, too). When you map your word vectors onto context vectors in this multidimensional space, what usually happens is that words with similar meaning end up close to each other. Berthet and her colleagues wanted to apply the same trick to bonobos’ calls. That seemed straightforward at first glance, but proved painfully hard to execute.
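The vector-space idea can be shown in miniature. In this sketch the calls, contexts, and counts are all invented, and real distributional semantics uses far richer context features and weighting—but the mechanism is the same: items used in similar contexts get similar co-occurrence vectors, and cosine similarity measures how close they are.

```python
from collections import Counter
import math

# Toy distributional semantics: each row is one observation, with the
# call first and its context tokens after it. All data is invented.
corpus = [
    ["peep", "food", "ground"],
    ["yelp", "food", "ground"],
    ["hoot", "travel", "trees"],
    ["whistle", "travel", "trees"],
]

contexts = sorted({w for seq in corpus for w in seq[1:]})

def vectorize(call):
    # Count how often each context token accompanies the call.
    counts = Counter()
    for seq in corpus:
        if seq[0] == call:
            counts.update(seq[1:])
    return [counts[c] for c in contexts]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Calls used in the same contexts land close together in the space.
print(cosine(vectorize("peep"), vectorize("yelp")))   # high similarity
print(cosine(vectorize("peep"), vectorize("hoot")))   # low similarity
```

With real field data the hard part is exactly what the article notes: collecting enough observations, and recording context reliably enough for the vectors to mean anything.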

Read full article

Comments

© USO

Can we make AI less power-hungry? These researchers are working on it.

At the beginning of November 2024, the US Federal Energy Regulatory Commission (FERC) rejected Amazon’s request to buy an additional 180 megawatts of power directly from the Susquehanna nuclear power plant for a nearby data center. FERC argued that buying power directly, instead of drawing it through the grid like everyone else, works against the interests of other users.

Demand for power in the US has been flat for nearly 20 years. “But now we’re seeing load forecasts shooting up. Depending on [what] numbers you want to accept, they’re either skyrocketing or they’re just rapidly increasing,” said Mark Christie, a FERC commissioner.

Part of the surge in demand comes from data centers, and their increasing thirst for power comes in part from running increasingly sophisticated AI models. As with all world-shaping developments, what set this trend into motion was vision—quite literally.

Read full article

Comments

© Igor Borisenko/Getty Images

A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscles in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffolds. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.
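To see why thickness is the killer, here's a rough order-of-magnitude estimate of how far oxygen passively diffuses into active tissue before metabolism depletes it. Both values below are assumed, textbook-order numbers, not figures from the study:

```python
import math

# Characteristic diffusion length L = sqrt(D * t): roughly how far oxygen
# penetrates tissue before active cells consume it. Assumed values only.
D = 2e-9     # oxygen diffusion coefficient in tissue, m^2/s (typical order)
t = 10.0     # rough timescale over which cells deplete local O2, s (assumed)

penetration_m = math.sqrt(D * t)
print(f"{penetration_m * 1e6:.0f} µm")   # on the order of 100 µm
```

Anything much thicker than a few hundred microns starves in the middle without some form of perfusion, which is why lab-grown muscle has stayed thin and flat.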

Read full article

Comments

© Andriy Onufriyenko

Small charges in water spray can trigger the formation of key biochemicals

We know Earth formed roughly 4.54 billion years ago and that the first single-celled lifeforms appeared roughly 1 billion years after that. What we don’t know is what triggered the process that turned our planet from a barren ball of rock into a world hosting an amazing abundance of lifeforms. “We’re trying to understand how do you go from non-life to life. Now I think we have made a real contribution to solving this mystery,” says Richard Zare, a Stanford University chemistry professor. Zare is the senior author of a recent study into a previously unknown electrochemical process that might have helped supply the raw materials needed for life.

Zare’s team demonstrated the existence of micro-lightning: very small electrical discharges that occur between tiny droplets of water spray. When triggered in a gas mixture made to replicate the atmosphere of early Earth, these micro-lightning discharges produced chemical compounds used by present-day life, like glycine, uracil, and urea, along with chemical precursors like cyanoacetylene and hydrogen cyanide. “I’m not saying it’s the only way this could happen—I wasn’t there,” Zare acknowledged. “But it’s a new plausible mechanism that gives us building blocks of life.”

Lightning in the bulb

Scientific research into the beginnings of life on Earth started in the 1920s with Aleksander Oparin and J.B.S. Haldane, scientists who independently proposed that life on Earth could have arisen through a process of gradual chemical evolution. In their view, inorganic molecules might have reacted due to energy from the Sun or lightning strikes to form life’s building blocks, like amino acids. Those building blocks, Oparin and Haldane argued, could have accumulated in the oceans, making a “primordial soup.”

Read full article

Comments

© Little Dinosaur

In one dog breed, selection for utility may have selected for obesity

Labrador retrievers are common pets, but they also work as service dogs, aiding people with sight or hearing impairments. Unfortunately, the breed is particularly prone to getting overweight, and this tendency apparently is more severe in Labradors purpose-bred for service. To figure out the reasons behind this, researchers at Cambridge University investigated potential obesity genes in Labrador retrievers’ DNA.

It turned out that the increased obesity risk in Labradors was linked to the same genes and mechanisms that cause obesity in humans. These gene variants were more common in purpose-bred dogs that were carefully selected, generation after generation, to maximize the results of the demanding training programs service animals must go through.

We thought we were picking the smartest Labradors to become guide dogs. But we might have been picking the ones that most wanted the snacks given as rewards.

Read full article

Comments

© Justin Paget

A small microbial ecosystem has formed on the International Space Station

Astronauts on the International Space Station often suffer from various immune system dysfunctions, including allergies and skin rashes, even though they go through rigorous screening and are probably among the healthiest people on (or at least near) Earth. “It’s hard to pinpoint the exact causes for a lot of these symptoms, but we believe microbiome disruptions that happen in their bodies and in their environment up there could be playing an important part,” says Rodolfo Salido Benitez, a bioengineering researcher at the University of California, San Diego, who co-authored the largest study of the ISS microbiome to date.

After analyzing over 800 samples collected by astronauts in multiple modules of the United States Orbital Segment of the ISS, Benitez and his team concluded that the microbial and chemical environment on the station closely resembled the one found in COVID-19 isolation wards during the height of the pandemic. And that may be less than ideal for keeping people healthy.

Swabbing the space decks

Monitoring microbial life on the ISS is an ongoing effort, and studies of this sort have been done before, although at a much smaller scale. “Previous studies used a low number of samples that could not identify all microbial and chemical factors present up there,” said Nina Zhao, a researcher at UCSD and co-author of the study.

Read full article

Comments

© NASA

We’ve figured out the basics of a shape-shifting, T-1000-style material

The T-1000 in Terminator 2 could change shape at will, morph its hands into blades, or turn parts of its body into a fluid to move through metal bars. “I saw this movie when I was a child—it was like, 'Wow, can you imagine,' I thought, 'being able to do this?'” says Otger Campàs, a professor at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany. “Now I work on embryos. And what we saw in The Terminator actually happens in an embryo. This kind of shape shifting is what an embryo does.”

Campàs and his team drew inspiration from processes called fluidization and convergent extension—mechanisms that cells in embryos use to coordinate their behavior when forming tissues and organs in a developing organism. The team built a robotic collective where each robotic unit behaved like an embryonic cell. As a collective, the robots behaved like a material that could change shape and switch between solid and liquid states, just like the T-1000.

Real-world and sci-fi alloys

The T-1000 was a marvel to behold, but the movie gave no clues as to how it worked. This is why Campàs and his colleagues looked for clues elsewhere. Similar shape-shifting properties have been observed in embryos when you watch their development sped up using time-lapse imaging. “Tissues in embryos can switch between solid and fluid states to shape the organs. We were thinking how we could engineer robots that would do the same,” Campàs says.

Read full article

Comments

© Image courtesy of UC Santa Barbara

If it moves, it’s probably alive: Searching for life on other planets

12 February 2025 at 04:20

The search for extraterrestrial life has always been a key motivator of space exploration. But if we were to search Mars, Titan, or the subsurface oceans of Europa or Enceladus, it seems like all we can reasonably hope to find is extremophile microbes. And microbes, just a few microns long and wide, will be difficult to identify if we’re relying on robots working with limited human supervision and without all the fancy life-detecting gear we have here on Earth.

To solve that problem, a team of German researchers at the Technical University of Berlin figured that, instead of having a robot look for microbes, it would be easier and cheaper to make the microbes come to the robot. The only ingredient they were lacking was the right bait.

Looking for movement

Most ideas we have for life detection on space missions rely on looking for chemical traces of life, such as various metabolites. The most recent missions, the Perseverance rover included, weren’t equipped with any specialized life-detecting instruments. “On Mars, the focus was on looking for signs of possible ancient life—fossils or other traces of microbes,” says Max Riekeles, an astrobiologist at the Technical University of Berlin. “The last real in-situ life detection missions were performed by the Viking landers, which is quite a while back already.”

Read full article

Comments

To help AIs understand the world, researchers put them in a robot

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are primarily systems that interact with data obtained from the real world but not the real world itself. Humans, on the other hand, associate language with experiences. We know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

Read full article

Comments

© Thomas Vogel

Sleeping pills stop the brain’s system for cleaning out waste

Our bodies rely on the lymphatic system to drain excess fluid and remove waste from tissues, feeding it back into the bloodstream. It’s a complex yet efficient cleaning mechanism that works in every organ except the brain. “When cells are active, they produce waste metabolites, and this also happens in the brain. Since there are no lymphatic vessels in the brain, the question was what was it that cleaned the brain,” Natalie Hauglund, a neuroscientist at Oxford University who led a recent study on the brain-clearing mechanism, told Ars.

Earlier studies done mostly on mice discovered that the brain had a system that flushed its tissues with cerebrospinal fluid, which carried away waste products in a process called glymphatic clearance. “Scientists noticed that this only happened during sleep, but it was unknown what it was about sleep that initiated this cleaning process,” Hauglund explains.

Her study found the glymphatic clearance was mediated by a hormone called norepinephrine and happened almost exclusively during the NREM sleep phase. But it only worked when sleep was natural. Anesthesia and sleeping pills shut this process down nearly completely.

Read full article

Comments

© Getty Images

Meta takes us a step closer to Star Trek’s universal translator

In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn’t speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker’s Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.

What this team came up with was a next-gen translation system called Seamless. The first building block of this system is described in Wednesday’s issue of Nature; it can translate speech among 36 different languages.

Language data problems

AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet. Institutions like the United Nations or European Parliament routinely translate all their proceedings into the languages of all their member states, which means there are enormous databases comprising aligned documents prepared by professional human translators. You just needed to feed those huge, aligned text corpora into neural nets (or hidden Markov models before neural nets became all the rage) and you ended up with a reasonably good machine translation system. But there were two problems with that.
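The aligned-corpus idea can be shown in miniature. The sentence pairs and the frequency-normalized scoring below are invented for illustration; real systems feed such corpora into neural networks (or, earlier, EM-trained statistical models) rather than counting directly:

```python
from collections import defaultdict

# Toy word-translation extraction from a tiny aligned corpus. Every
# sentence pair here is invented; the "model" is just co-occurrence
# counting with a frequency penalty.
aligned = [
    ("the parliament meets", "le parlement se réunit"),
    ("the parliament votes", "le parlement vote"),
    ("the council meets",    "le conseil se réunit"),
    ("the council votes",    "le conseil vote"),
]

cooc = defaultdict(lambda: defaultdict(int))   # source word -> target counts
tgt_freq = defaultdict(int)                    # overall target word counts
for src, tgt in aligned:
    for t in tgt.split():
        tgt_freq[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def best_translation(word):
    # Score co-occurrence relative to overall frequency, so ubiquitous
    # filler words like "le" don't win every pairing.
    cand = cooc[word]
    return max(cand, key=lambda t: cand[t] / tgt_freq[t])

print(best_translation("parliament"))   # parlement
print(best_translation("council"))      # conseil
```

The sketch also hints at the problem the article goes on to describe: this only works when large aligned corpora exist, which is true for text in major languages but much less so for speech.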

Read full article

Comments

© Liao Pan/China News Service via Getty Images

Getting an all-optical AI to handle non-linear math

A standard digital camera used in a car for stuff like emergency braking has a perceptual latency of a hair above 20 milliseconds. That’s just the time needed for a camera to transform the photons hitting its aperture into electrical charges using either CMOS or CCD sensors. It doesn’t count the further milliseconds needed to send that information to an onboard computer or process it there.

A team of MIT researchers figured that if you had a chip that could process photons directly, you could skip the entire digitization step and perform calculations with the photons themselves, which has the potential to be mind-bogglingly faster.

“We’re focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That’s why we are interested in systems where we’re able to do all the computations optically,” says Saumil Bandyopadhyay, an MIT researcher. The team implemented a complete deep neural network on a photonic chip, achieving a latency of 410 picoseconds. To put that in perspective, a single tick of the 4 GHz clock on a standard CPU lasts 250 picoseconds, so Bandyopadhyay’s chip ran its entire onboard neural net in less than two CPU clock ticks.
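A quick back-of-the-envelope check, using only the two figures quoted above (410 ps inference latency, 4 GHz CPU clock):

```python
# Compare the photonic chip's inference latency to CPU clock ticks.
photonic_latency_s = 410e-12       # 410 picoseconds, from the article
cpu_clock_hz = 4e9                 # 4 GHz, from the article

tick_s = 1 / cpu_clock_hz          # duration of one clock tick
ticks_per_inference = photonic_latency_s / tick_s

print(f"{tick_s * 1e12:.0f} ps per tick")        # 250 ps
print(f"{ticks_per_inference:.2f} ticks")        # 1.64 ticks
```

So a full optical inference finishes in well under two cycles of a conventional processor's clock, before a CPU could even complete a couple of instructions' worth of clocking.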

Read full article

Comments

© MIT

Using 2D materials on chips without destroying the wiring

Silicon chip manufacturers like Intel and TSMC are constantly outdoing themselves to make ever smaller features, but they are getting closer to the physical limits of silicon.

“We already have very, very high density in silicon-based architectures where silicon performance degrades sharply,” said Ki Seok Kim, a scientist working at the Massachusetts Institute of Technology’s Research Laboratory of Electronics.

One way around this problem is to replace silicon with graphene-like 2D materials that maintain their semiconducting properties even at a single-atom scale. Another way is building 3D chips, which squeeze more transistors into the same area without making transistors smaller. Kim’s team did both, building a 3D chip out of vertically stacked 2D semiconductors.

Read full article

Comments

© Kwisky

Magnetic shape-shifting surface can move stuff without grasping it 

27 December 2024 at 05:45

When you want to move an object from one place to another, you usually grab it with your hands or a robotic arm. But what if you want to move something you cannot touch without damaging or disrupting it, like a droplet of liquid? A solution proposed by a team of scientists at North Carolina State University is a metamaterial that can change shape in response to magnetic fields.

This material had to be easily deformable to change shape, yet at the same time stiff enough to bear loads. “That seemed contradictory—how do you make something that is stiff and deformable at once?” says Jie Yin, a mechanical metamaterials researcher at NC State. His team did it with ferromagnetic elastomers, kirigami cuts, balloons, and magnets.

Refreshable Braille display

“There is not much research on using magnets to manipulate non-magnetic objects. It is very, very hard,” says Yinding Chi, another NC State researcher and lead author of the study. The idea Chi and his colleagues came up with could be compared to a refreshable Braille display. They imagined a surface dotted with domes that could rise, turn, or depress on demand, allowing it to dynamically form relief-like images or move in a pattern similar to waves in the ocean. Objects would then move on these surfaces like they were carried by waves. “This way, you can move various objects without using grippers,” Yin says.
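The wave-transport idea can be sketched as a toy 1D simulation. Everything here is invented for illustration (the study's surface is a 2D array of magnetically actuated domes, not a sine wave of pins): the point is only that an object riding the tallest point of a traveling wave drifts along with the crest.

```python
import math

# Toy model: a row of pins whose heights follow a traveling sinusoidal
# wave; the carried object sits on the tallest pin at each time step.
num_pins = 10
wavelength = 10.0    # in units of pin spacing
speed = 1.0          # crest advances one pin spacing per time step

def heights(t):
    # Pin heights at time step t.
    return [math.cos(2 * math.pi * (i - speed * t) / wavelength)
            for i in range(num_pins)]

def crest_position(t):
    # Index of the tallest pin, where the object rests.
    h = heights(t)
    return h.index(max(h))

positions = [crest_position(t) for t in range(5)]
print(positions)   # the object advances one pin per step
```

Steering the wave pattern in two dimensions is what would let such a surface route different objects along different paths without ever gripping them.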

Read full article

Comments
