Best known for its Qoobo cat pillow, Yukai Engineering has made a name for itself with some of the strangest little robots around. Who could forget, for example, Amagami Ham Ham, whose sole purpose is to gnaw on fingers, offering a "somewhat pleasing sensation." At CES 2025, Yukai unveiled its latest, Mirumi, and it follows […]
There is rarely time to write about every cool science paper that comes our way; many worthy candidates sadly fall through the cracks over the course of the year. But as 2024 comes to a close, we've gathered ten of our favorite such papers at the intersection of science and culture as a special treat, covering a broad range of topics: from reenacting Bronze Age spear combat and applying network theory to the music of Johann Sebastian Bach, to Spider-Man inspired web-slinging tech and a mathematical connection between a turbulent phase transition and your morning cup of coffee. Enjoy!
Reenacting Bronze Age spear combat
An experiment with experienced fighters who spar freely using different styles.
Credit: Valerio Gentile/CC BY
The European Bronze Age saw the rise of institutionalized warfare, evidenced by the many spearheads and similar weaponry archaeologists have unearthed. But how might these artifacts be used in actual combat? Dutch researchers decided to find out by constructing replicas of Bronze Age shields and spears and using them in realistic combat scenarios. They described their findings in an October paper published in the Journal of Archaeological Science.
There have been a couple of prior experimental studies on bronze spears, but per Valerio Gentile (now at the University of Göttingen) and coauthors, practical research to date has been quite narrow in scope, focusing on throwing weapons against static shields. Coauthors C.J. van Dijk of the National Military Museum in the Netherlands and independent researcher O. Ter Mors each had more than a decade of experience teaching traditional martial arts, specializing in medieval polearms and one-handed weapons, which made them ideal candidates for testing the replica spears and shields.
Shutterstock added generative AI to its stock-content platform, generating $104 million in revenue.
The company has partnered with tech giants including Meta, Amazon, Apple, OpenAI, and Nvidia.
This article is part of "CXO AI Playbook," straight talk from business leaders on how they're testing and using AI.
Shutterstock, founded in 2003 and based in New York, is a global leader in licensed digital content. It offers stock photos, videos, and music to creative professionals and enterprises.
In late 2022, Shutterstock made a strategic decision to embrace generative AI, becoming one of the first stock-content providers to integrate the tech into its platform.
Dade Orgeron, the vice president of innovation at Shutterstock, leads the company's artificial-intelligence initiatives. During his tenure, Shutterstock has transitioned from a traditional stock-content provider into one that provides several generative-AI services.
While Shutterstock's generative-AI offerings are focused on images, the company has an application programming interface for generating 3D models and plans to offer video generation.
Situation analysis: What problem was the company trying to solve?
When the first mainstream image-generation models, such as DALL-E, Stable Diffusion, and Midjourney, were released in late 2022, Shutterstock recognized generative AI's potential to disrupt its business.
"It would be silly for me to say that we didn't see generative AI as a potential threat," Orgeron said. "I think we were fortunate at the beginning to realize that it was more of an opportunity."
He said Shutterstock embraced the technology ahead of many of its customers. He recalled attending CES in 2023 and said that many creative professionals there were unaware of generative AI and the impact it could have on the industry.
Orgeron said that many industry leaders he encountered had the misconception that generative AI would "come in and take everything from everyone," a perspective he found pessimistic. Shutterstock, by contrast, recognized early that AI-powered prompting "was design," Orgeron told Business Insider.
Key staff and stakeholders
Orgeron's position as vice president of innovation made him responsible for guiding the company's generative-AI strategy and development.
However, the move toward generative AI was preceded by earlier acquisitions. Orgeron himself joined the company in 2021 as part of its acquisition of TurboSquid, a company focused on 3D assets.
Shutterstock also acquired three AI companies that same year: Pattern89, Datasine, and Shotzr. While they primarily used AI for data analytics, Orgeron said the expertise Shutterstock gained from these acquisitions helped it move aggressively on generative AI.
Externally, Shutterstock established partnerships with major tech companies including Meta, Alphabet, Amazon, Apple, OpenAI, Nvidia, and Reka. For example, Shutterstock's partnership with Nvidia enabled its generative 3D service.
AI in action
Shutterstock's approach to AI integration focused on the user experience.
Orgeron said the company's debut in image generation was "probably the easiest-to-use solution at that time," with a simple web interface that made AI image generation accessible to creative professionals unfamiliar with the technology.
That stood in contrast to competitors such as Midjourney and Stable Diffusion, which, at the time Shutterstock launched its service in January 2023, had only basic user interfaces. Midjourney, for instance, was initially available only through Discord, an online chat service more commonly used for communication in multiplayer games.
This focus on accessibility set the stage for Shutterstock.AI, the company's dedicated AI-powered image-generation platform. While Shutterstock designed the tool's front end and integrated it into its online offerings, the images it generates rely on a combination of internally trained AI models and solutions from external partners.
Shutterstock.AI, like other image generators, lets customers request their desired image with a text prompt and then choose a specific image style, such as a watercolor painting or a photo taken with a fish-eye lens.
However, unlike many competitors, Shutterstock uses information about user interactions to decide on the most appropriate model to meet the prompt and style request. Orgeron said Shutterstock's various models provide an edge over other prominent image-generation services, which often rely on a single model.
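To make the routing idea concrete, here is a minimal Python sketch of how a prompt-plus-style request might be matched to one of several backend models. This is purely illustrative and not Shutterstock's actual implementation: the model names, style registry, and scoring rule are all hypothetical.

```python
# Hypothetical sketch of style-aware model routing; not Shutterstock's actual code.
# Model names and the matching rule are invented for illustration only.
from dataclasses import dataclass


@dataclass
class GenerationRequest:
    prompt: str
    style: str  # e.g. "watercolor" or "fisheye-photo"


# Each backend model advertises the styles it handles best.
MODEL_REGISTRY = {
    "in-house-photoreal": {"fisheye-photo", "studio-photo"},
    "in-house-illustration": {"watercolor", "line-art"},
    "partner-general": set(),  # fallback model with no style specialty
}


def route_request(request: GenerationRequest) -> str:
    """Pick the backend model whose advertised styles match the request.

    Falls back to a general-purpose model when no specialist matches.
    """
    for model_name, styles in MODEL_REGISTRY.items():
        if request.style in styles:
            return model_name
    return "partner-general"


if __name__ == "__main__":
    req = GenerationRequest(prompt="a lighthouse at dawn", style="watercolor")
    print(route_request(req))  # -> "in-house-illustration"
```

In practice such a router could also weigh signals from past user interactions (which model's output users kept or licensed for similar requests), which is the advantage Orgeron describes over single-model services.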
But generative AI posed risks to Shutterstock's core business and to the photographers who contribute to the company's library. To mitigate those risks, Orgeron said, all of its AI models, whether internal or from partners, are trained exclusively on data Shutterstock legally owns. The company also established a contributor fund to compensate content creators whose work was used in the models' training.
Orgeron said initial interest in Shutterstock.AI came from individual creators and small businesses. Enterprise customers followed more cautiously, taking time to address legal concerns and establish internal AI policies before adopting the tech. However, Orgeron said, enterprise interest has accelerated as companies recognize AI's competitive advantages.
Did it work, and how did leaders know?
Paul Hennessy, the CEO of Shutterstock, said in June that the company earned $104 million in revenue from AI licensing agreements in 2023. He also projected that this revenue could reach up to $250 million annually by 2027.
Looking ahead, Shutterstock hopes to expand AI into its video and 3D offerings. The company's generative 3D API is in beta. While it doesn't offer an AI video-generation service yet, Orgeron said Shutterstock plans to launch a service soon. "The video front is where everyone is excited right now, and we are as well," he said. "For example, we see tremendous opportunity in being able to convert imagery into videos."
The company also sees value in AI beyond revenue figures. Orgeron said Shutterstock is expanding its partnerships, which now include many of the biggest names in Silicon Valley. In some cases, partners allow Shutterstock to use their tech to build new services; in others, they license data from Shutterstock to train AI.
"We're partnered with Nvidia, with Meta, with HP. These are great companies, and we're working closely with them," he said. "It's another measure to let us know we're on the right track."
A 3D-printable EEG electrode e-tattoo. Credit: University of Texas at Austin.
Epidermal electronics attached to the skin via temporary tattoos (e-tattoos) have been around for more than a decade, but they have their limitations, most notably that they don't function well on curved and/or hairy surfaces. Scientists have now developed special conductive inks that can be printed right onto a person's scalp to measure brain waves, even if they have hair. According to a new paper published in the journal Cell Biomaterials, this could one day enable mobile EEG monitoring outside a clinical setting, among other potential applications.
EEGs are a well-established, non-invasive method for recording the electrical activity of the brain and a crucial diagnostic tool for monitoring conditions such as epilepsy, sleep disorders, and brain injuries. The technique is also important in many areas of neuroscience research, including the ongoing development of brain-computer interfaces (BCIs). But there are issues. Subjects must wear uncomfortable caps that aren't designed to handle the variation in people's head shapes, so a clinician must painstakingly map out the electrode positions on a given patient's head, a time-consuming process. And the gel used to apply the electrodes dries out and loses conductivity within a couple of hours, limiting how long one can make recordings.
By contrast, e-tattoos connect to skin without adhesives, are practically unnoticeable, and are typically attached via temporary tattoo, allowing electrical measurements (and other measurements, such as temperature and strain) using ultra-thin polymers with embedded circuit elements. They can measure heartbeats on the chest (ECG), muscle contractions in the leg (EMG), stress levels, and alpha waves through the forehead (EEG), for example.
Most airplanes in the world have vertical tails or rudders to prevent Dutch roll instabilities, a combination of yawing and sideways motions with rolling that looks a bit like the movements of a skater. Unfortunately, a vertical tail adds weight and generates drag, which reduces fuel efficiency in passenger airliners. It also increases the radar signature, which is something you want to keep as low as possible in a military aircraft.
In the B-2 stealth bomber, one of the very few rudderless airplanes, Dutch roll instabilities are dealt with using drag flaps positioned at the tips of its wings, which can split and open to make one wing generate more drag than the other and thus laterally stabilize the machine. "But it is not really an efficient way to solve this problem," says David Lentink, an aerospace engineer and a biologist at the University of Groningen, Netherlands. "The efficient way is solving it by generating lift instead of drag. This is something birds do."
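As a rough illustration of the drag-flap approach described above, the sketch below shows a toy proportional yaw damper that opens one wingtip flap to oppose an unwanted yaw rate. It is not the B-2's actual control law; the gain, deflection limit, and sign conventions are assumptions made for the example.

```python
# Toy yaw damper using differential drag flaps, in the spirit of split drag
# rudders on a flying wing. Gains and limits are invented for illustration.


def drag_flap_commands(yaw_rate_rad_s: float, gain: float = 0.8,
                       max_deflection_rad: float = 0.35) -> tuple[float, float]:
    """Return (left_flap, right_flap) deflections that oppose the yaw rate.

    A positive yaw rate (nose swinging right) opens the left wingtip flap so
    the extra drag on that side yaws the nose back to the left; the opposite
    flap stays closed.
    """
    command = gain * yaw_rate_rad_s
    command = max(-max_deflection_rad, min(max_deflection_rad, command))
    left = max(command, 0.0)    # open left flap for positive yaw rate
    right = max(-command, 0.0)  # open right flap for negative yaw rate
    return left, right


if __name__ == "__main__":
    print(drag_flap_commands(0.10))   # nose yawing right -> left flap opens
    print(drag_flap_commands(-0.05))  # nose yawing left  -> right flap opens
```

The point Lentink makes is that this correction works by adding drag on one side, whereas birds achieve the same stabilizing moment by redistributing lift, which costs far less energy.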
Lentink led the study aimed at better understanding birds' rudderless flight mechanics.