
Drone takes out Super Scooper fighting Los Angeles wildfires

A Super Scooper drops ocean water on the Palisades fire. | Brian van der Brug / Los Angeles Times via Getty Images

An aircraft helping to fight the wildfires raging across Los Angeles was struck by a civilian drone on Thursday. The collision damaged the wing of the aircraft — a CL-415 “Super Scooper” capable of scooping up 1,600 gallons of ocean water to drop onto nearby blazes — putting it out of service until it can be repaired, according to a statement the LA County Fire Department posted on X.

Cal Fire spokesman Chris Thomas told The New York Times that grounding the aircraft will likely set back local firefighting efforts. Super Scoopers can typically refill in about five minutes; even at ten minutes per refill, that’s six water drops lost every hour the plane is grounded, according to Thomas. “So whose house is not going to get that water to protect it?” The Federal Aviation Administration (FAA) says the Super Scooper landed safely after the drone impact and that the incident is now under investigation.
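A quick, purely illustrative sketch of the math behind that estimate (the five- and ten-minute refill times are taken from Thomas’s quote; nothing else here is sourced):

```python
def drops_lost_per_hour(refill_minutes: float) -> float:
    """Water drops a grounded Super Scooper would otherwise make in an hour."""
    return 60 / refill_minutes

print(drops_lost_per_hour(5))   # 12.0 drops/hour at a typical five-minute refill
print(drops_lost_per_hour(10))  # 6.0 drops/hour, the conservative figure Thomas cites
```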

Temporary flight restrictions have been implemented in the Los Angeles area that prohibit drones and other aircraft from flying without FAA authorization in an effort to protect firefighting efforts.

According to LA County Fire Chief Anthony Marrone, the drone was not assigned to help tackle the Palisades fires, and was destroyed in the collision. Marrone told the LA Times that the FBI is now planning to implement so-called “aerial armor” in the area to prevent further interference from drones.

Several people online have violated the FAA-enforced flight restrictions, posting viral drone photos and video footage across social media showing the devastation from what appears to be prohibited airspace. Fire response agencies are often forced to ground their own aircraft to avoid collisions when dummies fly drones near wildfires for online clout.

“It’s a federal crime, punishable by up to 12 months in prison, to interfere with firefighting efforts on public lands,” the FAA said in a statement. “Additionally, the FAA can impose a civil penalty of up to $75,000 against any drone pilot who interferes with wildfire suppression, law enforcement or emergency response operations. The FAA treats these violations seriously and immediately considers swift enforcement action for these offenses.”

Venu Sports shuts down before it ever launches

Vector collage of the Venu Sports logo.
Image: Cath Virginia / The Verge

Venu, the live sports streaming service from ESPN, Fox, and Warner Bros. Discovery, isn’t happening. In a statement on Friday, the three companies announced the decision “not to move forward with the contemplated joint venture.”

Here’s a joint statement from ESPN, Fox, and Warner Bros. Discovery:

After careful consideration, we have collectively agreed to discontinue the Venu Sports joint venture and not launch the streaming service. In an ever-changing marketplace, we determined that it was best to meet the evolving demands of sports fans by focusing on existing products and distribution channels. We are proud of the work that has been done on Venu to date and grateful to the Venu staff, whom we will support through this transition period.

ESPN, Fox, and Warner Bros. Discovery first announced Venu last year, and it was supposed to launch in the fall of 2024. But then the service hit a legal roadblock: an antitrust lawsuit from the live TV streaming service Fubo, accusing the trio of engaging in “a years-long campaign to block Fubo’s innovative sports-first streaming business” due to restrictive sports licensing agreements. Lawmakers also asked regulators to investigate Venu and its potential to become a monopoly in televised sports.

Last August, a federal judge sided with Fubo and temporarily blocked Venu’s launch. Things seemed to settle last week when Disney agreed to merge Hulu + Live TV with Fubo, leading Fubo to drop its lawsuit. However, DirecTV and EchoStar, both of which raised concerns about the launch of Venu, weren’t happy about Fubo’s decision to settle.

Fubo declined to comment.

Developing...

The coolest laptops of CES 2025

A rear view of the Lenovo ThinkBook Plus Gen 6 with its rollable display extended.
Lenovo’s rollable laptop stole the show, but there are a bunch of upcoming models I’m excited to test when the time comes. | Photo by Antonio G. Di Benedetto / The Verge

The new CPUs, GPUs, and laptops announced at CES this week set the tone for Windows computers in the year to come — and so far, 2025 is looking pretty promising. There are a bunch of new notebooks I’m excited to test out when they come around, many of which are gaming-focused since the launch of Nvidia’s RTX 50-series cards is ushering in an onslaught of graphics-heavy refreshes and upgrades.

New laptops are coming from Dell, Alienware, Asus, Acer, Lenovo, MSI, and Razer. Many may just boil down to chip bumps and slight refreshes, but some are betting big on new ideas, thinness, raw power, and over-the-top accouterments. Here are the ones I’m most excited for.

Lenovo ThinkBook Plus Gen 6

Photo by Antonio G. Di Benedetto / The Verge

I’ve already written and said a lot about Lenovo’s concept-turned-buyable-product, the ThinkBook Plus Gen 6. It’s the coolest laptop we saw and our outright best in show for CES 2025. It’s also possible that, when it comes time to review one later this year, Lenovo’s attempt to graft software support for its rollable display onto Windows will prove a bridge too far.

But the...

Read the full story at The Verge.

Brelyon Ultra Reality Extend immersive monitor delivers Vision Pro-like experience without a headset

For me, the greatest attraction of the Vision Pro is being able to have huge virtual monitors for both work and entertainment; the downside is the discomfort of wearing a headset for an extended time. The Brelyon Ultra Reality Extend is a funky new display technology that aims to solve this problem.

You don’t have to wear anything – just sit in front of a monitor, which can project a virtual display as wide as 122 inches from a much smaller physical display …


X's new parody labels won't fix its impersonation problem

X is further aiming to clamp down on impersonation by rolling out a label for parody accounts to help make them distinct from the real deal. Users will now start seeing the label on posts as well as profile pages.

The company says that the goal of the label is to improve transparency, but there's a fatal flaw in how X is going about that. As it stands, the label is not yet mandatory. And as TechCrunch notes, operators of parody accounts have to apply it manually (by going to the "your account" section in settings, then to "account information" and enabling the “Parody, commentary and fan account” option).

"We’re rolling out profile labels for parody accounts to clearly distinguish these types of accounts and their content on our platform. We designed these labels to increase transparency and to ensure that users are not deceived into thinking such accounts belong to the entity being parodied," X wrote in an announcement. "Parody labels will be applied to both posts and accounts on X to clearly demonstrate the source of the content you’re seeing. We’ll share details soon on when the label will become mandatory for parody accounts."


The company added that parody accounts still have to adhere to the platform's rules, including those related to authenticity. "Parody, Fan, and Commentary (PCF) labels are selected by people on X to indicate that the account depicts another person, group, or organization in their profile to discuss, satirize, or share information about that entity," the label's description reads. "This label distinguishes these accounts to ensure they do not cause confusion for others or incorrectly imply any affiliation."

Since X isn't applying the label to accounts itself (seemingly relying on the community to flag impersonators rather than taking a more active approach to moderation) and isn't making it mandatory yet, the feature is unlikely to meaningfully address the problem of impersonation.

Scammers who impersonate, say, X owner Elon Musk in an attempt to squeeze some bitcoin out of other users won't exactly be inclined to put the label on their accounts. And those who simply don't care about having their account banned by imitating a legitimate news outlet, brand or celebrity to spread misinformation are unlikely to either. It's almost as if the entire concept of authenticity on X has been a mess ever since the company allowed anyone to buy a blue checkmark for their profile.


Twitter's new logo projected on the company's corporate headquarters in downtown San Francisco, July 23, 2023. | Carlos Barria / Reuters

Tesla finally launches the refreshed 2025 Model Y in the Asia-Pacific region

Tesla has quietly unveiled its facelifted Model Y with new styling that will help it keep up with rivals like Kia and Volvo. Though currently only available in the Asia-Pacific region, the refreshed "Juniper" model is likely to appear stateside in the coming months. That was the case with the revised Model 3, which first appeared in Asia in September 2023 and went on sale in the US in January the following year.

The new Model Y retains the gawky proportions of its predecessor, but looks sleeker thanks to smoothed out front and rear ends. The smaller headlights bookend a slim lightbar across the front, with a similar treatment for the taillights. In the case of the lights, the new design language is more aligned with the Cybertruck than the Model 3. 


Many interior treatments on the Model Y are similar to the Model 3, with one notable exception. Like the Model 3, it has new ventilated seats, a rear-seat display and a light strip that wraps around much of the vehicle. However, the new steering wheel lacks the turn signal buttons found on the Model 3 — instead, the Juniper Model Y uses a stalk like its predecessor. Tesla may have done that to keep it competitive with rivals, particularly in China, where it's up against juggernaut BYD.

Tesla is offering rear-wheel drive and long-range all-wheel drive versions in Australia, but no performance option for now. It's promising up to 342 miles (551 km) of range by the WLTP cycle on the long-range model, or around 307 miles by US EPA standards. However, US models could have different battery specs and thus different range numbers.
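For context on how those two range figures relate, here's a minimal sketch of the conversion. The 0.9 WLTP-to-EPA factor is an assumption (a rough rule of thumb used only for illustration, not an official figure from Tesla or the EPA):

```python
KM_TO_MILES = 0.621371

def wltp_km_to_epa_miles_estimate(wltp_km: float, wltp_to_epa_factor: float = 0.9) -> float:
    """Convert a WLTP range in km to a rough EPA-style estimate in miles."""
    return wltp_km * KM_TO_MILES * wltp_to_epa_factor

print(round(551 * KM_TO_MILES))                   # ~342 miles, the quoted WLTP figure
print(round(wltp_km_to_epa_miles_estimate(551)))  # ~308 miles, close to the article's ~307 estimate
```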


The new model arrives at a good time for Tesla. In 2024, the company saw its first drop in vehicle deliveries since 2012, even though it improved in its key market, China. The redesigned Model Y will start shipping there in March 2025 and is likely to arrive elsewhere within several months, though the company has yet to nail down a date for US deliveries.



Production House R+C Expands as Agency Brass, Hires Pierre Lipton as CCO

Production company R+C has rebranded as creative agency Brass and hired former McCann and 360i creative leader Pierre Lipton as chief creative officer. By expanding as an agency, Brass aims to work directly with clients to affect strategy and the creative process from the beginning, CEO Sean Gately told ADWEEK. Whereas some agencies have opened...

WhatsApp really hopes you want to talk with AI bots

WhatsApp logo on a green, black, and white background
Illustration: The Verge

Meta’s popular messaging app WhatsApp is testing a new design that gives prominent space to a suite of AI chatbots. The design, currently only accessible through the app’s Android beta, adds a dedicated tab for AIs on the app’s homescreen.

WABetaInfo spotted the change, which devotes one of WhatsApp’s four tabs solely to its AI features. It includes a selection of “Popular AI characters” to talk to, along with others organized by subject matter. Other AI-powered features within WhatsApp include AI-generated images and stickers and a search tool using Meta AI.

These AI tools and chatbots aren’t new to WhatsApp, though they’re only available in the US and certain other countries, and in a limited selection of languages. They’re currently accessed through the app’s primary Chats tab, but this update looks to give them more prominence.

A screenshot showing a new AI tab in a beta version of WhatsApp, with a list of AI characters to talk to Image: WABetaInfo
The exciting AIs that WhatsApp wants you to chat with.

The app is also experimenting with expanding the range of AI bots by adding the option to create personalized AI characters, which WABetaInfo found in a separate beta update today. Meta already offers the ability to create custom AI bots, but only through AI Studio on the Instagram website. Adding the option directly to the app would significantly simplify the process.

The new AI tab replaces the existing Communities tab, though that functionality isn’t going anywhere. A previous beta version earlier this week introduced a “streamlined” version of Community creation within the Chats tab.

The WhatsApp beta is available through Google Play, though tester numbers are limited and the option to join is currently unavailable. We don’t know if or when the AI tab will be added to the app’s live build, but the change is likely to be limited only to those countries where the AI features are already available.

Samsung isn't talking about Eclipsa Audio at CES 2025

Before CES 2025 kicked off in Las Vegas, Samsung announced that its spatial audio collaboration with Google would be available on its 2025 TVs and soundbars. Finer details on the platform were noticeably absent from that announcement, with the company only noting that the 3D Eclipsa Audio would be available this year for YouTube content creators. There was also the general explanation that the platform would enable creators "to adjust audio data such as the location and intensity of sounds, along with spatial reflections, to create an immersive three-dimensional sound experience," according to the press release.

If that sounds like Dolby Atmos to you, that's because it's what I assume Samsung and Google are trying to replicate. And if Samsung really wants its own immersive audio standard, there's a backstory worth revisiting. In 2023, Samsung and Google first revealed their spatial audio ambitions. At the time, Samsung said its research division had been working on 3D audio since 2020, and that the first fruit of the collaboration was the open-source Immersive Audio Model and Formats (IAMF), adopted by the Alliance for Open Media (AOM) in October 2023.

There's also the fact that Samsung doesn't offer Dolby Vision on its TVs. Instead, the company uses HDR10+, an open-source and royalty-free platform for encoding HDR metadata. And in that 2023 audio announcement, Samsung Research's WooHyun Nam explained that 3D sound technology needed to be open to everyone too. “Providing a complete open-source framework for 3D audio, from creation to delivery and playback, will allow for even more diverse audio content experiences in the future," he said.

Samsung currently supports Dolby Atmos on its soundbars, including its flagship Q990 series and the newly announced QS700F. But it sounds like the company no longer wants to pay to license Atmos from Dolby, and the collaboration with Google aims to build an alternative so it can still offer immersive 3D audio on its products. It's worth noting that AOM counts Amazon, Apple and Netflix among its members, in addition to Google, Samsung and others. The group's AV1 video format was introduced in 2018 and is now used across Netflix, YouTube, Twitch and other sites.

Samsung's Q990F soundbar now comes with a smaller subwoofer. | Billy Steele for Engadget

The bizarre thing about all of this is that no one from Samsung wants to talk about Eclipsa Audio. I attended multiple events and product demos that the company hosted this week and the response when I asked about it was either "we haven't been told anything" or "let me see if I can find someone who can talk about it." The latter, of course, never manifested a "someone" or a follow-up. I even asked for a rep to tell me if the company wasn't ready to discuss details and never heard back on that either. 

The most detailed explanation I've seen this week came from Arm, which is apparently also working on the development of Eclipsa Audio alongside Samsung and Google. The chip designer said that Eclipsa is a multi-channel surround sound format built on IAMF. Vertical and horizontal channels will create the immersive sound, with the goal of making movies, music and television shows more compelling in your living room. Again, that's exactly what Dolby Atmos already does.

Arm further explained that Eclipsa Audio can automatically adjust sound based on the scene and that there will be a degree of customization for users. The bitstream can contain up to 28 input channels that can be fixed (instruments or microphones) or dynamic (vehicles in movie scenes), with support for LPCM, AAC, FLAC and Opus codecs. Binaural rendering is also available for earbuds and headphones, and the new tech will be available to content creators using consumer devices in their workflow. 
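Arm's description maps naturally onto a simple data model. The sketch below is purely illustrative: the class and field names are hypothetical, not taken from the IAMF spec, but it captures the pieces described above (up to 28 input channels that are either fixed or dynamic, a choice of codec, and optional binaural rendering).

```python
# Hypothetical, illustrative data model for an Eclipsa/IAMF-style stream as
# described above. Names and structure are assumptions, not the actual spec.
from dataclasses import dataclass, field
from enum import Enum

class Codec(Enum):           # codecs Arm says the bitstream supports
    LPCM = "lpcm"
    AAC = "aac"
    FLAC = "flac"
    OPUS = "opus"

class ChannelKind(Enum):
    FIXED = "fixed"          # e.g. instruments or microphones
    DYNAMIC = "dynamic"      # e.g. vehicles moving through a movie scene

@dataclass
class InputChannel:
    name: str
    kind: ChannelKind

@dataclass
class EclipsaStreamConfig:
    codec: Codec
    channels: list[InputChannel] = field(default_factory=list)
    binaural_render: bool = False   # for earbuds/headphones playback

    MAX_CHANNELS = 28  # per Arm's description of the bitstream

    def add_channel(self, channel: InputChannel) -> None:
        if len(self.channels) >= self.MAX_CHANNELS:
            raise ValueError("bitstream supports up to 28 input channels")
        self.channels.append(channel)

# Example: a small mix with one fixed and one dynamic source, rendered binaurally
config = EclipsaStreamConfig(codec=Codec.OPUS, binaural_render=True)
config.add_channel(InputChannel("dialogue mic", ChannelKind.FIXED))
config.add_channel(InputChannel("passing car", ChannelKind.DYNAMIC))
print(len(config.channels), config.codec.value)
```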

So far, Samsung and Google have only listed YouTube as the platform where Eclipsa Audio content will be available. If the duo truly wants to compete with Dolby Atmos, that list needs to expand quickly. Plus, Dolby already has the brand recognition and wide adoption for Atmos in both the audio and home theater categories. It's even available in cars.

Samsung said in its pre-CES announcement that it and Google would work with the Telecommunications Technology Association (TTA) to develop a certification program for devices that support Eclipsa Audio. So, it seems like serious groundwork has been laid to get this technology on devices, starting with Samsung's own 2025 TVs and soundbars. But, as we saw with Sony 360 Reality Audio and the early days of Dolby Atmos Music, it can take time to build out a compelling library of content. That means Samsung will likely have to keep reminding us that Eclipsa Audio is a thing, even when it doesn't have much more to say. 


Samsung's QS700F can sit on a shelf or be mounted flat on a wall. | Image: Samsung

Gumloop, founded in a bedroom in Vancouver, lets users automate tasks with drag-and-drop modules

Developers Max Brodeur-Urbas and Rahul Behal think that AI has the potential to automate lots of business-relevant tasks, but that many of the AI-powered automation tools on the market today are unreliable and costly. Part of the problem is that users expect too much of AI, Brodeur-Urbas told TechCrunch — for instance, they assume that […]


The smart glasses era is here — I got a first look

Pair of XREAL smart glasses lit up in a futuristic way.
Smart glasses were everywhere on the show floor this year. | Photo by Antonio G. Di Benedetto / The Verge

At CES, the next generation of eyewear was everywhere. It’s just that no one seems to agree on why we want it or what the best approach is.

It’s the second day of CES, and I’m waiting in line to see my 10th pair of smart glasses. I honestly don’t know what to expect: I’ve seen glorified sunglasses with dubious ChatGPT clones. I’ve sidled up to several booths where the glasses were almost carbon copies of the pairs a booth over. I’ve seen all manner of “displays” tacked onto the lenses: some washed out, others so tedious to calibrate as to make me walk away.

So when I slipped on the Rokid Glasses, I felt my brows raise. I could see what looked like a mini desktop. I swiped the arm and a horizontal list of apps appeared. Green writing appeared in front of me a bit like a monitor in The Matrix. A Rokid staffer began speaking to me in Chinese, and despite the surrounding din, I could see a text translation of what she was saying float in front of me. After a brief conversation — she asked whether I ate lunch, she hadn’t — she prompted me to try taking a picture. The display shifted to what looked like a camera’s viewfinder. I hit the multifunction button. An animation flashed. On her phone, I saw the picture I took.

“Holy crap,” I thought. “So this is what the Ray-Ban Meta smart glasses would be like with a display.” And then — “If this is possible, why doesn’t it have one yet?”

The three types of smart glasses

It seems that everyone is still trying to figure out what makes the perfect pair of smart glasses. I must have tried out 20 pairs over the course of the last week, but they all seemed to fall into one of three different buckets in how they balanced wearability and functionality.

The first bucket is the simple and stylish glasses. The more stylish and comfortable smart glasses are, the fewer features they tend to have. But for this group, that’s often a good thing.

Take the unassuming Nuance Audio. These smart glasses — made by EssilorLuxottica, Meta’s partner in making the Ray-Ban Meta eyewear — discreetly function as over-the-counter hearing aids. When you wear them, you can dampen some of the noise around you as well as amplify the voice of the person you’re speaking to. This would sound like science fiction if I hadn’t tried it myself.

But at a glance, you’d never know the Nuance Audio glasses can alter how you hear the world — and that’s precisely the point. They look like any pair of stylish glasses and come in two colors and three shapes. By “hiding” their smarts in a normal-looking pair of glasses, they’re essentially helping to reduce the discomfort some people feel when wearing visible hearing aids. It’s not flashy, but it’s a precise and clear use case.

The Chamelo glasses take a similar tack. The “smart” part of these electrochromic sunglasses can, depending on the model, change the color or tint with the swipe of a finger. Some models also have Bluetooth audio. Chamelo’s glasses aren’t new, and at CES, they weren’t suddenly adding in AI assistants, displays, or anything wild. This year’s update? Adding support for prescriptions so more people can use the device.

Neither of these glasses is trying to reinvent the wheel. They saw a simple problem worth fixing and decided to fix it. Nothing more, nothing less.

The face screens

On the other end of the spectrum, you’ll find longtime CES veterans Xreal and Vuzix.

When I arrive at Xreal’s booth, it’s jam-packed. There’s a station where people wear Xreal glasses as they “drive” in a BMW. (The car doesn’t move, but you can pretend you’re moving the wheel and tilt your head on a race course.) I don a pair of last year’s Xreal Air 2 Ultra glasses while seated at a desk with only a keyboard in front of me. The Air 2 Ultra are a bit like chunky sunglasses, with miniature screens hovering beneath the lenses. From afar, they look pretty normal. Up close, you can feel their bulk — and on the face, they protrude further than looks natural.

Inside the glasses, I see football players on a football field, information popping up over their heads. The virtual display switches to a panoramic video with avatars of friends watching alongside me. In another window, I’m prompted to type in a description of a fictional creature. I pick “monstrously fat cat with unicorn wings” and lo, it appears. I can pinch and pull with my hands to make it even bigger. The more recently launched Xreal One are also here, though it admittedly gets hard to tell which pair of Xreal glasses is which while elbowing past other eager onlookers.

Shot of XREAL booth display Photo by Antonio G. Di Benedetto / The Verge
Xreal’s booth was jam-packed throughout the show.

When I mosey on over to Vuzix’s booth, it’s less packed, but that’s likely because folks are gawping at a bizarre karaoke contest a few booths over. I, on the other hand, am wearing a pair of the company’s latest Ultralite Pro glasses. The glasses look a bit clunkier, but when you put them on, you can see an array of rainbow lights that culminate in a 3D display. I’m looking at a picture of nature, and there’s actual depth.

You’d be hard-pressed to find someone who’d wear glasses like these walking down the street. They look like glasses, sure, but they can also be bulky and sometimes have cords dangling for battery packs. These glasses show hints of what augmented reality is capable of — but they aren’t meant to be things you wear all day, every day.

The spyglasses

This divide between form and function isn’t new. What’s new is that there are far more smart glasses that lie somewhere in the middle. And they have some funky ideas.

Sharge’s Loomos.AI glasses, for example, look similar to the Meta glasses except they use ChatGPT and can shoot 4K photos and 1080p videos. They also add a bizarre neckband battery to account for the massive battery drain. Rayneo was back with smaller, more refined X3 Pro AR glasses. I could list dozens more, but to be frank, they were mostly iterations of the Meta glasses.

Close up of Rokid Glasses’ display Photo by Antonio G. Di Benedetto / The Verge
The Rokid Glasses can do a lot of what the Ray-Ban Meta smart glasses can do, but with a heads-up display.

Of the myriad smart glasses I saw, three stood out: Halliday, Even Realities G1, and the Rokid Glasses. All three feature a discreet design, with a hidden green monochrome heads-up display. Halliday projects its single display from the frame by shining a green light into your eye; the other two feature microetched displays on both lenses that are nigh invisible when viewed from the front. (All three companies told me they use green light because it’s easiest on the eyes, has the best contrast, and is less likely to get washed out in bright ambient lighting.)

There are slight hardware differences between all three, but in my demos, it was clear that, philosophically, they’re much more geared toward all-day productivity. They have AI assistants, can be used as teleprompters, and offer live translation. The Rokid Glasses even have a 12MP camera for taking photos and video.

Close up of Halliday smart glasses Photo by Victoria Song / The Verge
Halliday’s glasses are a bit different as they feature a teeny projector that beams the display into your eye.

In this vision of the smart glasses revolution, these devices are more like all-day companions that help you use your phone less. The display is something you glance at only occasionally, when it’s relevant, and mostly in a productivity context. These glasses offer more smarts than the very use-specific Chamelo and Nuance Audio glasses, but more practicality (and wearability) for the average person than what Xreal and Vuzix are pursuing.

The smart glasses era

The more I talk to the people behind these products, the more it becomes clear that everyone believes smart glasses are the future. It’s also apparent that no one agrees on the best way to get to that future.

“We’ve chosen to optimize for something that is, we think, a great feature geared towards the actual use case of glasses,” says Chamelo CEO and cofounder Reid Covington. “You’re wearing them to see. You’re wearing them to block out light. A lot of the more forward-looking smart glasses have interesting features, but they’re not optimized for, you know, actual usability.”

But even among companies pursuing simpler smart glasses, function isn’t always the reason they choose more discreet or stylish designs. Smart glasses are “something that you need to feel are part of yourself,” says Davide D’Alena, global marketing director for Nuance Audio. Function is nice, but doing all the things isn’t worth it if you have to wear something hideous on your face. “For us, it’s just not enough to put out an ugly product, even if it’s working perfectly from a functional point of view. It must be something that is also a self-expression.”

Meanwhile, some longtime veterans in the space contend it isn’t a choice of form and function. It’s a split between AR and AI.

“I actually see two different directions going forward. One is AR glasses which will handle a lot of the XR content. The other one will be the AI glasses as a major kind of all-day wearable smart glasses,” says Chi Xu, Xreal founder and CEO. Xu says that everything will converge at some point — though we’ll be waiting a good while before it does. Right now, it’s a matter of every option being developed at once as companies try to figure out the best way to draw people in.

Xu isn’t wrong. While some companies like EssilorLuxottica and Chamelo are committed to one approach, others are happy to dabble. Rokid, for example, may have come out with AI-first smart glasses this year, but its array of more Xreal-like AR glasses was actually the bigger portion of its booth. Meta, apparently, is working on glasses with a display, too, targeting later this year — my colleague Alex Heath reports that the company will add its own twist to the formula by shipping a neural wristband that can be used to control them.

But for all the fragmentation, every company I spoke to said the same thing: they’ve seen renewed interest in this space within the last year and a half — and with that come investors aplenty with deep wallets. The vast majority emphasized how rapid advancements in technology and AI have made things possible today that were impossible just two or three years ago. And every single one said that interest from the general public, not just early adopters, is also higher than in previous years.

This, they all say, proves that smart glasses are inevitable. It’s just a matter of getting everyone else to see the vision. And that’s sort of the problem. With smart glasses, you have to see it to believe it.
